**Introduction to Python**<br/>
Prof. Dr. Jan Kirenz <br/>
Hochschule der Medien Stuttgart
```
import pandas as pd
```
To get more information about the pandas syntax, download the [Pandas code cheat sheet](https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf).
### Import data
```
# Import data from GitHub (or from your local computer)
df = pd.read_csv("https://raw.githubusercontent.com/kirenz/datasets/master/wage.csv")
```
### Data tidying
First of all, we want to get an overview of the data.
```
# show the head (first few observations in the df)
df.head(3)
# show metadata (take a look at the level of measurement)
df.info()
```
---
**Some notes on data types (level of measurement):**
If we need to transform variables into a **numerical format**, we can convert the data with `pd.to_numeric` ([see the Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_numeric.html)).
If the data contains strings, we need to replace them with NaN (not a number); otherwise we get an error message. Therefore, use `errors='coerce'`:
* `pandas.to_numeric(arg, errors='coerce', downcast=None)`
* `errors : {'ignore', 'raise', 'coerce'}`, default `'raise'`
    * If 'raise', then invalid parsing will raise an exception
    * If 'coerce', then invalid parsing will be set as NaN
    * If 'ignore', then invalid parsing will return the input
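A minimal sketch of `errors='coerce'` on made-up data:

```python
import pandas as pd

# a column that mixes numbers and strings
s = pd.Series(["1", "2", "three", "4"])

# errors='coerce' turns unparseable entries into NaN instead of raising
nums = pd.to_numeric(s, errors="coerce")
print(nums)  # "three" becomes NaN
```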
To change data into **categorical** format, you can use the following code:
df['variable'] = pd.Categorical(df['variable'])
If the data is ordinal, we use the pandas [CategoricalDtype](https://pandas.pydata.org/pandas-docs/stable/categorical.html)
---
```
# show all columns in the data
df.columns
# rename variable "education" to "edu"
df = df.rename(columns={"education": "edu"})
# check levels and frequency of edu
df['edu'].value_counts()
```
Convert `edu` to an ordinal variable with the pandas [CategoricalDtype](https://pandas.pydata.org/pandas-docs/stable/categorical.html):
```
from pandas.api.types import CategoricalDtype
# convert to ordinal variable
cat_edu = CategoricalDtype(categories=
['1. < HS Grad',
'2. HS Grad',
'3. Some College',
'4. College Grad',
'5. Advanced Degree'],
ordered=True)
df.edu = df.edu.astype(cat_edu)
```
Now convert `race` to a categorical variable
```
# convert to categorical variable
df['race'] = pd.Categorical(df['race'])
```
Take a look at the metadata (what happened to `edu` and `race`?)
```
df.info()
```
## Series
```
import pandas as pd
import numpy as np
import random
first_series = pd.Series([1,2,3, np.nan ,"hello"])
first_series
series = pd.Series([1,2,3, np.nan ,"hello"], index = ['A','B','C','Unknown','String'])
series
# create a Series from a dict (the keys become the index labels)
d = {"Python": "Fun", "C++": "Outdated", "Coding": "Hmm.."}
series = pd.Series(d)
series
# select several items by label
series[['Coding', 'Python']]
series.index
series.values
series.describe()
# A Series is a mutable data structure, so you can easily change any item's value:
series['Coding'] = 'Awesome'
series
# add new values:
series['Java'] = 'Okay'
series
# If you need to apply a mathematical operation to Series items, you can do it like below:
num_series = pd.Series([1,2,3,4,5,6,None])
num_series_changed = num_series/2
num_series_changed
# NULL/NaN checking can be performed with isnull() and notnull().
print(series.isnull())
print(num_series.notnull())
print(num_series_changed.notnull())
```
## DataFrames
```
data = {'year': [1990, 1994, 1998, 2002, 2006, 2010, 2014],
'winner': ['Germany', 'Brazil', 'France', 'Brazil','Italy', 'Spain', 'Germany'],
'runner-up': ['Argentina', 'Italy', 'Brazil','Germany', 'France', 'Netherlands', 'Argentina'],
'final score': ['1-0', '0-0 (pen)', '3-0', '2-0', '1-1 (pen)', '1-0', '1-0'] }
world_cup = pd.DataFrame(data, columns=['year', 'winner', 'runner-up', 'final score'])
world_cup
# Another way to create a DataFrame is to use a Python list of dictionaries:
data_2 = [{'year': 1990, 'winner': 'Germany', 'runner-up': 'Argentina', 'final score': '1-0'},
{'year': 1994, 'winner': 'Brazil', 'runner-up': 'Italy', 'final score': '0-0 (pen)'},
{'year': 1998, 'winner': 'France', 'runner-up': 'Brazil', 'final score': '3-0'},
{'year': 2002, 'winner': 'Brazil', 'runner-up': 'Germany', 'final score': '2-0'},
{'year': 2006, 'winner': 'Italy','runner-up': 'France', 'final score': '1-1 (pen)'},
{'year': 2010, 'winner': 'Spain', 'runner-up': 'Netherlands', 'final score': '1-0'},
{'year': 2014, 'winner': 'Germany', 'runner-up': 'Argentina', 'final score': '1-0'}
]
world_cup = pd.DataFrame(data_2)
world_cup
print("First 2 Rows: ",end="\n\n")
print (world_cup.head(2),end="\n\n")
print ("Last 2 Rows : ",end="\n\n")
print (world_cup.tail(2),end="\n\n")
print("Using slicing : ",end="\n\n")
print (world_cup[2:4])
```
### CSV
#### Reading:
`df = pd.read_csv("path/to/the/csv/file/for/reading")`
#### Writing:
`df.to_csv("path/to/the/folder/where/you/want/to/save/the/csv/file")`
### TXT file(s)
(a txt file can be read as a CSV file with another separator (delimiter); below we assume the columns are separated by tabs):
#### Reading:
`df = pd.read_csv("path/to/the/txt/file/for/reading", sep='\t')`
#### Writing:
`df.to_csv("path/to/the/folder/where/you/want/to/save/the/txt/file", sep='\t')`
### JSON files
(an open-standard format that uses human-readable text to transmit data objects consisting of attribute–value pairs. It is the most common data format used for asynchronous browser/server communication, and it looks very similar to a Python dictionary.)
#### Reading:
`df = pd.read_json("path/to/the/json/file/for/reading")`
#### Writing:
`df.to_json("path/to/the/folder/where/you/want/to/save/the/json/file")`
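A minimal round-trip sketch (the DataFrame is made up for illustration; wrapping the string in `StringIO` avoids a deprecation warning in newer pandas):

```python
import io
import pandas as pd

df = pd.DataFrame({"year": [2010, 2014], "winner": ["Spain", "Germany"]})

# serialize to a JSON string; to_json("file.json") writes to disk instead
json_str = df.to_json()

# read it back into a DataFrame
df_back = pd.read_json(io.StringIO(json_str))
print(df_back)
```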
```
# To write world_cup Dataframe to a CSV File
world_cup.to_csv("worldcup.csv")
# To save CSV file without index use index=False attribute
print("File Written!",end="\n\n")
#To check if it was written
import os
print(os.path.exists('worldcup.csv'))
# reading from it in a new dataframe df
df = pd.read_csv('worldcup.csv')
print(df.head())
# We can also load the data without index as :
df = pd.read_csv('worldcup.csv',index_col=0)
print(df)
movies = pd.read_csv("data/movies.csv", encoding="ISO-8859-1")
# the encoding is specified for this dataset because it raises an error with utf-8
movies['release_date'] = movies['release_date'].map(pd.to_datetime)
print(movies.head(20))
#print(movies.describe())
movies_rating = movies['rating']
# Here we are showing only one column, i.e. a Series
print ('type:', type(movies_rating))
movies_rating.head()
# Filtering data
# Let's display only the female users
movies_user_female = movies[movies['gender']=='F']
print(movies_user_female.head())
# the occupation column; use .unique() to see all the distinct values
occupation_list = movies['occupation']
print(occupation_list)
```
### Work with indexes and MultiIndex option
```
import random
indexes = [random.randrange(0,100) for i in range(5)]
data = [{i:random.randint(0,10) for i in 'ABCDE'} for i in range(5)]
df = pd.DataFrame(data, index=[1,2,3,4,5])
df
movies_user_gender_male = movies[movies['gender']=='M']
movies_user_gender_male_dup = movies_user_gender_male.drop_duplicates(keep=False)
print(movies_user_gender_male.head())
# From this we can clearly see that age has missing values and that the data was reduced
# from 100,000 to 74,260 rows due to filtering and removing duplicates
#gender = female and age between 30 and 40
gender_required = ['F']
filtered_df = movies[((movies['gender'] == 'F') & (movies['age'] > 30) & (movies['age'] <40))]
filtered_df
```
#### Note
In the fragment above you HAVE TO wrap every comparison in parentheses, otherwise you will get an error.
As you can see, after filtering the result tables (i.e. DataFrames) have non-ordered indexes. To fix this, you can write the following:
```
filtered_df = filtered_df.reset_index()
filtered_df.head(10)
# set 'user_id' 'movie_id' as index
filtered_df_new = filtered_df.set_index(['user_id','movie_id'])
filtered_df_new.head(10)
# Note that set_index takes a list to set several columns as the index.
# If you pass a single column name (no []), only that column becomes the index.
# By default, `set_index()` returns a new DataFrame.
# so you’ll have to specify if you’d like the changes to occur in place.
# Here we used filtered_df_new to get the new dataframe and now see the type of filtererd_df_new
print(type(filtered_df_new.index))
```
Notice here that we now have a new sort of 'index' which is `MultiIndex`, which contains information about indexing of DataFrame and allows manipulating with this data.
```
filtered_df_new.index.names
# Gives you the names of the two index values we set as a FrozenList
```
The `get_level_values()` method returns all values for the corresponding index level:
`get_level_values(0)` corresponds to 'user_id' and `get_level_values(1)` corresponds to 'movie_id'.
```
print(filtered_df_new.index.get_level_values(0))
print(filtered_df_new.index.get_level_values(1))
```
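To make MultiIndex selection concrete, here is a small self-contained sketch (the column names mirror the movies example, but the data is made up):

```python
import pandas as pd

df = pd.DataFrame({"user_id": [1, 1, 2],
                   "movie_id": [10, 20, 10],
                   "rating": [4, 5, 3]})
mi = df.set_index(["user_id", "movie_id"])

# select all rows for user 1 (first index level)
print(mi.loc[1])

# select one exact (user_id, movie_id) pair with a tuple
print(mi.loc[(2, 10), "rating"])
```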
### Selection by label and position
Object selection in pandas is supported by two types of multi-axis indexing:
* `.loc` works on labels in the index;
* `.iloc` works on positions in the index (so it only takes integers).
The following sequence of examples demonstrates how we can manipulate a DataFrame's rows.
First, let's get the first row of movies:
```
movies.loc[0]
movies.loc[1:3]
```
If you want to return specific columns, you have to specify them as a separate argument of `.loc`:
```
movies.loc[1:3 , 'movie_title']
movies.loc[1:5 , ['movie_title','age','gender']]
# If more than one column is to be selected then you have to give the second argument of .loc as a list
# movies.iloc[1:5 , ['movie_title','age','gender']]
# Gives error as iloc only uses integer values
movies.iloc[0]
movies.iloc[1:5]
# movies.select(lambda x: x%2==0).head() is the same as :
movies.loc[movies.index.map(lambda x: x%2==0)].head()
# .select() has been deprecated for now and will be completely removed in future updates so use .loc
```
## Working with Missing Data
Pandas primarily uses the value `np.nan` to represent missing data (missing/empty values in a table are marked as NaN). By default it is not included in computations. Missing data creates many issues in mathematical and computational tasks with DataFrames and Series, so it is important to know how to handle these values.
```
ages = movies['age']
sum(ages)
```
This is because there are many cases where age isn't given, so the value is `np.nan`.
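Note that the failure comes from Python's built-in `sum()`, which propagates NaN; the pandas `Series.sum()` method skips missing values by default (`skipna=True`). A small sketch:

```python
import numpy as np
import pandas as pd

ages = pd.Series([25, np.nan, 30])

print(sum(ages))   # nan: the builtin propagates the missing value
print(ages.sum())  # 55.0: pandas skips NaN by default
```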
We can use `fillna()`, a very efficient pandas method for filling missing values:
```
ages = movies['age'].fillna(0)
sum(ages)
```
This fills all the values with 0 and calculates the sum.
To keep only the rows with non-null values you can use the `dropna()` method:
```
ages = movies['age'].dropna()
sum(ages)
movies_nonnull = movies.dropna()
movies_nonnull.head(20)
# the 14th row was dropped because it had a missing value in one of its columns
movies_notnull = movies.dropna(how='all', subset=['age','occupation'])
# Drops the rows where both age and occupation are NaN
movies_notnull.info()
# Notice how age and occupation now have nearly 6,000 fewer values
```
Thus, with `how='all'` a row is dropped only when all values in the subset columns are NaN; with `how='any'` a row is dropped when at least one of them is NaN.
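A small self-contained sketch of the difference (the column names mirror the movies example, but the data is made up):

```python
import numpy as np
import pandas as pd

df_na = pd.DataFrame({"age": [25, np.nan, np.nan],
                      "occupation": ["eng", np.nan, "doc"]})

# how='any': drop a row if ANY subset column is NaN -> only row 0 survives
any_kept = df_na.dropna(how="any", subset=["age", "occupation"])

# how='all': drop a row only if ALL subset columns are NaN -> rows 0 and 2 survive
all_kept = df_na.dropna(how="all", subset=["age", "occupation"])
print(len(any_kept), len(all_kept))  # 1 2
```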
```
movies.describe()
```
At first, let's find all unique dates in the `release_date` column of `movies` and then select only the dates with a year of 1995 or earlier.
```
movies['release_date'] = movies['release_date'].map(pd.to_datetime)
# We map it to_datetime as pandas has a set way to deal with dates and then we can effectively work with dates.
unique_dates = movies['release_date'].drop_duplicates().dropna()
# Drops duplicates and nan values
unique_dates
# find dates with year lower/equal than 1995
unique_dates_1 = filter(lambda x: x.year <= 1995, unique_dates)
# filter() takes two arguments: a function that returns boolean values and the iterable it iterates over.
# It runs the lambda (here returning bools) over unique_dates and keeps the True cases.
unique_dates_1
```
Here we used the `drop_duplicates()` method to select only the unique Series values. Then we can filter `movies` with respect to the `release_date` condition. Each Python `datetime` object has attributes `year`, `month`, `day`, etc., allowing you to extract those values from the date. We call the new DataFrame `old_movies`.
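As a side note, the same year filter can be written without `filter()` by using the vectorized `.dt.year` accessor (a minimal sketch on made-up dates, independent of the movies data):

```python
import pandas as pd

dates = pd.Series(pd.to_datetime(["1994-06-17", "1998-07-12", "2002-06-30"]))

# boolean mask built from the year attribute of every date at once
old = dates[dates.dt.year <= 1995]
print(old)  # keeps only the 1994 date
```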
```
old_movies = movies[movies['release_date'].isin(unique_dates_1)]
old_movies.head()
```
Now we may filter the DataFrame `old_movies` by `age` and `rating`. Let's drop `timestamp` and `zip_code`:
```
# get all users with age less than 25 that rated old movies higher than 3
old_movies_watch = old_movies[(old_movies['age']<25) & (old_movies['rating']>3)]
# Drop timestamp and zip_code
old_movies_watch = old_movies_watch.drop(['timestamp', 'zip_code'],axis=1)
old_movies_watch.head()
```
`Pandas` supports accelerating certain binary numerical and boolean operations with the `numexpr` library (which uses smart chunking, caching, and multiple cores) and the `bottleneck` library (a set of specialized Cython routines that are especially fast when dealing with arrays that have NaNs). This can speed pandas up considerably for some boolean and arithmetic operations. To measure the time an operation takes, we will use a decorator:
```
# this decorator measures the time a function takes
def timer(func):
    from datetime import datetime
    def wrapper(*args):
        start = datetime.now()
        func(*args)
        end = datetime.now()
        return 'elapsed time = {' + str(end - start) + '}'
    return wrapper

import random
n = 100
# generate random datasets
# we pass a dictionary to the DataFrame() constructor:
# each key is a column name 'col :i' and its value is a list of n random ints
df_1 = pd.DataFrame({'col :' + str(i): [random.randint(-100, 100) for j in range(n)] for i in range(n)})
df_2 = pd.DataFrame({'col :' + str(i): [random.randint(-100, 100) for j in range(n)] for i in range(n)})

@timer
def direct_comparison(df_1, df_2):
    bool_df = pd.DataFrame({'col_{}'.format(i): [True for j in range(n)] for i in range(n)})
    for i in range(len(df_1.index)):
        for j in range(len(df_1.loc[i])):
            if df_1.loc[i, df_1.columns[j]] >= df_2.loc[i, df_2.columns[j]]:
                bool_df.loc[i, bool_df.columns[j]] = False
    return bool_df

@timer
def pandas_comparison(df_1, df_2):
    return df_1 < df_2

print('direct_comparison:', direct_comparison(df_1, df_2))
print('pandas_comparison:', pandas_comparison(df_1, df_2))
```
As you can see, the difference in speed is quite noticeable.
Besides, pandas has the methods `eq` (equal), `ne` (not equal), `lt` (less than), `gt` (greater than), `le` (less or equal) and `ge` (greater or equal) to simplify boolean comparisons.
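A minimal sketch of these comparison methods:

```python
import pandas as pd

a = pd.Series([1, 2, 3])
b = pd.Series([3, 2, 1])

print(a.lt(b))  # element-wise a < b  -> [True, False, False]
print(a.eq(b))  # element-wise a == b -> [False, True, False]
print(a.ge(b))  # element-wise a >= b -> [False, True, True]
```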
## Matrix Addition
```
df = pd.DataFrame({'A':[1,2,3],'B':[-2,-3,-4],"C":[7,8,9]})
dfa = pd.DataFrame({'A':[1,2,3],'D':[6,7,8],"C":[12,12,12]})
dfc = df + dfa
dfc
df.le(dfa)
```
You can also apply the reductions `empty`, `any()`, `all()`, and `bool()` to summarize a boolean result:
```
(df < 0).all()
# per column: checks whether all of the column's items satisfy the condition
(df < 0).all(axis=1)
# per row: checks whether all of the row's items satisfy the condition
(df < 0).any()
# per column: checks whether at least one item satisfies the condition
(df < 0).any().any()
# checks whether any element of the whole DataFrame satisfies the condition
df.empty
```
### Descriptive Statistics
|Function|Description|
|--|-------------------------------|
|abs|absolute value|
|count|number of non-null observations|
|cumsum|cumulative sum (a sequence of partial sums of a given sequence)|
|sum|sum of values|
|mean|mean of values|
|mad|mean absolute deviation|
|median|arithmetic median of values|
|min|minimum value|
|max|maximum value|
|mode|mode|
|prod|product of values|
|std|unbiased standard deviation|
|var|unbiased variance|
```
print("Sum : ", movies['age'].sum())
print(df)
print("Mean : ")
print(df.mean())
print("\nMean of all Mean Values: ")
print(df.mean().mean())
print("\nMedian: ")
print(df.median())
print("\nStandard Deviation: ")
print(df.std())
print("\nVariance: ")
print(df.var())
print("\nMax: ")
print(df.max())
```
## Function Applications
When you need to make some transformation of a column's or row's elements, the `map` method is helpful (it works like the pure Python function `map()`). There is also the possibility to apply a function to each DataFrame element (not to a column or a row); the `applymap()` method helps in that case.
```
movies.loc[:, (movies.dtypes == np.int64) | (movies.dtypes == np.float64)].apply(np.mean)
# This calculates the mean of all the columns present in movies
# to print mean of all row values in movies :
movies.loc[:,(movies.dtypes==np.int64) | (movies.dtypes==np.float64)].apply(np.mean, axis = 1)
```
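A short, self-contained sketch of the difference (`applymap` was renamed to `DataFrame.map` in pandas 2.1, so newer versions may emit a deprecation warning):

```python
import pandas as pd

df_small = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# Series.map transforms one column
doubled_col = df_small["a"].map(lambda x: x * 2)

# applymap applies the function to every element of the DataFrame
doubled_df = df_small.applymap(lambda x: x * 2)
print(doubled_df)
```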
### Remember
The `axis` argument defines the direction: `axis=0` works down each column (vertical) and `axis=1` works across each row (horizontal).
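A quick sketch of the two directions on a toy DataFrame:

```python
import numpy as np
import pandas as pd

df_ax = pd.DataFrame({"x": [1, 2], "y": [10, 20]})

col_means = df_ax.apply(np.mean, axis=0)  # one value per column
row_means = df_ax.apply(np.mean, axis=1)  # one value per row
print(col_means.tolist(), row_means.tolist())  # [1.5, 15.0] [5.5, 11.0]
```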
### Groupby with Dictionary
```
import numpy as np
import pandas as pd
d = {'id':[1,2,3],
'Column 1.1':[14,15,16],
'Column 1.2':[10,10,10],
'Column 1.3':[1,4,5],
'Column 2.1':[1,2,3],
'Column 2.2':[10,10,10],
}
df = pd.DataFrame(d)
df
groupby_dict = {'Column 1.1': 'Column 1', 'Column 1.2': 'Column 1', 'Column 1.3': 'Column 1',
                'Column 2.1': 'Column 2', 'Column 2.2': 'Column 2'}
df = df.set_index('id')
df=df.groupby(groupby_dict,axis=1).min()
df
import numpy as np
import pandas as pd
d = {
    "ID": [1, 2, 3],
    "Movies": ["The Godfather", "Fight Club", "Casablanca"],
    "Week_1_Viewers": [30, 30, 40],
    "Week_2_Viewers": [60, 40, 80],
    "Week_3_Viewers": [40, 20, 20]
}
df = pd.DataFrame(d)
df
df
mapping = {"Week_1_Viewers":"Total_Viewers",
"Week_2_Viewers":"Total_Viewers",
"Week_3_Viewers":"Total_Viewers",
"Movies":"Movies"
}
df = df.set_index('ID')
df=df.groupby(mapping,axis=1).sum()
df
```
### Breaking up a String into columns using regex
```
d = {'movie_data': ['The Godfather 1972 9.2',
                    'Bird Box 2018 6.8',
                    'Fight Club 1999 8.8']
     }
df = pd.DataFrame(d)
df
df['Name'] = df['movie_data'].str.extract(r'(\w*\s\w*)', expand=True)
df['Year'] = df['movie_data'].str.extract(r'(\d\d\d\d)', expand=True)
df['Rating'] = df['movie_data'].str.extract(r'(\d\.\d)', expand=True)
df
import re
movie_data = ["Name:The Godfather Year: 1972 Rating: 9.2",
              "Name:Bird Box Year: 2018 Rating: 6.8",
              "Name:Fight Club Year: 1999 Rating: 8.8"]
movies = {"Name": [],
          "Year": [],
          "Rating": []}
for item in movie_data:
    name_field = re.search(r"Name:.*", item)
    if name_field is not None:
        name = re.search(r'\w*\s\w*', name_field.group())
    else:
        name = None
    movies["Name"].append(name.group())
    year_field = re.search(r"Year: .*", item)
    if year_field is not None:
        year = re.search(r'\s\d\d\d\d', year_field.group())
    else:
        year = None
    movies["Year"].append(year.group().strip())
    rating_field = re.search(r"Rating: .*", item)
    if rating_field is not None:
        rating = re.search(r'\s\d\.\d', rating_field.group())
    else:
        rating = None
    movies["Rating"].append(rating.group().strip())
movies
df = pd.DataFrame(movies)
df
```
### Ranking Rows in Pandas
```
import pandas as pd
movies = {'Name': ['The Godfather', 'Bird Box', 'Fight Club'],
'Year': ['1972', '2018', '1999'],
'Rating': ['9.2', '6.8', '8.8']}
df = pd.DataFrame(movies)
df
df['Rating_Rank'] = df['Rating'].rank(ascending=1)
df
df =df.set_index('Rating_Rank')
df
df.sort_index()
# Example 2
import pandas as pd
student_details = {'Name':['Raj','Raj','Raj','Aravind','Aravind','Aravind','John','John','John','Arjun','Arjun','Arjun'],
'Subject':['Maths','Physics','Chemistry','Maths','Physics','Chemistry','Maths','Physics','Chemistry','Maths','Physics','Chemistry'],
'Marks':[80,90,75,60,40,60,80,55,100,90,75,70]
}
df = pd.DataFrame(student_details)
df
df['Mark_Rank'] = df['Marks'].rank(ascending=0)
df = df.set_index('Mark_Rank')
df
df = df.sort_index()
df
```
## Convolutional Neural Network Using SVM as Final Layer
```
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession
config = ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
from zipfile import ZipFile
filename = "Datasets.zip"
with ZipFile(filename, 'r') as zip:
    zip.extractall()
print('Done')
# Convolutional Neural Network
# Importing the libraries
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
tf.__version__
# Part 1 - Data Preprocessing
# Preprocessing the Training set
train_datagen = ImageDataGenerator(rescale = 1./255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
training_set = train_datagen.flow_from_directory('/content/Datasets/Train',
target_size = (64, 64),
batch_size = 32,
class_mode = 'binary')
# Preprocessing the Test set
test_datagen = ImageDataGenerator(rescale = 1./255)
test_set = test_datagen.flow_from_directory('/content/Datasets/Test',
target_size = (64, 64),
batch_size = 32,
class_mode = 'binary')
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import l2
# Part 2 - Building the CNN
# Initialising the CNN
cnn = tf.keras.models.Sequential()
# Step 1 - Convolution
cnn.add(tf.keras.layers.Conv2D(filters=32,padding="same",kernel_size=3, activation='relu', strides=2, input_shape=[64, 64, 3]))
# Step 2 - Pooling
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2))
# Adding a second convolutional layer
cnn.add(tf.keras.layers.Conv2D(filters=32,padding='same',kernel_size=3, activation='relu'))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2))
# Step 3 - Flattening
cnn.add(tf.keras.layers.Flatten())
# Step 4 - Full Connection
cnn.add(tf.keras.layers.Dense(units=128, activation='relu'))
# Step 5 - Output Layer
cnn.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
# For binary classification (SVM-style linear output):
#cnn.add(Dense(1, kernel_regularizer=tf.keras.regularizers.l2(0.01), activation='linear'))
# For multiclass classification:
cnn.add(Dense(3, kernel_regularizer=tf.keras.regularizers.l2(0.01), activation='softmax'))
cnn.compile(optimizer='adam', loss='squared_hinge', metrics=['accuracy'])
cnn.summary()
# Part 3 - Training the CNN
# Compiling the CNN
cnn.compile(optimizer = 'adam', loss = 'hinge', metrics = ['accuracy'])
# Training the CNN on the Training set and evaluating it on the Test set
r=cnn.fit(x = training_set, validation_data = test_set, epochs = 15)
# plot the loss
import matplotlib.pyplot as plt
plt.plot(r.history['loss'], label='train loss')
plt.plot(r.history['val_loss'], label='val loss')
plt.legend()
plt.show()
plt.savefig('LossVal_loss')
# plot the accuracy
plt.plot(r.history['accuracy'], label='train acc')
plt.plot(r.history['val_accuracy'], label='val acc')
plt.legend()
plt.show()
plt.savefig('AccVal_acc')
# save it as a h5 file
from tensorflow.keras.models import load_model
cnn.save('model_rcat_dog.h5')
from tensorflow.keras.models import load_model
# load model
model = load_model('model_rcat_dog.h5')
model.summary()
# Part 4 - Making a single prediction
import numpy as np
from tensorflow.keras.preprocessing import image
test_image = image.load_img('/content/Datasets/Test/mercedes/32.jpg', target_size = (64,64))
test_image = image.img_to_array(test_image)
test_image=test_image/255
test_image = np.expand_dims(test_image, axis = 0)
result = cnn.predict(test_image)
result
# Part 4 - Making a single prediction
import numpy as np
from tensorflow.keras.preprocessing import image
test_image = image.load_img('/content/Datasets/Test/lamborghini/10.jpg', target_size = (64,64))
test_image = image.img_to_array(test_image)
test_image=test_image/255
test_image = np.expand_dims(test_image, axis = 0)
result = cnn.predict(test_image)
result
a = np.argmax(result, axis=1)
print(a)
if a == 0:
    print("The predicted class is Audi")
elif a == 1:
    print("The predicted class is Lamborghini")
else:
    print("The predicted class is Mercedes")
```
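The "SVM as final layer" idea above comes down to training the last Dense layer with a hinge-style loss. As a rough illustration (not part of the notebook's pipeline), this NumPy sketch computes the `squared_hinge` loss that Keras uses, assuming labels in {-1, +1}; the variable names are made up:

```python
import numpy as np

def squared_hinge(y_true, y_pred):
    # mean over samples of max(0, 1 - y_true * y_pred)^2
    return np.mean(np.maximum(0.0, 1.0 - y_true * y_pred) ** 2)

y_true = np.array([1.0, -1.0, 1.0])
y_pred = np.array([0.8, -0.9, -0.2])  # raw margins from a linear output layer
print(squared_hinge(y_true, y_pred))  # the wrongly classified third sample dominates the loss
```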
# [Module 2.2] SageMaker Inference
All notebooks in this workshop install the additional packages and run on the `conda_python3` kernel.
- 1. Prepare for deployment
- 2. Create a local endpoint
- 3. Local inference
---
Load the artifact that passed the inference test in the previous notebook.
```
%store -r artifact_path
```
# 1. Prepare for Deployment
```
print("artifact_path: ", artifact_path)
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = "sagemaker/DEMO-pytorch-cnn-cifar10"
role = sagemaker.get_execution_role()
```
## Load the Test Dataset
- Load the locally saved data and transform it.
- Define a data loader that loads `batch_size` samples at a time.
```
import numpy as np
import torchvision, torch
import torchvision.transforms as transforms
from source.utils_cifar import imshow, classes
transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
)
testset = torchvision.datasets.CIFAR10(
root='../data', train=False, download=False, transform=transform
)
test_loader = torch.utils.data.DataLoader(
testset, batch_size=4, shuffle=False, num_workers=2
)
# get some random test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(" ".join("%9s" % classes[labels[j]] for j in range(4)))
```
# 2. Create an Endpoint
- This step creates a SageMaker endpoint.
```
import os
import time
import sagemaker
from sagemaker.pytorch.model import PyTorchModel
role = sagemaker.get_execution_role()
%%time
endpoint_name = "sagemaker-endpoint-cifar10-classifier-{}".format(int(time.time()))
sm_pytorch_model = PyTorchModel(model_data=artifact_path,
role=role,
entry_point='inference.py',
source_dir = 'source',
framework_version='1.8.1',
py_version='py3',
model_server_workers=1,
)
sm_predictor = sm_pytorch_model.deploy(instance_type='ml.p2.xlarge',
initial_instance_count=1,
endpoint_name=endpoint_name,
wait=True,
)
```
# 3. Local Inference
- Run inference on the endpoint with the prepared input data
# Endpoint Inference
```
# print images
imshow(torchvision.utils.make_grid(images))
print("GroundTruth: ", " ".join("%4s" % classes[labels[j]] for j in range(4)))
outputs = sm_predictor.predict(images.numpy())
_, predicted = torch.max(torch.from_numpy(np.array(outputs)), 1)
print("Predicted: ", " ".join("%4s" % classes[predicted[j]] for j in range(4)))
```
# Clean-up
Delete the endpoint created above.
```
sm_predictor.delete_endpoint()
```
# IPython.display
YouTube tutorial playlist: https://www.youtube.com/watch?v=YPgImo9kcbg&list=PLoTScYm9O0GFVfRk_MmZt0vQXNIi36LUz&index=12
```
from IPython.display import IFrame, YouTubeVideo, SVG, HTML
```
## Display Web page
```
IFrame("https://matplotlib.org/examples/color/named_colors.html", width=800, height=300)
import pandas as pd
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
df=pd.read_csv('https://github.com/prasertcbs/tutorial/raw/master/msleep.csv')
df.sample(5)
df.vore.value_counts().plot.barh(color='blueviolet');
```
## Display PDF
https://www.datacamp.com/community/data-science-cheatsheets
```
IFrame('http://datacamp-community.s3.amazonaws.com/f9f06e72-519a-4722-9912-b5de742dbac4',
width=800, height=400)
```
## Embed YouTube
```
# https://www.youtube.com/watch?v=_NHyJBIxc40
YouTubeVideo('YPgImo9kcbg', 640, 360)
```
[Mario source](https://en.wikipedia.org/wiki/Mario):


```
IFrame("https://upload.wikimedia.org/wikipedia/en/9/99/MarioSMBW.png", width=300, height=300)
IFrame('https://upload.wikimedia.org/wikipedia/commons/7/70/Amazon_logo_plain.svg',
width=600, height=200)
IFrame('https://upload.wikimedia.org/wikipedia/commons/f/ff/Vectorized_Apple_gray_logo.svg',
width=100, height=100)
IFrame('https://upload.wikimedia.org/wikipedia/commons/f/ff/Vectorized_Apple_gray_logo.svg',
width=400, height=400)
```
## Display HTML
```
s='''
<span style=color:red;font-size:150%>Hello</span> <span style=color:blue;font-size:200%>Python</span>
'''
HTML(s)
```
<img src=https://upload.wikimedia.org/wikipedia/en/9/99/MarioSMBW.png></img>
```
s='''
<img src=https://upload.wikimedia.org/wikipedia/en/9/99/MarioSMBW.png>
<img src="https://assets.pokemon.com/assets/cms2/img/pokedex/full/729.png" width=200 height=200>
'''
HTML(s)
s='''
<table class="table table-hover table-set col-3-center table-set-border-yellow">
<thead>
<tr>
<th></th>
<th>
<strong>ล่าสุด</strong>
</th>
<th>
<strong>เปลี่ยนแปลง</strong>
</th>
<th>
<strong>มูลค่า
<br>(ลบ.)</strong>
</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<a class="blacklink-u" href="https://marketdata.set.or.th/mkt/marketsummary.do?language=th&country=TH" target="_blank">SET</a>
</td>
<td style="text-align: right;" class="set-color-red">
<i class="fa fa-caret-down"></i>
1,705.33
</td>
<td style="text-align: right;" class="set-color-red">-1.19</td>
<td style="text-align: right;">61,883.85</td>
</tr>
<tr>
<td>
<a class="blacklink-u" href="https://marketdata.set.or.th/mkt/sectorquotation.do?sector=SET50&language=th&country=TH"
target="_blank">SET50</a>
</td>
<td style="text-align: right;" class="set-color-green">
<i class="fa fa-caret-up"></i>
1,091.68
</td>
<td style="text-align: right;" class="set-color-green">+0.62</td>
<td style="text-align: right;">40,293.01</td>
</tr>
<tr>
<td>
<a class="blacklink-u" href="https://marketdata.set.or.th/mkt/sectorquotation.do?sector=SET100&language=th&country=TH"
target="_blank">SET100</a>
</td>
<td style="text-align: right;" class="set-color-red">
<i class="fa fa-caret-down"></i>
2,455.09
</td>
<td style="text-align: right;" class="set-color-red">-0.58</td>
<td style="text-align: right;">48,948.34</td>
</tr>
<tr>
<td>
<a class="blacklink-u" href="https://marketdata.set.or.th/mkt/sectorquotation.do?sector=sSET&language=th&country=TH"
target="_blank">sSET</a>
</td>
<td style="text-align: right;" class="set-color-red">
<i class="fa fa-caret-down"></i>
1,095.13
</td>
<td style="text-align: right;" class="set-color-red">-11.36</td>
<td style="text-align: right;">3,471.48</td>
</tr>
<tr>
<td>
<a class="blacklink-u" href="https://marketdata.set.or.th/mkt/sectorquotation.do?sector=SETHD&language=th&country=TH"
target="_blank">SETHD</a>
</td>
<td style="text-align: right;" class="set-color-green">
<i class="fa fa-caret-up"></i>
1,262.58
</td>
<td style="text-align: right;" class="set-color-green">+9.26</td>
<td style="text-align: right;">18,259.40</td>
</tr>
</tbody>
</table>'''
HTML(s)
```
## get text file from url and display its content
```
import requests
r = requests.get('https://github.com/prasertcbs/tutorial/raw/master/mtcars.csv')
print(r.text)
```
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from plotnine import *
```
Reading and viewing the data:
```
# load the data into the dataframe
df = pd.read_csv('movie_metadata.csv')
df.head()
df.shape
df.dtypes
list(df.columns)
```
Exploratory Analysis
```
df['color'].value_counts()
df.drop('color', axis=1, inplace=True)
# check whether there are missing values in the data
df.isna().any()
df.isna().sum()
df.dropna(axis=0, subset=['director_name', 'num_critic_for_reviews',
'duration','director_facebook_likes','actor_3_facebook_likes',
'actor_2_name','actor_1_facebook_likes','actor_1_name','actor_3_name',
'facenumber_in_poster','num_user_for_reviews','language','country',
'actor_2_facebook_likes','plot_keywords', 'title_year'],inplace=True)
df.shape
#classificação indicativa do filme, ex. R = livre
df['content_rating'].value_counts()
#fill the missing content ratings of the other movies with 'R'
df['content_rating'].fillna('R', inplace = True)
#aspect-ratio values
df['aspect_ratio'].value_counts()
#replace the missing aspect-ratio values with the median
df['aspect_ratio'].fillna(df['aspect_ratio'].median(), inplace=True)
#replace the missing movie budgets with the median
df['budget'].fillna(df['budget'].median(), inplace=True)
#replace the missing movie grosses with the median
df['gross'].fillna(df['gross'].median(), inplace=True)
df.isna().sum()
#be careful and check for duplicated rows, since duplicates can bias the model
df.duplicated().sum()
#drop the duplicates
df.drop_duplicates(inplace=True)
df.shape
#check the values of the 'language' column
df['language'].value_counts()
df.drop('language', axis=1, inplace=True)
df['country'].value_counts()
df.drop('country', axis=1, inplace=True)
df.shape
#create a new column in the table (profit = gross revenue minus budget)
df['Profit'] = df['gross'].sub(df['budget'], axis=0)
df.head()
df['Profit_Percentage'] = (df['Profit']/df['gross'])*100
df.head()
#save everything done so far
df.to_csv('dados_imdb_dandaraleit.csv', index=False)
```
Data visualization
```
#plot correlating profit and IMDB score
ggplot(aes(x='imdb_score', y='Profit'), data=df) +\
geom_line() +\
stat_smooth(colour='blue', span=1)
#plot correlating the movie's Facebook likes and IMDB score
(ggplot(df)+\
aes(x='imdb_score', y='movie_facebook_likes') +\
geom_line() +\
labs(title='IMDB score vs movie Facebook likes', x='IMDB score', y='Facebook likes')
)
#plot of the 20 top-rated movies against their lead actors
plt.figure(figsize=(10,8))
df= df.sort_values(by ='imdb_score' , ascending=False)
df2=df.head(20)
ax=sns.pointplot(x=df2['actor_1_name'], y=df2['imdb_score'], hue=df2['movie_title'])
ax.set_xticklabels(ax.get_xticklabels(), rotation=40, ha="right")
plt.tight_layout()
plt.show()
```
Data preparation
```
#drop some columns with categorical data
df.drop(columns=['director_name', 'actor_1_name', 'actor_2_name',
'actor_3_name', 'plot_keywords', 'movie_title'], axis=1, inplace=True)
#check the values of the 'genres' column
df['genres'].value_counts()
df.drop('genres', axis=1, inplace=True)
#drop the columns created earlier
df.drop(columns=['Profit', 'Profit_Percentage'], axis=1, inplace=True)
#check for strongly correlated columns // corr method, using a heatmap
import numpy as np
corr = df.corr()
sns.set_context("notebook", font_scale=1.0, rc={"lines.linewidth": 2.5})
plt.figure(figsize=(13,7))
mask = np.zeros_like(corr)
mask[np.triu_indices_from(mask, 1)] = True
a = sns.heatmap(corr,mask=mask, annot=True, fmt='.2f')
rotx = a.set_xticklabels(a.get_xticklabels(), rotation=90)
roty = a.set_yticklabels(a.get_yticklabels(), rotation=30)
#criando uma nova coluna combinando as duas colunas muito correlacionadas
df['Other_actors_facebook_likes'] = df['actor_2_facebook_likes'] + df['actor_3_facebook_likes']
#drop the original columns
df.drop(columns=['actor_2_facebook_likes', 'actor_3_facebook_likes',
'cast_total_facebook_likes'], axis=1, inplace=True)
#create a new column combining two highly correlated columns // ratio of critic reviews to user reviews
df['critic_review_ratio'] = df['num_critic_for_reviews']/df['num_user_for_reviews']
df.drop(columns=['num_critic_for_reviews', 'num_user_for_reviews'], axis=1, inplace=True)
#check whether strongly correlated columns remain
corr = df.corr()
sns.set_context("notebook", font_scale=1.0, rc={"lines.linewidth": 2.5})
plt.figure(figsize=(13,7))
mask = np.zeros_like(corr)
mask[np.triu_indices_from(mask, 1)] = True
a = sns.heatmap(corr,mask=mask, annot=True, fmt='.2f')
rotx = a.set_xticklabels(a.get_xticklabels(), rotation=90)
roty = a.set_yticklabels(a.get_yticklabels(), rotation=30)
#bin the IMDB score values into categories
df['imdb_binned_score']=pd.cut(df['imdb_score'], bins=[0,4,6,8,10], right=True, labels=False)+1
df.head()
#create new columns to turn the categorical 'content rating' values
#into numeric ones
df = pd.get_dummies(data = df, columns=['content_rating'], prefix=['content_rating'], drop_first=True)
df.head()
df.to_csv('dados_imdb_com_nota.csv', index=False)
```
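The `pd.cut` binning in the cell above maps IMDB scores into four ordinal categories: bins (0,4], (4,6], (6,8] and (8,10], labeled 1 through 4. A minimal pure-Python sketch of the same rule, for a single score:

```python
def bin_imdb_score(score):
    """Mirror pd.cut(df['imdb_score'], bins=[0, 4, 6, 8, 10], right=True,
    labels=False) + 1: each bin includes its right edge."""
    for label, upper in enumerate([4, 6, 8, 10], start=1):
        if score <= upper:
            return label
    raise ValueError("score out of range (0, 10]")

print(bin_imdb_score(7.5))  # falls in (6, 8] -> bin 3
```

The `labels=False` plus `+1` in the original produces these same 1-4 labels.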
Feed the chosen data to the model for training and inspect the results it predicts. Supervised learning.
```
#choose the dataframe columns that will be the model's input features
X=pd.DataFrame(columns=['duration','director_facebook_likes','actor_1_facebook_likes','gross',
'num_voted_users','facenumber_in_poster','budget','title_year','aspect_ratio',
'movie_facebook_likes','Other_actors_facebook_likes','critic_review_ratio',
'content_rating_G','content_rating_GP',
'content_rating_M','content_rating_NC-17','content_rating_Not Rated',
'content_rating_PG','content_rating_PG-13','content_rating_Passed',
'content_rating_R','content_rating_TV-14','content_rating_TV-G',
'content_rating_TV-PG','content_rating_Unrated','content_rating_X'],data=df)
#choose the dataframe column(s) that will be the model's target
y = pd.DataFrame(columns=['imdb_binned_score'], data=df)
#import the helper that splits the data into training and test sets
from sklearn.model_selection import train_test_split
#split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
#standardize the data
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
X.isna().sum()
```
We use a logistic regression model: it tries to find a mathematical function that fits the distribution of the data. It is the simplest such model, available in scikit-learn.
```
#import, configure, and train the regression model
from sklearn.linear_model import LogisticRegression
logit =LogisticRegression(verbose=1, max_iter=1000)
logit.fit(X_train,np.ravel(y_train,order='C'))
y_pred=logit.predict(X_test)
#inspect the predicted values
y_pred
#import the metrics package and compute the confusion matrix
from sklearn import metrics
cnf_matrix = metrics.confusion_matrix(y_test, y_pred)
#helper code for a nicer view of the confusion matrix
#alternativa:
# print(cnf_matrix)
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
#plot the confusion matrix
plot_confusion_matrix(cnf_matrix, classes=['1','2', '3', '4'],
title='Confusion matrix, without normalization', normalize=False)
```
Machine learning models do not cope well with imbalanced data: when one category has more samples than the others, it is the one predicted best. For example, category 3 (good movies) was the one the model got closest to. Is it the category with the most data? Let's check below.
P.S.: How could we balance the classes?
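One common answer to the balancing question above is random oversampling of the minority classes. A minimal stdlib sketch (the rows and labels are illustrative, not taken from the dataset):

```python
import random
from collections import Counter

def oversample(rows, labels, seed=42):
    """Randomly duplicate minority-class rows until every class
    matches the size of the largest class."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_rows, out_labels = list(rows), list(labels)
    for cls, n in counts.items():
        idx = [i for i, y in enumerate(labels) if y == cls]
        for _ in range(target - n):
            i = rng.choice(idx)
            out_rows.append(rows[i])
            out_labels.append(cls)
    return out_rows, out_labels

# illustrative toy data: class 3 dominates, classes 2 and 1 are rare
rows = [[0.1], [0.2], [0.3], [0.4], [0.5]]
labels = [3, 3, 3, 2, 1]
_, balanced = oversample(rows, labels)
print(Counter(balanced))  # every class now appears 3 times
```

With scikit-learn, passing `class_weight='balanced'` to `LogisticRegression` is a lighter-weight alternative to resampling.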
```
#count how many samples of each category exist in 'imdb_binned_score'
df['imdb_binned_score'].value_counts()
#final metrics; another way to look at the number of samples in each category/class
print(metrics.classification_report(y_test, y_pred, target_names=['1','2', '3', '4']))
#import the package used to save the model
import pickle
#store the path where the model will be saved in a variable, to keep things organized, e.g. modelo_treinado
modelo_treinado = 'modelo_imdb.sav'
#save the model
pickle.dump(logit, open(modelo_treinado, 'wb'))
#load the trained model
modelo_carregado = pickle.load(open(modelo_treinado, 'rb'))
#inspect the contents of one test vector
X_test[0]
#predict the new data point with the loaded model
modelo_carregado.predict([X_test[0]])
```
The result shows that the movie with the feature values tested above falls into film category 3, which translates to a good movie.
| github_jupyter |
# MNIST digit recognition Neural Network
---
# 1. Imports
---
```
import pandas as pd
import matplotlib.pyplot as plt
from keras.datasets import mnist
from keras.models import Sequential
from keras.utils import np_utils
from keras.layers import Dense
```
# 2. Understanding the data
---
## 2.1. Load the dataset and split into train and test set
```
(X_train, y_train), (X_test, y_test) = mnist.load_data()
```
## 2.2. Data visualization
```
X_train.shape
```
- 60,000 training images
- Each image is 28 x 28 pixels
```
y_train.shape
```
- 60,000 labels
- Each label is a single integer from 0-9; they are one-hot encoded into vectors of size 10 later
- For example, after encoding, 1 is represented as [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
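The one-hot representation described above (produced later in this notebook with `np_utils.to_categorical`) can be sketched by hand:

```python
def one_hot(digit, num_classes=10):
    """Return a one-hot vector: 1.0 at index `digit`, 0.0 elsewhere."""
    vec = [0.0] * num_classes
    vec[digit] = 1.0
    return vec

print(one_hot(1))  # [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```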
```
X_test.shape
```
- 10,000 test images
- Each image is 28 x 28 pixels
```
y_test.shape
```
- 10,000 labels similar to __y_train__
## 2.3. Images
```
plt.imshow(X_train[0], cmap=plt.get_cmap('gray'))
plt.imshow(X_train[1], cmap=plt.get_cmap('gray'))
plt.imshow(X_train[2], cmap=plt.get_cmap('gray'))
```
# 3. Data manipulation
---
## 3.1. Flatten 28 X 28 images into a 1 X 784 vector for each image
```
# X_train = X_train.reshape((X_train.shape[0], 28, 28, 1)).astype('float32')
# X_test = X_test.reshape((X_test.shape[0], 28, 28, 1)).astype('float32')
X_train = X_train.reshape((60000, 784))
X_train.shape
X_test = X_test.reshape((10000, 784))
X_test.shape
y_train.shape
y_test.shape
```
- y_train and y_test are of the required shape and don't need to be changed.
## 3.2. Normalize inputs from 0-255 in images to 0-1
```
X_train = X_train / 255
X_test = X_test / 255
```
## 3.3. One hot encode outputs
```
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
```
# 4. Build the model
---
## 4.1. Define model type (Neural Network)
```
model = Sequential()
```
## 4.2. Define architecture
```
model.add(Dense(784, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(10, activation='softmax'))
```
This is a dense neural network with the following architecture:
| Layer | Activation function | Neurons |
| --- | --- | --- |
| 1 | ReLU | 784 |
| 2 | ReLU | 10 |
| 3 | Softmax | 10 |
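Assuming the flattened 784-dimensional input used in this notebook, the parameter count implied by this table can be checked by hand: each dense layer has `n_in * n_out` weights plus `n_out` biases.

```python
def dense_params(n_in, n_out):
    """Weights (n_in * n_out) plus one bias per output neuron."""
    return n_in * n_out + n_out

# (input_dim, units) for the three Dense layers above
layers = [(784, 784), (784, 10), (10, 10)]
total = sum(dense_params(n_in, n_out) for n_in, n_out in layers)
print(total)  # 623400
```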
## 4.3 Compile model
```
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
```
## 4.4. Training model
```
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=30, batch_size=200, verbose=2)
```
## 4.5. Evaluate the model
```
scores = model.evaluate(X_test, y_test, verbose=0)
print("Test loss: ", scores[0])
print("Test Accuracy: ", (scores[1]))
print("Baseline Error: ", (100-scores[1]*100))
```
## 4.6. Save the model in a h5 file
```
model.save("model.h5")
```
# 5. Convert the model to a web friendly format
---
```
!tensorflowjs_converter --input_format keras './model.h5' '../UI/model'
```
- Uses tensorflowjs to convert the model to a format which can run on the browser
- Allows running the model on a single-page web app without a backend
| github_jupyter |
# Experiencing Neural Network Training and Debugging with PyNative
[](https://gitee.com/mindspore/docs/blob/master/docs/notebook/mindspore_debugging_in_pynative_mode.ipynb)
## Overview
During neural network training, users care a great deal about whether the data really flows through the network they designed. How can we inspect how the data passes through the network and how it changes along the way? This requires the AI framework to provide a feature that lets users break each step of the computational graph into individual operators, or split a deep network into multiple single layers, so they can debug and observe how the data changes after each operator or layer. MindSpore has provided such a mode since its initial design: `PYNATIVE_MODE`, alongside `GRAPH_MODE`. Their characteristics are as follows:
- PyNative mode: also called dynamic graph mode. The operators in the network are dispatched and executed one by one, which makes it convenient to write and debug neural network models.
- Graph mode: also called static graph mode. The network model is compiled into a single graph, which is then dispatched for execution. This mode uses graph optimization and related techniques to improve runtime performance, and helps with large-scale deployment and cross-platform execution.
By default, MindSpore runs in Graph mode; you can switch to PyNative mode with `context.set_context(mode=context.PYNATIVE_MODE)`. Likewise, from PyNative mode you can switch back to Graph mode with `context.set_context(mode=context.GRAPH_MODE)`.
<br/>In this walkthrough we run a single training pass on one handwritten-digit image. In PyNative mode we print how the data changes as it passes through each network layer, and compute the corresponding loss value and gradients `grads`. The overall flow is:
1. Prepare the environment and set PyNative mode.
2. Prepare the dataset and take a single image.
3. Build the network and set per-layer breakpoints to print the data.
4. Build the gradient computation function.
5. Run the network training and inspect the gradients of the network parameters.
> This document applies to GPU and Ascend environments.
## Environment Preparation
Use `context.set_context` to set the mode to `PYNATIVE_MODE`.
```
from mindspore import context
context.set_context(mode=context.PYNATIVE_MODE, device_target="GPU")
```
## Data Preparation
### Downloading the dataset
The sample code below downloads the dataset and extracts it to the specified location.
```
import os
import requests
requests.packages.urllib3.disable_warnings()
def download_dataset(dataset_url, path):
filename = dataset_url.split("/")[-1]
save_path = os.path.join(path, filename)
if os.path.exists(save_path):
return
if not os.path.exists(path):
os.makedirs(path)
res = requests.get(dataset_url, stream=True, verify=False)
with open(save_path, "wb") as f:
for chunk in res.iter_content(chunk_size=512):
if chunk:
f.write(chunk)
print("The {} file is downloaded and saved in the path {} after processing".format(os.path.basename(dataset_url), path))
train_path = "datasets/MNIST_Data/train"
test_path = "datasets/MNIST_Data/test"
download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-labels-idx1-ubyte", train_path)
download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-images-idx3-ubyte", train_path)
download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-labels-idx1-ubyte", test_path)
download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-images-idx3-ubyte", test_path)
```
The directory structure of the downloaded dataset files is as follows:
```text
./datasets/MNIST_Data
├── test
│ ├── t10k-images-idx3-ubyte
│ └── t10k-labels-idx1-ubyte
└── train
├── train-images-idx3-ubyte
└── train-labels-idx1-ubyte
```
### Dataset augmentation
The downloaded dataset needs to be processed with `mindspore.dataset` into data usable by the MindSpore framework, and then augmented with a series of tools provided by the framework to match the data-processing requirements of the LeNet network.
```
import mindspore.dataset.vision.c_transforms as CV
import mindspore.dataset.transforms.c_transforms as C
from mindspore.dataset.vision import Inter
from mindspore import dtype as mstype
import mindspore.dataset as ds
import numpy as np
def create_dataset(data_path, batch_size=32, repeat_size=1,
num_parallel_workers=1):
""" create dataset for train or test
Args:
data_path (str): Data path
batch_size (int): The number of data records in each group
repeat_size (int): The number of replicated data records
num_parallel_workers (int): The number of parallel workers
"""
# define dataset
mnist_ds = ds.MnistDataset(data_path)
# define some parameters needed for data enhancement and rough justification
resize_height, resize_width = 32, 32
rescale = 1.0 / 255.0
shift = 0.0
rescale_nml = 1 / 0.3081
shift_nml = -1 * 0.1307 / 0.3081
# according to the parameters, generate the corresponding data enhancement method
resize_op = CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR)
rescale_nml_op = CV.Rescale(rescale_nml, shift_nml)
rescale_op = CV.Rescale(rescale, shift)
hwc2chw_op = CV.HWC2CHW()
type_cast_op = C.TypeCast(mstype.int32)
# using map method to apply operations to a dataset
mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns="label", num_parallel_workers=num_parallel_workers)
mnist_ds = mnist_ds.map(operations=resize_op, input_columns="image", num_parallel_workers=num_parallel_workers)
mnist_ds = mnist_ds.map(operations=rescale_op, input_columns="image", num_parallel_workers=num_parallel_workers)
mnist_ds = mnist_ds.map(operations=rescale_nml_op, input_columns="image", num_parallel_workers=num_parallel_workers)
mnist_ds = mnist_ds.map(operations=hwc2chw_op, input_columns="image", num_parallel_workers=num_parallel_workers)
# process the generated dataset
buffer_size = 10000
mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size)
mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)
mnist_ds = mnist_ds.repeat(repeat_size)
return mnist_ds
```
### Extracting an image
For this walkthrough we only need a single image for training, so we take the first image `image` in the `batch` together with its label `label`.
```
from mindspore import Tensor
import matplotlib.pyplot as plt
train_data_path = "./datasets/MNIST_Data/train/"
ms_dataset = create_dataset(train_data_path)
dict_data = ms_dataset.create_dict_iterator()
data = next(dict_data)
images = data["image"].asnumpy()
labels = data["label"].asnumpy()
print(images.shape)
count = 1
for i in images:
plt.subplot(4, 8, count)
plt.imshow(np.squeeze(i))
plt.title('num:%s'%labels[count-1])
plt.xticks([])
count += 1
plt.axis("off")
plt.show()
```
The image data of the current batch is shown above; the steps below will extract the first image for training.
### Defining an image display function
Define an image display function `image_show` and insert it into the first four layers of LeNet5 to extract and display the image data.
```
def image_show(x):
count = 1
x = x.asnumpy()
number = x.shape[1]
sqrt_number = int(np.sqrt(number))
for i in x[0]:
plt.subplot(sqrt_number, int(number/sqrt_number), count)
plt.imshow(i)
count += 1
plt.show()
```
## Building the LeNet5 Network
Use `image_show` inside `construct` to view how the image changes after each network layer.
> Only the images are extracted for display here; to inspect the actual values, add `print(x)` as needed.
```
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import dtype as mstype
from mindspore.common.initializer import Normal
class LeNet5(nn.Cell):
"""Lenet network structure."""
# define the operator required
def __init__(self, num_class=10, num_channel=1):
super(LeNet5, self).__init__()
self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid')
self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')
self.fc1 = nn.Dense(16 * 5 * 5, 120, weight_init=Normal(0.02))
self.fc2 = nn.Dense(120, 84, weight_init=Normal(0.02))
self.fc3 = nn.Dense(84, num_class, weight_init=Normal(0.02))
self.relu = nn.ReLU()
self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
self.flatten = nn.Flatten()
self.switch = 1
def construct(self, x):
x = self.conv1(x)
if self.switch > 0:
print("The first layer: convolution layer")
image_show(x)
x = self.relu(x)
x = self.max_pool2d(x)
if self.switch > 0:
print("The second layer: pool layer")
image_show(x)
x = self.conv2(x)
if self.switch > 0:
print("The third layer: convolution layer")
image_show(x)
x = self.relu(x)
x = self.max_pool2d(x)
if self.switch > 0:
print("The fourth layer: pool layer")
image_show(x)
x = self.flatten(x)
x = self.relu(self.fc1(x))
x = self.relu(self.fc2(x))
x = self.fc3(x)
self.switch -= 1
return x
network = LeNet5()
print("layer conv1:", network.conv1)
print("*"*40)
print("layer fc1:", network.fc1)
```
## Building the Gradient Function GradWrap
Build a gradient function that can compute the gradients of all the weights in the network.
```
from mindspore import Tensor, ParameterTuple
class GradWrap(nn.Cell):
""" GradWrap definition """
def __init__(self, network):
super(GradWrap, self).__init__(auto_prefix=False)
self.network = network
self.weights = ParameterTuple(filter(lambda x: x.requires_grad, network.get_parameters()))
def construct(self, x, label):
weights = self.weights
return ops.GradOperation(get_by_list=True)(self.network, weights)(x, label)
```
## Running the Training
We can watch how the data of the first image `image` in the current `batch` changes inside the network. After the forward pass we compute its loss value, then take the partial derivatives of the parameters with respect to the loss, i.e. the network's gradients, and finally feed the gradients and loss to the optimizer.
- image: the first image of the current batch.
- output: the values produced by running the image data through the current network, a tensor of shape (1, 10).
```
from mindspore.nn import WithLossCell, Momentum
net = LeNet5()
optimizer = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), 0.1, 0.9)
criterion = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
net_with_criterion = WithLossCell(net, criterion)
train_network = GradWrap(net_with_criterion)
train_network.set_train()
image = images[0][0]
image = image.reshape((1, 1, 32, 32))
plt.imshow(np.squeeze(image))
plt.show()
input_data = Tensor(np.array(image).astype(np.float32))
label = Tensor(np.array([labels[0]]).astype(np.int32))
output = net(Tensor(input_data))
```
After printing the image features of the first convolution layer, the second pooling layer, the third convolution layer, and the fourth pooling layer, we can see intuitively that as depth increases the image features become almost impossible to recognize with the naked eye, yet the machine can still use these features to learn and recognize. The subsequent fully connected layers are two-dimensional arrays that cannot be shown as images, but their values can be printed; given the amount of data we skip printing them here, and users can print them as needed.
### Computing the loss and gradient values, and optimizing
First compute the loss value, then compute the gradients (partial derivative values) from the loss, and optimize with the optimizer `optimizer`.
- `loss_output`: the loss value.
- `grads`: the gradients of each layer's weights in the network.
- `net_params`: the names of each layer's weights; run `print(net_params)` to print them.
- `success`: the result of the parameter update.
```
loss_output = criterion(output, label)
grads = train_network(input_data, label)
net_params = net.trainable_params()
for i, grad in enumerate(grads):
print("{}:".format(net_params[i].name), grad.shape)
success = optimizer(grads)
loss = loss_output.asnumpy()
print("Loss_value:", loss)
```
The shape of each printed gradient tensor shows how many parameters each layer's weights have; users can choose to print the corresponding gradient values themselves.
## Summary
In this walkthrough we augmented the data with MindSpore, converted it into a dictionary with `create_dict_iterator` and took out a single sample; used PyNative mode to debug the neural network layer by layer, extracting and observing the data; computed the loss value in PyNative mode with `WithLossCell`; and built the gradient function `GradWrap` to compute the gradients of every weight in the network. That concludes this walkthrough.
| github_jupyter |
```
import urllib.request
import json
import glob
import pandas as pd
import numpy as np
import datetime
```
Get data from sensors
```
# this cell gets data
URL = "http://165.227.244.213:8881/luftdatenGet/22FQ8dJEApww33p31935/9d93d9d8cv7js9sj4765s120sllkudp389cm/"
response = urllib.request.urlopen(URL)
data = json.loads(response.read())
print(data)
```
Read the JSON data into a pd.DataFrame
```
columns = ['hex_int','date_time','P1','P2']
df_P = pd.DataFrame()
for values in data:
chip_id = values.get('esp8266id')
sensor_vals = values.get('sensordatavalues')
P1 = sensor_vals[0]['value']
P2 = sensor_vals[1]['value']
test_id = values.get('_id')
sub_test_id = test_id[0:8]
hex_int = int(sub_test_id, 16)
date_time = datetime.datetime.utcfromtimestamp(hex_int)
#date_time = datetime.((testid.substring(0, 8), 16) * 1000)
#print('test_id',test_id,'sub_test_id',sub_test_id,'hex_int',hex_int,'date_time',date_time)
row = pd.Series([hex_int,date_time,P1, P2]) #'chip_id' has been removed
df_row = pd.DataFrame(row).transpose()
#print(df_row)
df_P = pd.concat([df_P,df_row],axis=0)
df_P = pd.DataFrame(data= df_P.values, columns = columns )
df_P = df_P.sort_values(by=['hex_int'],ascending=False, inplace=False)
df_P['P1'] = df_P['P1'].astype('float64')
df_P['P2'] = df_P['P2'].astype('float64')
df_P['hex_int'] = df_P['hex_int'].astype('int64')
#print(df_P)
```
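The `hex_int` conversion in the cell above relies on the first 8 hex characters of a MongoDB-style `_id` encoding the record's creation time as a Unix timestamp. A minimal sketch (the example id is made up):

```python
from datetime import datetime, timezone

def object_id_to_datetime(object_id):
    """The first 4 bytes (8 hex chars) of a MongoDB ObjectId are a Unix timestamp."""
    seconds = int(object_id[:8], 16)
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

# hypothetical id: 0x59682f00 == 1500000000 == 2017-07-14 02:40:00 UTC
print(object_id_to_datetime("59682f00aabbccddeeff0011"))
```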
Get columns of interest (no longer needed; the work was done in the previous cell)
```
#redundant cell
```
Resample the data to the current data point and to means for hour, week, month, and year (only the current and daily values are implemented so far)
```
print('Current values for the sensor')
current_row = df_P.iloc[0,:]
current_P1 = df_P['P1'].iloc[0]
current_P2 = df_P['P2'].iloc[0]
print('current_row',current_row)
#This is the latest value
P1= df_P['P1'].values
P2= df_P['P2'].values
P1 = pd.DataFrame(data=P1,index=df_P['date_time'])
P2 = pd.DataFrame(data=P2,index=df_P['date_time'])
daily_P1 = P1.resample('D').mean()
daily_P2 = P2.resample('D').mean()
daily_P1 = daily_P1.sort_values(by=['date_time'],ascending=False, inplace=False)
daily_P2 = daily_P2.sort_values(by=['date_time'],ascending=False, inplace=False)
print(daily_P1)
print(daily_P2)
todays_P1 = daily_P1.iloc[0,:]
todays_P2 = daily_P2.iloc[0,:]
print(todays_P1)
print(todays_P2)
```
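The `resample('D').mean()` call above computes per-day means. An equivalent stdlib sketch over `(timestamp, value)` pairs (the sample records are illustrative):

```python
from datetime import datetime
from collections import defaultdict

def daily_means(records):
    """Group (timestamp, value) pairs by calendar date and average each day."""
    buckets = defaultdict(list)
    for ts, value in records:
        buckets[ts.date()].append(value)
    return {day: sum(vals) / len(vals) for day, vals in buckets.items()}

records = [
    (datetime(2019, 10, 1, 8), 10.0),
    (datetime(2019, 10, 1, 20), 14.0),
    (datetime(2019, 10, 2, 9), 8.0),
]
print(daily_means(records))  # one mean per calendar date
```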
Output values from here to dashboard
```
output = {}
output.update({'sensor_id':chip_id})
output.update({'current_PM10':current_P1})
output.update({'current_PM2.5': current_P2})
output.update({'daily_PM10':todays_P1[0]})
output.update({'daily_PM2.5':todays_P2[0]})
print(output)
```
| github_jupyter |
<a href="https://colab.research.google.com/github/RachitBansal/AppliancePower_TimeSeries/blob/master/ARIMA_Ukdale.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive',force_remount=True)
import joblib  # sklearn.externals.joblib was removed in newer scikit-learn
import numpy as np
import matplotlib.pyplot as plt
eq = input("Enter equipment: ")
train_x = np.load(file='./drive/My Drive/ukdale_'+eq+'_x.npy')
train_y = np.load(file='./drive/My Drive/ukdale_'+eq+'_y.npy')
test_y = np.load(file='./drive/My Drive/ukdale_'+eq+'_ty.npy')
test_x = np.load(file='./drive/My Drive/ukdale_'+eq+'_tx.npy')
from datetime import datetime
import pandas as pd
# series = joblib.load("hour_resampled_data.pkl")
# sample = series
# sample = np.array(sample)
# sample = sample[3000:4500,1:2]
# series = np.array(series)
# series = series[:3000,1:2]
# print(series.shape)
# series = pd.DataFrame(series)
# #series.drop(axis = "index")
# print(series.head())
# equipment = int(input('equipment: '))
series = test_x[:3000, 0]
plt.plot(series)
plt.show()
from pandas import read_csv
from datetime import datetime
from matplotlib import pyplot
from pandas.plotting import autocorrelation_plot
# series = read_csv('shampoo-sales.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)
autocorrelation_plot(series)
pyplot.show()
from datetime import datetime
from pandas import DataFrame
from statsmodels.tsa.arima_model import ARIMA
from matplotlib import pyplot
import numpy as np
def parser(x):
return datetime.strptime('190'+x, '%Y-%m')
# series = read_csv('shampoo-sales.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)
# fit model
series = np.array(series)
model = ARIMA(series, order=(5,1,0))
model_fit = model.fit(disp=0)
print(model_fit.summary())
# plot residual errors
residuals = DataFrame(model_fit.resid)
residuals.plot()
pyplot.show()
residuals.plot(kind='kde')
pyplot.show()
print(residuals.describe())
from datetime import datetime
from matplotlib import pyplot
from statsmodels.tsa.arima_model import ARIMA
from sklearn.metrics import mean_squared_error,mean_absolute_error
# equipment = 3
len(list(train_x[0].reshape(-1)))
history = list(train_x[0].reshape(-1))
for i in range(train_x.shape[0] - 1):
history.append(train_x[i+1][-1])
plt.plot(history)
history = list(train_x[0].reshape(-1))
for i in range(1000):
history.append(train_x[-1000+i][-1])
# history.append(x for x in test_x[0].reshape(-1))
model = ARIMA(history, order=(5,1,0))
model_fit = model.fit(disp=0)
history = list(test_x[0].reshape(-1))
predictions = []
# history = [x for x in test_x[i].reshape(-1) for i in range(1000)]
for t in range(1000):
model = ARIMA(history, order=(5,1,0))
model_fit = model.fit(disp=0)
output = model_fit.forecast()
yhat = output[0]
predictions.append(yhat)
obs = test_y[t][0][0]
history.append(obs)
if(t%50==0):
print('predicted=%f, expected=%f' % (yhat, obs))
predictions = np.array(predictions)
print(predictions.shape)
print(test_y.shape)
error = mean_squared_error(test_y[:1000].reshape(-1), predictions)
print('Test MSE: %.3f' % error)
print("RMSE : %.3f"%(np.sqrt(error)))
print("MAE : %.3f"%(mean_absolute_error(test_y[:1000].reshape(-1),predictions)))
# plot
pyplot.plot(test_y[:1000].reshape(-1))
pyplot.plot(predictions)
np.save(arr = np.array(predictions), file = './drive/My Drive/arima_ukdale_preds_1000_eq'+eq+'.npy')
import time
t1 = time.time()
times = []
for t in range(50):
model = ARIMA(history, order=(5,1,0))
model_fit = model.fit(disp=0)
t1 = time.time()
output = model_fit.forecast()
t2 = time.time()
times.append(t2-t1)
print(times)
print(sum(times))
def mean_abs_pct_error(actual_values, forecast_values):
err=0
actual_values = pd.DataFrame(actual_values)
forecast_values = pd.DataFrame(forecast_values)
for i in range(len(forecast_values)):
err += np.abs(actual_values.values[i] - forecast_values.values[i])/actual_values.values[i]
return err[0] * 100/len(forecast_values)
mean_abs_pct_error(test_y[:1000].reshape(-1), predictions)
```
| github_jupyter |
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.preprocessing import StandardScaler, LabelEncoder, OrdinalEncoder
from sklearn.pipeline import make_pipeline
from category_encoders import OneHotEncoder
from sklearn.metrics import f1_score, precision_score, recall_score
from sklearn.linear_model import LogisticRegression
from sklearn.experimental import enable_hist_gradient_boosting
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier, HistGradientBoostingClassifier, RandomForestClassifier, BaggingClassifier, ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.feature_selection import RFE
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
# read in data
train_values = pd.read_csv('data/Proj5_train_values.csv')
train_labels = pd.read_csv('data/Proj5_train_labels.csv')
```
## Modeling with 10% of data
- For faster processing
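Note that `head()` takes the *first* 10% of rows, which is only representative if the rows are already in random order; a random sample avoids ordering bias. A minimal stdlib sketch of the difference (integer indices stand in for rows):

```python
import random

rows = list(range(100))  # stand-in for 100 ordered data rows

first_10 = rows[:10]              # what head() selects
rng = random.Random(123)
random_10 = rng.sample(rows, 10)  # what DataFrame.sample(frac=0.1) selects

print(first_10)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(sorted(random_10))
```

With pandas, `train_values.sample(frac=0.1, random_state=123)` would be the random-sample equivalent of the `head()` calls below.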
```
# grab first 10% of rows
train_values_10pct = train_values.head(int(len(train_values) * 0.1))
train_labels_10pct = train_labels.head(int(len(train_labels) * 0.1))
```
#### Baseline + TTS
```
# baseline model
train_labels_10pct['damage_grade'].value_counts(normalize = True)
# establish X + y
X = train_values_10pct.drop(columns = ['building_id'])
y = train_labels_10pct['damage_grade']
# tts
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify = y, random_state = 123)
```
#### Modeling
```
# Random Forest
pipe_forest = make_pipeline(OneHotEncoder(use_cat_names = True), StandardScaler(), RandomForestClassifier(n_jobs = -1, random_state = 123))
params = {'randomforestclassifier__max_depth' : [6, 7, 8, 9, 10, 11],
'randomforestclassifier__max_features' : [15, 20, 30, 35]}
grid_forest = GridSearchCV(pipe_forest, param_grid = params)
grid_forest.fit(X_train, y_train)
print(f'Train Score: {grid_forest.score(X_train, y_train)}')
print(f'Test Score: {grid_forest.score(X_test, y_test)}')
grid_forest.best_params_
# grab feature importances
forest_fi_df = pd.DataFrame({'importances': grid_forest.best_estimator_.named_steps['randomforestclassifier'].feature_importances_,
'name': grid_forest.best_estimator_.named_steps['onehotencoder'].get_feature_names()}).sort_values('importances', ascending = False)
forest_fi_df[:5]
# test to ensure X_train.columns + feature_importances are same length
print(len(grid_forest.best_estimator_.named_steps['randomforestclassifier'].feature_importances_))
print(len(grid_forest.best_estimator_.named_steps['onehotencoder'].get_feature_names()))
# Extra Trees
pipe_trees = make_pipeline(OneHotEncoder(use_cat_names = True), StandardScaler(), ExtraTreesClassifier(n_jobs = -1, random_state = 123))
params = {'extratreesclassifier__max_depth' : [6, 7, 8, 9, 10, 11],
'extratreesclassifier__max_features' : [15, 20, 30, 35]}
grid_trees = GridSearchCV(pipe_trees, param_grid = params)
grid_trees.fit(X_train, y_train)
print(f'Train Score: {grid_trees.score(X_train, y_train)}')
print(f'Test Score: {grid_trees.score(X_test, y_test)}')
grid_trees.best_params_
# grab feature importances
trees_fi_df = pd.DataFrame({'importances': grid_trees.best_estimator_.named_steps['extratreesclassifier'].feature_importances_,
'name': grid_trees.best_estimator_.named_steps['onehotencoder'].get_feature_names()}).sort_values('importances', ascending = False)
trees_fi_df[:5]
# test to ensure X_train.columns + feature_importances are same length
print(len(grid_trees.best_estimator_.named_steps['extratreesclassifier'].feature_importances_))
print(len(grid_trees.best_estimator_.named_steps['onehotencoder'].get_feature_names()))
```
| github_jupyter |
<a href="https://colab.research.google.com/github/AlsoSprachZarathushtra/Quick-Draw-Recognition/blob/master/(3_1)Stroke_LSTM_Skatch_A_Net_ipynb_.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Connect Google Drive
```
from google.colab import drive
drive.mount('/content/gdrive')
```
# Import
```
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cv2
import os
from tensorflow import keras
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import MaxPool2D
from tensorflow.keras.layers import ReLU
from tensorflow.keras.layers import Softmax
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Lambda
from tensorflow.keras.layers import Reshape
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import concatenate
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.metrics import sparse_top_k_categorical_accuracy
from tensorflow.keras.callbacks import CSVLogger
from ast import literal_eval
```
# Parameters and Work-Space Paths
```
# parameters
BATCH_SIZE = 200
EPOCHS = 50
STEPS_PER_EPOCH = 425
VALIDATION_STEPS = 100
EVALUATE_STEPS = 850
IMAGE_SIZE = 225
LINE_SIZE = 3
# load path
TRAIN_DATA_PATH = 'gdrive/My Drive/QW/Data/Data_10000/All_classes_10000.csv'
VALID_DATA_PATH = 'gdrive/My Drive/QW/Data/My_test_data/My_test_data.csv'
LABEL_DICT_PATH = 'gdrive/My Drive/QW/Data/labels_dict.npy'
# save path
CKPT_PATH = 'gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(3-1)Stroke_LSTM-Skatch-A-Net/best_model_3_1.ckpt'
LOSS_PLOT_PATH = 'gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(3-1)Stroke_LSTM-Skatch-A-Net/loss_plot_3_1.png'
ACC_PLOT_PATH = 'gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(3-1)Stroke_LSTM-Skatch-A-Net/acc_plot_3_1.png'
LOG_PATH = 'gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(3-1)Stroke_LSTM-Skatch-A-Net/Log_3_1.log'
print('finish!')
```
# Generator
```
def generate_data(data, batch_size, choose_recognized):
data = data.sample(frac = 1)
while 1:
# get columns' values named 'drawing', 'word' and 'recognized'
drawings = data["drawing"].values
drawing_recognized = data["recognized"].values
drawing_class = data["word"].values
# initialization
cnt = 0
data_X =[]
data_Y =[]
# generate batch
for i in range(len(drawings)):
if choose_recognized:
if drawing_recognized[i] == 'False': #Choose according to recognized value
continue
draw = drawings[i]
stroke_vec = literal_eval(draw)
l = len(stroke_vec)
stroke_set = []
if l <= 3:
if l == 1:
stroke_set =[[0],[0],[0]]
if l == 2:
stroke_set =[[0],[1],[1]]
if l == 3:
stroke_set =[[0],[1],[2]]
if l > 3:
a = l // 3
b = l % 3
c = (a + 1) * 3
d = c - l
n = 0
for h in range(0,d):
temp = []
for k in range(a):
temp.append(n)
n += 1
stroke_set.append(temp)
for h in range(d,3):
temp = []
for k in range(a+1):
temp.append(n)
n += 1
stroke_set.append(temp)
img = np.zeros([256, 256])
x = []
stroke_num = 0
for j in range(3):
stroke_index = stroke_set[j]
for m in list(stroke_index):
line = np.array(stroke_vec[m]).T
cv2.polylines(img, [line], False, (255-(13*min(stroke_num,10))), LINE_SIZE)
stroke_num += 1
img_copy = img.copy()
img_x = cv2.resize(img_copy,(IMAGE_SIZE,IMAGE_SIZE),interpolation = cv2.INTER_NEAREST)
x.append(img_x)
x = np.array(x)
x = x[:,:,:,np.newaxis]
label = drawing_class[i]
y = labels2nums_dict[label]
data_X.append(x)
data_Y.append(y)
cnt += 1
if cnt==batch_size: #generate a batch when cnt reaches batch_size
cnt = 0
yield (np.array(data_X), np.array(data_Y))
data_X = []
data_Y = []
print("finish!")
```
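The stroke-grouping logic inside the generator splits the `l` strokes of a drawing into three contiguous groups (the first `d` groups get `l // 3` strokes, the rest get one more). A standalone sketch of that grouping, with a hypothetical helper name but the same behaviour as the branching above:

```python
def split_strokes(l):
    """Partition stroke indices 0..l-1 into 3 contiguous groups,
    mirroring the generator's grouping rules."""
    if l == 1:
        return [[0], [0], [0]]
    if l == 2:
        return [[0], [1], [1]]
    if l == 3:
        return [[0], [1], [2]]
    a = l // 3              # base group size
    d = (a + 1) * 3 - l     # number of groups that keep the base size
    groups, n = [], 0
    for h in range(3):
        size = a if h < d else a + 1
        groups.append(list(range(n, n + size)))
        n += size
    return groups
```

For example, a 7-stroke drawing is split `[[0, 1], [2, 3], [4, 5, 6]]`, so later strokes are concentrated in the last image fed to the LSTM.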
# Callbacks
```
# define a class named LossHistory
class LossHistory(keras.callbacks.Callback):
def on_train_begin(self, logs={}):
self.losses = {'batch':[], 'epoch':[]}
self.accuracy = {'batch':[], 'epoch':[]}
self.val_loss = {'batch':[], 'epoch':[]}
self.val_acc = {'batch':[], 'epoch':[]}
def on_batch_end(self, batch, logs={}):
self.losses['batch'].append(logs.get('loss'))
self.accuracy['batch'].append(logs.get('acc'))
self.val_loss['batch'].append(logs.get('val_loss'))
self.val_acc['batch'].append(logs.get('val_acc'))
def on_epoch_end(self, batch, logs={}):
self.losses['epoch'].append(logs.get('loss'))
self.accuracy['epoch'].append(logs.get('acc'))
self.val_loss['epoch'].append(logs.get('val_loss'))
self.val_acc['epoch'].append(logs.get('val_acc'))
def loss_plot(self, loss_type, loss_fig_save_path, acc_fig_save_path):
iters = range(len(self.losses[loss_type]))
plt.figure('acc')
plt.plot(iters, self.accuracy[loss_type], 'r', label='train acc')
plt.plot(iters, self.val_acc[loss_type], 'b', label='val acc')
plt.grid(True)
plt.xlabel(loss_type)
plt.ylabel('acc')
plt.legend(loc="upper right")
plt.savefig(acc_fig_save_path)
plt.show()
plt.figure('loss')
plt.plot(iters, self.losses[loss_type], 'g', label='train loss')
plt.plot(iters, self.val_loss[loss_type], 'k', label='val loss')
plt.grid(True)
plt.xlabel(loss_type)
plt.ylabel('loss')
plt.legend(loc="upper right")
plt.savefig(loss_fig_save_path)
plt.show()
# create an object of the LossHistory class
History = LossHistory()
print("finish!")
cp_callback = tf.keras.callbacks.ModelCheckpoint(
CKPT_PATH,
verbose = 1,
monitor='val_acc',
mode = 'max',
save_best_only=True)
print("finish!")
ReduceLR = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_acc', factor=0.5, patience=3,
min_delta=0.005, mode='max', cooldown=3, verbose=1)
csv_logger = CSVLogger(LOG_PATH, separator=',', append=True)
```
# Load Data
```
# load train data and valid data
# labels_dict and data path
# labels convert into nums
labels_dict = np.load(LABEL_DICT_PATH)
labels2nums_dict = {v: k for k, v in enumerate(labels_dict)}
# read csv
train_data = pd.read_csv(TRAIN_DATA_PATH)
valid_data = pd.read_csv(VALID_DATA_PATH)
print('finish!')
```
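The `labels2nums_dict` comprehension above simply inverts an array of class names into a name-to-index mapping. With a toy label array (illustrative, not the real 340-class dictionary loaded from disk):

```python
# Toy label array standing in for the labels_dict loaded from LABEL_DICT_PATH
labels = ['airplane', 'apple', 'axe']

# enumerate() yields (index, name) pairs; the comprehension flips them
labels2nums = {v: k for k, v in enumerate(labels)}
```

Each class name now maps to its integer index, which is what the generator uses to build `y`.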
# Model
```
def Sketch_A_Net(M):
_x_input = Lambda(lambda x: x[:,M])(X_INPUT)
_x = Conv2D(64, (15,15), strides=3, padding='valid',name='Conv2D_1_{}'.format(M))(_x_input)
_x = BatchNormalization(name='BN_1_{}'.format(M))(_x)
_x = ReLU(name='ReLU_1_{}'.format(M))(_x)
_x = MaxPool2D(pool_size=(3,3),strides=2, name='Pooling_1_{}'.format(M))(_x)
_x = Conv2D(128, (5,5), strides=1, padding='valid',name='Conv2D_2_{}'.format(M))(_x)
_x = BatchNormalization(name='BN_2_{}'.format(M))(_x)
_x = ReLU(name='ReLU_2_{}'.format(M))(_x)
_x = MaxPool2D(pool_size=(3,3),strides=2, name='Pooling_2_{}'.format(M))(_x)
_x = Conv2D(256, (3,3), strides=1, padding='same',name='Conv2D_3_{}'.format(M))(_x)
_x = BatchNormalization(name='BN_3_{}'.format(M))(_x)
_x = ReLU(name='ReLU_3_{}'.format(M))(_x)
_x = Conv2D(256, (3,3), strides=1, padding='same',name='Conv2D_4_{}'.format(M))(_x)
_x = BatchNormalization(name='BN_4_{}'.format(M))(_x)
_x = ReLU(name='ReLU_4_{}'.format(M))(_x)
_x = Conv2D(256, (3,3), strides=1, padding='same',name='Conv2D_5_{}'.format(M))(_x)
_x = BatchNormalization(name='BN_5_{}'.format(M))(_x)
_x = ReLU(name='ReLU_5_{}'.format(M))(_x)
_x = MaxPool2D(pool_size=(3,3),strides=2, name='Pooling_5_{}'.format(M))(_x)
_x_shape = _x.shape[1]
_x = Conv2D(512, (int(_x_shape),int(_x_shape)), strides=1, padding='valid',name='Conv2D_FC_6_{}'.format(M))(_x)
_x = BatchNormalization(name='BN_6_{}'.format(M))(_x)
_x = Reshape((512,),name='Reshape_{}'.format(M))(_x)
return _x
X_INPUT = Input(shape=(3,IMAGE_SIZE,IMAGE_SIZE,1))
x1_output = Sketch_A_Net(0)
x2_output = Sketch_A_Net(1)
x3_output = Sketch_A_Net(2)
x = concatenate([x1_output, x2_output,x3_output],axis = 1,name='Concatenate')
x = Reshape((3,512),name='Reshape_f1')(x)
x = LSTM(512, return_sequences=True, name='LSTM_1')(x)
x = BatchNormalization(name='BN_1')(x)
x = Dropout(0.5, name='Dropout_1')(x)
x = LSTM(512, return_sequences=False, name='LSTM_2')(x)
x = BatchNormalization(name='BN_2')(x)
x = Dropout(0.5, name='Dropout_2')(x)
xx = concatenate([x,x3_output],axis = 1,name='Concatenate_last')
xx = Reshape((1024,),name='Reshape_f2')(xx)
xx = Dense(340, name='FC')(xx)
X_OUTPUT = Softmax(name='Softmax')(xx)
MODEL = keras.Model(inputs=X_INPUT, outputs= X_OUTPUT)
MODEL.summary()
```
# TPU Compile
```
model = MODEL
TPU_model = tf.contrib.tpu.keras_to_tpu_model(
model,
strategy=tf.contrib.tpu.TPUDistributionStrategy(
tf.contrib.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
)
)
TPU_model.compile(loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.train.AdamOptimizer(learning_rate=1e-3),
metrics=['accuracy'])
print('finish')
```
# Train
```
print('start training')
# callbacks = [History, cp_callback]
history = TPU_model.fit_generator(generate_data(train_data, BATCH_SIZE, True),
steps_per_epoch = STEPS_PER_EPOCH,
epochs = EPOCHS,
validation_data = generate_data(valid_data, BATCH_SIZE, False) ,
validation_steps = VALIDATION_STEPS,
verbose = 1,
initial_epoch = 0,
callbacks = [History,cp_callback,csv_logger]
)
print("finish training")
History.loss_plot('epoch', LOSS_PLOT_PATH, ACC_PLOT_PATH)
print('finish!')
```
# Evaluate
```
def top_3_accuracy(X, Y):
return sparse_top_k_categorical_accuracy(X, Y, 3)
def top_5_accuracy(X, Y):
return sparse_top_k_categorical_accuracy(X, Y, 5)
model_E = MODEL
model_E.compile(loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.train.AdamOptimizer(learning_rate=1e-4),
metrics=['accuracy',top_3_accuracy, top_5_accuracy])
model_weights_path = CKPT_PATH
model_E.load_weights(model_weights_path)
print('finish')
result = model_E.evaluate_generator(
generate_data(valid_data, BATCH_SIZE, False),
steps = EVALUATE_STEPS,
verbose = 1
)
print('number of metrics returned:', len(result))
print('loss:', result[0])
print('top1 accuracy:', result[1])
print('top3 accuracy:', result[2])
print('top5 accuracy:', result[3])
```
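The top-k metrics used above can be illustrated in plain Python. This is a sketch of the idea behind `sparse_top_k_categorical_accuracy`, not the Keras implementation:

```python
def top_k_accuracy(y_true, probs, k=3):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    hits = 0
    for label, p in zip(y_true, probs):
        # indices of the k largest predicted probabilities
        top_k = sorted(range(len(p)), key=lambda i: p[i], reverse=True)[:k]
        hits += label in top_k
    return hits / len(y_true)
```

Top-1 accuracy is the usual accuracy; top-3 and top-5 credit a prediction whenever the true class is anywhere in the model's three or five best guesses.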
# Creating a Sentiment Analysis Web App
## Using PyTorch and SageMaker
_Deep Learning Nanodegree Program | Deployment_
---
Now that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model which will predict the sentiment of the entered review.
## Instructions
Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.
> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can typically be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
## General Outline
Recall the general outline for SageMaker projects using a notebook instance.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
For this project, you will be following the steps in the general outline with some modifications.
First, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward.
In addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. In addition, your newly deployed model will be used in the sentiment analysis web app.
## Step 1: Downloading the data
As in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/).
> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.
```
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
```
## Step 2: Preparing and Processing the data
Also, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set.
```
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
```
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.
```
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
```
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly.
```
print(train_X[100])
print(train_y[100])
```
The first step in processing the reviews is to make sure that any html tags that appear are removed. In addition we wish to tokenize our input, so that words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis.
```
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Lowercase and replace non-alphanumeric chars with spaces
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
```
The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set.
```
# TODO: Apply review_to_words to a review (train_X[100] or any other review)
review_to_words(train_X[100])
```
**Question:** Above we mentioned that `review_to_words` method removes html formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do to the input?
**Answer:**
1. Convert to lower case
`text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())`
2. Tokenises the words
`words = text.split() # Split string into words`
3. Removes stopwords
`words = [w for w in words if w not in stopwords.words("english")]`
4. Applies Porter stemming
`words = [PorterStemmer().stem(w) for w in words] # stem`
The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition it caches the results. This is because performing this processing step can take a long time. This way if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time.
```
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
```
## Transform the data
In the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.
Since we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews.
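The pad-or-truncate step described above reduces to a one-liner. A sketch with an illustrative helper name (the notebook's own `convert_and_pad` additionally maps words to integers):

```python
def pad_or_truncate(seq, length=500, pad_value=0):
    """Fix seq to exactly `length` items: pad on the right with pad_value
    (the 'no word' category) or truncate."""
    return (seq + [pad_value] * length)[:length]
```

Short reviews are right-padded with `0` and long reviews lose everything past position 500.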
### (TODO) Create a word dictionary
To begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.
> **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'.
```
import numpy as np
def build_dict(data, vocab_size = 5000):
"""Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
# TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
# sentence is a list of words.
word_count = {} # A dict storing the words that appear in the reviews along with how often they occur
for r in data:
for word in r:
if word in word_count:
word_count[word] += 1
else:
word_count[word] = 1
# TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and
# sorted_words[-1] is the least frequently appearing word.
sorted_words = [item[0] for item in sorted(word_count.items(), key=lambda x: x[1], reverse=True)]
word_dict = {} # This is what we are building, a dictionary that translates words into integers
for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the 'no word'
word_dict[word] = idx + 2 # 'infrequent' labels
return word_dict
word_dict = build_dict(train_X)
```
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it makes sense that these words appear frequently in the training set?
**Answer:**
See below cell for top 5.
Yes this seems to make sense given the context of the problem.
```
# TODO: Use this space to determine the five most frequently appearing words in the training set.
count = 0
for word, idx in word_dict.items():
print(word)
count += 1
if count == 5:
        break
```
### Save `word_dict`
Later on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use.
```
data_dir = '../data/pytorch' # The folder we will use for storing data
if not os.path.exists(data_dir): # Make sure that the folder exists
os.makedirs(data_dir)
with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
pickle.dump(word_dict, f)
```
### Transform the reviews
Now that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`.
```
def convert_and_pad(word_dict, sentence, pad=500):
NOWORD = 0 # We will use 0 to represent the 'no word' category
INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
working_sentence = [NOWORD] * pad
for word_index, word in enumerate(sentence[:pad]):
if word in word_dict:
working_sentence[word_index] = word_dict[word]
else:
working_sentence[word_index] = INFREQ
return working_sentence, min(len(sentence), pad)
def convert_and_pad_data(word_dict, data, pad=500):
result = []
lengths = []
for sentence in data:
converted, leng = convert_and_pad(word_dict, sentence, pad)
result.append(converted)
lengths.append(leng)
return np.array(result), np.array(lengths)
train_X, train_X_len = convert_and_pad_data(word_dict, train_X)
test_X, test_X_len = convert_and_pad_data(word_dict, test_X)
```
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processed. Does this look reasonable? What is the length of a review in the training set?
```
# Use this cell to examine one of the processed reviews to make sure everything is working as intended.
train_X[0]
```
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Why might this be a problem (or not)?
**Answer:**
Information is lost whenever a word is not contained in `word_dict`. Every document is also padded or truncated to the same fixed vector length. TensorFlow 2.0 introduced [ragged tensors](https://www.tensorflow.org/guide/ragged_tensors) to address the latter problem.
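The first issue can be seen with a toy vocabulary (illustrative words and indices): every word absent from `word_dict` collapses to the single 'infrequent' label `1`, so the distinction between different unseen words is lost.

```python
word_dict = {'movi': 2, 'film': 3}           # toy vocabulary
sentence = ['movi', 'great', 'film', 'bad']  # 'great' and 'bad' are unseen

# 1 is the 'infrequent word' label, as in convert_and_pad
encoded = [word_dict.get(w, 1) for w in sentence]
```

Here `'great'` and `'bad'` both encode to `1`, even though they carry opposite sentiment.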
## Step 3: Upload the data to S3
As in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on.
### Save the processed training dataset locally
It is important to note the format of the data that we are saving as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review.
```
import pandas as pd
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
.to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
```
### Uploading the training data
Next, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model.
```
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
```
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory.
## Step 4: Build and Train the PyTorch Model
In the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects
- Model Artifacts,
- Training Code, and
- Inference Code,
each of which interact with one another. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon with the added benefit of being able to include our own custom code.
We will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below.
```
!pygmentize train/model.py
```
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.
First we will load a small portion of the training data set to use as a sample. It would be very time consuming to try and train the model completely in the notebook as we do not have access to a gpu and the compute instance that we are using is not particularly powerful. However, we can work on a small bit of the data to get a feel for how our training script is behaving.
```
import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()
# Build the dataset
train_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)
# Build the dataloader
train_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50)
```
### (TODO) Writing the training method
Next we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later.
```
def train(model, train_loader, epochs, optimizer, loss_fn, device):
for epoch in range(1, epochs + 1):
model.train()
total_loss = 0
for batch in train_loader:
batch_X, batch_y = batch
batch_X = batch_X.to(device)
batch_y = batch_y.to(device)
# TODO: Complete this train method to train the model provided.
optimizer.zero_grad()
            out = model(batch_X)  # calling the model invokes forward()
loss = loss_fn(out, batch_y)
loss.backward()
optimizer.step()
total_loss += loss.data.item()
print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader)))
```
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose.
```
import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device)
```
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run.
### (TODO) Training the model
When a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.
**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.
The way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train/train.py` file.
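The mechanism amounts to ordinary `argparse` parsing. A sketch of what `train.py` likely does (the argument names mirror the `hyperparameters` dict passed to the estimator; the exact parser in the provided file may differ):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--epochs', type=int, default=10)
parser.add_argument('--hidden_dim', type=int, default=100)

# SageMaker turns the hyperparameters dict into CLI flags like these:
args = parser.parse_args(['--epochs', '10', '--hidden_dim', '200'])
```

Inside the script, `args.epochs` and `args.hidden_dim` are then used in place of hard-coded values.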
```
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
source_dir="train",
role=role,
framework_version='0.4.0',
train_instance_count=1,
train_instance_type='ml.m4.xlarge', #'ml.p2.xlarge'
hyperparameters={
'epochs': 10,
'hidden_dim': 200,
})
estimator.fit({'training': input_data})
```
## Step 5: Testing the model
As mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly.
## Step 6: Deploy the model for testing
Now that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this.
There is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made.
**NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard (i.e., `if __name__ == '__main__':`).
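As a hedged sketch of the idea (the real `model_fn` is already provided in `train.py`; the stand-in class and the `model.pth` filename here are assumptions, not the project's actual code):

```python
import os
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    # Hypothetical stand-in for the LSTMClassifier defined earlier
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 1)

    def forward(self, x):
        return torch.sigmoid(self.fc(x))

def model_fn(model_dir):
    """Load the model artifacts saved during training (sketch)."""
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model = TinyClassifier()
    # The training script is assumed to have saved the weights as model.pth
    with open(os.path.join(model_dir, 'model.pth'), 'rb') as f:
        model.load_state_dict(torch.load(f, map_location=device))
    return model.to(device).eval()
```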
Since we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is.
**NOTE:** When deploying a model you are asking SageMaker to launch a compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running for.
In other words **If you are no longer using a deployed endpoint, shut it down!**
**TODO:** Deploy the trained model.
```
# TODO: Deploy the trained model
pred = estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
```
## Step 7: Use the model for testing
Once deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is.
```
test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)
# We split the data into chunks and send each chunk separately, accumulating the results.
def predict(data, rows=512):
    split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
    predictions = np.array([])
    for array in split_array:
        predictions = np.append(predictions, pred.predict(array))
    return predictions
predictions = predict(test_X.values)
predictions = [round(num) for num in predictions]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
```
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis?
**Answer:**
The XGBoost model achieved roughly 0.84 accuracy, so this model appears slightly better.
### (TODO) More testing
We now have a trained model which has been deployed and which we can send processed reviews to and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model.
```
test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'
```
The question we now need to answer is, how do we send this review to our model?
Recall in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews.
- Removed any html tags and stemmed the input
- Encoded the review as a sequence of integers using `word_dict`
In order to process the review we will need to repeat these two steps.
**TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`.
```
# TODO: Convert test_review into a form usable by the model and save the results in test_data
test_data = review_to_words(test_review)
test_data = [np.array(convert_and_pad(word_dict, test_data)[0])]
```
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review.
```
pred.predict(test_data)
```
Since the return value of our model is close to `1`, we can be certain that the review we submitted is positive.
### Delete the endpoint
Of course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it.
```
estimator.delete_endpoint()
```
## Step 6 (again): Deploy the model for the web app
Now that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.
As we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.
We will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.
When deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use.
- `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model.
- `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code.
- `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint.
- `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.
For the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize.
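A minimal sketch of what these two methods might look like for plain-text input and a single scalar output (hypothetical; the provided `serve/predict.py` contains the actual versions):

```python
def input_fn(serialized_input_data, content_type='text/plain'):
    # De-serialize the raw request body into a Python string
    if content_type == 'text/plain':
        return serialized_input_data.decode('utf-8')
    raise Exception('Unsupported content type: ' + content_type)

def output_fn(prediction_output, accept='text/plain'):
    # Serialize the single scalar result back to the caller
    return str(prediction_output)
```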
### (TODO) Writing inference code
Before writing our custom inference code, we will begin by taking a look at the code which has been provided.
```
!pygmentize serve/predict.py
```
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple and your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.
**TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file.
### Deploying the model
Now that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container.
**NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string, so we need to construct a simple wrapper around the `RealTimePredictor` class to accommodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to send image data.
```
from sagemaker.predictor import RealTimePredictor
from sagemaker.pytorch import PyTorchModel
class StringPredictor(RealTimePredictor):
    def __init__(self, endpoint_name, sagemaker_session):
        super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')

model = PyTorchModel(model_data=estimator.model_data,
                     role=role,
                     framework_version='0.4.0',
                     entry_point='predict.py',
                     source_dir='serve',
                     predictor_cls=StringPredictor)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
```
### Testing the model
Now that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews, sending them to the endpoint, and collecting the results. The reason for only sending some of the data is that the time our model takes to process an input and perform inference is quite long, so testing the entire data set would be prohibitive.
```
import glob
import os
def test_reviews(data_dir='../data/aclImdb', stop=250):
    results = []
    ground = []
    # We make sure to test both positive and negative reviews
    for sentiment in ['pos', 'neg']:
        path = os.path.join(data_dir, 'test', sentiment, '*.txt')
        files = glob.glob(path)
        files_read = 0
        print('Starting ', sentiment, ' files')
        # Iterate through the files and send them to the predictor
        for f in files:
            with open(f) as review:
                # First, we store the ground truth (was the review positive or negative)
                if sentiment == 'pos':
                    ground.append(1)
                else:
                    ground.append(0)
                # Read in the review and convert to 'utf-8' for transmission via HTTP
                review_input = review.read().encode('utf-8')
                # Send the review to the predictor and store the results
                results.append(float(predictor.predict(review_input)))
            # Sending reviews to our endpoint one at a time takes a while so we
            # only send a small number of reviews
            files_read += 1
            if files_read == stop:
                break
    return ground, results
ground, results = test_reviews()
from sklearn.metrics import accuracy_score
accuracy_score(ground, results)
```
As an additional test, we can try sending the `test_review` that we looked at earlier.
```
predictor.predict(test_review)
```
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back.
## Step 7 (again): Use the model for the web app
> **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console.
So far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are currently set up makes that impossible, since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role that includes access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services.
<img src="Web App Diagram.svg">
The diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return.
In the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and receive data from a SageMaker endpoint.
Lastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function.
### Setting up a Lambda function
The first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result.
#### Part A: Create an IAM Role for the Lambda function
Since we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function.
Using the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**.
In the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**.
Lastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**.
#### Part B: Create a Lambda function
Now it is time to actually create the Lambda function.
Using the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**.
On the next page you will see some information about the Lambda function you've just created. If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below.
```python
# We need to use the low-level library to interact with SageMaker since the SageMaker API
# is not available natively through Lambda.
import boto3
def lambda_handler(event, context):
    # The SageMaker runtime is what allows us to invoke the endpoint that we've created.
    runtime = boto3.Session().client('sagemaker-runtime')
    # Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given
    response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**',  # The name of the endpoint we created
                                       ContentType = 'text/plain',               # The data format that is expected
                                       Body = event['body'])                     # The actual review
    # The response is an HTTP response whose body contains the result of our inference
    result = response['Body'].read().decode('utf-8')
    return {
        'statusCode' : 200,
        'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },
        'body' : result
    }
```
Once you have copied and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below.
```
predictor.endpoint
```
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function.
### Setting up API Gateway
Now that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.
Using AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.
On the next page, make sure that **New API** is selected and give the new api a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.
Now we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.
Select the **Actions** dropdown menu and click **Create Method**. A new blank method will be created, select its dropdown menu and select **POST**, then click on the check mark beside it.
For the integration point, make sure that **Lambda Function** is selected and check the **Use Lambda Proxy integration** option. This option makes sure that the data that is sent to the API is then sent directly to the Lambda function with no processing. It also means that the return value must be a proper response object as it will also not be processed by API Gateway.
Type the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.
The last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`.
You have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**.
## Step 4: Deploying our web app
Now that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static html file which can make use of the public api you created earlier.
In the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\*\*REPLACE WITH PUBLIC API URL\*\***. Replace this string with the url that you wrote down in the last step and then save the file.
Now, if you open `index.html` on your local computer, your browser will behave as a local web server and you can use the provided site to interact with your SageMaker model.
If you'd like to go further, you can host this html file anywhere you'd like, for example using github or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!
> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill.
**TODO:** Make sure that you include the edited `index.html` file in your project submission.
Now that your web app is working, try playing around with it and see how well it works.
**Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review?
**Answer:**
I've followed all the instructions; however, the endpoint appears unresponsive. I tested the model independently, however, and it appeared to give sensible answers. For example, positive for the review:
"If you like original gut wrenching laughter you will like this movie. If you are young or old then you will love this movie, hell even my mom liked it."
And negative for:
"Encouraged by the positive comments about this film on here I was looking forward to watching this film. Bad mistake. I've seen 950+ films and this is truly one of the worst of them"
### Delete the endpoint
Remember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running so if you forget and leave it on you could end up with an unexpectedly large bill.
```
predictor.delete_endpoint()
```
# Challenge 5
In this challenge we will practice dimensionality reduction with PCA and feature selection with RFE. We will use the [Fifa 2019](https://www.kaggle.com/karangadiya/fifa19) data set, which originally contains 89 variables for more than 18 thousand players from the game FIFA 2019.
> Note: Please do not modify the names of the answer functions.
## General setup
```
from math import sqrt
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
import statsmodels.api as sm
import statsmodels.stats as st
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import RFE
from loguru import logger
# Some matplotlib settings.
# %matplotlib inline
# from IPython.core.pylabtools import figsize
# figsize(12, 8)
# sns.set()
fifa = pd.read_csv("fifa.csv")
columns_to_drop = ["Unnamed: 0", "ID", "Name", "Photo", "Nationality", "Flag",
"Club", "Club Logo", "Value", "Wage", "Special", "Preferred Foot",
"International Reputation", "Weak Foot", "Skill Moves", "Work Rate",
"Body Type", "Real Face", "Position", "Jersey Number", "Joined",
"Loaned From", "Contract Valid Until", "Height", "Weight", "LS",
"ST", "RS", "LW", "LF", "CF", "RF", "RW", "LAM", "CAM", "RAM", "LM",
"LCM", "CM", "RCM", "RM", "LWB", "LDM", "CDM", "RDM", "RWB", "LB", "LCB",
"CB", "RCB", "RB", "Release Clause"
]
try:
    fifa.drop(columns_to_drop, axis=1, inplace=True)
except KeyError:
    logger.warning("Columns already dropped")
```
## Start your analysis here
```
# check the first 5 rows
fifa.head()
# check the number of rows and columns
fifa.shape
# look at summary statistics
fifa.describe()
# check the fraction of missing values
fifa.isna().sum() / fifa.shape[0]
```
## Question 1
What fraction of the variance can be explained by the first principal component of `fifa`? Answer as a single float (between 0 and 1) rounded to three decimal places.
```
def q1():
    # instantiate PCA
    pca = PCA()
    # fit and extract the explained variance ratios
    var_exp = pca.fit(fifa.dropna()).explained_variance_ratio_
    # return the result
    return round(var_exp[0], 3)

q1()
```
## Question 2
How many principal components do we need to explain 95% of the total variance? Answer as a single integer scalar.
```
def q2():
    # instantiate PCA with 0.95
    pca = PCA(0.95)
    # reduce the dimensionality
    fifa_comp = pca.fit_transform(fifa.dropna())
    # return the result
    return fifa_comp.shape[1]

q2()
```
## Question 3
What are the coordinates (first and second principal components) of the point `x` below? The vector below is already centered. Be careful __not__ to center the vector again (for example, by calling `PCA.transform()` on it). Answer as a tuple of floats rounded to three decimal places.
```
x = [0.87747123, -1.24990363, -1.3191255, -36.7341814,
-35.55091139, -37.29814417, -28.68671182, -30.90902583,
-42.37100061, -32.17082438, -28.86315326, -22.71193348,
-38.36945867, -20.61407566, -22.72696734, -25.50360703,
2.16339005, -27.96657305, -33.46004736, -5.08943224,
-30.21994603, 3.68803348, -36.10997302, -30.86899058,
-22.69827634, -37.95847789, -22.40090313, -30.54859849,
-26.64827358, -19.28162344, -34.69783578, -34.6614351,
48.38377664, 47.60840355, 45.76793876, 44.61110193,
49.28911284
]
def q3():
    # instantiate PCA
    pca = PCA(n_components=2)
    # fit on the dataset
    pca.fit(fifa.dropna())
    # return the result
    return tuple(pca.components_.dot(x).round(3))

q3()
```
## Question 4
Run RFE with a linear regression estimator to select five variables, eliminating them one at a time. Which variables are selected? Answer as a list of variable names.
```
# drop NAs
fifa = fifa.dropna()
# define X and y
X = fifa.drop(columns='Overall')
y = fifa['Overall']

def q4():
    # instantiate the linear regression estimator
    reg = LinearRegression()
    # instantiate RFE to select five features
    rfe = RFE(reg, n_features_to_select=5)
    # fit RFE
    rfe.fit(X, y)
    # return the result
    return list(X.columns[rfe.support_])

q4()
```
```
import sys
import numpy as np # linear algebra
from scipy.stats import randint
import matplotlib.pyplot as plt  # used to plot the graphs
%matplotlib inline
from tqdm import notebook
import tensorflow as tf
from scipy import stats
from scipy.interpolate import interp1d
```
### Simulate data
```
np.random.seed(2020)
# generate Weibull distribution parameters
shape = np.random.uniform(1, 5, 1000)
scale = np.random.uniform(0.5, 2, 1000)
# the full design matrix
x=np.c_[shape,scale]
y=(np.random.weibull(shape,size=1000)*scale).reshape(-1,1)
train_x=x[:700,:]
train_y=y[:700,:]
test_x=x[700:,:]
test_y=y[700:,:]
ntrain=len(train_x)
ntest=len(test_x)
```
### g-only: this is equivalent to using pre-training under the Collaborating Network (CN) framework
```
def variables_from_scope(scope_name):
    """
    Returns a list of all trainable variables in a given scope. This is useful when
    you'd like to back-propagate only to weights in one part of the network
    (in our case, the generator or the discriminator).
    """
    return tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.TRAINABLE_VARIABLES, scope=scope_name)
# Graph parameters
intermediate_layer_size = 100
intermediate_layer_size2 = 80
# Training parameters
batch_size = 128
pre_iter= 40000
# the g function learns the cdf
def g(yq, x):
    """
    yq: quantile,
    x: input features and treatment
    """
    z1 = tf.concat([yq, x], axis=1)
    hidden_layer = tf.compat.v1.layers.dense(z1, intermediate_layer_size, kernel_initializer=tf.compat.v1.initializers.random_normal(stddev=.001), name="g1", activation=tf.compat.v1.nn.elu, reuse=None)
    hidden_layer_bn = tf.compat.v1.layers.batch_normalization(hidden_layer, name="g1bn")
    hidden_layer2 = tf.compat.v1.layers.dense(hidden_layer_bn, intermediate_layer_size2, kernel_initializer=tf.compat.v1.initializers.random_normal(stddev=.001), name="g2", activation=tf.compat.v1.nn.elu, reuse=None)
    hidden_layer2_bn = tf.compat.v1.layers.batch_normalization(hidden_layer2, name="g2bn")
    gq_logit = tf.compat.v1.layers.dense(hidden_layer2_bn, 1, kernel_initializer=tf.initializers.glorot_normal, name="g3", activation=None, reuse=None)
    gq_logit_bn = tf.keras.layers.BatchNormalization(axis=-1, momentum=.1)(gq_logit)
    return gq_logit_bn
tf.compat.v1.disable_eager_execution()
tf.compat.v1.reset_default_graph()
# Placeholders
y_ = tf.compat.v1.placeholder(tf.float32, [None, 1])
pre_y= tf.compat.v1.placeholder(tf.float32, [None, 1])
x_=tf.compat.v1.placeholder(tf.float32, [None, x.shape[1]])
q_ = tf.compat.v1.placeholder(tf.float32, [None, 1])
ylessthan_pre= tf.cast(tf.less_equal(y_,pre_y),tf.float32)
with tf.compat.v1.variable_scope("g") as scope:
    gq_logit_pre = g(pre_y, x_)
    gq = tf.sigmoid(gq_logit_pre)*.99999 + .00001

# pre-training loss
g_loss_pre = tf.compat.v1.losses.sigmoid_cross_entropy(ylessthan_pre, gq_logit_pre)
# Optimizer
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=1e-4)
pre_step= optimizer.minimize(g_loss_pre,var_list=variables_from_scope("g"))
# Initializer
initialize_all = tf.compat.v1.global_variables_initializer()
```
### Single Evaluation
```
sess = tf.compat.v1.Session()
sess.run(initialize_all)
glpre=[]
for k in notebook.tnrange(pre_iter):
    i = np.random.choice(ntrain, batch_size, replace=False)
    ytmp = train_y[i, :]
    xtmp = train_x[i, :]
    # when we do not have f initially, we use a uniform distribution to extract points from the support
    pre_ytmp = np.random.uniform(-1, 14, (batch_size, 1))
    ltmp, _ = sess.run([g_loss_pre, pre_step], feed_dict={y_: ytmp,
                                                          x_: xtmp,
                                                          pre_y: pre_ytmp})
    glpre.append(ltmp)
```
### P(Y>1|X)
```
# true survival probability
tsuv1 = 1 - stats.weibull_min.cdf(1, c=test_x[:, 0], scale=test_x[:, 1])
# cdf estimate by g
gsuv1 = 1. - sess.run(gq, feed_dict={x_: test_x,
                                     pre_y: np.repeat(1, len(test_x)).reshape(-1, 1),
                                     }).ravel()
#np.save('gsuv_est',gsuv1)
plt.figure(figsize=(5,5))
plt.plot(tsuv1,gsuv1,'.')
plt.plot([0,1],[0,1])
```
#### Test the recovery of the true CDF
```
#generate sample
np.random.seed(3421)
samps=np.random.choice(len(test_x),3)
# grid over which to evaluate the cdfs for the sampled points
xtmp = np.linspace(0, 7, 50000)
plt.figure(figsize=(20, 4))
plt.subplot(1, 3, 1)
i=samps[0]
tcdf=stats.weibull_min.cdf(x=xtmp,c=test_x[i,0],scale=test_x[i,1])
cdf=sess.run(gq ,feed_dict={x_:np.tile(test_x[i,:],(50000,1)),
pre_y:xtmp[:,None]
}).ravel()
gcdf=cdf
plt.plot(xtmp,tcdf)
plt.plot(xtmp,cdf)
plt.subplot(1,3,2)
i=samps[1]
tcdf=stats.weibull_min.cdf(x=xtmp,c=test_x[i,0],scale=test_x[i,1])
cdf=sess.run(gq ,feed_dict={x_:np.tile(test_x[i,:],(50000,1)),
pre_y:xtmp[:,None]
}).ravel()
gcdf=np.c_[gcdf,cdf]
plt.plot(xtmp,tcdf)
plt.plot(xtmp,cdf)
plt.subplot(1,3,3)
i=samps[2]
tcdf=stats.weibull_min.cdf(x=xtmp,c=test_x[i,0],scale=test_x[i,1])
cdf=sess.run(gq ,feed_dict={x_:np.tile(test_x[i,:],(50000,1)),
pre_y:xtmp[:,None]
}).ravel()
gcdf=np.c_[gcdf,cdf]
plt.plot(xtmp,tcdf)
plt.plot(xtmp,cdf)
#np.save('gcdf',gcdf)
```
### Ten replications to evaluate the hard metrics
```
# function to create a replication split
def rep_iter(x, y, frac=0.3):
    n = len(x)
    ntest = int(np.floor(frac*n))
    allidx = np.random.permutation(n)
    trainidx = allidx[ntest:]
    testidx = allidx[:ntest]
    return x[trainidx], y[trainidx], x[testidx], y[testidx]
#g
gll=[]
gcal=[]
g90=[]
gmae=[]
np.random.seed(2021)
for a in range(10):
    train_x, train_y, test_x, test_y = rep_iter(x, y)
    ntrain = len(train_x)
    ntest = len(test_x)
    sess = tf.compat.v1.Session()
    sess.run(initialize_all)
    glpre = []
    for k in notebook.tnrange(pre_iter):
        i = np.random.choice(ntrain, batch_size, replace=False)
        ytmp = train_y[i, :]
        xtmp = train_x[i, :]
        # when we do not have f initially, we use a uniform distribution to extract points from the support
        pre_ytmp = np.random.uniform(-1, 14, (batch_size, 1))
        ltmp, _ = sess.run([g_loss_pre, pre_step], feed_dict={y_: ytmp,
                                                              x_: xtmp,
                                                              pre_y: pre_ytmp})
        glpre.append(ltmp)
    ##### calculate metrics #####
    per = np.linspace(0.02, 0.98, 8)  # quantiles at which to study calibration
    # lower and upper bound
    low = np.quantile(test_y, 0.05)
    high = np.quantile(test_y, 0.95)
    itv = np.linspace(low, high, 9)
    itv = np.append(-np.infty, itv)
    itv = np.append(itv, np.infty)
    # which interval each outcome belongs to
    id = np.zeros(ntest)
    for i in range(10):
        id = id + 1*(test_y.ravel() > itv[i+1])
    id = id.astype('int')
    # estimation by g
    med_est = np.array([])
    ll_est = np.empty(ntest)
    cal_est = np.zeros_like(per)
    cover_90 = 0
    # use interpolation to recover the cdf
    xtmp = np.linspace(-1, 12, 5000)
    for i in range(ntest):
        l = itv[id[i]]
        r = itv[id[i]+1]
        # cdf estimate by g
        cdf = sess.run(gq, feed_dict={x_: np.tile(test_x[i, :], (5000, 1)),
                                      pre_y: xtmp[:, None]}).ravel()
        cdf[0] = 0
        cdf[-1] = 1
        invcdfest = interp1d(cdf, xtmp)
        cdfest = interp1d(xtmp, cdf)
        # estimate the mae
        med_est = np.append(med_est, invcdfest(0.5))
        # estimate the log-likelihood
        if r == np.inf:
            ll_est[i] = np.log(1. - cdfest(l) + 1.e-10)
        elif l == -np.inf:
            ll_est[i] = np.log(cdfest(r) + 1.e-10)
        else:
            ll_est[i] = np.log(cdfest(r) - cdfest(l) + 1.e-10)
        # estimate the calibration
        cal_est = cal_est + 1.*(test_y[i] < invcdfest(0.5 + per/2))*(test_y[i] > invcdfest(0.5 - per/2))
        # estimate 90% coverage
        r = invcdfest(0.95)
        l = invcdfest(0.05)
        cover_90 += (test_y[i] < r)*(test_y[i] > l)
    # summary
    cal_est = cal_est/ntest
    # calibration error
    gcal.append(np.abs(cal_est - per).mean())
    # log-likelihood
    gll.append(ll_est.mean())
    # 90% coverage
    g90.append(cover_90/ntest)
    # mae
    gmae.append(np.abs(stats.weibull_min.ppf(0.5, c=test_x[:, 0], scale=test_x[:, 1]) - med_est).mean())
def musd(x):
    print(np.mean(x), np.std(x))
musd(gll)
musd(gcal)
musd(g90)
musd(gmae)
```
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
print(sys.version)
##################
# Imports #
##################
# import the podaac package
import podaac.podaac as podaac
# import the podaac_utils package
import podaac.podaac_utils as utils
# import the mcc package
import podaac.mcc as mcc
#######################
# Class instantiation #
#######################
# then create an instance of the Podaac class
p = podaac.Podaac()
# then create an instance of the PodaacUtils class
u = utils.PodaacUtils()
# then create an instance of the MCC class
m = mcc.MCC()
###########################################
# Lets look at some convenience functions #
###########################################
print(u.list_all_available_extract_granule_dataset_ids())
print(u.list_all_available_extract_granule_dataset_short_names())
print(u.list_all_available_granule_search_dataset_ids())
print(u.list_all_available_granule_search_dataset_short_names())
print(u.list_available_granule_search_level2_dataset_ids())
print(u.list_available_granule_search_level2_dataset_short_names())
# Now lets take a look at using the results from above to interact with the PO.DAAC Webservices
########################
# PO.DAAC Web Services #
########################
# First lets retrieve dataset metadata
print(p.dataset_metadata(dataset_id='PODAAC-GHMG2-2PO01'))
# Lets try searching for datasets
print(p.dataset_search(keyword='modis'))
# Now retrieve dataset variables
print(p.dataset_variables(dataset_id='PODAAC-GHMDA-2PJ02'))
# Now extracting an individual granule
print(p.extract_l4_granule(dataset_id='PODAAC-AQR50-3YVAS'))
# Now retrieving granule metadata
print(p.granule_metadata(dataset_id='PODAAC-GHMG2-2PO01', granule_name='20120912-MSG02-OSDPD-L2P-MSG02_0200Z-v01.nc'))
from IPython.display import Image
from IPython.core.display import HTML
result = p.granule_preview(dataset_id='PODAAC-ASOP2-25X01')
# Additionally, we can search metadata for list of granules archived within the last 24 hours in Datacasting format.
print(p.last24hours_datacasting_granule_md(dataset_id='PODAAC-AQR50-3YVAS'))
# Now Searching for Granules
print(p.granule_search(dataset_id='PODAAC-ASOP2-25X01',bbox='0,0,180,90',start_time='2013-01-01T01:30:00Z',end_time='2014-01-01T00:00:00Z',start_index='1', pretty='True'))
######################################################
# Working with Metadata Compliance Webservices (mcc) #
######################################################
# Compliance Check a Local File
print(m.check_local_file(acdd_version='1.3', gds2_parameters='L4', file_upload='../podaac/tests/ascat_20130719_230600_metopa_35024_eps_o_250_2200_ovw.l2_subsetted_.nc', response='json'))
# Compliance Check a Remote File
print(m.check_remote_file(checkers='CF', url_upload='http://test.opendap.org/opendap/data/ncml/agg/dated/CG2006158_120000h_usfc.nc', response='json'))
# Thank you for trying out podaacpy
# That concludes the quick start. Hopefully this has been helpful in providing an overview
# of the main podaacpy features. If you have any issues with this document then please register
# them at the issue tracker - https://github.com/nasa/podaacpy/issues
# Please use labels to classify your issue.
# Thanks,
# Lewis John McGibbney
```
```
! rm visualising_the_results/*
```
# Visualising the results
In this tutorial, we demonstrate the plotting tools built into `bilby` and how to extend them. First, we run a simple injection study and return the `result` object.
```
import bilby
import matplotlib.pyplot as plt
%matplotlib inline
time_duration = 4. # time duration (seconds)
sampling_frequency = 2048. # sampling frequency (Hz)
outdir = 'visualising_the_results' # directory in which to store output
label = 'example' # identifier to apply to output files
injection_parameters = dict(
mass_1=36., # source frame (non-redshifted) primary mass (solar masses)
mass_2=29., # source frame (non-redshifted) secondary mass (solar masses)
a_1=0.4, # primary dimensionless spin magnitude
a_2=0.3, # secondary dimensionless spin magnitude
tilt_1=0.5, # polar angle between primary spin and the orbital angular momentum (radians)
tilt_2=1.0, # polar angle between secondary spin and the orbital angular momentum
phi_12=1.7, # azimuthal angle between primary and secondary spin (radians)
phi_jl=0.3, # azimuthal angle between total angular momentum and orbital angular momentum (radians)
luminosity_distance=200., # luminosity distance to source (Mpc)
iota=0.4, # inclination angle between line of sight and orbital angular momentum (radians)
phase=1.3, # phase (radians)
waveform_approximant='IMRPhenomPv2', # waveform approximant name
reference_frequency=50., # gravitational waveform reference frequency (Hz)
ra=1.375, # source right ascension (radians)
dec=-1.2108, # source declination (radians)
geocent_time=1126259642.413, # reference time at geocentre (time of coalescence or peak amplitude) (GPS seconds)
psi=2.659 # gravitational wave polarisation angle
)
# set up the waveform generator
waveform_generator = bilby.gw.waveform_generator.WaveformGenerator(
sampling_frequency=sampling_frequency, duration=time_duration,
frequency_domain_source_model=bilby.gw.source.lal_binary_black_hole,
parameters=injection_parameters)
# create the frequency domain signal
hf_signal = waveform_generator.frequency_domain_strain()
# initialise an interferometer based on LIGO Hanford, complete with simulated noise and injected signal
IFOs = [bilby.gw.detector.get_interferometer_with_fake_noise_and_injection(
'H1', injection_polarizations=hf_signal, injection_parameters=injection_parameters, duration=time_duration,
sampling_frequency=sampling_frequency, outdir=outdir)]
# first, set up all priors to be equal to a delta function at their designated value
priors = injection_parameters.copy()
# then, reset the priors on the masses and luminosity distance to conduct a search over these parameters
priors['mass_1'] = bilby.core.prior.Uniform(20, 50, 'mass_1')
priors['mass_2'] = bilby.core.prior.Uniform(20, 50, 'mass_2')
priors['luminosity_distance'] = bilby.core.prior.Uniform(100, 300, 'luminosity_distance')
# compute the likelihoods
likelihood = bilby.gw.likelihood.GravitationalWaveTransient(interferometers=IFOs, waveform_generator=waveform_generator)
result = bilby.core.sampler.run_sampler(likelihood=likelihood, priors=priors, sampler='dynesty', npoints=100,
injection_parameters=injection_parameters, outdir=outdir, label=label,
walks=5)
# display the corner plot
plt.show()
```
In running this code, we already made the first plot! In the function `bilby.detector.get_interferometer_with_fake_noise_and_injection`, the ASD, detector data, and signal are plotted together. This figure is saved under `visualising_the_results/H1_frequency_domain_data.png`. Note that `visualising_the_results` is our `outdir` where all the output of the run is stored. Let's take a quick look at that directory now:
```
!ls visualising_the_results/
```
## Corner plots
Now let's make some corner plots. You can easily generate a corner plot using `result.plot_corner()` like this:
```
result.plot_corner()
plt.show()
```
In a notebook, this figure will display. But by default the file is also saved to `visualising_the_results/example_corner.png`. If you change the label to something more descriptive, then the `example` here will of course be replaced.
You may also want to plot a subset of the parameters, or perhaps add the `injection_parameters` as lines to check whether you recovered them correctly. All this can be done through `plot_corner`. Under the hood, `plot_corner` uses
[chain consumer](https://samreay.github.io/ChainConsumer/index.html), and all the keyword arguments passed to `plot_corner` are passed through to [the `plot` function of chain consumer](https://samreay.github.io/ChainConsumer/chain_api.html#chainconsumer.plotter.Plotter.plot).
### Adding injection parameters to the plot
In the previous plot, you'll notice `bilby` added the injection parameters to the plot by default. You can switch this off by setting `truth=None` when you call `plot_corner`. Or to add different injection parameters to the plot, just pass this as a keyword argument for `truth`. In this example, we just add a line for the luminosity distance by passing a dictionary of the value we want to display.
```
result.plot_corner(truth=dict(luminosity_distance=201))
plt.show()
```
### Plot a subset of the corner plot
Or, to plot only a subset of parameters, pass a list of the names you want.
```
result.plot_corner(parameters=['mass_1', 'mass_2'], filename='{}/subset.png'.format(outdir))
plt.show()
```
Notice that we also passed the keyword argument `filename=`, which overrides the default filename and instead saves the file as `visualising_the_results/subset.png`. This is useful if you want to create lots of different plots. Let's check what the outdir looks like now
```
!ls visualising_the_results/
```
## Alternative
If you would prefer to do the plotting yourself, you can get hold of the samples and the ordering as follows and then plot with a different module. Here is an example using the [`corner`](http://corner.readthedocs.io/en/latest/) package
```
import corner
samples = result.samples
labels = result.parameter_labels
fig = corner.corner(samples, labels=labels)
plt.show()
```
## Other plots
We also include some other types of plots which may be useful. Again, these are built on chain consumer so you may find it useful to check the [documentation](https://samreay.github.io/ChainConsumer/chain_api.html#plotter-class) to see how these plots can be extended. Below, we show just one example of these.
#### Distribution plots
These plots just show the 1D histograms for each parameter
```
result.plot_marginals()
plt.show()
```
# Arbitrarily high order accurate explicit time integration methods
1. Chapter 5: ADER and DeC
1. [Section 1.1: DeC](#DeC)
1. [Section 1.2: ADER](#ADER)
## Deferred Correction (Defect correction/ Spectral deferred correction)<a id='DeC'></a>
Acronyms: DeC, DEC, DC, SDC
References: [Dutt et al. 2000](https://link.springer.com/article/10.1023/A:1022338906936), [Minion (implicit) 2003](https://projecteuclid.org/journals/communications-in-mathematical-sciences/volume-1/issue-3/Semi-implicit-spectral-deferred-correction-methods-for-ordinary-differential-equations/cms/1250880097.full), [Abgrall 2017 (for PDE)](https://hal.archives-ouvertes.fr/hal-01445543v2)
We study Abgrall's version (for notation)
Theory on slides!
```
# If you do not have numpy, matplotlib, scipy or nodepy, run this cell
!pip install numpy
# This is the basic package in python with all the numerical functions
!pip install scipy
# This package has some functions to deal with polynomials
!pip install matplotlib
# This package allows to plot
!pip install nodepy
# This package has some interesting features for RK methods
# We need a couple of packages in this chapter
import numpy as np
# This is the basic package in python with all the numerical functions
import matplotlib.pyplot as plt
# This package allows to plot
from nodepy import rk
#This package already implemented some functions for Runge Kutta and multistep methods
```
For the definition of the basis functions in time, we introduce different Lagrange polynomials and point distributions:
1. equispaced
1. Gauss--Legendre--Lobatto (GLB)
1. Gauss--Legendre (not in DeC, because the last point is not $t^{n+1}$)
So, we have the quadrature points $\lbrace t^m \rbrace_{m=0}^M$, the polynomials $\lbrace \varphi_m \rbrace_{m=0}^M$ such that $\varphi_j(t^m)=\delta_{j}^m$, and we are interested in computing
$$
\theta_r^m:=\int_{t^0}^{t^m} \varphi_r(t) dt
$$
To compute the integral we will use exact quadrature rules with Gauss--Lobatto (GLB) points, i.e., given the quadrature nodes and weights $t_q, w_q$ on the interval $[0,1]$ the integral is computed as
$$
\theta_r^m:=\int_{t^0}^{t^m} \varphi_r(t) dt = \sum_q \varphi_r(t^q(t^m-t^0)+t^0) w_q(t^m-t^0)
$$
In practice, at each timestep we have to loop over corrections $(k)$ and over subtimesteps $m$ and compute
$$
y^{m,(k)} = y^{m,(k-1)} - \left( y^{m,(k-1)} - y^{0} - \Delta t\sum_{r=0}^M \theta_r^m F(y^{r,(k-1)}) \right)=y^{0} + \Delta t\sum_{r=0}^M \theta_r^m F(y^{r,(k-1)})
$$
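As a quick, standalone sanity check of these coefficients (an illustration, not the implementation used below; it assumes equispaced nodes on $[0,1]$), one can compute $\theta_r^m$ by exact polynomial integration of the Lagrange basis and verify that the columns sum to the subtimestep lengths, since the basis functions sum to one:

```python
import numpy as np
from scipy.interpolate import lagrange

# theta_r^m = int_{t^0}^{t^m} phi_r(t) dt for M+1 equispaced nodes
# on [0,1], via exact integration of the Lagrange basis polynomials
M = 3
nodes = np.linspace(0.0, 1.0, M + 1)
theta = np.zeros((M + 1, M + 1))          # theta[r, m]
for r in range(M + 1):
    e = np.zeros(M + 1)
    e[r] = 1.0
    antider = lagrange(nodes, e).integ()  # antiderivative of phi_r
    for m in range(M + 1):
        theta[r, m] = antider(nodes[m]) - antider(nodes[0])

# since sum_r phi_r(t) = 1, each column must sum to t^m - t^0
print(theta.sum(axis=0))
```

For $M=1$ this recovers the trapezoidal weights $\theta_0^1=\theta_1^1=1/2$, so the DeC update with a single subtimestep is an iterated trapezoidal rule.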
```
from scipy.interpolate import lagrange
from numpy.polynomial.legendre import leggauss
def equispaced(order):
'''
Takes the input order and returns the vector of order equispaced points in [0,1]
and the integrals over [0,1] of the Lagrange basis functions at those points
'''
nodes= np.linspace(0,1,order)
w= np.zeros(order)
for k in range(order):
yy= np.zeros(order)
yy[k]=1.
zz=lagrange(nodes,yy)
pp=zz.integ()
w[k]=pp(1)-pp(0)
return nodes, w
def lglnodes(n,eps=10**-15):
'''
Python translation of lglnodes.m
Computes the Legendre-Gauss-Lobatto nodes, weights and the LGL Vandermonde
matrix. The LGL nodes are the zeros of (1-x^2)*P'_N(x). Useful for numerical
integration and spectral methods.
Parameters
----------
n : integer, requesting an nth-order Gauss-quadrature rule on [-1, 1]
Returns
-------
(nodes, weights) : tuple, representing the quadrature nodes and weights.
Note: (n+1) nodes and weights are returned.
Example
-------
>>> from lglnodes import *
>>> (nodes, weights) = lglnodes(3)
>>> print(str(nodes) + " " + str(weights))
[-1. -0.4472136 0.4472136 1. ] [0.16666667 0.83333333 0.83333333 0.16666667]
Notes
-----
Reference on LGL nodes and weights:
C. Canuto, M. Y. Hussaini, A. Quarteroni, T. A. Tang, "Spectral Methods
in Fluid Dynamics," Section 2.3. Springer-Verlag 1987
Written by Greg von Winckel - 04/17/2004
Contact: gregvw@chtm.unm.edu
Translated and modified into Python by Jacob Schroder - 9/15/2018
'''
w = np.zeros((n+1,))
x = np.zeros((n+1,))
xold = np.zeros((n+1,))
# The Legendre Vandermonde Matrix
P = np.zeros((n+1,n+1))
epss = eps
# Use the Chebyshev-Gauss-Lobatto nodes as the first guess
for i in range(n+1):
x[i] = -np.cos(np.pi*i / n)
# Compute P using the recursion relation
# Compute its first and second derivatives and
# update x using the Newton-Raphson method.
xold = 2.0
for i in range(100):
xold = x
P[:,0] = 1.0
P[:,1] = x
for k in range(2,n+1):
P[:,k] = ( (2*k-1)*x*P[:,k-1] - (k-1)*P[:,k-2] ) / k
x = xold - ( x*P[:,n] - P[:,n-1] )/( (n+1)*P[:,n])
if (max(abs(x - xold).flatten()) < epss ):
break
w = 2.0 / ( (n*(n+1))*(P[:,n]**2))
return x, w
def lagrange_basis(nodes,x,k):
y=np.zeros(x.size)
for ix, xi in enumerate(x):
tmp=[(xi-nodes[j])/(nodes[k]-nodes[j]) for j in range(len(nodes)) if j!=k]
y[ix]=np.prod(tmp)
return y
def get_nodes(order,nodes_type):
if nodes_type=="equispaced":
nodes,w = equispaced(order)
elif nodes_type == "gaussLegendre":
nodes,w = leggauss(order)
elif nodes_type == "gaussLobatto":
nodes, w = lglnodes(order-1,10**-15)
nodes=nodes*0.5+0.5
w = w*0.5
return nodes, w
def compute_theta_DeC(order, nodes_type):
nodes, w = get_nodes(order,nodes_type)
int_nodes, int_w = get_nodes(order,"gaussLobatto")
# generate theta coefficients
theta = np.zeros((order,order))
beta = np.zeros(order)
for m in range(order):
beta[m] = nodes[m]
nodes_m = int_nodes*(nodes[m])
w_m = int_w*(nodes[m])
for r in range(order):
theta[r,m] = sum(lagrange_basis(nodes,nodes_m,r)*w_m)
return theta, beta
def compute_RK_from_DeC(M_sub,K_corr,nodes_type):
order=M_sub+1
[theta,beta]=compute_theta_DeC(order,nodes_type)
bar_beta=beta[1:] # M_sub
bar_theta=theta[:,1:].transpose() # M_sub x (M_sub +1)
theta0= bar_theta[:,0] # M_sub x 1
bar_theta= bar_theta[:,1:] #M_sub x M_sub
A=np.zeros((M_sub*(K_corr-1)+1,M_sub*(K_corr-1)+1)) # (M_sub x K_corr +1)^2
b=np.zeros(M_sub*(K_corr-1)+1)
c=np.zeros(M_sub*(K_corr-1)+1)
c[1:M_sub+1]=bar_beta
A[1:M_sub+1,0]=bar_beta
for k in range(1,K_corr-1):
r0=1+M_sub*k
r1=1+M_sub*(k+1)
c0=1+M_sub*(k-1)
c1=1+M_sub*(k)
c[r0:r1]=bar_beta
A[r0:r1,0]=theta0
A[r0:r1,c0:c1]=bar_theta
b[0]=theta0[-1]
b[-M_sub:]=bar_theta[M_sub-1,:]
return A,b,c
## Deferred correction algorithm
def dec(func, tspan, y_0, M_sub, K_corr, distribution):
N_time=len(tspan)
dim=len(y_0)
U=np.zeros((dim, N_time))
u_p=np.zeros((dim, M_sub+1))
u_a=np.zeros((dim, M_sub+1))
rhs= np.zeros((dim,M_sub+1))
Theta, beta = compute_theta_DeC(M_sub+1,distribution)
U[:,0]=y_0
for it in range(1, N_time):
delta_t=(tspan[it]-tspan[it-1])
for m in range(M_sub+1):
u_a[:,m]=U[:,it-1]
u_p[:,m]=U[:,it-1]
for k in range(1,K_corr+1):
u_p=np.copy(u_a)
for r in range(M_sub+1):
rhs[:,r]=func(u_p[:,r])
for m in range(1,M_sub+1):
u_a[:,m]= U[:,it-1]+delta_t*sum([Theta[r,m]*rhs[:,r] for r in range(M_sub+1)])
U[:,it]=u_a[:,M_sub]
return tspan, U
import numpy as np
## Linear scalar Dahlquist's equation
def linear_scalar_flux(u,t=0,k_coef=10):
ff=np.zeros(np.shape(u))
ff[0]= -k_coef*u[0]
return ff
def linear_scalar_exact_solution(u0,t,k_coef=10):
return np.array([np.exp(-k_coef*u0[0]*t)])
def linear_scalar_jacobian(u,t=0,k_coef=10):
Jf=np.zeros((len(u),len(u)))
Jf[0,0]=-k_coef
return Jf
#nonlinear problem y'=-ky|y| +1
def nonlinear_scalar_flux(u,t=0,k_coef=10):
ff=np.zeros(np.shape(u))
ff[0]=-k_coef*abs(u[0])*u[0] +1
return ff
def nonlinear_scalar_exact_solution(u0,t,k_coef = 10):
sqrtk = np.sqrt(k_coef)
ustar = 1 / sqrtk
if u0[0] >= ustar:
uex=np.array([1./np.tanh(sqrtk * t + np.arctanh(1/sqrtk /u0[0])) / sqrtk])
elif u0[0] < 0 and t < - np.arctan(sqrtk * u0[0]) / sqrtk:
uex=np.array([np.tan(sqrtk * t + np.arctan(sqrtk * u0[0])) / sqrtk])
else:
uex=np.array([np.tanh(sqrtk * t + np.arctanh(sqrtk * u0[0])) / sqrtk])
return uex
def nonlinear_scalar_jacobian(u,t=0,k_coef=10):
Jf=np.zeros((len(u),len(u)))
Jf[0,0]=-k_coef*abs(u[0])
return Jf
# SYSTEMS
# linear systems
def linear_system2_flux(u,t=0):
d=np.zeros(len(u))
d[0]= -5*u[0] + u[1]
d[1]= 5*u[0] -u[1]
return d
def linear_system2_exact_solution(u0,t):
A=np.array([[-5,1],[5,-1]])
u_e=u0+(1-np.exp(-6*t))/6*np.dot(A,u0)
return u_e
def linear_system2_jacobian(u,t=0):
Jf=np.array([[-5,1],[5,-1]])
return Jf
linear_system2_matrix = np.array([[-5,1],[5,-1]])
def linear_system2_production_destruction(u,t=0):
p=np.zeros((len(u),len(u)))
d=np.zeros((len(u),len(u)))
p[0,1]=u[1]
d[1,0]=u[1]
p[1,0]=5*u[0]
d[0,1]=5*u[0]
return p,d
#lin system 3 x3
def linear_system3_flux(u,t=0):
d=np.zeros(len(u))
d[0]= -u[0] + 3*u[1]
d[1]= -3*u[1] + 5*u[2]
d[2]= -5*u[2]
return d
def linear_system3_exact_solution(u0,t=0):
u_e = np.zeros(len(u0))
u_e[0] = 15.0/8.0*u0[2]*(np.exp(-5*t) - 2*np.exp(-3*t)+np.exp(-t))
u_e[1] = 5.0/2.0*u0[2]*(-np.exp(-5*t) + np.exp(-3*t))
u_e[2] = u0[2]*np.exp(-5*t)
return u_e
def linear_system3_jacobian(u,t=0):
Jf=np.zeros((len(u),len(u)))
Jf[0,0]=-1.
Jf[0,1]=3
Jf[1,1] = -3
Jf[1,2] = 5
Jf[2,2] = -5
return Jf
## Nonlinear 3x3 system production destruction
def nonlinear_system3_flux(u,t=0):
ff=np.zeros(len(u))
ff[0]= -u[0]*u[1]/(u[0]+1)
ff[1]= u[0]*u[1]/(u[0]+1) -0.3*u[1]
ff[2]= 0.3*u[1]
return ff
def nonlinear_system3_production_destruction(u,t=0):
p=np.zeros((len(u),len(u)))
d=np.zeros((len(u),len(u)))
p[1,0]=u[0]*u[1]/(u[0]+1)
d[0,1]=p[1,0]
p[2,1]=0.3*u[1]
d[1,2]=p[2,1]
return p,d
# SIR Model
def SIR_flux(u,t=0,beta=3,gamma=1):
ff=np.zeros(len(u))
N=np.sum(u)
ff[0]=-beta*u[0]*u[1]/N
ff[1]=+beta*u[0]*u[1]/N - gamma*u[1]
ff[2]= gamma*u[1]
return ff
def SIR_jacobian(u,t=0,beta=3,gamma=1):
Jf=np.zeros((len(u),len(u)))
N=np.sum(u)
Jf[0,0]=-beta*u[1]/N
Jf[0,1]=-beta*u[0]/N
Jf[1,0]= beta*u[1]/N
Jf[1,1]= beta*u[0]/N - gamma
Jf[2,1] = gamma
return Jf
def SIR_production_destruction(u,t=0,beta=3,gamma=1):
p=np.zeros((len(u),len(u)))
d=np.zeros((len(u),len(u)))
N=np.sum(u)
p[1,0]=beta*u[0]*u[1]/N
d[0,1]=p[1,0]
p[2,1]=gamma*u[1]
d[1,2]=p[2,1]
return p,d
# Nonlinear_oscillator
def nonLinearOscillator_flux(u,t=0,alpha=0.):
ff=np.zeros(np.shape(u))
n=np.sqrt(np.dot(u,u))
ff[0]=-u[1]/n-alpha*u[0]/n
ff[1]=u[0]/n - alpha*u[1]/n
return ff
def nonLinearOscillator_exact_solution(u0,t):
u_ex=np.zeros(np.shape(u0))
n=np.sqrt(np.dot(u0,u0))
u_ex[0]=np.cos(t/n)*u0[0]-np.sin(t/n)*u0[1]
u_ex[1]=np.sin(t/n)*u0[0]+np.cos(t/n)*u0[1]
return u_ex
# Non linear oscillator damped
def nonLinearOscillatorDamped_flux(u,t,alpha=0.01):
ff=np.zeros(np.shape(u))
n=np.sqrt(np.dot(u,u))
ff[0]=-u[1]/n-alpha*u[0]/n
ff[1]=u[0]/n - alpha*u[1]/n
return ff
def nonLinearOscillatorDamped_exact_solution(u0,t,alpha=0.01):
u_ex=np.zeros(np.shape(u0))
n0=np.sqrt(np.dot(u0,u0))
n=n0*np.exp(-alpha*t)
u_ex[0]=n/n0*(np.cos(t/n)*u0[0]-np.sin(t/n)*u0[1])
u_ex[1]=n/n0*(np.sin(t/n)*u0[0]+np.cos(t/n)*u0[1])
return u_ex
# pendulum
def pendulum_flux(u,t=0):
ff=np.zeros(np.shape(u))
ff[0]=u[1]
ff[1]=-np.sin(u[0])
return ff
def pendulum_jacobian(u,t=0):
Jf=np.zeros((2,2))
Jf[0,1]=1.
Jf[1,0]=np.cos(u[0])
return Jf
def pendulum_entropy(u,t=0):
return np.array(0.5*u[1]**2.-np.cos(u[0]), dtype=float)
def pendulum_entropy_variables(u,t=0):
v=np.zeros(np.shape(u))
v[0]=np.sin(u[0])
v[1]=u[1]
return v
# Robertson
def Robertson_flux(u,t=0,alpha=10**4,beta=0.04, gamma=3*10**7):
ff=np.zeros(np.shape(u))
ff[0] = alpha*u[1]*u[2]-beta*u[0]
ff[1] = beta*u[0]-alpha*u[1]*u[2] - gamma*u[1]**2
ff[2] = gamma*u[1]**2
return ff
def Robertson_jacobian(u,t=0,alpha=10**4,beta=0.04, gamma=3*10**7):
Jf=np.zeros((3,3))
Jf[0,0]= -beta
Jf[0,1]= alpha*u[2]
Jf[0,2]= alpha*u[1]
Jf[1,0]= beta
Jf[1,1]= -alpha*u[2]-2*gamma*u[1]
Jf[1,2]= -alpha*u[1]
Jf[2,1] = 2*gamma*u[1]
return Jf
def Robertson_production_destruction(u,t=0,alpha=10**4,beta=0.04, gamma=3*10**7):
p=np.zeros((len(u),len(u)))
d=np.zeros((len(u),len(u)))
p[0,1]=alpha*u[1]*u[2]
d[1,0]=p[0,1]
p[1,0]=beta*u[0]
d[0,1]=p[1,0]
p[2,1]=gamma*u[1]**2
d[1,2]=p[2,1]
return p,d
# Lotka:
def lotka_flux(u,t=0,alpha=1,beta=0.2,delta=0.5,gamma=0.2):
ff=np.zeros(np.shape(u))
ff[0]=alpha*u[0]-beta*u[0]*u[1]
ff[1]=delta*beta*u[0]*u[1]-gamma*u[1]
return ff
def lotka_jacobian(u,t=0,alpha=1,beta=0.2,delta=0.5,gamma=0.2):
Jf=np.zeros((2,2))
Jf[0,0] = alpha -beta*u[1]
Jf[0,1] = -beta*u[0]
Jf[1,0] = delta*beta*u[1]
Jf[1,1] = delta*beta*u[0] -gamma
return Jf
#3 bodies problem in 2D: U=(x_1,x_2,v_1,v_2,y_1,y_2,w_1,w_2,z_1,z_2,s_1,s_2)
# where x is the 2D position of body1 and v is speed body1 sun
# y, w are position and velocity body2 earth
# z, s are position and velocity body3 mars
def threeBodies_flux(u,t=0):
m1=1.98892*10**30
m2=5.9722*10**24
m3=6.4185*10**23
G=6.67*10**(-11)
f=np.zeros(np.shape(u))
x=u[0:2]
v=u[2:4]
y=u[4:6]
w=u[6:8]
z=u[8:10]
s=u[10:12]
dxy3=np.linalg.norm(x-y)**3
dxz3=np.linalg.norm(x-z)**3
dyz3=np.linalg.norm(y-z)**3
f[0:2]=v
f[2:4]=-m2*G/dxy3*(x-y)-m3*G/dxz3*(x-z)
f[4:6]=w
f[6:8]=-m1*G/dxy3*(y-x)-m3*G/dyz3*(y-z)
f[8:10]=s
f[10:12]=-m1*G/dxz3*(z-x)-m2*G/dyz3*(z-y)
return f
class ODEproblem:
def __init__(self,name):
self.name=name
if self.name=="linear_scalar":
self.u0 = np.array([1.])
self.T_fin= 2.
self.k_coef=10
self.matrix=np.array([-self.k_coef])
elif self.name=="nonlinear_scalar":
self.k_coef=10
self.u0 = np.array([1.1/np.sqrt(self.k_coef)])
self.T_fin= 1.
elif self.name=="linear_system2":
self.u0 = np.array([0.9,0.1])
self.T_fin= 1.
self.matrix = np.array([[-5,1],[5,-1]])
elif self.name=="linear_system3":
self.u0 = np.array([0,0.,10.])
self.T_fin= 10.
elif self.name=="nonlinear_system3":
self.u0 = np.array([9.98,0.01,0.01])
self.T_fin= 30.
elif self.name=="SIR":
self.u0 = np.array([1000.,1,10**-20])
self.T_fin= 10.
elif self.name=="nonLinearOscillator":
self.u0 = np.array([1.,0.])
self.T_fin= 50
elif self.name=="nonLinearOscillatorDamped":
self.u0 = np.array([1.,0.])
self.T_fin= 50
elif self.name=="pendulum":
self.u0 = np.array([2.,0.])
self.T_fin= 50
elif self.name=="Robertson":
self.u0 = np.array([1.,10**-20,10**-20])
self.T_fin= 10.**10.
elif self.name=="lotka":
self.u0 = np.array([1.,2.])
self.T_fin= 100.
elif self.name=="threeBodies":
self.u0 = np.array([0,0,0,0,149*10**9,0,0,30*10**3,-226*10**9,0,0,-24.0*10**3])
self.T_fin= 10.**8.
else:
raise ValueError("Problem not defined")
def flux(self,u,t=0):
if self.name=="linear_scalar":
return linear_scalar_flux(u,t,self.k_coef)
elif self.name=="nonlinear_scalar":
return nonlinear_scalar_flux(u,t,self.k_coef)
elif self.name=="linear_system2":
return linear_system2_flux(u,t)
elif self.name=="linear_system3":
return linear_system3_flux(u,t)
elif self.name=="nonlinear_system3":
return nonlinear_system3_flux(u,t)
elif self.name=="SIR":
return SIR_flux(u,t)
elif self.name=="nonLinearOscillator":
return nonLinearOscillator_flux(u,t)
elif self.name=="nonLinearOscillatorDamped":
return nonLinearOscillatorDamped_flux(u,t)
elif self.name=="pendulum":
return pendulum_flux(u,t)
elif self.name=="Robertson":
return Robertson_flux(u,t)
elif self.name=="lotka":
return lotka_flux(u,t)
elif self.name=="threeBodies":
return threeBodies_flux(u,t)
else:
raise ValueError("Flux not defined for this problem")
def jacobian(self,u,t=0):
if self.name=="linear_scalar":
return linear_scalar_jacobian(u,t,self.k_coef)
elif self.name=="nonlinear_scalar":
return nonlinear_scalar_jacobian(u,t,self.k_coef)
elif self.name=="linear_system2":
return linear_system2_jacobian(u,t)
elif self.name=="linear_system3":
return linear_system3_jacobian(u,t)
elif self.name=="pendulum":
return pendulum_jacobian(u,t)
elif self.name=="SIR":
return SIR_jacobian(u,t)
elif self.name=="Robertson":
return Robertson_jacobian(u,t)
elif self.name=="lotka":
return lotka_jacobian(u,t)
else:
raise ValueError("Jacobian not defined for this problem")
def exact(self,u,t):
if self.name=="linear_scalar":
return linear_scalar_exact_solution(u,t,self.k_coef)
elif self.name=="nonlinear_scalar":
return nonlinear_scalar_exact_solution(u,t,self.k_coef)
elif self.name=="linear_system2":
return linear_system2_exact_solution(u,t)
elif self.name=="linear_system3":
return linear_system3_exact_solution(u,t)
elif self.name=="nonLinearOscillator":
return nonLinearOscillator_exact_solution(u,t)
elif self.name=="nonLinearOscillatorDamped":
return nonLinearOscillatorDamped_exact_solution(u,t)
else:
raise ValueError("Exact solution not defined for this problem")
def exact_solution_times(self,u0,tt):
exact_solution=np.zeros((len(u0),len(tt)))
for it, t in enumerate(tt):
exact_solution[:,it]=self.exact(u0,t)
return exact_solution
def prod_dest(self,u,t=0):
if self.name=="linear_system2":
return linear_system2_production_destruction(u,t)
if self.name=="nonlinear_system3":
return nonlinear_system3_production_destruction(u,t)
elif self.name=="Robertson":
return Robertson_production_destruction(u,t)
elif self.name=="SIR":
return SIR_production_destruction(u,t)
else:
raise ValueError("Prod Dest not defined for this problem")
pr=ODEproblem("threeBodies")
tt=np.linspace(0,pr.T_fin,1000)
tt,U=dec(pr.flux,tt,pr.u0,4,5,"gaussLobatto")
plt.figure()
plt.plot(U[0,:],U[1,:],'*',label="sun")
plt.plot(U[4,:],U[5,:],label="earth")
plt.plot(U[8,:],U[9,:],label="Mars")
plt.legend()
plt.show()
plt.figure()
plt.title("Distance from the original position of the sun")
plt.semilogy(tt,U[4,:]**2+U[5,:]**2,label="earth")
plt.semilogy(tt,U[8,:]**2+U[9,:]**2, label="mars")
plt.legend()
plt.show()
#Test convergence
pr=ODEproblem("linear_system2")
tt=np.linspace(0,pr.T_fin,10)
tt,uu=dec(pr.flux, tt, pr.u0, 7, 8, "equispaced")
plt.plot(tt,uu[0,:])
plt.plot(tt,uu[1,:])
plt.show()
def compute_integral_error(c,c_exact): # c is dim x times
times=np.shape(c)[1]
error=0.
for t in range(times):
error = error + np.linalg.norm(c[:,t]-c_exact[:,t],2)**2.
error = np.sqrt(error/times)
return error
NN=5
dts=[pr.T_fin/2.0**k for k in range(3,3+NN)]
errorsDeC=np.zeros(len(dts))
for order in range(2,10):
for k in range(NN):
dt0=dts[k]
tt=np.arange(0,pr.T_fin,dt0)
t2,U2=dec(pr.flux, tt, pr.u0, order-1, order, "gaussLobatto")
u_exact=pr.exact_solution_times(pr.u0,tt)
errorsDeC[k]=compute_integral_error(U2,u_exact)
plt.loglog(dts,errorsDeC,"--",label="DeC%d"%(order))
plt.loglog(dts,[dt**order*errorsDeC[2]/dts[2]**order for dt in dts],":",label="ref %d"%(order))
plt.title("DeC error convergence")
plt.legend()
#plt.savefig("convergence_DeC.pdf")
plt.show()
for order in range(2,10):
A,b,c=compute_RK_from_DeC(order-1,order,"equispaced")
rkDeC = rk.ExplicitRungeKuttaMethod(A,b)
rkDeC.name="DeC"+str(order)
rkDeC.plot_stability_region(bounds=[-5,3,-7,7])
for order in range(2,14):
A,b,c=compute_RK_from_DeC(order-1,order,"equispaced")
rkDeC = rk.ExplicitRungeKuttaMethod(A,b)
rkDeC.name="DeC"+str(order)
print(rkDeC.name+" has order "+str(rkDeC.order()))
pr=ODEproblem("lotka")
tt=np.linspace(0,pr.T_fin,150)
t2,U2=dec(pr.flux, tt, pr.u0, 1, 2, "gaussLobatto")
t8,U8=dec(pr.flux, tt, pr.u0, 7, 8, "gaussLobatto")
tt=np.linspace(0,pr.T_fin,2000)
tref,Uref=dec(pr.flux, tt, pr.u0, 4,5, "gaussLobatto")
plt.figure(figsize=(12,6))
plt.subplot(211)
plt.plot(t2,U2[0,:],label="dec2")
plt.plot(t8,U8[0,:],label="dec8")
plt.plot(tref,Uref[0,:], ":",linewidth=2,label="ref")
plt.legend()
plt.title("Prey")
plt.subplot(212)
plt.plot(t2,U2[1,:],label="dec2")
plt.plot(t8,U8[1,:],label="dec8")
plt.plot(tref,Uref[1,:],":", linewidth=2,label="ref")
plt.legend()
plt.title("Predator")
```
### Pro exercise: implement the implicit DeC presented in the slides
* You need to pass also a function of the Jacobian of the flux in input
* The Jacobian can be evaluated only once per timestep $\partial_y F(y^n)$ and used to build the matrix that must be inverted at each correction
* For every subtimestep the matrix to be inverted changes a bit ($\beta^m \Delta t$ factor in front of the Jacobian)
* One can invert these $M$ matrices only once per time step
* Solve the system at each subtimestep and iteration
$$
y^{m,(k)}-\beta^m \Delta t \partial_y F(y^0)y^{m,(k)} = y^{m,(k-1)}-\beta^m \Delta t \partial_y F(y^0)y^{m,(k-1)} - \left( y^{m,(k-1)} - y^{0} - \Delta t\sum_{r=0}^M \theta_r^m F(y^{r,(k-1)}) \right)
$$
defining $M^{m}=I-\beta^m \Delta t \partial_y F(y^0)$, we can simplify it as
$$
y^{m,(k)}=y^{m,(k-1)} - (M^m)^{-1}\left( y^{m,(k-1)} - y^{0} - \Delta t\sum_{r=0}^M \theta_r^m F(y^{r,(k-1)}) \right)
$$
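Before looking at a full implementation, a minimal scalar instance (an illustrative sketch with assumed values for $k$, $\Delta t$, and the number of corrections; it uses $M=1$ subtimestep, so $\theta_0^1=\theta_1^1=1/2$ and $\beta^1=1$) shows why the linearized solve pays off: for the Dahlquist problem $y'=-ky$ the iteration stays bounded even when $\Delta t\, k$ is large, where explicit schemes would blow up.

```python
import numpy as np

# Illustrative scalar case: y' = F(y) = -k*y, Jacobian dF = -k,
# M_sub = 1, trapezoidal thetas (1/2, 1/2), beta^1 = 1
k, dt, K_corr = 50.0, 0.1, 10      # note dt*k = 5: stiff for explicit schemes
F = lambda y: -k * y
Minv = 1.0 / (1.0 + dt * k)        # (M^1)^{-1} with M^1 = 1 - dt*beta^1*dF
y0 = 1.0
y = y0                             # initial guess y^{1,(0)} = y^0
for _ in range(K_corr):
    residual = y - y0 - dt * 0.5 * (F(y0) + F(y))
    y = y - Minv * residual
# the corrections converge to the implicit trapezoidal (Crank-Nicolson)
# value y0*(1 - dt*k/2)/(1 + dt*k/2), which is bounded for any dt*k > 0
print(y)
```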
```
def decImplicit(func,jac_stiff, tspan, y_0, M_sub, K_corr, distribution):
N_time=len(tspan)
dim=len(y_0)
U=np.zeros((dim, N_time))
u_p=np.zeros((dim, M_sub+1))
u_a=np.zeros((dim, M_sub+1))
u_help= np.zeros(dim)
rhs= np.zeros((dim,M_sub+1))
Theta, beta = compute_theta_DeC(M_sub+1,distribution)
invJac=np.zeros((M_sub+1,dim,dim))
U[:,0]=y_0
for it in range(1, N_time):
delta_t=(tspan[it]-tspan[it-1])
for m in range(M_sub+1):
u_a[:,m]=U[:,it-1]
u_p[:,m]=U[:,it-1]
SS=jac_stiff(u_p[:,0])
for m in range(1,M_sub+1):
invJac[m,:,:]=np.linalg.inv(np.eye(dim) - delta_t*beta[m]*SS)
for k in range(1,K_corr+1):
u_p=np.copy(u_a)
for r in range(M_sub+1):
rhs[:,r]=func(u_p[:,r])
for m in range(1,M_sub+1):
u_a[:,m]= u_p[:,m]+delta_t*np.matmul(invJac[m,:,:],\
(-(u_p[:,m]-u_p[:,0])/delta_t\
+sum([Theta[r,m]*rhs[:,r] for r in range(M_sub+1)])))
U[:,it]=u_a[:,M_sub]
return tspan, U
# Test on Robertson problem
pr=ODEproblem("Robertson")
Nt=100
tt = np.array([np.exp(k) for k in np.linspace(-14,np.log(pr.T_fin),Nt)])
tt,yy=decImplicit(pr.flux,pr.jacobian, tt, pr.u0, 5,6,"gaussLobatto")
plt.semilogx(tt,yy[0,:])
plt.semilogx(tt,yy[1,:]*10**4)
plt.semilogx(tt,yy[2,:])
Nt=1000
tt = np.array([np.exp(k) for k in np.linspace(-14,np.log(pr.T_fin),Nt)])
tt,yy=dec(pr.flux, tt, pr.u0, 5,6,"gaussLobatto")
plt.semilogx(tt,yy[0,:])
plt.semilogx(tt,yy[1,:]*10**4)
plt.semilogx(tt,yy[2,:])
plt.ylim([-0.05,1.05])
```
## ADER <a id='ADER'></a>
Can be interpreted as a finite element method in time solved in an iterative manner.
\begin{align*}
\def\L{\mathcal{L}}
\def\bc{\boldsymbol{c}}
\def\bbc{\underline{\mathbf{c}}}
\def\bphi{\underline{\phi}}
\newcommand{\ww}[1]{\underline{#1}}
\renewcommand{\vec}[1]{\ww{#1}}
\def\M{\underline{\underline{\mathrm{M}}}}
\def\R{\underline{\underline{\mathrm{R}}}}
\L^2(\bbc ):=& \int_{T^n} \bphi(t) \partial_t \bphi(t)^T \bbc \, dt - \int_{T^n} \bphi(t) F(\bphi(t)^T\bbc) \, dt =\\
&\bphi(t^{n+1}) \bphi(t^{n+1})^T \bbc - \bphi(t^{n}) \bc^n - \int_{T^n} \partial_t \bphi(t) \bphi(t)^T \bbc \, dt - \int_{T^n} \bphi(t) F(\bphi(t)^T\bbc) \, dt \\
&\M = \bphi(t^{n+1}) \bphi(t^{n+1})^T -\int_{T^n} \partial_t \bphi(t) \bphi(t)^T \, dt \\
& \vec{r}(\bbc) = \bphi(t^{n}) \bc^n + \int_{T^n} \bphi(t) F(\bphi(t)^T\bbc) \, dt\\
&\M \bbc = \vec{r}(\bbc)
\end{align*}
Iterative procedure to solve the problem for each time step
\begin{equation}\label{fix:point}
\bbc^{(k)}=\M^{-1}\vec{r}(\bbc^{(k-1)}),\quad k=1,\dots, K \text{ (convergence)}
\end{equation}
with $\bbc^{(0)}=\bc(t^n)$.
Reconstruction step
\begin{equation*}
\bc(t^{n+1}) = \bphi(t^{n+1})^T \bbc^{(K)}.
\end{equation*}
### What can be precomputed?
* $\M$
* $$\vec{r}(\bbc) = \bphi(t^{n}) \bc^n + \int_{T^n} \bphi(t) F(\bphi(t)^T\bbc) dt\approx \bphi(t^{n}) \bc^n + \int_{T^n} \bphi(t)\bphi(t)^T dt F(\bbc) = \bphi(t^{n}) \bc^n+ \R \bbc$$
$\R$ can be precomputed
* $$ \bc(t^{n+1}) = \bphi(t^{n+1})^T \bbc^{(K)} $$
$\bphi(t^{n+1})^T$ can be precomputed
```
from scipy.interpolate import lagrange
def lagrange_poly(nodes,k):
interpVal=np.zeros(np.size(nodes))
interpVal[k] = 1.
pp=lagrange(nodes,interpVal)
return pp
def lagrange_basis(nodes,x,k):
pp=lagrange_poly(nodes,k)
return pp(x)
def lagrange_deriv(nodes,x,k):
pp=lagrange_poly(nodes,k)
dd=pp.deriv()
return dd(x)
def get_nodes(order,nodes_type):
if nodes_type=="equispaced":
nodes,w = equispaced(order)
elif nodes_type == "gaussLegendre":
nodes,w = leggauss(order)
elif nodes_type == "gaussLobatto":
nodes, w = lglnodes(order-1,10**-15)
nodes=nodes*0.5+0.5
w = w*0.5
return nodes, w
def getADER_matrix(order, nodes_type):
nodes_poly, w_poly = get_nodes(order,nodes_type)
if nodes_type=="equispaced":
quad_order=order
nodes_quad, w = get_nodes(quad_order,"gaussLegendre")
else:
quad_order=order
nodes_quad, w = get_nodes(quad_order,nodes_type)
# generate mass matrix
M = np.zeros((order,order))
for i in range(order):
for j in range(order):
M[i,j] = lagrange_basis(nodes_poly,1.0,i) *lagrange_basis(nodes_poly,1.0,j)\
-sum([lagrange_deriv(nodes_poly,nodes_quad[q],i)\
*lagrange_basis(nodes_poly,nodes_quad[q],j)\
*w[q] for q in range(quad_order)])
# generate right-hand-side matrix
RHSmat = np.zeros((order,order))
for i in range(order):
for j in range(order):
RHSmat[i,j] = sum([lagrange_basis(nodes_poly,nodes_quad[q],i)*\
lagrange_basis(nodes_poly,nodes_quad[q],j)*\
w[q] for q in range(quad_order)])
return nodes_poly, w_poly, M, RHSmat
def ader(func, tspan, y_0, M_sub, K_corr, distribution):
N_time=len(tspan)
dim=len(y_0)
U=np.zeros((dim, N_time))
u_p=np.zeros((dim, M_sub+1))
u_a=np.zeros((dim, M_sub+1))
u_tn=np.zeros((dim, M_sub+1))
rhs= np.zeros((dim,M_sub+1))
x_poly, w_poly, ADER, RHS_mat = getADER_matrix(M_sub+1, distribution)
invader = np.linalg.inv(ADER)
evolMatrix=np.matmul(invader,RHS_mat)
reconstructionCoefficients=np.array([lagrange_basis(x_poly,1.0,i) for i in range(M_sub+1)])
U[:,0]=y_0
for it in range(1, N_time):
delta_t=(tspan[it]-tspan[it-1])
for m in range(M_sub+1):
u_a[:,m]=U[:,it-1]
u_p[:,m]=U[:,it-1]
u_tn[:,m]=U[:,it-1]
for k in range(1,K_corr+1):
u_p=np.copy(u_a)
for r in range(M_sub+1):
rhs[:,r]=func(u_p[:,r])
for d in range(dim):
u_a[d,:] = u_tn[d,:] + delta_t*np.matmul(evolMatrix,rhs[d,:])
U[:,it] = np.matmul(u_a,reconstructionCoefficients)
return tspan, U
pr=ODEproblem("threeBodies")
tt=np.linspace(0,pr.T_fin,1000)
tt,U=ader(pr.flux,tt,pr.u0,4,5,"gaussLegendre")
plt.figure()
plt.plot(U[0,:],U[1,:],'*',label="sun")
plt.plot(U[4,:],U[5,:],label="earth")
plt.plot(U[8,:],U[9,:],label="Mars")
plt.legend()
plt.show()
plt.figure()
plt.title("Distance from the original position of the sun")
plt.semilogy(tt,U[4,:]**2+U[5,:]**2,label="earth")
plt.semilogy(tt,U[8,:]**2+U[9,:]**2, label="mars")
plt.legend()
plt.show()
#Test convergence
pr=ODEproblem("linear_system2")
tt=np.linspace(0,pr.T_fin,10)
tt,uu=ader(pr.flux, tt, pr.u0, 7, 8, "equispaced")
plt.plot(tt,uu[0,:])
plt.plot(tt,uu[1,:])
plt.show()
def compute_integral_error(c,c_exact): # c is dim x times
times=np.shape(c)[1]
error=0.
for t in range(times):
error = error + np.linalg.norm(c[:,t]-c_exact[:,t],2)**2.
error = np.sqrt(error/times)
return error
NN=5
dts=[pr.T_fin/2.0**k for k in range(3,3+NN)]
errorsDeC=np.zeros(len(dts))
for order in range(2,8):
for k in range(NN):
dt0=dts[k]
tt=np.arange(0,pr.T_fin,dt0)
t2,U2=ader(pr.flux, tt, pr.u0, order-1, order, "gaussLobatto")
u_exact=pr.exact_solution_times(pr.u0,tt)
errorsDeC[k]=compute_integral_error(U2,u_exact)
plt.loglog(dts,errorsDeC,"--",label="ADER%d"%(order))
plt.loglog(dts,[dt**order*errorsDeC[2]/dts[2]**order for dt in dts],":",label="ref %d"%(order))
plt.title("ADER error convergence")
plt.legend()
#plt.savefig("convergence_ADER.pdf")
plt.show()
pr=ODEproblem("lotka")
tt=np.linspace(0,pr.T_fin,150)
t2,U2=ader(pr.flux, tt, pr.u0, 1, 2, "gaussLobatto")
t8,U8=ader(pr.flux, tt, pr.u0, 7, 8, "gaussLobatto")
tt=np.linspace(0,pr.T_fin,2000)
tref,Uref=dec(pr.flux, tt, pr.u0, 4,5, "gaussLobatto")
plt.figure(figsize=(12,6))
plt.subplot(211)
plt.plot(t2,U2[0,:],label="ADER2")
plt.plot(t8,U8[0,:],label="ADER8")
plt.plot(tref,Uref[0,:], ":",linewidth=2,label="ref")
plt.legend()
plt.title("Prey")
plt.subplot(212)
plt.plot(t2,U2[1,:],label="ADER2")
plt.plot(t8,U8[1,:],label="ADER8")
plt.plot(tref,Uref[1,:],":", linewidth=2,label="ref")
plt.legend()
plt.title("Predator")
```
### Pro exercise: implicit ADER
Using the fact that ADER can be written into DeC, try to make ADER implicit by changing only the definition of $\mathcal{L}^1$
* Write the formulation and the update formula
* Implement it, adding (as for the DeC) an extra input for the Jacobian of the flux
### Pro exercise: ADER as RK
How can you write the ADER scheme into a RK setting?
In the end the scheme just computes some coefficients in a more elaborate way and uses them explicitly, so it should be possible to write it down as a Runge-Kutta method.
### A few notes on stability
For a fixed order of accuracy, the stability region of the ADER method turns out to be the same for every point distribution, and it coincides with the stability region of the DeC method of the same order of accuracy.
This can be shown numerically (some plots are collected below), but no analytical proof is available yet.
**Stability for ADER and DeC methods with $p$ subtimesteps**
| ADER | ADER vs DeC |
| ----------- | ----------- |
|  |  |
# Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
```
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
```
## Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership, and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
```
rides[:24*10].plot(x='dteday', y='cnt')
```
### Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.
```
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
```
### Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
```
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
```
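Because the means and standard deviations are stored in `scaled_features`, predictions can later be converted back to the original units. A minimal roundtrip check on synthetic numbers (not the bike data itself):

```python
import numpy as np

raw = np.array([12.0, 40.0, 7.0, 25.0])
mean, std = raw.mean(), raw.std()

scaled = (raw - mean) / std      # forward: standardize, as in the loop above
recovered = scaled * std + mean  # backward: undo the scaling for predictions
```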
### Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
```
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
```
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
```
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
```
## Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression: the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, taking a threshold into account, is called an activation function.

We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.
> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.
2. Implement the forward pass in the `train` method.
3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.
4. Implement the forward pass in the `run` method.
```
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function =
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
hidden_inputs = # signals into hidden layer
hidden_outputs = # signals from hidden layer
# TODO: Output layer
final_inputs = # signals into final output layer
final_outputs = # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
output_errors = # Output layer error is the difference between desired target and actual output.
# TODO: Backpropagated error
hidden_errors = # errors propagated to the hidden layer
hidden_grad = # hidden layer gradients
# TODO: Update the weights
self.weights_hidden_to_output += # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += # update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
hidden_inputs = # signals into hidden layer
hidden_outputs = # signals from hidden layer
# TODO: Output layer
final_inputs = # signals into final output layer
final_outputs = # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
```
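One possible way to fill in the sigmoid TODO (a solution sketch, not the project's official answer; inside `__init__` you would then set `self.activation_function = sigmoid`):

```python
import numpy as np

def sigmoid(x):
    # squashes any real input into the interval (0, 1)
    return 1 / (1 + np.exp(-x))
```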
## Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
### Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
### Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
### Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
```
import sys
### Set the hyperparameters here ###
epochs = 100
learning_rate = 0.1
hidden_nodes = 2
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.loc[batch].values,
train_targets.loc[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(top=0.5)
```
## Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
```
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
```
## Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
> **Note:** You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
#### Your answer below
## Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
```
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
```
# Data Retrieving and Pre-processing
**Importing Libraries**
```
# ALL THE IMPORTS NECESSARY
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from geopy.distance import great_circle as vc
import math as Math
```
**Retrieving Data**
```
data = pd.read_csv('/content/drive/My Drive/ML_2020/Project code/1920-2012-data.csv', index_col=None, names=['Year', 'Month', 'Day', 'Hour', 'HurricaneNum', 'Name', 'Lat', 'Long', 'WindSpeed', 'Pressure', 'NullCol'])
# Preview the first 5 rows of data
print(data.head())
# Average the wind speed of each hurricane (name) in a particular year
data['unique-key'] = data['Year'].map(str) + '-' + data['HurricaneNum'].map(str)
windSpeedAvg = data.groupby('unique-key', as_index=False)['WindSpeed'].mean()
windSpeedAvg['unique-key'] = windSpeedAvg['unique-key'].str.split('-', n=1, expand=True)[0] # keep only the year part of the key
print(windSpeedAvg[:12])
# Delete the columns of information that we are not using so far
data.drop(['Name','Month', 'Day', 'Hour', 'HurricaneNum', 'Lat', 'Long', 'Pressure' ,'NullCol','unique-key'], axis = 1, inplace = True)
data['Year']=windSpeedAvg['unique-key']
data['WindSpeed'] = windSpeedAvg['WindSpeed']
# Preview the first 5 rows of data after delete unnecessary columns
print(windSpeedAvg.shape)
data = data.dropna()
data = data.reset_index(drop=True)
print(data.tail())
print(data.shape)
```
**Pre-Processing Data**
```
list_of_wind_speed = data['WindSpeed'].to_list()
list_of_wind_speed = [i * 1.5 for i in list_of_wind_speed] # convert knots to mph (the notebook uses a factor of 1.5; note that 1 knot is approximately 1.15 mph)
#print("wind Speed : ",list_of_wind_speed[:5])
hurricane_type = []
## Categorized hurricane into 5 different types
for i in list_of_wind_speed:
if i>=74 and i<=95:
hurricane_type.append(1)
elif i>95 and i<=110:
hurricane_type.append(2)
elif i>110 and i<=129:
hurricane_type.append(3)
elif i>129 and i<=156:
hurricane_type.append(4)
elif i>156:
hurricane_type.append(5)
elif i<74:
hurricane_type.append(0)
else:
hurricane_type.append(-1)
```
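The if/elif chain above follows the Saffir-Simpson wind scale in mph. For readability and testing, the same thresholds can be wrapped in a small helper (a sketch using exactly the cutoffs from the loop; `categorize` is a name introduced here):

```python
def categorize(mph):
    # map a wind speed in mph to a hurricane category (0 = below hurricane strength)
    if mph < 74:
        return 0
    if mph <= 95:
        return 1
    if mph <= 110:
        return 2
    if mph <= 129:
        return 3
    if mph <= 156:
        return 4
    return 5
```

Then `hurricane_type = [categorize(v) for v in list_of_wind_speed]` gives the same result as the loop.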
**Pre Processing Con.**
```
data['WindSpeed'] = list_of_wind_speed # assign value of wind speed in mph unit
#Add new column in data frame
data['HurricaneType'] = hurricane_type
print("After adding new column :\n",data.head())
data = data[data['HurricaneType'] != 0] # drop all non-hurricane entries (type 0)
#print(data[data['HurricaneType'] == -1])
data = data.reset_index(drop=True)
print("After removing non hurricane entries :\n",data.head())
print("Shape of data : ",data.shape)
temp =data[data['Year'] == str(2000)]
print(len(temp))
print(temp[:5])
print("len of : ",len(temp[temp['HurricaneType'] ==1 ]))
print("Unique year : ",pd.unique(data['Year']))
print("Length of Unique year : ",len(pd.unique(data['Year'])))
```
**Pre Processing Con.**
```
hurricane_no =[]
unique_year = pd.unique(data['Year'])
for i in unique_year:
for j in range(5):
number = data[data['Year']==str(i)]
hurricane_no.append([i,j+1,len(number[number['HurricaneType']==j+1])])
```
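The nested loops can also be written as a pandas groupby; a sketch on hypothetical toy data with the same `Year`/`HurricaneType` columns:

```python
import pandas as pd

toy = pd.DataFrame({"Year": ["2000", "2000", "2000", "2001"],
                    "HurricaneType": [1, 1, 3, 2]})

# count rows per (Year, HurricaneType) pair in one pass
freq = (toy.groupby(["Year", "HurricaneType"])
           .size()
           .reset_index(name="HurricaneFreq"))
```

Unlike the loop, this drops (year, type) pairs with zero hurricanes; reindexing over all years and types 1 to 5 would restore them.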
**Store the data into csv**
```
df = pd.DataFrame(hurricane_no, columns = ['Year', 'HurricaneType','HurricaneFreq'])
print(df)
df.to_csv(r'/content/drive/My Drive/ML_2020/Project code/final_data.csv')
```
# Model
```
# import required libraries
import pandas as pd
import numpy as np
import scipy.linalg
from mpl_toolkits import mplot3d
import matplotlib.pyplot as plt
%matplotlib inline
from numpy.linalg import inv
from numpy.linalg import pinv
import math
# fetch data from file
data = pd.read_csv('/content/drive/My Drive/Machine Learning/Hurricane_project/final_data.csv', usecols=['Year', 'HurricaneType','HurricaneFreq'])
# convert csv data into list for easy computation
list_year = data['Year'].to_list()
list_HurricaneType = data['HurricaneType'].to_list()
list_HurricaneFreq = data['HurricaneFreq'].to_list()
X= []
for i in range(0,len(list_year)):
X.append([list_year[i],list_HurricaneType[i]])
print(X)
X = np.array(X)
Y = list_HurricaneFreq
# divide data in training and testing set (80-20)
size = int(len(X) * 0.8)
X_train = X[:size]
X_test = X[size:]
Y_train = Y[:size]
Y_test = Y[size:]
# for building feature matrix of any polynomial degree
def multiplication(x1=0, x1_time=0, x2=0, x2_time=0):
out_x1 = x1
out_x2 = x2
for i in range(x1_time-1):
out_x1 = np.multiply(x1, out_x1)
for j in range(x2_time-1):
out_x2 = np.multiply(x2, out_x2)
if x1_time==0:
return out_x2
elif x2_time==0:
return out_x1
else:
return np.multiply(out_x1, out_x2)
## Create Phi polynomial matrix of any degree for 2 feature
def poly_features(X, K):
# X: inputs of size N x 2 (two features: year and hurricane type)
# K: degree of the polynomial
# computes the feature matrix Phi of size N x (K+1)(K+2)/2
X = np.array(X)
x1 = X[:,0]
x2 = X[:,1]
N = X.shape[0]
col = sum(range(K+2))
#initialize Phi
phi = np.ones((N, col))
cnt=1
for k in range(1, K+1):
for i in range(k+1):
phi[:, cnt] = multiplication(x1=x1, x1_time=k-i, x2=x2, x2_time=i) # for any degree polynomial
cnt += 1
return phi
K = 5 # Define the degree of the polynomial we wish to fit
Phi = poly_features(X_train, K) # N x (K+1) feature matrix
theta_ml = nonlinear_features_maximum_likelihood(Phi, Y_train) # maximum likelihood estimator (the function is defined in the MLE section below)
# feature matrix for test inputs
Phi_test = poly_features(X_test, K)
y_pred = Phi_test @ theta_ml # predicted y-values
plt.figure()
ax = plt.axes(projection='3d')
# plot data for visualization
plt.plot(X[:,0], X[:,1], Y, '+')
plt.xlabel("Year")
plt.ylabel("Hurricane Type")
plt.legend(["Data"])
```
# RMSE
```
# Root Mean Square Error
def RMSE(y, ypred):
diff_sqr = pow(y-ypred, 2)
rmse = math.sqrt(1/len(y) * sum(diff_sqr)) ## root of the mean squared error between real and predicted values
return rmse
## training loss
K_max = 20 # max polynomial degree
rmse_train = np.zeros((K_max+1,))
for k in range(1, K_max+1):
Phi = poly_features(X_train, k) # N x (K+1) feature matrix
theta_ml = nonlinear_features_maximum_likelihood(Phi, Y_train) # maximum likelihood estimator
y_pred = Phi @ theta_ml
rmse_train[k] = RMSE(Y_train, y_pred) # RMSE for different degree polynomial
plt.figure()
plt.plot(rmse_train)
plt.xlabel("degree of polynomial")
plt.ylabel("RMSE");
plt.xlim(1, K_max+1)
plt.ylim(min(rmse_train[1:]), max(rmse_train))
K_max = 20 # max polynomial degree
rmse_train = np.zeros((K_max+1,)) # initialize rmse for train and test data set
rmse_test = np.zeros((K_max+1,))
for k in range(K_max+1):
# feature matrix
Phi = poly_features(X_train, k) # N x (K+1) feature matrix
# maximum likelihood estimate
theta_ml = nonlinear_features_maximum_likelihood(Phi, Y_train)
# predict y-values of training set
ypred_train = Phi @ theta_ml
# RMSE on training set
rmse_train[k] = RMSE(Y_train, ypred_train)
#--------------------------------------------------------------
# feature matrix for test inputs
Phi_test = poly_features(X_test, k)
# prediction (test set)
ypred_test = Phi_test @ theta_ml
# RMSE on test set
rmse_test[k] = RMSE(Y_test, ypred_test)
plt.figure()
plt.semilogy(rmse_train) # this plots the RMSE on a logarithmic scale
plt.semilogy(rmse_test) # this plots the RMSE on a logarithmic scale
plt.xlabel("degree of polynomial")
plt.ylabel("RMSE")
plt.legend(["training set", "test set"])
```
# MLE
```
# theta_ml = (phi_T * phi)-1 phi_T*y
from numpy.linalg import pinv
def nonlinear_features_maximum_likelihood(Phi, y):
# Phi: features matrix for training inputs. Size of N x D
# y: training targets. Size of N by 1
# returns: maximum likelihood estimator theta_ml. Size of D x 1
kappa = 1e-08 # 'jitter' term; good for numerical stability
D = Phi.shape[1]
# maximum likelihood estimate
X_transpose = Phi.T
X_tran_X = np.matmul(X_transpose, Phi)
X_tran_X_inv = pinv(X_tran_X)
X_tran_X_inv_X_tran = np.matmul(X_tran_X_inv, X_transpose)
theta_ml = np.matmul(X_tran_X_inv_X_tran, y)
return theta_ml
```
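Since the pseudo-inverse of $\Phi^T\Phi$ squares the condition number of $\Phi$, the estimator can be cross-checked against `np.linalg.lstsq`, which solves the same least-squares problem directly. A sketch on synthetic data (independent of the notebook's variables):

```python
import numpy as np
from numpy.linalg import pinv

rng = np.random.default_rng(0)
Phi = rng.normal(size=(50, 4))   # well-conditioned toy feature matrix
y = rng.normal(size=50)

theta_normal = pinv(Phi.T @ Phi) @ Phi.T @ y           # normal-equations route, as above
theta_lstsq, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # direct least-squares solve
```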
# MAP
```
def map_estimate_poly(Phi, y, sigma, alpha):
# Phi: training inputs, Size of N x D
# y: training targets, Size of D x 1
# sigma: standard deviation of the noise
# alpha: standard deviation of the prior on the parameters
# returns: MAP estimate theta_map, Size of D x 1
D = Phi.shape[1]
# maximum likelihood estimate
X_T = Phi.T
X_T_X = np.matmul(X_T, Phi)
# regularization term: (sigma^2/alpha^2) * I, from the isotropic Gaussian prior
reg = (sigma**2 / alpha**2) * np.eye(D)
X_T_X_reg = X_T_X + reg
X_T_X_reg_inv = pinv(X_T_X_reg)
X_T_X_reg_inv_X_T = np.matmul(X_T_X_reg_inv, X_T)
theta_map = np.matmul(X_T_X_reg_inv_X_T, y)
return theta_map
sigma = 1.0 # noise standard deviation
alpha = 1.0 # standard deviation of the parameter prior
N=20
# get the MAP estimate
K = 8 # polynomial degree
# feature matrix
Phi = poly_features(X_train, K)
theta_map = map_estimate_poly(Phi, Y_train, sigma, alpha)
# maximum likelihood estimate
theta_ml = nonlinear_features_maximum_likelihood(Phi, Y_train)
Phi_test = poly_features(X_test, K)
y_pred_map = Phi_test @ theta_map
y_pred_mle = Phi_test @ theta_ml
plt.figure()
ax = plt.axes(projection='3d')
plt.plot(X[:,0],X[:,1], Y, '+')
plt.plot(X_test[:,0],X_test[:,1], y_pred_map)
plt.plot(X_test[:,0],X_test[:,1], Y_test)
plt.plot(X_test[:,0],X_test[:,1], y_pred_mle)
# plt.xlim(-5, 5)
# plt.ylim(-3.5, 1)
plt.legend(["data", "map prediction", "ground truth function", "maximum likelihood"])
## EDIT THIS CELL
K_max = 20 # this is the maximum degree of polynomial we will consider
assert(K_max <= N) # this is the latest point when we'll run into numerical problems
rmse_mle = np.zeros((K_max+1,))
rmse_map = np.zeros((K_max+1,))
for k in range(K_max+1):
# rmse_mle[k] = -1 ## Compute the maximum likelihood estimator, compute the test-set
# feature matrix for test inputs
Phi = poly_features(X_train, k) # N x (K+1) feature matrix
# maximum likelihood estimate
theta_ml = nonlinear_features_maximum_likelihood(Phi, Y_train)
# prediction (test set)
Phi_ml = poly_features(X_test, k)
ypred_test_ml = Phi_ml @ theta_ml
# RMSE on test set for MLE
rmse_mle[k] = RMSE(Y_test, ypred_test_ml)
#--------------------------------------------------------------------------------
Phi = poly_features(X_train, k)
theta_map = map_estimate_poly(Phi, Y_train, sigma, alpha)
Phi_map = poly_features(X_test, k)
ypred_test_map = Phi_map @ theta_map
# RMSE on test set for MAP
rmse_map[k] = RMSE(Y_test, ypred_test_map) # RMSE on test set for MAP
plt.figure()
plt.semilogy(rmse_mle) # this plots the RMSE on a logarithmic scale
plt.semilogy(rmse_map) # this plots the RMSE on a logarithmic scale
plt.xlabel("degree of polynomial")
plt.ylabel("RMSE")
plt.legend(["Maximum likelihood", "MAP"])
```
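For an isotropic Gaussian prior, the MAP estimate is ridge regression with penalty $\lambda = \sigma^2/\alpha^2$, i.e. it solves $(\Phi^T\Phi + \lambda I)\theta = \Phi^T y$ (note the identity matrix in the regularizer). A self-contained check of this closed form on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
Phi = rng.normal(size=(40, 5))
y = rng.normal(size=40)
sigma, alpha = 1.0, 1.0
lam = sigma**2 / alpha**2

# ridge / MAP closed form: solve (Phi^T Phi + lam*I) theta = Phi^T y
theta_map = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
```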
```
import numpy as np
import sklearn
import os
import pandas as pd
import scipy
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import random
from torchvision import datasets, transforms
import copy
#!pip install line_profiler
os.chdir("/content/drive/MyDrive/Winter_Research")
```
### Load Data
```
master_df = pd.read_csv("Sentinel2_Traffic/Traffic_Data/5_state_traffic.csv")
master_df = master_df.set_index("Unnamed: 0")
CA_x, CA_y = [], []
KS_x, KS_y = [], []
MT_x, MT_y = [], []
TX_x, TX_y = [], []
OH_x, OH_y = [], []
states = {"CA" : [CA_x, CA_y, "Roi_1"],
"KS" : [KS_x, KS_y, "Roi_2"],
"MT" : [MT_x, MT_y, "Roi_3"],
"TX" : [TX_x, TX_y, "Roi_4"],
"OH" : [OH_x, OH_y, "Roi_5"]}
for st in ["CA", "KS", "MT", "TX", "OH"]:
path = "Rois/" + states[st][2] + "/greedy_a/"
imgs = os.listdir(path)
for img in imgs:
date = img.split('.')[0]
photo = np.loadtxt(path + img).reshape(-1, 7, 3)
if photo[pd.isnull(photo)].shape[0] == 0:
print("image has no missing values, rows:", photo.shape[0])
if st == "CA" and photo.shape[0] != 72264:
continue
if st == "KS" and photo.shape[0] != 69071:
continue
if st == "MT" and photo.shape[0] != 72099:
continue
if st == "TX" and photo.shape[0] != 71764:
continue
if st == "OH" and photo.shape[0] != 62827:
continue
if date in list(master_df.index):
if not pd.isna(master_df.loc[date][st]):
states[st][0].append(photo)
states[st][1].append(master_df.loc[date][st])
len(states['CA'][0])
states
for s in ["CA", "KS", "MT", "TX", "OH"]:
for i in range(len(states[s][0])):
states[s][0][i] = states[s][0][i][:8955]
def load(states, mean_bal_x=True, mean_bal_y=True):
img_st = []
y = []
for s in states:
val = np.array(states[s][0])
if mean_bal_x:
img_st.append((val - np.mean(val, axis=0)) / np.mean(val, axis=0))
else:
img_st.append(val)
for i in states[s][1]:
if mean_bal_y:
y.append((i - np.mean(states[s][1])) / np.mean(states[s][1]))
else:
y.append(i)
X = np.concatenate(img_st)
return X, y
X, y = load(states, mean_bal_x=False, mean_bal_y=False)
print(len(X), len(y))
def load_some(states):
img_st = []
y = []
for s in states:
if s == "MT":
continue
img_st.append(np.array(states[s][0]))
for i in states[s][1]:
y.append(i)
X = np.concatenate(img_st)
return np.array(X), np.array(y)
def load_MT(states):
img_st = np.array(states["MT"][0])
y_test = []
for i in states["MT"][1]:
y_test.append(i)
return img_st, np.array(y_test)
def load_some_augment(X, y):
new_imgs = []
new_y = []
for i in range(X.shape[0]):
a = random.randint(0, X.shape[0] - 1)
b = random.randint(0, X.shape[0] - 1)
new_imgs.append(mush(X[a], X[b]))
new_y.append(y[a] + y[b])
return np.array(new_imgs), np.array(new_y)
def mush(img_a, img_b):
new_img = np.zeros((img_a.shape[0] + img_b.shape[0], 7, 3))
buffer = int((img_a.shape[0] + 0.5) // 8)
# print(buffer)
for i in range(0, img_a.shape[0]*2, buffer*2):
# print(i)
# print(img_a[i // 2: i // 2 + buffer, :, :].shape)
if (i // 2) + buffer > img_a.shape[0]:
buffer = img_a.shape[0] - (i // 2)
new_img[i: i + buffer, :, :] = img_a[i // 2: i // 2 + buffer, :, :]
new_img[i + buffer: i + 2 * buffer, :, :] = img_b[i // 2: i // 2 + buffer, :, :]
return new_img
#X, y = load_some_augment(X, y)
# X_test, y_test = load_MT(states)
# X_test, y_test = augment(X_test)
# y_test
# X_test = np.concatenate((X_test, X_test), axis=1)
# y_test = y_test + y_test
def augment(X, y):
new_imgs = []
new_y = []
for i in range(X.shape[0]):
new_y.extend([y[i]]*4)
#OG
#new_imgs.append(X[i]) #1
#Chunk Half
chunk1 = X[i][:X[i].shape[0] // 3, :, :]
chunk2 = X[i][X[i].shape[0] // 3 : 2 * X[i].shape[0] // 3, :, :]
chunk3 = X[i][2 * X[i].shape[0] // 3 :, :, :]
chunks = {0 : chunk1, 1 : chunk2, 2 : chunk3}
# for order in [(0, 1, 2), (0, 2, 1)]: #, (1, 0, 2), (1, 2, 0), (2, 1, 0), (2, 0, 1)
# new_img = np.zeros(X[i].shape)
# new_img[:X[i].shape[0] // 3, :, :] = chunks[order[0]]
# new_img[X[i].shape[0] // 3 : 2 * X[i].shape[0] // 3, :, :] = chunks[order[1]]
# new_img[2 * X[i].shape[0] // 3 :, :, :] = chunks[order[2]]
new_img = X[i]
new_imgs.append(new_img)
new_imgs.append(np.flip(new_img, axis=0))
new_imgs.append(np.flip(new_img, axis=1))
new_imgs.append(np.flip(np.flip(new_img, axis=0), axis=1))
return np.array(new_imgs), np.array(new_y)
# Can't augment before split
# X, y = load_some(states)
# X, y = augment(X, y)
# print(X.shape, y.shape)
# y_baseline = np.loadtxt("Baseline_Y.csv", delimiter=',')
print(torch.cuda.device_count())
cuda0 = torch.device('cuda:0')
#Train, test, val, split
# 41
#Just MT version
X_train_t, X_test, y_train_t, y_test = sklearn.model_selection.train_test_split(X, y, test_size=0.2, random_state=41)
# X_train_t = X
# y_train_t = y
X_train, X_val, y_train, y_val = sklearn.model_selection.train_test_split(X_train_t, y_train_t, test_size=0.1, random_state=41)
X_train, y_train = augment(X_train, y_train)
#To tensors
X_train = torch.as_tensor(X_train, device=cuda0, dtype=torch.float)
X_test = torch.as_tensor(X_test, device=cuda0, dtype=torch.float)
X_val = torch.as_tensor(X_val, device=cuda0, dtype=torch.float)
y_train = torch.as_tensor(y_train, device=cuda0, dtype=torch.float)
y_val = torch.as_tensor(y_val, device=cuda0, dtype=torch.float)
y_test = torch.as_tensor(y_test, device=cuda0, dtype=torch.float)
#Reshape y
y_train = y_train.reshape(y_train.shape[0], 1)
y_test = y_test.reshape(y_test.shape[0], 1)
y_val = y_val.reshape(y_val.shape[0], 1)
print(X_train.shape, y_train.shape, X_val.shape, y_val.shape, X_test.shape, y_test.shape)
X_train = X_train.permute(0, 3, 1, 2)
X_val = X_val.permute(0, 3, 1, 2)
X_test = X_test.permute(0, 3, 1, 2)
```
# PyTorch Model
```
# del model  # uncomment to free the old model when re-running this cell
X_train.shape
# OG 3 ==> 10, reg layer, 10 ==> 10, flatten, ==> 100, 100==> 50, 50 ==> 1
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3, stride=1, padding=1)
self.reg = nn.BatchNorm2d(10)
self.relu1 = nn.ReLU()
#self.reg = nn.Dropout(p=0.8)
self.conv2 = nn.Conv2d(in_channels=10, out_channels=3, kernel_size=3, stride=1, padding=1)
# self.reg = nn.BatchNorm2d(3)
self.relu2 = nn.ReLU()
# self.pool = nn.MaxPool2d(kernel_size=2)
# self.conv3 = nn.Conv2d(in_channels=20, out_channels=10, kernel_size=3, stride=1, padding=1)
# self.reg = nn.BatchNorm2d(10)
# self.relu3 = nn.ReLU()
# self.conv4 = nn.Conv2d(in_channels=50, out_channels=10, kernel_size=3, stride=1, padding=1)
# self.relu4 = nn.ReLU()
# self.conv5 = nn.Conv2d(in_channels=10, out_channels=100, kernel_size=3, stride=1, padding=1)
# self.relu5 = nn.ReLU()
self.fc1 = nn.Linear(in_features=(125370 // 2)*3, out_features=100) # 100
self.relu6 = nn.ReLU()
self.fc2 = nn.Linear(in_features=100, out_features=50) #100 -> 50
self.relu7 = nn.ReLU()
self.fc3 = nn.Linear(in_features=50, out_features=1)
# self.relu8 = nn.ReLU()
# self.fc4 = nn.Linear(in_features=20, out_features=1)
def forward(self, input):
output = self.conv1(input)
output = self.relu1(output)
output = self.reg(output)
output = self.conv2(output)
output = self.relu2(output)
# output = self.conv3(output)
# output = self.relu3(output)
# output = self.pool(output)
# output = self.conv3(output)
# output = self.relu3(output)
# output = self.conv4(output)
# output = self.relu4(output)
# output = self.conv4(output)
# output = self.relu4(output)
#print(output.shape)
output = output.reshape(-1, (125370 // 2)*3)
#print(output.shape)
output = self.fc1(output)
output = self.relu6(output)
#print(output.shape)
output = self.fc2(output)
output = self.relu7(output)
output = self.fc3(output)
# output = self.relu8(output)
# output = self.fc4(output)
#print(output.shape)
return output
model = Net()
model = model.cuda()
torch.cuda.empty_cache()
X_train.shape
batches_x = []
batches_y = []
batch_size = 10
for i in range(0, X_train.shape[0], batch_size):
batches_x.append(X_train[i:i+batch_size])
batches_y.append(y_train[i:i+batch_size])
batches_x[0].shape
# del optimizer  # uncomment these to free GPU memory when re-running
# del criterion
# del model
torch.cuda.empty_cache()
criterion = nn.MSELoss()
model.to('cuda:0')
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
train_loss = []
val_loss = []
def init_weights(m):
if type(m) == nn.Linear:
torch.nn.init.xavier_uniform_(m.weight)
m.bias.data.fill_(0.01)
model.apply(init_weights)
best_model = model
min_val = 1e9
loss_arr = []
epochs = 100
for i in range(epochs):
model.train()
loss_tot = 0
#for j in range(X_train.shape[0]):
for batch_x, batch_y in zip(batches_x, batches_y):
# print(batch_x.shape)
y_hat = model.forward(batch_x)
#print("y_hat", y_hat.shape, y_hat)
#print("y_train", y_train)
#break
loss = criterion(y_hat, batch_y)
loss_arr.append(loss.item())  # store the scalar, not the tensor, to avoid retaining the graph
loss_tot += loss.item()
optimizer.zero_grad()
loss.backward()
optimizer.step()
with torch.no_grad():
model.eval()
y_hat_t = model.forward(X_val)
loss_v = criterion(y_hat_t, y_val)
val_loss.append(loss_v.item())
if loss_v.item() < min_val:
print("new_best")
min_val = loss_v.item()
best_model = copy.deepcopy(model)
if i % 5 == 0:
print(f'Epoch: {i} Train Loss: {loss_tot / len(batches_x)} Val Loss: {loss_v}')
train_loss.append(loss_tot / len(batches_x))
min_val
preds = []
model.eval()
with torch.no_grad():
y_hat_t = best_model.forward(X_test)
loss = criterion(y_hat_t, y_test)
val_loss.append(loss.item())
print(loss.item())
#preds.append(y_hat.argmax().item())
PATH = "models/augmented_test_115k.tar"
torch.save(model.state_dict(), PATH)
print(y_test)
plt.plot(range(len(train_loss[4:])), train_loss[4:])
plt.plot(range(len(val_loss[4:])), val_loss[4:])
plt.legend(["Train Loss", "Val Loss"])
plt.xlabel("Epoch")
plt.ylabel("MSE Loss")
#plt.savefig("Train_Test.png")
plt.show()
x_temp = y_test.cpu()
y_temp = y_hat_t.cpu()
# print(y_temp)
# for i in range(y_temp.shape[0]):
# if y_temp[i] > 5000:
# print(x_temp.shape)
# x_temp = torch.cat([x_temp[0:i, :], x_temp[i+1:, :]])
# y_temp = torch.cat([y_temp[0:i, :], y_temp[i+1:, :]])
# break
x_plot = np.array(y_temp)
y_plot = np.array(x_temp)
new_x = np.array(x_plot).reshape(-1,1)
new_y = np.array(y_plot)
fit = LinearRegression().fit(new_x, new_y)
score = fit.score(new_x, new_y)
plt.xlabel("Prediction")
plt.ylabel("Actual Traffic")
print(score)
plt.scatter(new_x, new_y)
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = x_vals
plt.plot(x_vals, y_vals, '--')
# plt.savefig("Aug_batch_r2_0.85_mse_97k.png")
plt.show()
```
Recorded test-set results from successive runs:

* R² = 0.873, MSE = 123
* R² = 0.859
* R² = 0.866
* R² = 0.873
* R² = 0.889, MSE = 99
* Run 4: R² = 0.911, MSE = 79
* Run 5: R² = 0.922, MSE = 82,000
* Run 11: R² = 0.93, MSE = 60
```
# 0.945, 0.830
# MSE 88, 914, 76
#0.950
#MSE 63,443
X_test
y_hat
torch.cuda.memory_summary(device=0, abbreviated=False)
import gc
for obj in gc.get_objects():
try:
if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
print(type(obj), obj.size())
except:
pass
s = y_train[y_train[:, 1] == 5]
s
np.mean(s[:, 0])
preds = {}
for i in range(1, 6):
select = y_train[y_train[:, 1] == i]
preds[i] = np.mean(select[:, 0])
preds
y_test
x = []
y = []
mse = 0
for i in range(y_test.shape[0]):
x.append(preds[y_test[i][1]])
y.append(y_test[i][0])
mse += (preds[y_test[i][1]] - y_test[i][0])**2
mse / len(y_test)
x_plot = np.array(x)
y_plot = np.array(y)
new_x = np.array(x_plot).reshape(-1,1)
new_y = np.array(y_plot)
fit = LinearRegression().fit(new_x, new_y)
score = fit.score(new_x, new_y)
plt.xlabel("Prediction")
plt.ylabel("Actual Traffic")
print(score)
plt.scatter(new_x, new_y)
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = x_vals
plt.plot(x_vals, y_vals, '--')
plt.savefig("Baseline.png")
plt.show()
# 0.873
# 99098
```
# Accumulation Distribution Line (ADL)
[StockCharts: Accumulation Distribution Line](https://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:accumulation_distribution_line)
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# yfinance is used to fetch data
import yfinance as yf
yf.pdr_override()
# input
symbol = 'AAPL'
start = '2018-06-01'
end = '2019-01-01'
# Read data
df = yf.download(symbol,start,end)
# View Columns
df.head()
df['MF Multiplier'] = (2*df['Adj Close'] - df['Low'] - df['High'])/(df['High']-df['Low'])
df['MF Volume'] = df['MF Multiplier']*df['Volume']
df['ADL'] = df['MF Volume'].cumsum()
df = df.drop(['MF Multiplier','MF Volume'],axis=1)
df['VolumePositive'] = df['Open'] < df['Adj Close']
fig = plt.figure(figsize=(14,10))
ax1 = plt.subplot(3, 1, 1)
ax1.plot(df['Adj Close'])
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
ax1.legend(loc='best')
ax2 = plt.subplot(3, 1, 2)
ax2.plot(df['ADL'], label='Accumulation Distribution Line')
ax2.grid()
ax2.legend(loc='best')
ax2.set_ylabel('Accumulation Distribution Line')
ax3 = plt.subplot(3, 1, 3)
ax3v = ax3.twinx()
colors = df.VolumePositive.map({True: 'g', False: 'r'})
ax3v.bar(df.index, df['Volume'], color=colors, alpha=0.4)
ax3.set_ylabel('Volume')
ax3.grid()
ax3.set_xlabel('Date')
```
## Candlestick with ADL
```
from matplotlib import dates as mdates
import datetime as dt
dfc = df.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
#dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = dfc['Date'].map(mdates.date2num)
dfc.head()
from mpl_finance import candlestick_ohlc
fig = plt.figure(figsize=(14,10))
ax1 = plt.subplot(3, 1, 1)
candlestick_ohlc(ax1,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax1.xaxis_date()
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax1.grid(True, which='both')
ax1.minorticks_on()
ax1v = ax1.twinx()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
ax1v.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
ax1v.axes.yaxis.set_ticklabels([])
ax1v.set_ylim(0, 3*df.Volume.max())
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
ax2 = plt.subplot(3, 1, 2)
ax2.plot(df['ADL'], label='Accumulation Distribution Line')
ax2.grid()
ax2.legend(loc='best')
ax2.set_ylabel('Accumulation Distribution Line')
ax3 = plt.subplot(3, 1, 3)
ax3v = ax3.twinx()
colors = df.VolumePositive.map({True: 'g', False: 'r'})
ax3v.bar(df.index, df['Volume'], color=colors, alpha=0.4)
ax3.set_ylabel('Volume')
ax3.grid()
ax3.set_xlabel('Date')
```
# Plagiarism Detection, Feature Engineering
In this project, you will be tasked with building a plagiarism detector that examines an answer text file and performs binary classification; labeling that file as either plagiarized or not, depending on how similar that text file is to a provided, source text.
Your first task will be to create some features that can then be used to train a classification model. This task will be broken down into a few discrete steps:
* Clean and pre-process the data.
* Define features for comparing the similarity of an answer text and a source text, and extract similarity features.
* Select "good" features, by analyzing the correlations between different features.
* Create train/test `.csv` files that hold the relevant features and class labels for train/test data points.
In the _next_ notebook, Notebook 3, you'll use the features and `.csv` files you create in _this_ notebook to train a binary classification model in a SageMaker notebook instance.
You'll be defining a few different similarity features, as outlined in [this paper](https://s3.amazonaws.com/video.udacity-data.com/topher/2019/January/5c412841_developing-a-corpus-of-plagiarised-short-answers/developing-a-corpus-of-plagiarised-short-answers.pdf), which should help you build a robust plagiarism detector!
To complete this notebook, you'll have to complete all given exercises and answer all the questions in this notebook.
> All your tasks will be clearly labeled **EXERCISE** and questions as **QUESTION**.
It will be up to you to decide on the features to include in your final training and test data.
---
## Read in the Data
The cell below will download the necessary, project data and extract the files into the folder `data/`.
This data is a slightly modified version of a dataset created by Paul Clough (Information Studies) and Mark Stevenson (Computer Science), at the University of Sheffield. You can read all about the data collection and corpus, at [their university webpage](https://ir.shef.ac.uk/cloughie/resources/plagiarism_corpus.html).
> **Citation for data**: Clough, P. and Stevenson, M. Developing A Corpus of Plagiarised Short Answers, Language Resources and Evaluation: Special Issue on Plagiarism and Authorship Analysis, In Press. [Download]
```
# NOTE:
# you only need to run this cell if you have not yet downloaded the data
# otherwise you may skip this cell or comment it out
!wget https://s3.amazonaws.com/video.udacity-data.com/topher/2019/January/5c4147f9_data/data.zip
!unzip data
# import libraries
import pandas as pd
import numpy as np
import os
```
This plagiarism dataset is made of multiple text files; each of these files has characteristics that are summarized in a `.csv` file named `file_information.csv`, which we can read in using `pandas`.
```
csv_file = 'data/file_information.csv'
plagiarism_df = pd.read_csv(csv_file)
# print out the first few rows of data info
plagiarism_df.head()
```
## Types of Plagiarism
Each text file is associated with one **Task** (task A-E) and one **Category** of plagiarism, which you can see in the above DataFrame.
### Tasks, A-E
Each text file contains an answer to one short question; these questions are labeled as tasks A-E. For example, Task A asks the question: "What is inheritance in object oriented programming?"
### Categories of plagiarism
Each text file has an associated plagiarism label/category:
**1. Plagiarized categories: `cut`, `light`, and `heavy`.**
* These categories represent different levels of plagiarized answer texts. `cut` answers copy directly from a source text, `light` answers are based on the source text but include some light rephrasing, and `heavy` answers are based on the source text, but *heavily* rephrased (and will likely be the most challenging kind of plagiarism to detect).
**2. Non-plagiarized category: `non`.**
* `non` indicates that an answer is not plagiarized; the Wikipedia source text is not used to create this answer.
**3. Special, source text category: `orig`.**
* This is a specific category for the original, Wikipedia source text. We will use these files only for comparison purposes.
---
## Pre-Process the Data
In the next few cells, you'll be tasked with creating a new DataFrame of desired information about all of the files in the `data/` directory. This will prepare the data for feature extraction and for training a binary, plagiarism classifier.
### EXERCISE: Convert categorical to numerical data
You'll notice that the `Category` column in the data, contains string or categorical values, and to prepare these for feature extraction, we'll want to convert these into numerical values. Additionally, our goal is to create a binary classifier and so we'll need a binary class label that indicates whether an answer text is plagiarized (1) or not (0). Complete the below function `numerical_dataframe` that reads in a `file_information.csv` file by name, and returns a *new* DataFrame with a numerical `Category` column and a new `Class` column that labels each answer as plagiarized or not.
Your function should return a new DataFrame with the following properties:
* 4 columns: `File`, `Task`, `Category`, `Class`. The `File` and `Task` columns can remain unchanged from the original `.csv` file.
* Convert all `Category` labels to numerical labels according to the following rules (a higher value indicates a higher degree of plagiarism):
* 0 = `non`
* 1 = `heavy`
* 2 = `light`
* 3 = `cut`
* -1 = `orig`, this is a special value that indicates an original file.
* For the new `Class` column
* Any answer text that is not plagiarized (`non`) should have the class label `0`.
* Any plagiarized answer texts should have the class label `1`.
* And any `orig` texts will have a special label `-1`.
### Expected output
After running your function, you should get a DataFrame with rows that looks like the following:
```
File Task Category Class
0 g0pA_taska.txt a 0 0
1 g0pA_taskb.txt b 3 1
2 g0pA_taskc.txt c 2 1
3 g0pA_taskd.txt d 1 1
4 g0pA_taske.txt e 0 0
...
...
99 orig_taske.txt e -1 -1
```
```
# Read in a csv file and return a transformed dataframe
def numerical_dataframe(csv_file='data/file_information.csv'):
'''Reads in a csv file which is assumed to have `File`, `Category` and `Task` columns.
This function does two things:
1) converts `Category` column values to numerical values
2) Adds a new, numerical `Class` label column.
The `Class` column will label plagiarized answers as 1 and non-plagiarized as 0.
Source texts have a special label, -1.
:param csv_file: The directory for the file_information.csv file
:return: A dataframe with numerical categories and a new `Class` label column'''
df = pd.read_csv(csv_file)
df['Category'] = df['Category'].replace({"non":0,"heavy":1,'light':2,"cut":3,"orig":-1})
df['Class'] = df['Category'].apply(lambda num: 1 if num > 0 else num)
return df
```
### Test cells
Below are a couple of test cells. The first is an informal test where you can check that your code is working as expected by calling your function and printing out the returned result.
The **second** cell below is a more rigorous test cell. The goal of a cell like this is to ensure that your code is working as expected, and to form any variables that might be used in _later_ tests/code, in this case, the data frame, `transformed_df`.
> The cells in this notebook should be run in chronological order (the order they appear in the notebook). This is especially important for test cells.
Often, later cells rely on the functions, imports, or variables defined in earlier cells. For example, some tests rely on previous tests to work.
These tests do not test all cases, but they are a great way to check that you are on the right track!
```
# informal testing, print out the results of a called function
# create new `transformed_df`
transformed_df = numerical_dataframe(csv_file ='data/file_information.csv')
# check work
# check that all categories of plagiarism have a class label = 1
transformed_df.head(10)
# test cell that creates `transformed_df`, if tests are passed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# importing tests
import problem_unittests as tests
# test numerical_dataframe function
tests.test_numerical_df(numerical_dataframe)
# if above test is passed, create NEW `transformed_df`
transformed_df = numerical_dataframe(csv_file ='data/file_information.csv')
# check work
print('\nExample data: ')
transformed_df.head()
```
## Text Processing & Splitting Data
Recall that the goal of this project is to build a plagiarism classifier. At its heart, this is a text-comparison task: one that looks at a given answer and a source text, compares them, and predicts whether the answer has plagiarized from the source. To do this comparison effectively, and to train a classifier, we'll need to do a few more things: pre-process all of our text data and prepare the text files (in this case, the 95 answer files and 5 original source files) to be easily compared, and split our data into a `train` and `test` set that can be used to train a classifier and evaluate it, respectively.
To this end, you've been provided code that adds additional information to your `transformed_df` from above. The next two cells need not be changed; they add two additional columns to the `transformed_df`:
1. A `Text` column; this holds all the lowercase text for a `File`, with extraneous punctuation removed.
2. A `Datatype` column; this is a string value `train`, `test`, or `orig` that labels a data point as part of our train or test set
The details of how these additional columns are created can be found in the `helpers.py` file in the project directory. You're encouraged to read through that file to see exactly how text is processed and how data is split.
Run the cells below to get a `complete_df` that has all the information you need to proceed with plagiarism detection and feature engineering.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import helpers
# create a text column
text_df = helpers.create_text_column(transformed_df)
text_df.head()
# after running the cell above
# check out the processed text for a single file, by row index
row_idx = 0 # feel free to change this index
sample_text = text_df.iloc[row_idx]['Text']
print('Sample processed text:\n\n', sample_text)
```
## Split data into training and test sets
The next cell will add a `Datatype` column to a given DataFrame to indicate if the record is:
* `train` - Training data, for model training.
* `test` - Testing data, for model evaluation.
* `orig` - The task's original answer from wikipedia.
### Stratified sampling
The given code uses a helper function which you can view in the `helpers.py` file in the main project directory. This implements [stratified random sampling](https://en.wikipedia.org/wiki/Stratified_sampling) to randomly split data by task & plagiarism amount. Stratified sampling ensures that we get training and test data that is fairly evenly distributed across task & plagiarism combinations. Approximately 26% of the data is held out for testing and 74% of the data is used for training.
The function **train_test_dataframe** takes in a DataFrame that it assumes has `Task` and `Category` columns, and returns a modified frame that indicates which `Datatype` (train, test, or orig) a file falls into. This sampling will change slightly based on a passed-in *random_seed*. Due to a small sample size, this stratified random sampling will provide more stable results for a binary plagiarism classifier. Stability here means smaller *variance* in the accuracy of the classifier, given a random seed.
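The actual splitting logic lives in `helpers.py`; purely as an illustrative sketch (not the project's implementation), a stratified split over the task/category combinations could look like this, using scikit-learn's `train_test_split` with its `stratify` parameter:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

def assign_datatype(df, test_size=0.26, random_seed=1):
    """Sketch: label answer rows 'train'/'test' via stratified sampling
    on (Task, Category); original source files keep the label 'orig'."""
    df = df.copy()
    df['Datatype'] = 'orig'
    answers = df[df['Category'] != -1]
    # one stratum per task/plagiarism-level combination
    strata = answers['Task'].astype(str) + answers['Category'].astype(str)
    train_idx, test_idx = train_test_split(
        answers.index, test_size=test_size,
        stratify=strata, random_state=random_seed)
    df.loc[train_idx, 'Datatype'] = 'train'
    df.loc[test_idx, 'Datatype'] = 'test'
    return df
```

Stratifying on the task/category combination guarantees that every plagiarism level of every task is represented in both splits, which is what keeps the classifier's accuracy stable across random seeds.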
```
random_seed = 1 # can change; set for reproducibility
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import helpers
# create new df with Datatype (train, test, orig) column
# pass in `text_df` from above to create a complete dataframe, with all the information you need
complete_df = helpers.train_test_dataframe(text_df, random_seed=random_seed)
# check results
complete_df.head(10)
```
# Determining Plagiarism
Now that you've prepared this data and created a `complete_df` of information, including the text and class associated with each file, you can move on to the task of extracting similarity features that will be useful for plagiarism classification.
> Note: The following code exercises, assume that the `complete_df` as it exists now, will **not** have its existing columns modified.
The `complete_df` should always include the columns: `['File', 'Task', 'Category', 'Class', 'Text', 'Datatype']`. You can add additional columns, and you can create any new DataFrames you need by copying the parts of the `complete_df` as long as you do not modify the existing values, directly.
---
# Similarity Features
One of the ways we might go about detecting plagiarism, is by computing **similarity features** that measure how similar a given answer text is as compared to the original wikipedia source text (for a specific task, a-e). The similarity features you will use are informed by [this paper on plagiarism detection](https://s3.amazonaws.com/video.udacity-data.com/topher/2019/January/5c412841_developing-a-corpus-of-plagiarised-short-answers/developing-a-corpus-of-plagiarised-short-answers.pdf).
> In this paper, researchers created features called **containment** and **longest common subsequence**.
Using these features as input, you will train a model to distinguish between plagiarized and not-plagiarized text files.
## Feature Engineering
Let's talk a bit more about the features we want to include in a plagiarism detection model and how to calculate such features. In the following explanations, I'll refer to a submitted text file as a **Student Answer Text (A)** and the original, wikipedia source file (that we want to compare that answer to) as the **Wikipedia Source Text (S)**.
### Containment
Your first task will be to create **containment features**. To understand containment, let's first revisit a definition of [n-grams](https://en.wikipedia.org/wiki/N-gram). An *n-gram* is a sequential word grouping. For example, in a line like "bayes rule gives us a way to combine prior knowledge with new information," a 1-gram is just one word, like "bayes." A 2-gram might be "bayes rule" and a 3-gram might be "combine prior knowledge."
> Containment is defined as the **intersection** of the n-gram word count of the Wikipedia Source Text (S) with the n-gram word count of the Student Answer Text (A) *divided* by the n-gram word count of the Student Answer Text.
$$ \frac{\sum{count(\text{ngram}_{A}) \cap count(\text{ngram}_{S})}}{\sum{count(\text{ngram}_{A})}} $$
If the two texts have no n-grams in common, the containment will be 0, but if _all_ their n-grams intersect then the containment will be 1. Intuitively, you can see how having longer n-grams in common might be an indication of cut-and-paste plagiarism. In this project, it will be up to you to decide on the appropriate `n` or several `n`'s to use in your final model.
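As a sketch of this calculation on a tiny made-up pair of texts (the sentences below are only for illustration), containment can be computed with scikit-learn's `CountVectorizer`, which counts word n-grams for us:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

def containment(answer_text, source_text, n):
    """Intersection of n-gram counts of A and S, divided by A's n-gram count."""
    vectorizer = CountVectorizer(analyzer='word', ngram_range=(n, n))
    # row 0 holds the answer's n-gram counts, row 1 the source's
    counts = vectorizer.fit_transform([answer_text, source_text]).toarray()
    intersection = np.minimum(counts[0], counts[1]).sum()
    return intersection / counts[0].sum()

answer = "bayes rule gives us the way to combine prior knowledge"
source = "bayes rule gives us new information"
# 3 of the answer's 9 bigrams also appear in the source
print(containment(answer, source, n=2))  # 0.333...
```

Here the shared bigrams are "bayes rule", "rule gives", and "gives us", so containment is 3/9; with a larger `n`, fewer incidental matches survive and high values point more strongly at copying.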
### EXERCISE: Create containment features
Given the `complete_df` that you've created, you should have all the information you need to compare any Student Answer Text (A) with its appropriate Wikipedia Source Text (S). An answer for task A should be compared to the source text for task A, just as answers to tasks B, C, D, and E should be compared to the corresponding original source text.
In this exercise, you'll complete the function, `calculate_containment` which calculates containment based upon the following parameters:
* A given DataFrame, `df` (which is assumed to be the `complete_df` from above)
* An `answer_filename`, such as 'g0pB_taskd.txt'
* An n-gram length, `n`
### Containment calculation
The general steps to complete this function are as follows:
1. From *all* of the text files in a given `df`, create an array of n-gram counts; it is suggested that you use a [CountVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) for this purpose.
2. Get the processed answer and source texts for the given `answer_filename`.
3. Calculate the containment between an answer and source text according to the following equation.
>$$ \frac{\sum{count(\text{ngram}_{A}) \cap count(\text{ngram}_{S})}}{\sum{count(\text{ngram}_{A})}} $$
4. Return that containment value.
You are encouraged to write any helper functions that you need to complete the function below.
```
complete_df.head()
from sklearn.feature_extraction.text import CountVectorizer
# Calculate the ngram containment for one answer file/source file pair in a df
def calculate_containment(df, n, answer_filename):
'''Calculates the containment between a given answer text and its associated source text.
This function creates a count of ngrams (of a size, n) for each text file in our data.
Then calculates the containment by finding the ngram count for a given answer text,
and its associated source text, and calculating the normalized intersection of those counts.
:param df: A dataframe with columns,
'File', 'Task', 'Category', 'Class', 'Text', and 'Datatype'
:param n: An integer that defines the ngram size
:param answer_filename: A filename for an answer text in the df, ex. 'g0pB_taskd.txt'
:return: A single containment value that represents the similarity
between an answer text and its source text.
'''
ans_text, ans_task = df[df['File'] == answer_filename][["Text","Task"]].values[0]
source_text = df[(df["Class"] == -1) & (df["Task"] == ans_task)]["Text"].values[0]
counter = CountVectorizer(analyzer='word',ngram_range=(n,n))
ngrams_arr = counter.fit_transform([ans_text, source_text]).toarray()
return np.min(ngrams_arr,axis=0).sum()/ngrams_arr[0].sum()
```
### Test cells
After you've implemented the containment function, you can test out its behavior.
The cell below iterates through the first few files, and calculates the original category _and_ containment values for a specified n and file.
>If you've implemented this correctly, you should see that the non-plagiarized have low or close to 0 containment values and that plagiarized examples have higher containment values, closer to 1.
Note what happens when you change the value of n. I recommend applying your code to multiple files and comparing the resultant containment values. You should see that the highest containment values correspond to files with the highest category (`cut`) of plagiarism level.
```
# select a value for n
n = 3
# indices for first few files
test_indices = range(5)
# iterate through files and calculate containment
category_vals = []
containment_vals = []
for i in test_indices:
# get level of plagiarism for a given file index
category_vals.append(complete_df.loc[i, 'Category'])
# calculate containment for given file and n
filename = complete_df.loc[i, 'File']
c = calculate_containment(complete_df, n, filename)
containment_vals.append(c)
# print out result, does it make sense?
print('Original category values: \n', category_vals)
print()
print(str(n)+'-gram containment values: \n', containment_vals)
# run this test cell
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# test containment calculation
# params: complete_df from before, and containment function
tests.test_containment(complete_df, calculate_containment)
```
### QUESTION 1: Why can we calculate containment features across *all* data (training & test), prior to splitting the DataFrame for modeling? That is, what about the containment calculation means that the test and training data do not influence each other?
**Answer:**<br>
The containment feature is derived entirely from a file's own text and its fixed source text; it extracts information only from those two documents and uses nothing from any other data point, so computing it for the test set cannot leak information between training and test data.
This is purely a preprocessing step. If we did not calculate containment for the test data at this point, we would have to do it anyway when validating the model or making predictions.
---
## Longest Common Subsequence
Containment is a good way to find overlap in word usage between two documents; it may help identify cases of cut-and-paste as well as paraphrased plagiarism. Since plagiarism is a fairly complex task with varying levels, it's often useful to include other measures of similarity. The paper also discusses a feature called **longest common subsequence**.
> The longest common subsequence is the longest string of words (or letters) that are *the same* between the Wikipedia Source Text (S) and the Student Answer Text (A). This value is also normalized by dividing by the total number of words (or letters) in the Student Answer Text.
In this exercise, we'll ask you to calculate the longest common subsequence of words between two texts.
### EXERCISE: Calculate the longest common subsequence
Complete the function `lcs_norm_word`; this should calculate the *longest common subsequence* of words between a Student Answer Text and corresponding Wikipedia Source Text.
It may be helpful to think of this in a concrete example. A Longest Common Subsequence (LCS) problem may look as follows:
* Given two texts: an answer text A of length n, and an original source text S of length m. Our goal is to produce their longest common subsequence of words: the longest sequence of words that appear left-to-right in both texts (though the words don't have to be contiguous).
* Consider:
* A = "i think pagerank is a link analysis algorithm used by google that uses a system of weights attached to each element of a hyperlinked set of documents"
* S = "pagerank is a link analysis algorithm used by the google internet search engine that assigns a numerical weighting to each element of a hyperlinked set of documents"
* In this case, we can see that the start of each sentence is fairly similar, having overlap in the sequence of words, "pagerank is a link analysis algorithm used by" before diverging slightly. Then we **continue moving left-to-right along both texts** until we see the next common sequence; in this case it is only one word, "google". Next we find "that" and "a" and finally the same ending "to each element of a hyperlinked set of documents".
* Below is a clear visual of how these sequences were found, sequentially, in each text.
<img src='notebook_ims/common_subseq_words.png' width=40% />
* Now, those words appear in left-to-right order in each document, sequentially, and even though there are some words in between, we count this as the longest common subsequence between the two texts.
* If I count up each word that I found in common I get the value 20. **So, LCS has length 20**.
* Next, to normalize this value, divide by the total length of the student answer; in this example that length is only 27. **So, the function `lcs_norm_word` should return the value `20/27` or about `0.7408`.**
In this way, LCS is a great indicator of cut-and-paste plagiarism or if someone has referenced the same source text multiple times in an answer.
### LCS, dynamic programming
If you read through the scenario above, you can see that this algorithm depends on looking at two texts and comparing them word by word. You can solve this problem in multiple ways. First, it may be useful to `.split()` each text into a list of words to compare. Then, you can iterate through each word in the texts and compare them, adding to your value for LCS as you go.
The method I recommend for implementing an efficient LCS algorithm is: using a matrix and dynamic programming. **Dynamic programming** is all about breaking a larger problem into a smaller set of subproblems, and building up a complete result without having to repeat any subproblems.
This approach assumes that you can split up a large LCS task into a combination of smaller LCS tasks. Let's look at a simple example that compares letters:
* A = "ABCD"
* S = "BD"
We can see right away that the longest subsequence of _letters_ here is 2 (B and D are in sequence in both strings). And we can calculate this by looking at relationships between each letter in the two strings, A and S.
Here, I have a matrix with the letters of A on top and the letters of S on the left side:
<img src='notebook_ims/matrix_1.png' width=40% />
This starts out as a matrix that has as many columns and rows as letters in the strings A and S, **+1** additional row and column, filled with zeros on the top and left sides. So, in this case, instead of a 2x4 matrix it is a 3x5.
Now, we can fill this matrix up by breaking it into smaller LCS problems. For example, let's first look at the shortest substrings: the starting letter of A and S. We'll first ask, what is the Longest Common Subsequence between these two letters "A" and "B"?
**Here, the answer is zero and we fill in the corresponding grid cell with that value.**
<img src='notebook_ims/matrix_2.png' width=30% />
Then, we ask the next question, what is the LCS between "AB" and "B"?
**Here, we have a match, and can fill in the appropriate value 1**.
<img src='notebook_ims/matrix_3_match.png' width=25% />
If we continue, we get to a final matrix that looks as follows, with a **2** in the bottom right corner.
<img src='notebook_ims/matrix_6_complete.png' width=25% />
The final LCS will be that value **2** *normalized* by the number of letters in A. So, our normalized value is 2/4 = **0.5**.
### The matrix rules
One thing to notice here is that you can efficiently fill up this matrix one cell at a time. Each grid cell only depends on the values in the grid cells that are directly on top and to the left of it, or on the diagonal/top-left. The rules are as follows:
* Start with a matrix that has one extra row and column of zeros.
* As you traverse your string:
* If there is a match, fill that grid cell with the value to the top-left of that cell *plus* one. So, in our case, when we found a matching B-B, we added +1 to the value in the top-left of the matching cell, 0.
* If there is not a match, take the *maximum* value from either directly to the left or the top cell, and carry that value over to the non-match cell.
<img src='notebook_ims/matrix_rules.png' width=50% />
After completely filling the matrix, **the bottom-right cell will hold the non-normalized LCS value**.
This matrix treatment can be applied to a set of words instead of letters. Your function should apply this to the words in two texts and return the normalized LCS value.
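The letter-level example above ("ABCD" vs. "BD") can be sketched with this recurrence directly; the snippet below is only an illustration using NumPy, comparing letters rather than the words your function will compare:

```python
# Letter-level LCS via dynamic programming (illustrative sketch)
import numpy as np

A = "ABCD"  # answer
S = "BD"    # source

# Matrix with one extra row and column of zeros
matrix = np.zeros((len(S) + 1, len(A) + 1))
for i, s_char in enumerate(S, 1):
    for j, a_char in enumerate(A, 1):
        if s_char == a_char:
            # match: take the top-left value plus one
            matrix[i][j] = matrix[i - 1][j - 1] + 1
        else:
            # no match: carry over the max of the top and left cells
            matrix[i][j] = max(matrix[i - 1][j], matrix[i][j - 1])

lcs = matrix[len(S)][len(A)]  # bottom-right cell, here 2.0
normalized = lcs / len(A)     # 2/4 = 0.5
```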
```
# Compute the normalized LCS given an answer text and a source text
def lcs_norm_word(answer_text, source_text):
'''Computes the longest common subsequence of words in two texts; returns a normalized value.
:param answer_text: The pre-processed text for an answer text
:param source_text: The pre-processed text for an answer's associated source text
:return: A normalized LCS value'''
ans_words = answer_text.split()
source_words = source_text.split()
matrix = np.zeros((len(source_words)+1,len(ans_words)+1))
for i,s_word in enumerate(source_words,1):
for j,a_word in enumerate(ans_words,1):
if s_word == a_word:
matrix[i][j] = matrix[i-1][j-1] + 1
else:
matrix[i][j] = max(matrix[i-1][j],matrix[i][j-1])
return matrix[len(source_words)][len(ans_words)]/len(ans_words)
```
### Test cells
Let's start by testing out your code on the example given in the initial description.
In the below cell, we have specified strings A (answer text) and S (original source text). We know that these texts have 20 words in common and the submitted answer is 27 words long, so the normalized, longest common subsequence should be 20/27.
```
# Run the test scenario from above
# does your function return the expected value?
A = "i think pagerank is a link analysis algorithm used by google that uses a system of weights attached to each element of a hyperlinked set of documents"
S = "pagerank is a link analysis algorithm used by the google internet search engine that assigns a numerical weighting to each element of a hyperlinked set of documents"
# calculate LCS
lcs = lcs_norm_word(A, S)
print('LCS = ', lcs)
# expected value test
assert lcs==20/27., "Incorrect LCS value, expected about 0.7408, got "+str(lcs)
print('Test passed!')
```
This next cell runs a more rigorous test.
```
# run test cell
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# test lcs implementation
# params: complete_df from before, and lcs_norm_word function
tests.test_lcs(complete_df, lcs_norm_word)
```
Finally, take a look at a few resultant values for `lcs_norm_word`. Just like before, you should see that higher values correspond to higher levels of plagiarism.
```
# test on your own
test_indices = range(5) # look at first few files
category_vals = []
lcs_norm_vals = []
# iterate through first few docs and calculate LCS
for i in test_indices:
category_vals.append(complete_df.loc[i, 'Category'])
# get texts to compare
answer_text = complete_df.loc[i, 'Text']
task = complete_df.loc[i, 'Task']
# we know that source texts have Class = -1
orig_rows = complete_df[(complete_df['Class'] == -1)]
orig_row = orig_rows[(orig_rows['Task'] == task)]
source_text = orig_row['Text'].values[0]
# calculate lcs
lcs_val = lcs_norm_word(answer_text, source_text)
lcs_norm_vals.append(lcs_val)
# print out result, does it make sense?
print('Original category values: \n', category_vals)
print()
print('Normalized LCS values: \n', lcs_norm_vals)
```
---
# Create All Features
Now that you've completed the feature calculation functions, it's time to actually create multiple features and decide on which ones to use in your final model! In the below cells, you're provided two helper functions to help you create multiple features and store those in a DataFrame, `features_df`.
### Creating multiple containment features
Your completed `calculate_containment` function will be called in the next cell, which defines the helper function `create_containment_features`.
> This function returns a list of containment features, calculated for a given `n` and for *all* files in a df (assumed to be the `complete_df`).
For our original files, the containment value is set to a special value, -1.
This function gives you the ability to easily create several containment features, of different n-gram lengths, for each of our text files.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Function returns a list of containment features, calculated for a given n
# Should return a list of length 100 for all files in a complete_df
def create_containment_features(df, n, column_name=None):
containment_values = []
if(column_name==None):
column_name = 'c_'+str(n) # c_1, c_2, .. c_n
# iterates through dataframe rows
for i in df.index:
file = df.loc[i, 'File']
# Computes features using calculate_containment function
if df.loc[i,'Category'] > -1:
c = calculate_containment(df, n, file)
containment_values.append(c)
# Sets value to -1 for original tasks
else:
containment_values.append(-1)
print(str(n)+'-gram containment features created!')
return containment_values
```
### Creating LCS features
Below, your completed `lcs_norm_word` function is used to create a list of LCS features for all the answer files in a given DataFrame (again, this assumes you are passing in the `complete_df`). It assigns a special value, -1, for our original source files.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Function creates lcs feature and add it to the dataframe
def create_lcs_features(df, column_name='lcs_word'):
lcs_values = []
# iterate through files in dataframe
for i in df.index:
# Computes LCS_norm words feature using function above for answer tasks
if df.loc[i,'Category'] > -1:
# get texts to compare
answer_text = df.loc[i, 'Text']
task = df.loc[i, 'Task']
# we know that source texts have Class = -1
orig_rows = df[(df['Class'] == -1)]
orig_row = orig_rows[(orig_rows['Task'] == task)]
source_text = orig_row['Text'].values[0]
# calculate lcs
lcs = lcs_norm_word(answer_text, source_text)
lcs_values.append(lcs)
# Sets to -1 for original tasks
else:
lcs_values.append(-1)
print('LCS features created!')
return lcs_values
```
## EXERCISE: Create a features DataFrame by selecting an `ngram_range`
The paper suggests calculating the following features: containment *1-gram to 5-gram* and *longest common subsequence*.
> In this exercise, you can choose to create even more features, for example from *1-gram to 7-gram* containment features and *longest common subsequence*.
You'll want to create at least 6 features to choose from as you think about which to give to your final, classification model. Defining and comparing at least 6 different features allows you to discard any features that seem redundant, and choose to use the best features for your final model!
In the below cell **define an n-gram range**; these will be the n's you use to create n-gram containment features. The rest of the feature creation code is provided.
```
# Define an ngram range
ngram_range = range(1,7)
# The following code may take a minute to run, depending on your ngram_range
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
features_list = []
# Create features in a features_df
all_features = np.zeros((len(ngram_range)+1, len(complete_df)))
# Calculate features for containment for ngrams in range
i=0
for n in ngram_range:
column_name = 'c_'+str(n)
features_list.append(column_name)
# create containment features
all_features[i]=np.squeeze(create_containment_features(complete_df, n))
i+=1
# Calculate features for LCS_Norm Words
features_list.append('lcs_word')
all_features[i]= np.squeeze(create_lcs_features(complete_df))
# create a features dataframe
features_df = pd.DataFrame(np.transpose(all_features), columns=features_list)
# Print all features/columns
print()
print('Features: ', features_list)
print()
# print some results
features_df.head(10)
```
## Correlated Features
You should use feature correlation across the *entire* dataset to determine which features are ***too*** **highly-correlated** with each other to include both features in a single model. For this analysis, you can use the *entire* dataset due to the small sample size we have.
All of our features try to measure the similarity between two texts. Since our features are designed to measure similarity, it is expected that these features will be highly-correlated. Many classification models, for example a Naive Bayes classifier, rely on the assumption that features are *not* highly correlated; highly-correlated features may over-inflate the importance of a single feature.
So, you'll want to choose your features based on which pairings have the lowest correlation. These correlation values range between 0 and 1; from low to high correlation, and are displayed in a [correlation matrix](https://www.displayr.com/what-is-a-correlation-matrix/), below.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Create correlation matrix for just Features to determine different models to test
corr_matrix = features_df.corr().abs().round(2)
# display shows all of a dataframe
display(corr_matrix)
import seaborn as sns
sns.heatmap(corr_matrix,cmap='Reds')
```
## EXERCISE: Create selected train/test data
Complete the `train_test_data` function below. This function should take in the following parameters:
* `complete_df`: A DataFrame that contains all of our processed text data, file info, datatypes, and class labels
* `features_df`: A DataFrame of all calculated features, such as containment for ngrams, n= 1-5, and lcs values for each text file listed in the `complete_df` (this was created in the above cells)
* `selected_features`: A list of feature column names, ex. `['c_1', 'lcs_word']`, which will be used to select the final features in creating train/test sets of data.
It should return two tuples:
* `(train_x, train_y)`, selected training features and their corresponding class labels (0/1)
* `(test_x, test_y)`, selected test features and their corresponding class labels (0/1)
**Note: x and y should be arrays of feature values and numerical class labels, respectively; not DataFrames.**
Looking at the above correlation matrix, you should decide on a **cutoff** correlation value, less than 1.0, to determine which sets of features are *too* highly-correlated to be included in the final training and test data. If you cannot find features that are less correlated than some cutoff value, it is suggested that you increase the number of features (longer n-grams) to choose from or use *only one or two* features in your final model to avoid introducing highly-correlated features.
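As one possible (hypothetical, not required by the tests) way to apply such a cutoff programmatically, you could list the feature pairs whose absolute correlation falls below it; the function name and default cutoff value here are illustrative assumptions:

```python
# Illustrative helper: list feature pairs below a correlation cutoff
import itertools
import pandas as pd

def low_correlation_pairs(features_df, cutoff=0.97):
    """Return (feature_a, feature_b, correlation) triples whose absolute
    correlation is below the cutoff, least-correlated first."""
    corr = features_df.corr().abs()
    pairs = []
    for a, b in itertools.combinations(corr.columns, 2):
        if corr.loc[a, b] < cutoff:
            pairs.append((a, b, round(corr.loc[a, b], 2)))
    # sort from least to most correlated
    return sorted(pairs, key=lambda t: t[2])
```

You could call this on `features_df` with a few different cutoff values and pick features from the surviving pairs.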
Recall that the `complete_df` has a `Datatype` column that indicates whether data should be `train` or `test` data; this should help you split the data appropriately.
```
# Takes in dataframes and a list of selected features (column names)
# and returns (train_x, train_y), (test_x, test_y)
def train_test_data(complete_df, features_df, selected_features):
'''Gets selected training and test features from given dataframes, and
returns tuples for training and test features and their corresponding class labels.
:param complete_df: A dataframe with all of our processed text data, datatypes, and labels
:param features_df: A dataframe of all computed, similarity features
:param selected_features: An array of selected features that correspond to certain columns in `features_df`
:return: training and test features and labels: (train_x, train_y), (test_x, test_y)'''
df = pd.concat((complete_df, features_df), axis=1)
# get the training features
train_x = df[df['Datatype'] == 'train'][selected_features].values
# And training class labels (0 or 1)
train_y = df[df['Datatype'] == 'train']['Class'].values
# get the test features and labels
test_x = df[df['Datatype'] == 'test'][selected_features].values
test_y = df[df['Datatype'] == 'test']['Class'].values
return (train_x, train_y), (test_x, test_y)
```
### Test cells
Below, test out your implementation and create the final train/test data.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
test_selection = list(features_df)[:2] # first couple columns as a test
# test that the correct train/test data is created
(train_x, train_y), (test_x, test_y) = train_test_data(complete_df, features_df, test_selection)
# params: generated train/test data
tests.test_data_split(train_x, train_y, test_x, test_y)
```
## EXERCISE: Select "good" features
If you passed the test above, you can create your own train/test data, below.
Define a list of features you'd like to include in your final model, `selected_features`; this is a list of the feature names you want to include.
```
# Select your list of features, this should be column names from features_df
# ex. ['c_1', 'lcs_word']
selected_features = ['c_1', 'c_6']
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
(train_x, train_y), (test_x, test_y) = train_test_data(complete_df, features_df, selected_features)
# check that division of samples seems correct
# these should add up to 95 (100 - 5 original files)
print('Training size: ', len(train_x))
print('Test size: ', len(test_x))
print()
print('Training df sample: \n', train_x[:10])
```
### Question 2: How did you decide on which features to include in your final model?
**Answer:**<br>
Looking at the colored heatmap, we can see that among all pairwise correlations, c_3 through c_6 have a significantly lower correlation with c_1 than the other pairings do. Since c_3 through c_6 are highly correlated with one another, I picked only one of them: c_6, which has the lowest correlation with c_1. Finally, I did not include lcs_word, since it is highly correlated with every other feature.
---
## Creating Final Data Files
Now, you are almost ready to move on to training a model in SageMaker!
You'll want to access your train and test data in SageMaker and upload it to S3. In this project, SageMaker will expect the following format for your train/test data:
* Training and test data should be saved in one `.csv` file each, ex `train.csv` and `test.csv`
* These files should have class labels in the first column and features in the rest of the columns
This format follows the practice, outlined in the [SageMaker documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-training.html), which reads: "Amazon SageMaker requires that a CSV file doesn't have a header record and that the target variable [class label] is in the first column."
## EXERCISE: Create csv files
Define a function that takes in x (features) and y (labels) and saves them to one `.csv` file at the path `data_dir/filename`.
It may be useful to use pandas to merge your features and labels into one DataFrame and then convert that into a csv file. You can make sure to get rid of any incomplete rows, in a DataFrame, by using `dropna`.
```
def make_csv(x, y, filename, data_dir):
'''Merges features and labels and converts them into one csv file with labels in the first column.
:param x: Data features
:param y: Data labels
:param file_name: Name of csv file, ex. 'train.csv'
:param data_dir: The directory where files will be saved
'''
# make data dir, if it does not exist
if not os.path.exists(data_dir):
os.makedirs(data_dir)
pd.concat((pd.DataFrame(y),pd.DataFrame(x)),axis=1).to_csv(os.path.join(data_dir, filename),index=False,header=False)
# nothing is returned, but a print statement indicates that the function has run
print('Path created: '+str(data_dir)+'/'+str(filename))
```
### Test cells
Test that your code produces the correct format for a `.csv` file, given some text features and labels.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
fake_x = [ [0.39814815, 0.0001, 0.19178082],
[0.86936937, 0.44954128, 0.84649123],
[0.44086022, 0., 0.22395833] ]
fake_y = [0, 1, 1]
make_csv(fake_x, fake_y, filename='to_delete.csv', data_dir='test_csv')
# read in and test dimensions
fake_df = pd.read_csv('test_csv/to_delete.csv', header=None)
# check shape
assert fake_df.shape==(3, 4), \
'The file should have as many rows as data_points and as many columns as features+1 (for the label column).'
# check that first column = labels
assert np.all(fake_df.iloc[:,0].values==fake_y), 'First column is not equal to the labels, fake_y.'
print('Tests passed!')
# delete the test csv file, generated above
! rm -rf test_csv
```
If you've passed the tests above, run the following cell to create `train.csv` and `test.csv` files in a directory that you specify! This will save the data in a local directory. Remember the name of this directory because you will reference it again when uploading this data to S3.
```
# can change directory, if you want
data_dir = 'plagiarism_data'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
make_csv(train_x, train_y, filename='train.csv', data_dir=data_dir)
make_csv(test_x, test_y, filename='test.csv', data_dir=data_dir)
```
## Up Next
Now that you've done some feature engineering and created some training and test data, you are ready to train and deploy a plagiarism classification model. The next notebook will utilize SageMaker resources to train and test a model that you design.
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import warnings
warnings.filterwarnings('ignore')
```
## Introduction
```
from IPython.display import YouTubeVideo
YouTubeVideo(id="BYOK12I9vgI", width="100%")
```
In this chapter, we will look at bipartite graphs and their applications.
## What are bipartite graphs?
As the name suggests,
bipartite graphs have two (bi) node partitions (partite).
In other words, we can assign nodes to one of the two partitions.
(By contrast, all of the graphs that we have seen before are _unipartite_:
they only have a single partition.)
### Rules for bipartite graphs
With unipartite graphs, you might remember a few rules that apply.
Firstly, nodes and edges belong to a _set_.
This means the node set contains only unique members,
i.e. no node can be duplicated.
The same applies for the edge set.
On top of those two basic rules, bipartite graphs add an additional rule:
Edges can only occur between nodes of **different** partitions.
In other words, nodes within the same partition
are not allowed to be connected to one another.
### Applications of bipartite graphs
Where do we see bipartite graphs being used?
Here's one that is very relevant to e-commerce,
which touches our daily lives:
> We can model customer purchases of products using a bipartite graph.
> Here, the two node sets are **customer** nodes and **product** nodes,
> and edges indicate that a customer $C$ purchased a product $P$.
On the basis of this graph, we can do interesting analyses,
such as finding customers that are similar to one another
on the basis of their shared product purchases.
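As a toy sketch of this customer-product model (node names are made up, and the `bipartite` node attribute mirrors the convention used later in this chapter):

```python
# Toy customer-product bipartite graph (illustrative node names)
import networkx as nx
from networkx.algorithms import bipartite

G = nx.Graph()
# Two partitions, tagged with the `bipartite` keyword
G.add_nodes_from(["alice", "bob"], bipartite="customer")
G.add_nodes_from(["laptop", "phone"], bipartite="product")
# Edges run only between partitions: customer C purchased product P
G.add_edges_from([("alice", "laptop"), ("alice", "phone"), ("bob", "phone")])

print(bipartite.is_bipartite(G))  # → True
```

Here alice and bob both bought a phone, so a projection onto the customer partition (covered below) would link them.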
Can you think of other situations
where a bipartite graph model can be useful?
## Dataset
Here's another application in crime analysis,
which is relevant to the example that we will use in this chapter:
> This bipartite network contains persons
> who appeared in at least one crime case
> as either a suspect, a victim, a witness
> or both a suspect and victim at the same time.
> A left node represents a person and a right node represents a crime.
> An edge between two nodes shows that
> the left node was involved in the crime
> represented by the right node.
This crime dataset was also sourced from Konect.
```
from nams import load_data as cf
G = cf.load_crime_network()
for n, d in G.nodes(data=True):
G.nodes[n]["degree"] = G.degree(n)
```
If you inspect the nodes,
you will see that they contain a special metadata keyword: `bipartite`.
This is a special keyword that NetworkX can use
to identify nodes of a given partition.
### Visualize the crime network
To help us get our bearings right, let's visualize the crime network.
```
import nxviz as nv
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(7, 7))
nv.circos(G, sort_by="degree", group_by="bipartite", node_color_by="bipartite", node_enc_kwargs={"size_scale": 3})
```
### Exercise: Extract each node set
A useful thing to be able to do
is to extract each partition's node set.
This will become handy when interacting with
NetworkX's bipartite algorithms later on.
> Write a function that extracts all of the nodes
> from a specified node partition.
> It should also raise a plain Exception
> if no nodes exist in that specified partition.
> (as a precaution against users putting in invalid partition names).
```
import networkx as nx
def extract_partition_nodes(G: nx.Graph, partition: str):
nodeset = [_ for _, _ in _______ if ____________]
if _____________:
raise Exception(f"No nodes exist in the partition {partition}!")
return nodeset
from nams.solutions.bipartite import extract_partition_nodes
# Uncomment the next line to see the answer.
# extract_partition_nodes??
```
## Bipartite Graph Projections
In a bipartite graph, one task that can be useful to do
is to calculate the projection of a graph onto one of its nodes.
What do we mean by the "projection of a graph"?
It is best visualized using this figure:
```
from nams.solutions.bipartite import draw_bipartite_graph_example, bipartite_example_graph
from nxviz import annotate
import matplotlib.pyplot as plt
bG = bipartite_example_graph()
pG = nx.bipartite.projection.projected_graph(bG, "abcd")
ax = draw_bipartite_graph_example()
plt.sca(ax[0])
annotate.parallel_labels(bG, group_by="bipartite")
plt.sca(ax[1])
annotate.arc_labels(pG)
```
As shown in the figure above, we start first with a bipartite graph with two node sets,
the "alphabet" set and the "numeric" set.
The projection of this bipartite graph onto the "alphabet" node set
is a graph that is constructed such that it only contains the "alphabet" nodes,
and edges join the "alphabet" nodes because they share a connection to a "numeric" node.
The red edge on the right
is basically the red path traced on the left.
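This projection can be sketched on a tiny graph (node names are illustrative): "a" and "b" end up joined in the projection because they share the numeric neighbor 1.

```python
# Minimal projection sketch on a tiny bipartite graph
import networkx as nx
from networkx.algorithms import bipartite

bG = nx.Graph()
bG.add_nodes_from("ab", bipartite="alphabet")
bG.add_nodes_from([1, 2], bipartite="numeric")
# "a" and "b" both connect to the numeric node 1
bG.add_edges_from([("a", 1), ("b", 1), ("b", 2)])

# Project onto the alphabet partition: "a" and "b" become joined
# because they share a connection to the same numeric node
pG = bipartite.projected_graph(bG, ["a", "b"])
print(list(pG.edges()))
```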
### Computing graph projections
How does one compute graph projections using NetworkX?
Turns out, NetworkX has a `bipartite` submodule,
which gives us all of the facilities that we need
to interact with bipartite algorithms.
First of all, we need to check that the graph
is indeed a bipartite graph.
NetworkX provides a function for us to do so:
```
from networkx.algorithms import bipartite
bipartite.is_bipartite(G)
```
Now that we've confirmed that the graph is indeed bipartite,
we can use the NetworkX bipartite submodule functions
to generate the bipartite projection onto one of the node partitions.
First off, we need to extract nodes from a particular partition.
```
person_nodes = extract_partition_nodes(G, "person")
crime_nodes = extract_partition_nodes(G, "crime")
```
Next, we can compute the projection:
```
person_graph = bipartite.projected_graph(G, person_nodes)
crime_graph = bipartite.projected_graph(G, crime_nodes)
```
And with that, we have our projected graphs!
Go ahead and inspect them:
```
list(person_graph.edges(data=True))[0:5]
list(crime_graph.edges(data=True))[0:5]
```
Now, what is the _interpretation_ of these projected graphs?
- For `person_graph`, we have found _individuals who are linked by shared participation (whether witness or suspect) in a crime._
- For `crime_graph`, we have found _crimes that are linked by shared involvement by people._
Just by this graph, we already can find out pretty useful information.
Let's use an exercise that leverages what you already know
to extract useful information from the projected graph.
### Exercise: find the crime(s) that have the most shared connections with other crimes
> Find crimes that are most similar to one another
> on the basis of the number of shared connections to individuals.
_Hint: This is a degree centrality problem!_
```
import pandas as pd
def find_most_similar_crimes(cG: nx.Graph):
"""
Find the crimes that are most similar to other crimes.
"""
dcs = ______________
return ___________________
from nams.solutions.bipartite import find_most_similar_crimes
find_most_similar_crimes(crime_graph)
```
### Exercise: find the individual(s) that have the most shared connections with other individuals
> Now do the analogous thing for individuals!
```
def find_most_similar_people(pG: nx.Graph):
"""
Find the persons that are most similar to other persons.
"""
dcs = ______________
return ___________________
from nams.solutions.bipartite import find_most_similar_people
find_most_similar_people(person_graph)
```
## Weighted Projection
Though we were able to find out which nodes were connected with one another,
we did not record in the resulting projected graph
the **strength** by which the two nodes were connected.
To preserve this information, we need another function:
```
weighted_person_graph = bipartite.weighted_projected_graph(G, person_nodes)
list(weighted_person_graph.edges(data=True))[0:5]
```
### Exercise: Find the people that can help with investigating a `crime`'s `person`.
Let's pretend that we are a detective trying to solve a crime,
and that we right now need to find other individuals
who were not implicated in the same _exact_ crime as an individual was,
but who might be able to give us information about that individual
because they were implicated in other crimes with that individual.
> Implement a function that takes in a bipartite graph `G`, a string `person` and a string `crime`,
> and returns a list of other `person`s that were **not** implicated in the `crime`,
> but were connected to the `person` via other crimes.
> It should return a _ranked list_,
> based on the **number of shared crimes** (from highest to lowest)
> because the ranking will help with triage.
```
list(G.neighbors('p1'))
def find_connected_persons(G, person, crime):
# Step 0: Check that the given "person" and "crime" are connected.
if _____________________________:
raise ValueError(f"Graph does not have a connection between {person} and {crime}!")
# Step 1: calculate weighted projection for person nodes.
person_nodes = ____________________________________
person_graph = bipartite.________________________(_, ____________)
# Step 2: Find neighbors of the given `person` node in projected graph.
candidate_neighbors = ___________________________________
# Step 3: Remove candidate neighbors from the set if they are implicated in the given crime.
for p in G.neighbors(crime):
if ________________________:
_____________________________
# Step 4: Rank-order the candidate neighbors by number of shared connections.
_________ = []
## You might need a for-loop here
return pd.DataFrame(__________).sort_values("________", ascending=False)
from nams.solutions.bipartite import find_connected_persons
find_connected_persons(G, 'p2', 'c10')
```
## Degree Centrality
The degree centrality metric is something we can calculate for bipartite graphs.
Recall that the degree centrality metric is the number of neighbors of a node
divided by the total number of _possible_ neighbors.
In a unipartite graph, the denominator can be the total number of nodes less one
(if self-loops are not allowed)
or simply the total number of nodes (if self loops _are_ allowed).
### Exercise: What is the denominator for bipartite graphs?
Think about it for a moment, then write down your answer.
```
from nams.solutions.bipartite import bipartite_degree_centrality_denominator
from nams.functions import render_html
render_html(bipartite_degree_centrality_denominator())
```
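We can also verify the answer numerically on a toy graph: each node's degree centrality comes out as its degree divided by the size of the *opposite* partition:

```python
import networkx as nx

# Top partition {0, 1}, bottom partition {"a", "b", "c"} (toy graph).
B = nx.Graph([(0, "a"), (0, "b"), (1, "b"), (1, "c")])

dcs = nx.bipartite.degree_centrality(B, [0, 1])
# Nodes in one partition are normalised by the size of the other partition.
assert abs(dcs[0] - 2 / 3) < 1e-9    # degree 2, bottom set has 3 nodes
assert abs(dcs["a"] - 1 / 2) < 1e-9  # degree 1, top set has 2 nodes
```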
### Exercise: Which `persons` are implicated in the most number of crimes?
> Find the `persons` (singular or plural) who are connected to the most number of crimes.
To do so, you will need to use `nx.bipartite.degree_centrality`,
rather than the regular `nx.degree_centrality` function.
`nx.bipartite.degree_centrality` requires that you pass in
a node set from one of the partitions
so that it can correctly partition nodes on the other set.
What is returned, though, is the degree centrality
for nodes in both sets.
Here is an example to show you how the function is used:
```python
dcs = nx.bipartite.degree_centrality(my_graph, nodes_from_one_partition)
```
```
def find_most_crime_person(G, person_nodes):
dcs = __________________________
return ___________________________
from nams.solutions.bipartite import find_most_crime_person
find_most_crime_person(G, person_nodes)
```
## Solutions
Here are the solutions to the exercises above.
```
from nams.solutions import bipartite
import inspect
print(inspect.getsource(bipartite))
```
```
# Third-party python libraries
import numpy as np
import matplotlib.pyplot as plt
# This controls the figure size in the script
plt.rcParams['figure.figsize'] = (10, 10)
import ipywidgets as ipw
from ipywidgets import widgets, interact_manual
from IPython.display import Image
# This makes it possible to run everything inline
ipw.interact_manual.opts['manual_name'] = "CALCULAR!"
np.set_printoptions(formatter={'float': '{: 0.3f}'.format})
def RUN_ALL(ST):
    # General definitions shared by everything called from here
    DESCR = [r"Q $(m^3/s)$", r"m", r"b $(m)$", r"D_0 $(m)$"]
    # Triangular section
    if ST == "Triangular":
        from Functions import Triang
        interact_manual(Triang,
                        Q=widgets.FloatText(description=DESCR[0], min=0, max=10, value=0.5, readout_format='E'),
                        m=widgets.FloatText(description=DESCR[1], min=0, max=10, value=1, readout_format='E'));
    # Rectangular section
    elif ST == "Rectangular":
        from Functions import Rect
        interact_manual(Rect,
                        Q=widgets.FloatText(description=DESCR[0], min=0, max=10, value=0.5, readout_format='E'),
                        b=widgets.FloatText(description=DESCR[2], min=0, max=10, value=1, readout_format='E'));
    # Trapezoidal section
    elif ST == "Trapezoidal":
        from Functions import Trapez
        interact_manual(Trapez,
                        Q=widgets.FloatText(description=DESCR[0], min=0, max=10, value=0.5, readout_format='E'),
                        m=widgets.FloatText(description=DESCR[1], min=0, max=10, value=1, readout_format='E'),
                        b=widgets.FloatText(description=DESCR[2], min=0, max=10, value=1, readout_format='E'));
    # Circular section
    elif ST == "Circular":
        from Functions import Circ
        interact_manual(Circ,
                        Q=widgets.FloatText(description=DESCR[0], min=0, max=10, value=0.5, readout_format='E'),
                        d0=widgets.FloatText(description=DESCR[3], min=0, max=10, value=2, readout_format='E'));
    return

# ===========================================================================
# First function: select the geometric shape of the cross-section
# that the program will work with
# ===========================================================================
# Labels shown in the dropdown list
select = ["Triangular", "Rectangular", "Trapezoidal", "Circular"]
# Run everything and put the widgets on screen
interact_manual(RUN_ALL, ST=widgets.Dropdown(options=select, description="Sección:"))
```
# Predict Model
The aim of this notebook is to assess how well our [logistic regression classifier](../models/LR.csv) generalizes to unseen data. We will accomplish this by using the Matthews Correlation Coefficient (MCC) to evaluate its predictive performance on the test set. Following this, we will determine which features the classifier deems most important in the classification of a physicist as a Nobel Laureate. Finally, we will use our model to predict the most likely Physics Nobel Prize winners in 2018.
```
import ast
import numpy as np
import pandas as pd
import joblib  # sklearn.externals.joblib is deprecated in newer scikit-learn versions
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import matthews_corrcoef
from src.features.features_utils import convert_categoricals_to_numerical
from src.features.features_utils import convert_target_to_numerical
from src.models.metrics_utils import confusion_matrix_to_dataframe
from src.models.metrics_utils import print_matthews_corrcoef
from src.visualization.visualization_utils import plot_logistic_regression_odds_ratio
```
## Reading in the Data
First let's read in the classifier parameters and metadata that we saved in order to reconstruct the classifier.
```
classifier_params = pd.read_csv('../models/LR.csv', squeeze=True, index_col=0)
classifier_params
```
Next let's read in the training, validation and test features and targets. We make sure to convert the categorical fields to a numerical form that is suitable for building machine learning models.
```
train_features = pd.read_csv('../data/processed/train-features.csv')
X_train = convert_categoricals_to_numerical(train_features)
X_train.head()
train_target = pd.read_csv('../data/processed/train-target.csv', index_col='full_name', squeeze=True)
y_train = convert_target_to_numerical(train_target)
y_train.head()
validation_features = pd.read_csv('../data/processed/validation-features.csv')
X_validation = convert_categoricals_to_numerical(validation_features)
X_validation.head()
validation_target = pd.read_csv('../data/processed/validation-target.csv', index_col='full_name',
squeeze=True)
y_validation = convert_target_to_numerical(validation_target)
y_validation.head()
test_features = pd.read_csv('../data/processed/test-features.csv')
X_test = convert_categoricals_to_numerical(test_features)
X_test.head()
test_target = pd.read_csv('../data/processed/test-target.csv', index_col='full_name', squeeze=True)
y_test = convert_target_to_numerical(test_target)
y_test.head()
```
## Retraining on the Training and Validation Data
It makes sense to retrain the model on both the training and validation data so that we can obtain as good a predictive performance as possible. So let's combine the training and validation features and targets, reconstruct the classifier and retrain the model.
```
X_train_validation = X_train.append(X_validation)
assert(len(X_train_validation) == len(X_train) + len(X_validation))
X_train_validation.head()
y_train_validation = y_train.append(y_validation)
assert(len(y_train_validation) == len(y_train) + len(y_validation))
y_train_validation.head()
classifier = LogisticRegression(**ast.literal_eval(classifier_params.params))
classifier.fit(X_train_validation, y_train_validation)
```
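A side note: `DataFrame.append` has since been deprecated (and removed in pandas 2.0), so on newer pandas versions the same row-wise combination can be done with `pd.concat`:

```python
import pandas as pd

a = pd.DataFrame({"x": [1, 2]})
b = pd.DataFrame({"x": [3]})

# Equivalent to a.append(b) on older pandas versions.
combined = pd.concat([a, b])
assert list(combined["x"]) == [1, 2, 3]
```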
## Predicting on the Test Data
Here comes the moment of truth! We will soon see just how good the model is by predicting on the test data. However, first it makes sense to look at the performance of our "naive" [baseline model](5.0-baseline-model.ipynb) on the test data. Recall that this is a model that predicts the physicist is a laureate whenever the number of workplaces is at least 2.
```
y_train_pred = X_train_validation.num_workplaces_at_least_2
y_test_pred = X_test.num_workplaces_at_least_2
mcc_train_validation = matthews_corrcoef(y_train_validation, y_train_pred)
mcc_test = matthews_corrcoef(y_test, y_test_pred)
name = 'Baseline Classifier'
print_matthews_corrcoef(mcc_train_validation, name, data_label='train + validation')
print_matthews_corrcoef(mcc_test, name, data_label='test')
```
Unsurprisingly, this classifier exhibits very poor performance on the test data. We see evidence of the covariate shift again here due to the relatively large difference in the test and train + validation MCCs. Either physicists started working in more workplaces in general, or the records of where physicists have worked are better in modern times. The confusion matrix and classification report indicate that the classifier is poor in terms of both precision and recall when identifying laureates.
```
display(confusion_matrix_to_dataframe(confusion_matrix(y_test, y_test_pred)))
print(classification_report(y_test, y_test_pred))
```
OK let's see how our logistic regression model does on the test data.
```
y_train_pred = (classifier.predict_proba(X_train_validation)[:, 1] > ast.literal_eval(
classifier_params.threshold)).astype('int64')
y_test_pred = (classifier.predict_proba(X_test)[:, 1] > ast.literal_eval(
classifier_params.threshold)).astype('int64')
mcc_train_validation = matthews_corrcoef(y_train_validation, y_train_pred)
mcc_test = matthews_corrcoef(y_test, y_test_pred)
print_matthews_corrcoef(mcc_train_validation, classifier_params.name, data_label='train + validation')
print_matthews_corrcoef(mcc_test, classifier_params.name, data_label='test')
```
This classifier performs much better on the test data than the baseline classifier. Again we are discussing its performance in relative and not absolute terms. There is very little in the literature, even as a rule of thumb, saying what the expected MCC is for a "good performing classifier" as it is very dependent on the context and usage. As we noted before, predicting Physics Nobel Laureates is a difficult task due to the many complex factors involved, so we certainly should not be expecting stellar performance from *any* classifier. This includes both machine classifiers, either machine-learning-based or rules-based, and human classifiers without inside knowledge. However, let us try and get off the fence just a little now.
The MCC is a [contingency matrix](https://en.wikipedia.org/wiki/Contingency_table) method of calculating the [Pearson product-moment correlation coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) and so it has the [same interpretation](https://stats.stackexchange.com/questions/118219/how-to-interpret-matthews-correlation-coefficient-mcc). If the values in the link are to be believed, then our classifier has a "moderate positive relationship" with the target. This [statistical guide](https://statistics.laerd.com/statistical-guides/pearson-correlation-coefficient-statistical-guide.php) also seems to agree with this assessment. However, we can easily find examples that indicate there is a [low positive correlation](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3576830/) or a [weak uphill (positive) linear relationship](https://www.dummies.com/education/math/statistics/how-to-interpret-a-correlation-coefficient-r/) between the classifier's predictions and the target.
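A quick numerical check of this equivalence, using made-up binary labels:

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

# For binary labels, the MCC equals the Pearson correlation between the
# true and predicted label vectors (illustrative labels, not our data).
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])

mcc = matthews_corrcoef(y_true, y_pred)
pearson = np.corrcoef(y_true, y_pred)[0, 1]
assert abs(mcc - pearson) < 1e-9  # both equal 0.5 here
```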
So should we conclude that the classifier has a low or moderate performance? Asking this question is missing the purpose of this study. Instead we should ask: based on the classifier's performance, would we be willing to make recommendations to the Nobel Committee about any biases that may be present when deciding Physics Laureates? We can see from the confusion matrix and classification report that although this classifier has reasonable recall of laureates, it is contaminated by too many false positives. In other words, it is not precise enough. As a result, the answer to the question is very likely no.
```
display(confusion_matrix_to_dataframe(confusion_matrix(y_test, y_test_pred)))
print(classification_report(y_test, y_test_pred))
```
## Most Important Features
Out of interest, let's determine the features that are most important to the prediction by looking at the coefficients of the logistic regression model. Each coefficient represents the impact that the *presence* vs. *absence* of a predictor has on the [log odds ratio](https://en.wikipedia.org/wiki/Odds_ratio#Role_in_logistic_regression) of a physicist being classified as a laureate. The change in [odds ratio](https://en.wikipedia.org/wiki/Odds_ratio) for each predictor can simply be computed by exponentiating its associated coefficient. The top fifteen most important features are plotted in the chart below.
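A small worked check of this relationship, with a hypothetical intercept and coefficient (not the fitted values):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

b0, b1 = -1.2, 0.8  # hypothetical intercept and coefficient

def odds(x):
    p = sigmoid(b0 + b1 * x)
    return p / (1 - p)

# Flipping a binary predictor from 0 to 1 multiplies the odds by exp(b1).
assert abs(odds(1) / odds(0) - np.exp(b1)) < 1e-9
```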
```
top_n = 15
ax = plot_logistic_regression_odds_ratio(classifier.coef_, top_n=top_n, columns=X_train_validation.columns,
title='Top {} most important features in prediction of Physics Nobel Laureates'.format(top_n))
ax.figure.set_size_inches(10, 8)
```
By far the most important feature is being an experimental physicist. This matches what we observed during the [exploratory data analysis](4.0-exploratory-data-analysis.ipynb). Next comes having at least one physics laureate doctoral student and then living for at least 65-79 years. We also saw during the exploratory data analysis that the latter seemed to have a big effect in distinguishing laureates from their counterparts. Some of the other interesting top features are being a citizen of France or Switzerland, working at [Bell Labs](https://en.wikipedia.org/wiki/Bell_Labs#Discoveries_and_developments) or [The University of Cambridge](https://en.wikipedia.org/wiki/List_of_Nobel_laureates_by_university_affiliation#University_of_Cambridge_(2nd)), being an alumnus in Asia and having at least two alma mater.
## Prediction of 2018 Physics Nobel Laureates
Now let us use the logistic regression model to predict the 2018 Physics Nobel Laureates. A maximum of three physicists can be awarded the prize in any one year. However, to give ourselves more of a fighting chance, we will instead try to predict the ten most likely winners. Let's start by forming the feature and target dataframes of living physicists (i.e. the union of the validation and test sets), as the Nobel Prize cannot be awarded posthumously.
```
X_validation_test = X_validation.append(X_test)
assert(len(X_validation_test) == len(X_validation) + len(X_test))
X_validation_test.head()
y_validation_test = y_validation.append(y_test)
assert(len(y_validation_test) == len(y_validation) + len(y_test))
y_validation_test.head()
```
Recall that *John Bardeen* is the only [double laureate in Physics](https://www.nobelprize.org/prizes/facts/facts-on-the-nobel-prize-in-physics/), so although it is possible to receive the Nobel Prize in Physics multiple times, it is extremely rare. So let's drop previous Physics Laureates from the dataframe. This will make the list far more interesting as it will not be polluted by previous laureates.
```
X_eligible = X_validation_test.drop(y_validation_test[y_validation_test == 1].index)
assert(len(X_eligible) == len(X_validation_test) - len(y_validation_test[y_validation_test == 1]))
X_eligible.head()
```
According to our model, these are the ten most likely winners of 2018 Physics Nobel Prize:
```
physicist_win_probabilites = pd.Series(
classifier.predict_proba(X_eligible)[:, 1], index=X_eligible.index).sort_values(ascending=False)
physicist_win_probabilites[:10]
```
The list contains some great and very interesting physicists who have won many of the top prizes in physics. We'll leave you to check out their Wikipedia articles for some more information on them. However, a few are worth discussing now. Without doubt the most famous case is that of [Jocelyn Bell Burnell](https://en.wikipedia.org/wiki/Jocelyn_Bell_Burnell) who, as a postgraduate student, co-discovered the first radio pulsars in 1967. Her Wikipedia article says:
> "The discovery was recognised by the award of the 1974 Nobel Prize in Physics, but despite the fact that she was the first to observe the pulsars, Bell was excluded from the recipients of the prize.
> The paper announcing the discovery of pulsars had five authors. Bell's thesis supervisor Antony Hewish was listed first, Bell second. Hewish was awarded the Nobel Prize, along with the astronomer Martin Ryle. Many prominent astronomers criticised Bell's omission, including Sir Fred Hoyle."
You can read more about her in her Wikipedia article and further details about other [Nobel Physics Prize controversies](https://en.wikipedia.org/wiki/Nobel_Prize_controversies#Physics).
[Vera Rubin](https://en.wikipedia.org/wiki/Vera_Rubin) was an American astronomer whose research provided evidence of the existence of [dark matter](https://en.wikipedia.org/wiki/Dark_matter). According to her Wikipedia article, she "never won the Nobel Prize, though physicists such as Lisa Randall and Emily Levesque have argued that this was an oversight." Unfortunately she died on 25 December 2016 and is no longer eligible for the award. Recall that the list contains some deceased physicists due to the lag in updates of DBpedia data from Wikipedia. *Peter Mansfield*, who is also on the list, is deceased too.
[Manfred Eigen](https://en.wikipedia.org/wiki/Manfred_Eigen) actually won the 1967 Nobel Prize in Chemistry for work on measuring fast chemical reactions.
The actual winners of the [2018 Nobel Prize in Physics](https://www.nobelprize.org/prizes/physics/2018/summary/) were [Gérard Mourou](https://en.wikipedia.org/wiki/G%C3%A9rard_Mourou), [Arthur Ashkin](https://en.wikipedia.org/wiki/Arthur_Ashkin) and [Donna Strickland](https://en.wikipedia.org/wiki/Donna_Strickland). Our model actually had zero chance of predicting them as they were never in the original [list of physicists](../data/raw/physicists.txt) scraped from Wikipedia! Obviously they are now deemed famous enough to have been added to Wikipedia since.
```
('Gérard Mourou' in physicist_win_probabilites,
'Arthur Ashkin' in physicist_win_probabilites,
'Donna Strickland' in physicist_win_probabilites)
```
So should we declare this part of the study an epic failure because we were unable to identify the winners? No, not quite. Closer inspection reveals many interesting characteristics of the three winners that are related to the top features in our predictive model:
- *Gérard Mourou* is an experimental physicist, a citizen of France, 74 years of age (i.e. years lived group 65-79), has at least one physics laureate doctoral student (i.e. *Donna Strickland*) and has 3 alma mater.
- *Arthur Ashkin* is an experimental physicist, worked at Bell Labs and has 2 alma mater.
- *Donna Strickland* is an experimental physicist and has 2 alma mater.
Maybe this is a pure coincidence, but more likely, there are patterns in the data that the model has found. Whether or not these characteristics can be attributed to biases in the [Nobel Physics Prize nomination and selection process](https://www.nobelprize.org/nomination/physics/) is another matter, as correlation does not necessarily imply causation.
This section was a lot of fun and quite informative about the logistic regression classifier; however, it was not possible without cheating. Look closely to see if you can spot the cheating!
## Model Deployment
It makes sense to retrain the model on *all* the data so that we can obtain as good a predictive performance as possible. So let's go ahead and do this now.
```
X_train_validation_test = X_train_validation.append(X_test)
assert(len(X_train_validation_test) == len(X_train_validation) + len(X_test))
X_train_validation_test.head()
y_train_validation_test = y_train_validation.append(y_test)
assert(len(y_train_validation_test) == len(y_train_validation) + len(y_test))
y_train_validation_test.head()
classifier.fit(X_train_validation_test, y_train_validation_test)
```
Due to the short training time, it is possible in this study to always recreate the logistic regression classifier from the [model template](../models/LR.csv) that we persisted. Every time we want to use the model to make predictions on new data, it is easy enough to retrain the model first. However, if we had more data and longer training times, this would be rather cumbersome. In such a case, if we were deploying the model, which we are not for the reasons mentioned above, it would make sense to actually persist the trained model. Nonetheless, for completeness, let's persist the model.
```
joblib.dump(classifier, '../models/LR.joblib')
```
As a sanity check let's load the model and make sure that we get the same results as before.
```
classifier_check = joblib.load('../models/LR.joblib')
np.testing.assert_allclose(classifier.predict_proba(X_train_validation_test),
classifier_check.predict_proba(X_train_validation_test))
```
Great, everything looks good.
Although persisting the model suffers from the [compatibility and security issues](https://stackabuse.com/scikit-learn-save-and-restore-models/#compatibilityissues) mentioned previously, we have the [model template](../models/LR.csv) that allows us to reconstruct the classifier for future python, library and model versions. This mitigates the compatibility risk. We can also mitigate the security risk by only restoring the model from *trusted* or *authenticated* sources.
```
from kbc_pul.project_info import project_dir as kbc_e_metrics_project_dir
import os
from typing import List, Dict, Set, Optional
import numpy as np
import pandas as pd
from artificial_bias_experiments.evaluation.confidence_comparison.df_utils import ColumnNamesInfo
from artificial_bias_experiments.known_prop_scores.dataset_generation_file_naming import \
get_root_dir_experiment_noisy_propensity_scores
from kbc_pul.confidence_naming import ConfidenceEnum
from kbc_pul.observed_data_generation.sar_two_subject_groups.sar_two_subject_groups_prop_scores import \
PropScoresTwoSARGroups
from artificial_bias_experiments.noisy_prop_scores.sar_two_subject_groups.experiment_info import \
NoisyPropScoresSARExperimentInfo
from artificial_bias_experiments.noisy_prop_scores.sar_two_subject_groups.noisy_prop_scores_sar_two_groups_loading import \
load_df_noisy_prop_scores_two_groups
from pathlib import Path
from pylo.language.lp import Clause as PyloClause
```
# Noisy SAR 2 groups - paper table
```
dataset_name="yago3_10"
is_pca_version: bool = False
true_prop_score_in_filter = 0.5
true_prop_score_other_list = [0.3, .7]
# true_prop_scores = PropScoresTwoSARGroups(
# in_filter=true_prop_score_in_filter,
# other=true_prop_score_other
# )
noisy_prop_score_in_filter: float = true_prop_score_in_filter
noisy_prop_score_not_in_filter_list: List[float] = [0.1, 0.2, .3, .4, .5, .6, .7, .8, .9, 1]
root_experiment_dir: str = os.path.join(
get_root_dir_experiment_noisy_propensity_scores(),
'sar_two_subject_groups',
dataset_name
)
path_root_experiment_dir = Path(root_experiment_dir)
true_prop_score_other_to_df_map: Dict[float, pd.DataFrame] = dict()
df_list_complete: List[pd.DataFrame] = []
for true_prop_score_other in true_prop_score_other_list:
true_prop_scores = PropScoresTwoSARGroups(
in_filter=true_prop_score_in_filter,
other=true_prop_score_other
)
# df_list: List[pd.DataFrame] = []
for target_rel_path in path_root_experiment_dir.iterdir():
if target_rel_path.is_dir():
for filter_dir in target_rel_path.iterdir():
if filter_dir.is_dir():
target_relation = target_rel_path.name
filter_relation = filter_dir.name
print(f"{target_relation} - {filter_relation}")
try:
experiment_info = NoisyPropScoresSARExperimentInfo(
dataset_name=dataset_name,
target_relation=target_relation,
filter_relation=filter_relation,
true_prop_scores=true_prop_scores,
noisy_prop_score_in_filter=noisy_prop_score_in_filter,
noisy_prop_score_not_in_filter_list=noisy_prop_score_not_in_filter_list,
is_pca_version=is_pca_version
)
df_rule_wrappers_tmp = load_df_noisy_prop_scores_two_groups(
experiment_info=experiment_info
)
df_list_complete.append(df_rule_wrappers_tmp)
except Exception as err:
print(err)
df_rule_wrappers_all_targets: pd.DataFrame = pd.concat(df_list_complete, axis=0)
# true_prop_score_other_to_df_map[true_prop_score_other] = df_for_true_prop_score_other
df_rule_wrappers_all_targets.head()
df_rule_wrappers_all_targets.columns
column_names_logistics: List[str] = [
'target_relation',
'filter_relation',
'true_prop_scores_in_filter', 'true_prop_scores_not_in_filter',
'noisy_prop_scores_in_filter', 'noisy_prop_scores_not_in_filter',
'random_trial_index',
"Rule"
]
other_columns = [col for col in df_rule_wrappers_all_targets.columns if col not in column_names_logistics]
resorted_columns = column_names_logistics + other_columns
df_rule_wrappers_all_targets = df_rule_wrappers_all_targets[resorted_columns]
df_rule_wrappers_all_targets.head()
df_rule_wrappers_all_targets.rename(
columns={
'true_prop_scores_in_filter': "true_filter",
'true_prop_scores_not_in_filter': "true_other",
'noisy_prop_scores_in_filter': "noisy_filter", 'noisy_prop_scores_not_in_filter': "noisy_other",
},
inplace=True,
errors="ignore"
)
column_names_logistics: List[str] = [
'target_relation',
'filter_relation',
'true_filter', 'true_other',
'noisy_filter', 'noisy_other',
'random_trial_index',
"Rule"
]
df_rule_wrappers_all_targets.head()
```
## 2. Only keep a subset of rules
### 2.1. Only keep the non-recursive rules; drop recursive rules
```
from kbc_pul.data_structures.rule_wrapper import get_pylo_rule_from_string, is_pylo_rule_recursive
def is_rule_recursive(rule_string: str) -> bool:
pylo_rule: PyloClause = get_pylo_rule_from_string(rule_string)
is_rule_recursive = is_pylo_rule_recursive(pylo_rule)
return is_rule_recursive
mask_recursive_rules = df_rule_wrappers_all_targets.apply(
lambda row: is_rule_recursive(row["Rule"]),
axis=1
)
print(len(df_rule_wrappers_all_targets))
df_rule_wrappers_all_targets: pd.DataFrame = df_rule_wrappers_all_targets[~mask_recursive_rules]
print(len(df_rule_wrappers_all_targets))
```
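For intuition, a rule is recursive here when the head predicate reappears in the rule body. Below is a rough string-based sketch of the same check — the `pylo`-based version above is the authoritative one, and this sketch assumes the `head(...) :- atom(...), atom(...)` syntax used in this dataframe:

```python
def looks_recursive(rule: str) -> bool:
    """Rough check: does the head predicate reappear in the body?"""
    head, _, body = rule.partition(":-")
    head_pred = head.split("(")[0].strip()
    body_preds = {atom.split("(")[0].strip() for atom in body.split("),") if atom.strip()}
    return head_pred in body_preds

assert not looks_recursive("isaffiliatedto(A,B) :- playsfor(A,B)")
assert looks_recursive("isaffiliatedto(A,B) :- isaffiliatedto(B,A)")
```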
### 2.3 Drop the Pair-positive columns (both directions)
```
df_rule_wrappers_all_targets.drop(
[ConfidenceEnum.TRUE_CONF_BIAS_YS_ZERO_S_TO_O.value,
ConfidenceEnum.TRUE_CONF_BIAS_YS_ZERO_O_TO_S.value],
axis=1,
inplace=True,
errors='ignore'
)
df_rule_wrappers_all_targets.head()
```
### 2.4 Drop the IPW-PCA columns (both directions)
```
df_rule_wrappers_all_targets.drop(
[ConfidenceEnum.IPW_PCA_CONF_S_TO_O.value,
ConfidenceEnum.IPW_PCA_CONF_O_TO_S.value],
axis=1,
inplace=True,
errors='ignore'
)
df_rule_wrappers_all_targets.head()
```
### 2.5 Drop the $c_{q}=0.5$ column
```
df_rule_wrappers_all_targets.drop(
["true_filter", "noisy_filter"],
axis=1,
inplace=True,
errors='ignore'
)
column_names_logistics = [
col for col in column_names_logistics
if col != "true_filter"
and col != "noisy_filter"
]
df_rule_wrappers_all_targets.head()
group_by_list = [
"target_relation",
"filter_relation",
'true_other',
'noisy_other',
"Rule",
"random_trial_index"
]
df_count_trials: pd.DataFrame = df_rule_wrappers_all_targets[
[
"target_relation",
"filter_relation",
'true_other',
'noisy_other',
"Rule",
"random_trial_index"
]
].groupby(
[
"target_relation",
"filter_relation",
'true_other',
'noisy_other',
"Rule",
]
).count().reset_index()
df_less_than_ten_trials: pd.DataFrame = df_count_trials[df_count_trials["random_trial_index"].values != 10]
df_less_than_ten_trials
df_rule_wrappers_all_targets = df_rule_wrappers_all_targets[
~(
(df_rule_wrappers_all_targets["target_relation"] == "isaffiliatedto")
&
(df_rule_wrappers_all_targets["filter_relation"] == "wasbornin")
&
(df_rule_wrappers_all_targets["Rule"]=="isaffiliatedto(A,B) :- playsfor(A,B)")
)
]
df_rule_wrappers_all_targets.head()
```
**Now, we have the full dataframe**
****
## Calculate $[conf(R) - \widehat{conf}(R)]^2$
```
true_conf: ConfidenceEnum = ConfidenceEnum.TRUE_CONF
conf_estimators_list: List[ConfidenceEnum] = [
ConfidenceEnum.CWA_CONF,
ConfidenceEnum.ICW_CONF,
ConfidenceEnum.PCA_CONF_S_TO_O,
ConfidenceEnum.PCA_CONF_O_TO_S,
ConfidenceEnum.IPW_CONF,
]
all_confs_list: List[ConfidenceEnum] = [ConfidenceEnum.TRUE_CONF ] + conf_estimators_list
column_names_all_confs: List[str] = [
conf.get_name()
for conf in all_confs_list
]
df_rule_wrappers_all_targets = df_rule_wrappers_all_targets[
column_names_logistics + column_names_all_confs
]
df_rule_wrappers_all_targets.head()
df_conf_estimators_true_other = df_rule_wrappers_all_targets[
df_rule_wrappers_all_targets["true_other"] == df_rule_wrappers_all_targets["noisy_other"]
]
df_conf_estimators_true_other.head()
column_names_info = ColumnNamesInfo(
true_conf=true_conf,
column_name_true_conf=true_conf.get_name(),
conf_estimators=conf_estimators_list,
column_names_conf_estimators=[
col.get_name()
for col in conf_estimators_list
],
column_names_logistics=column_names_logistics
)
def get_df_rulewise_squared_diffs_between_true_conf_and_conf_estimator(
df_rule_wrappers: pd.DataFrame,
column_names_info: ColumnNamesInfo
) -> pd.DataFrame:
df_rulewise_diffs_between_true_conf_and_conf_estimator: pd.DataFrame = df_rule_wrappers[
column_names_info.column_names_logistics
]
col_name_estimator: str
for col_name_estimator in column_names_info.column_names_conf_estimators:
df_rulewise_diffs_between_true_conf_and_conf_estimator \
= df_rulewise_diffs_between_true_conf_and_conf_estimator.assign(
**{
col_name_estimator: (
(df_rule_wrappers[column_names_info.column_name_true_conf]
- df_rule_wrappers[col_name_estimator]) ** 2
)
}
)
return df_rulewise_diffs_between_true_conf_and_conf_estimator
df_conf_squared_errors: pd.DataFrame = get_df_rulewise_squared_diffs_between_true_conf_and_conf_estimator(
df_rule_wrappers=df_rule_wrappers_all_targets,
column_names_info = column_names_info
)
df_conf_squared_errors.head()
```
## AVERAGE the PCA(S) and PCA(O)
```
df_conf_squared_errors["PCA"] = (
(
df_conf_squared_errors[ConfidenceEnum.PCA_CONF_S_TO_O.value]
+
df_conf_squared_errors[ConfidenceEnum.PCA_CONF_O_TO_S.value]
) / 2
)
df_conf_squared_errors.head()
df_conf_squared_errors = df_conf_squared_errors.drop(
columns=[
ConfidenceEnum.PCA_CONF_S_TO_O.value,
ConfidenceEnum.PCA_CONF_O_TO_S.value
],
axis=1,
errors='ignore'
)
df_conf_squared_errors.head()
```
# Now start averaging
```
df_conf_squared_errors_avg_over_trials: pd.DataFrame = df_conf_squared_errors.groupby(
by=["target_relation", "filter_relation", 'true_other', "noisy_other", "Rule"],
sort=True,
as_index=False
).mean()
df_conf_squared_errors_avg_over_trials.head()
df_conf_squared_errors_avg_over_trials_and_rules: pd.DataFrame = df_conf_squared_errors_avg_over_trials.groupby(
by=["target_relation", "filter_relation", 'true_other', "noisy_other",],
sort=True,
as_index=False
).mean()
df_conf_squared_errors_avg_over_trials_and_rules.head()
len(df_conf_squared_errors_avg_over_trials_and_rules)
```
### How many $p$, $q$ combinations are there?
```
df_p_and_q = df_conf_squared_errors_avg_over_trials_and_rules[["target_relation", "filter_relation"]].drop_duplicates()
df_p_and_q.head()
len(df_p_and_q)
df_conf_errors_avg_over_trials_and_rules_and_q: pd.DataFrame = df_conf_squared_errors_avg_over_trials_and_rules.groupby(
by=["target_relation", 'true_other', "noisy_other",],
sort=True,
as_index=False
).mean()
df_conf_errors_avg_over_trials_and_rules_and_q.head()
len(df_conf_errors_avg_over_trials_and_rules_and_q)
```
## Subset of noisy_other
```
first_true_label_freq_to_include = 0.3
second_true_label_freq_to_include = 0.7
true_label_frequencies_set: Set[float] = {
first_true_label_freq_to_include, second_true_label_freq_to_include,
}
true_label_frequency_to_estimate_map: Dict[float, Set[float]] = dict()
label_frequency_est_diff: float = 0.1
label_frequencies_to_keep: Set[float] = set(true_label_frequencies_set)
for true_label_freq in true_label_frequencies_set:
true_label_frequency_to_estimate_map[true_label_freq] = {
round(true_label_freq - label_frequency_est_diff, 1),
round(true_label_freq + label_frequency_est_diff, 1)
}
label_frequencies_to_keep.update(true_label_frequency_to_estimate_map[true_label_freq])
df_conf_errors_avg_over_trials_and_rules_and_q_c_subset = df_conf_errors_avg_over_trials_and_rules_and_q[
df_conf_errors_avg_over_trials_and_rules_and_q["noisy_other"].isin(label_frequencies_to_keep)
]
df_conf_errors_avg_over_trials_and_rules_and_q_c_subset.head()
len(df_conf_errors_avg_over_trials_and_rules_and_q_c_subset)
```
## Count the rules per $p$
```
df_n_rules_per_target = df_rule_wrappers_all_targets[["target_relation", "Rule"]].groupby(
by=['target_relation'],
# sort=True,
# as_index=False
)["Rule"].nunique().to_frame().reset_index().rename(
columns={"Rule" : "# rules"}
)
df_n_rules_per_target.head()
```
****
# Format pretty table
Goal:
* put the smallest value per row in bold
* per target: mean_value 0.3 / 0.4
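The goal above (bold the row-wise minimum) can be sketched with plain pandas before the full formatting loop below; the toy frame and estimator columns here are illustrative stand-ins, not the notebook's actual data.

```python
import pandas as pd

# Toy frame: one row per target, one column per confidence estimator.
df = pd.DataFrame(
    {"CWA": [0.4, 0.1], "PCA": [0.2, 0.3], "IPW": [0.5, 0.2]},
    index=["p1", "p2"],
)

# Column holding the smallest squared error in each row:
smallest = df.idxmin(axis=1)

# Wrap the winning cell in a LaTeX \bm{...} marker, leave the rest plain.
pretty = df.apply(
    lambda row: [
        f"$\\bm{{{v:0.1f}}}$" if col == smallest[row.name] else f"${v:0.1f}$"
        for col, v in row.items()
    ],
    axis=1,
    result_type="expand",
)
pretty.columns = df.columns
print(smallest.tolist())  # ['PCA', 'CWA']
```

The full loop below does the same per-label-frequency, and also tracks ties by collecting all column names that share the minimum.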
```
true_label_freq_to_noisy_to_df_map: Dict[float, Dict[float, pd.DataFrame]] = dict()
for true_label_freq in true_label_frequencies_set:
df_true_tmp: pd.DataFrame = df_conf_errors_avg_over_trials_and_rules_and_q_c_subset[
df_conf_errors_avg_over_trials_and_rules_and_q_c_subset["true_other"] == true_label_freq
]
noisy_label_freq_to_df_map = dict()
true_label_freq_to_noisy_to_df_map[true_label_freq] = noisy_label_freq_to_df_map
df_true_and_noisy_tmp = df_true_tmp[
df_true_tmp["noisy_other"] == true_label_freq
]
noisy_label_freq_to_df_map[true_label_freq] = df_true_and_noisy_tmp[
[col for col in df_true_and_noisy_tmp.columns if col != "noisy_other" and col != "true_other"]
]
for noisy_label_freq in true_label_frequency_to_estimate_map[true_label_freq]:
df_true_and_noisy_tmp = df_true_tmp[
df_true_tmp["noisy_other"] == noisy_label_freq
]
noisy_label_freq_to_df_map[noisy_label_freq] = df_true_and_noisy_tmp[
[col for col in df_true_and_noisy_tmp.columns if col != "noisy_other" and col != "true_other"]
]
true_label_freq_to_noisy_to_df_map[first_true_label_freq_to_include][0.2].head()
from typing import Iterator
true_label_freq_to_df_map = dict()
label_freq_estimators: Iterator[float]
for true_label_freq in true_label_frequencies_set:
noisy_to_df_map: Dict[float, pd.DataFrame] = true_label_freq_to_noisy_to_df_map[true_label_freq]
df_true_label_freq: pd.DataFrame = noisy_to_df_map[true_label_freq]
lower_est: float = round(true_label_freq - label_frequency_est_diff, 1)
higher_est: float = round(true_label_freq + label_frequency_est_diff, 1)
df_lower: pd.DataFrame = noisy_to_df_map[lower_est][
['target_relation', ConfidenceEnum.IPW_CONF.value]
].rename(
columns={
ConfidenceEnum.IPW_CONF.value: f"{ConfidenceEnum.IPW_CONF.value}_lower"
}
)
df_true_label_freq = pd.merge(
left=df_true_label_freq,
right=df_lower,
on="target_relation"
)
df_higher = noisy_to_df_map[higher_est][
['target_relation', ConfidenceEnum.IPW_CONF.value]
].rename(
columns={
ConfidenceEnum.IPW_CONF.value: f"{ConfidenceEnum.IPW_CONF.value}_higher"
}
)
df_true_label_freq = pd.merge(
left=df_true_label_freq,
right=df_higher,
on="target_relation"
)
true_label_freq_to_df_map[true_label_freq] = df_true_label_freq
true_label_freq_to_df_map[0.3].head()
for key, df in true_label_freq_to_df_map.items():
true_label_freq_to_df_map[key] = df.drop(
columns=["random_trial_index"],
axis=1,
errors='ignore'
)
df_one_row_per_target = pd.merge(
left=true_label_freq_to_df_map[first_true_label_freq_to_include],
right=true_label_freq_to_df_map[second_true_label_freq_to_include],
on="target_relation",
suffixes=(f"_{first_true_label_freq_to_include}", f"_{second_true_label_freq_to_include}")
)
df_one_row_per_target.head()
```
## What is the smallest value?
```
all_values: np.ndarray = df_one_row_per_target[
[ col
for col in df_one_row_per_target.columns
if col != "target_relation"
]
].values
min_val = np.amin(all_values)
min_val
min_val * 10000
max_val = np.amax(all_values)
max_val
max_val * 10000
df_one_row_per_target.head() * 10000
df_one_row_per_target.dtypes
exponent = 4
multiplication_factor = 10 ** exponent
multiplication_factor
df_one_row_per_target[
df_one_row_per_target.select_dtypes(include=['number']).columns
] *= multiplication_factor
df_one_row_per_target
df_one_row_per_target.head()
```
## Output files definitions
```
dir_latex_table: str = os.path.join(
kbc_e_metrics_project_dir,
"paper_latex_tables",
'known_prop_scores',
'sar_two_groups'
)
if not os.path.exists(dir_latex_table):
os.makedirs(dir_latex_table)
filename_tsv_rule_stats = os.path.join(
dir_latex_table,
"conf_error_stats_v3.tsv"
)
filename_tsv_single_row_summary = os.path.join(
dir_latex_table,
"noisy_sar_two_groups_single_row_summary.tsv"
)
```
## Create single-row summary
```
df_one_row_in_total: pd.Series = df_one_row_per_target.mean(
)
df_one_row_in_total
df_n_rules_per_target.head()
df_one_row_in_total["# rules"] = int(df_n_rules_per_target["# rules"].sum())
df_one_row_in_total
type(df_one_row_in_total)
df_one_row_in_total.to_csv(
filename_tsv_single_row_summary,
    sep="\t",
    header=False
)
```
### Now create a pretty table
```
column_names_info.column_names_conf_estimators
simplified_column_names_conf_estimators = ['CWA', 'PCA', 'ICW', 'IPW',]
multi_index_columns = [
("$p$", ""),
("\# rules", "")
]
from itertools import product
# conf_upper_cols = column_names_info.column_names_conf_estimators + [
# f"{ConfidenceEnum.IPW_CONF.value} " + "($\Delta c=-" + f"{label_frequency_est_diff}" + "$)",
# f"{ConfidenceEnum.IPW_CONF.value} " + "($\Delta c=" + f"{label_frequency_est_diff}" + "$)",
# ]
conf_upper_cols = simplified_column_names_conf_estimators + [
f"{ConfidenceEnum.IPW_CONF.value} " + "($-\Delta$)",
f"{ConfidenceEnum.IPW_CONF.value} " + "($+\Delta$)",
]
c_subcols = ["$c_{\\neg q}=0.3$", "$c_{\\neg q}=0.7$"]
multi_index_columns = multi_index_columns + list(product(c_subcols, conf_upper_cols))
# multi_index_list
multi_index_columns = pd.MultiIndex.from_tuples(multi_index_columns)
multi_index_columns
rule_counter: int = 1
rule_str_to_rule_id_map: Dict[str, int] = {}
float_precision: int = 1
col_name_conf_estimator: str
pretty_rows: List[List] = []
row_index: int
row: pd.Series
# columns_to_use = [
# "$p$",
# "\# rules"
# ] + column_names_info.column_names_conf_estimators + [
# f"{ConfidenceEnum.IPW_CONF.value} " + "($\Delta c=-" + f"{label_frequency_est_diff}" + "$)",
# f"{ConfidenceEnum.IPW_CONF.value} " + "($\Delta c=" + f"{label_frequency_est_diff}" + "$)",
# ]
LabelFreq = float
def get_dict_with_smallest_estimator_per_label_freq(row: pd.Series) -> Dict[LabelFreq, Set[str]]:
# Find estimator with smallest mean value for label frequency###################
label_freq_to_set_of_smallest_est_map: Dict[LabelFreq, Set[str]] = dict()
for label_freq in [first_true_label_freq_to_include, second_true_label_freq_to_include]:
o_set_of_col_names_with_min_value: Optional[Set[str]] = None
o_current_smallest_value: Optional[float] = None
# Find smallest squared error
for col_name_conf_estimator in simplified_column_names_conf_estimators:
current_val: float = row[f"{col_name_conf_estimator}_{label_freq}"]
# print(current_val)
if o_set_of_col_names_with_min_value is None or o_current_smallest_value > current_val:
o_set_of_col_names_with_min_value = {col_name_conf_estimator}
o_current_smallest_value = current_val
            elif current_val == o_current_smallest_value:
                # add() the single name; update() would insert each character of the string
                o_set_of_col_names_with_min_value.add(col_name_conf_estimator)
label_freq_to_set_of_smallest_est_map[label_freq] = o_set_of_col_names_with_min_value
return label_freq_to_set_of_smallest_est_map
def format_value_depending_on_whether_it_is_smallest(
value: float,
is_smallest: bool,
float_precision: float,
use_si: bool = False
)-> str:
if is_smallest:
if not use_si:
formatted_value = "$\\bm{" + f"{value:0.{float_precision}f}" + "}$"
# formatted_value = "$\\bm{" + f"{value:0.{float_precision}e}" + "}$"
else:
formatted_value = "\\textbf{$" + f"\\num[round-precision={float_precision},round-mode=figures,scientific-notation=true]"+\
"{"+ str(value) + "}"+ "$}"
else:
if not use_si:
formatted_value = f"${value:0.{float_precision}f}$"
# formatted_value = f"${value:0.{float_precision}e}$"
else:
formatted_value = "$" + f"\\num[round-precision={float_precision},round-mode=figures,scientific-notation=true]"+\
"{"+ str(value) + "}"+ "$"
return formatted_value
estimator_columns = simplified_column_names_conf_estimators + [
f"{ConfidenceEnum.IPW_CONF.value}_lower",
f"{ConfidenceEnum.IPW_CONF.value}_higher"
]
# For each row, i.e. for each target relation
for row_index, row in df_one_row_per_target.iterrows():
# Find estimator with smallest mean value for label frequency###################
label_freq_to_set_of_smallest_est_map: Dict[float, Set[str]] = get_dict_with_smallest_estimator_per_label_freq(
row=row
)
##################################################################################
# Construct the new row
######################
target_relation = row["target_relation"]
nb_of_rules = df_n_rules_per_target[df_n_rules_per_target['target_relation'] == target_relation][
"# rules"
].iloc[0]
new_row: List[str] = [
target_relation,
nb_of_rules
]
# For each Confidence estimator, get the value at c 0.3 and 0.7
# for col_name_conf_estimator in estimator_columns:
# mean_val_03:float = row[f"{col_name_conf_estimator}_0.3"]
# mean_val_07:float = row[f"{col_name_conf_estimator}_0.7"]
#
# new_row_value = (
# format_value_depending_on_whether_it_is_smallest(
# value=mean_val_03,
# is_smallest=col_name_conf_estimator == label_freq_to_smallest_est_map[0.3],
# float_precision=float_precision
# )
# + " / "
# + format_value_depending_on_whether_it_is_smallest(
# value=mean_val_07,
# is_smallest=col_name_conf_estimator == label_freq_to_smallest_est_map[0.7],
# float_precision=float_precision
# )
# )
# new_row.append(new_row_value)
for col_name_conf_estimator in estimator_columns:
mean_val_03:float = row[f"{col_name_conf_estimator}_{first_true_label_freq_to_include}"]
new_row_value_03 = format_value_depending_on_whether_it_is_smallest(
value=mean_val_03,
is_smallest=(
col_name_conf_estimator in label_freq_to_set_of_smallest_est_map[first_true_label_freq_to_include]
),
float_precision=float_precision
)
new_row.append(new_row_value_03)
for col_name_conf_estimator in estimator_columns:
mean_val_07:float = row[f"{col_name_conf_estimator}_{second_true_label_freq_to_include}"]
new_row_value_07 = format_value_depending_on_whether_it_is_smallest(
value=mean_val_07,
is_smallest=(
col_name_conf_estimator in label_freq_to_set_of_smallest_est_map[second_true_label_freq_to_include]
),
float_precision=float_precision
)
new_row.append(new_row_value_07)
pretty_rows.append(new_row)
df_pretty: pd.DataFrame = pd.DataFrame(
data=pretty_rows,
columns=multi_index_columns
)
df_pretty.head()
df_pretty: pd.DataFrame = df_pretty.sort_values(
by=["$p$"]
)
df_pretty.head()
```
# To file
```
# dir_latex_table: str = os.path.join(
# kbc_e_metrics_project_dir,
# "paper_latex_tables",
# 'known_prop_scores',
# 'scar'
# )
#
# if not os.path.exists(dir_latex_table):
# os.makedirs(dir_latex_table)
filename_latex_table: str = os.path.join(
dir_latex_table,
"confidence-error-table-sar-two-subject-groups-agg-per-p.tex"
)
filename_tsv_table: str = os.path.join(
dir_latex_table,
"confidence-error-table-sar-two-subject-groups-agg-per-p.tsv"
)
with open(filename_latex_table, "w") as latex_ofile:
with pd.option_context("max_colwidth", 1000):
latex_ofile.write(
df_pretty.to_latex(
column_format="lr|lllllll|lllllll",
index=False,
float_format="{:0.3f}".format,
escape=False,
# caption="$[widehat{conf}-conf]^2$ for SCAR. "
# "std=standard confidence, "
# "PCA (S) = PCA confidence with $s$ as domain, "
# "PCA (O) = PCA confidence with $o$ as domain, "
# "IPW = PCA confidence with $\hat{e}=e$, "
# "IPW +/- $" + f"{label_frequency_est_diff:0.1}" + "$ = IPW confidence with $\hat{e}=e+/-" + f"{label_frequency_est_diff:0.1}" + "$."
)
)
with open(filename_tsv_table, "w") as tsv_ofile:
tsv_ofile.write(df_pretty.to_csv(
index=False,
sep="\t"
))
print(filename_latex_table)
```
```
!pip install google_images_download
#Imports
import tensorflow as tf
import keras
from google.colab import drive
import os
from fastai.vision import *
from fastai.metrics import error_rate
import re
from google_images_download import google_images_download
# Start off with Mounting Drive Locally
drive.mount('/content/drive/')
!ls "/content/drive/My Drive/Auto_Query"
#change the working directory to the Drive folder
os.chdir("/content/drive/My Drive/Auto_Query")
#Initialization.py
%reload_ext autoreload
%autoreload 2
%matplotlib inline
bs = 64 #batch size
sz = 224 #image size wanted
PATH = "/content/drive/My Drive/Auto_Query/downloads"
response = google_images_download.googleimagesdownload()
# TODO: Take in all queries at once, then split into
# individual queries
request_one = input('What would you like first?')
request_two = input('And then?')
name_one = str(request_one)
name_two = str(request_two)
#Test
print('first request: ' +request_one)
print('second request: '+request_two)
search_queries = [request_one, request_two]
print(search_queries)
def downloadimages(query):
arguments = {"keywords":query,
"format": "jpg",
# "limit": 250,
"print_urls": True,
"size": "medium"}
try:
response.download(arguments)
except FileNotFoundError:
arguments = {"keywords": query,
"format": "jpg",
# "limit":4,
"print_urls":True,
"size": "medium",
"usage_rights":"labeled-for-reuse"}
try:
# Downloading the photos based
# on the given arguments
response.download(arguments)
except:
pass
for query in search_queries:
downloadimages(query)
print()
#get_classes.py
classes = []
for d in os.listdir(PATH):
if os.path.isdir(os.path.join(PATH, d)) and not d.startswith('.'):
classes.append(d)
print ("There are ", len(classes), "classes:\n", classes)
#verify_images.py
for c in classes:
print ("Class:", c)
verify_images(os.path.join(PATH, c), delete=True);
#create_training_validation.py
data = ImageDataBunch.from_folder(PATH, ds_tfms=get_transforms(), size=sz, bs=bs, valid_pct=0.2).normalize(imagenet_stats)
print ("There are", len(data.train_ds), "training images and", len(data.valid_ds), "validation images." )
#show data
data.show_batch(rows = 3, figsize=(7,8))
#If the user wants to build their own model
# import tensorflow as tf
# import keras
custom_query = input('Would you like to build your own model?')
if custom_query.lower() == 'yes':
print('true')
#TODO: request model specifications and use keras to build it out
#model
learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.lr_find();
learn.recorder.plot()
#fit
#adjust to ideal learning rate manually for now
learn.fit_one_cycle(7, max_lr=slice(1e-3,1e-2))
#interpretation
interpretation = ClassificationInterpretation.from_learner(learn)
interpretation.plot_confusion_matrix(figsize=(12,12), dpi=60)
interpretation.plot_top_losses(9, figsize=(15,11), heatmap=False)
# test model with a new image
# TODO: either set up a new image scrape and make sure the image is new,
# or connect to a webcam on local machine
# path = './'
# img = open_image(get_image_files(path)[0])
# pred_class,pred_idx,outputs = learn.predict(img)
# img.show()
# print ("It is a", pred_class)
```
```
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
The raw code for this IPython notebook is by default hidden for easier reading.
To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.''')
```
# Northland Geology and Roads
This Jupyter Notebook is intended to help you visualize the anomalies of interest against the Geologic Map and the Topographic Map with roads. It is also a good example of how to retrieve information from [Macrostrat.org](https://macrostrat.org/map/#/z=9.0/x=174.1666/y=-35.5429/bedrock/lines/) for New Zealand. All maps are in projection EPSG:3857.
## SplitMap with ImageOverlay
This example shows how to use the [Leaflet SplitMap](https://github.com/QuantStack/leaflet-splitmap) slider with ipyleaflet. A raster is added as an image overlay. The opacity of the raster can be changed before running the cells of the notebook.
Details of units in the geologic map can be found here [Macrostrat.org](https://macrostrat.org/map/#/z=9.0/x=174.1666/y=-35.5429/bedrock/lines/). Macrostrat data can be browsed through the [Rockd.org](https://rockd.org/) mobile app, which is a good resource for geologists and geoscientists alike.
## Check the metadata on the geologic unit
To check the metadata of a particular unit, click on it and press the "Check Formation!" button.
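The button handler further below boils down to building one Macrostrat query URL from the clicked coordinates and reading the CSV response into a DataFrame; a minimal sketch with placeholder coordinates (the network fetch is left commented out so the cell runs offline):

```python
# Build the Macrostrat geologic_units query used by the button handler.
# The lat/lon values are placeholders for the clicked map position.
lat, lon = "-35.611896", "174.185050"
url = (
    "https://macrostrat.org/api/v2/geologic_units/map"
    f"?lat={lat}&lng={lon}&scale=large&response=short&format=csv"
)
print(url)

# Fetching and parsing would then be:
#   import io, requests, pandas as pd
#   df = pd.read_csv(io.StringIO(requests.get(url).text))
```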
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import io
import os
import requests
from IPython.display import display,clear_output
import ipywidgets as widgets
from ipywidgets import Label,Dropdown,jslink
from ipyleaflet import (
Map, TileLayer, SplitMapControl, ScaleControl,FullScreenControl,ImageOverlay,LayersControl,WidgetControl,Marker,
DrawControl,MeasureControl
)
center = [-35.611896, 174.185050] #-34.8252978589571, 173.54580993652344
zoom = 10
opacity=0.8
m = Map(center=center, zoom=zoom)
left = TileLayer(url='https://tiles.macrostrat.org/carto/{z}/{x}/{y}.png')
right = TileLayer(url='https://{s}.tile.opentopomap.org/{z}/{x}/{y}.png')
#Image Overlay for a grid
#image = ImageOverlay(
# url='RTP.png',
# # url='../06Q1fSz.png',
# bounds=((-35.611896, 174.185050), (-35.411021, 174.380248)),
# opacity=opacity
#)
image = ImageOverlay(
url='Northland_TMI_RTP_reprojected.png',
bounds=((-36.4081355,172.6260946), (-34.3687166,174.6581728)),
opacity=opacity,
name='RTP'
)
image2 = ImageOverlay(
url='Ternary_reprojected.png',
bounds=((-36.4422,172.5928), (-34.34,174.7874)),
opacity=opacity,
name='Ternary Map'
)
image3 = ImageOverlay(
url='Northland_CB_2.png',
bounds=((-37.0078735, 171.9919065), (-34.0078135, 174.9919665)),
opacity=opacity,
name='Bouguer'
)
#Add the layer
m.add_layer(image)
m.add_layer(image2)
m.add_layer(image3)
control = LayersControl(position='topright')
m.add_control(control)
m
#Split the map
control = SplitMapControl(left_layer=left, right_layer=right)
m.add_control(control)
m.add_control(ScaleControl(position='bottomleft',max_width=200))
m.add_control(FullScreenControl())
#m
#Display coordinates on click
def handle_interaction(**kwargs):
if kwargs.get('type') == 'mousedown': #'mousemove' will update coordinates with every position
label.value = str(kwargs.get('coordinates'))
label = Label()
display(label)
m.on_interaction(handle_interaction)
#Button to check the formation on the geographic location selected
button = widgets.Button(description="Check Formation!")
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
if label.value=='':
clear_output(wait=True)
print('Click on a geologic formation on the map and click to get the info')
else:
#Get the info from the API
clear_output(wait=True)
lat = label.value.strip('][').split(', ')[0] #Break the label into usable strings
lon = label.value.strip('][').split(', ')[1] #Break the label into usable strings
url='https://macrostrat.org/api/v2/geologic_units/map?lat='+lat+'&lng='+lon+'&scale=large&response=short&format=csv'
s=requests.get(url).content
c=pd.read_csv(io.StringIO(s.decode('utf-8')))
#Make it into a Pandas DataFrame
df=pd.DataFrame(c)
display(df)
button.on_click(on_button_clicked)
#Show the maps
m
```
# Retrieve more information from the API on Geological Units
We can request the geojson file for a geologic unit given the geographic coordinates. The url should be formatted as in the example code below.
```
from ipyleaflet import Map, ImageOverlay,GeoJSON
m = Map(center=(-35.611896, 174.185050), zoom=7)
#Request info from the API
if label.value=='':
print('Click on a geologic formation on the map above and run this cell to get the info and polygon')
else:
lat = label.value.strip('][').split(', ')[0] #Break the label into usable strings
lon = label.value.strip('][').split(', ')[1] #Break the label into usable strings
#Request the geojson file from the API
import urllib.request, json
with urllib.request.urlopen('https://macrostrat.org/api/v2/geologic_units/map?lat='+lat+'&lng='+lon+'&response=short&scale=large&format=geojson_bare') as url:
data = json.loads(url.read().decode())
#Get the info from the API
url='https://macrostrat.org/api/v2/geologic_units/map?lat='+lat+'&lng='+lon+'&scale=large&response=short&format=csv'
s=requests.get(url).content
c=pd.read_csv(io.StringIO(s.decode('utf-8')))
#Make it into a Pandas DataFrame
df=pd.DataFrame(c)
display(df)
geo_json = GeoJSON(
data=data,
style={'color': 'red','opacity': 1,'fillOpacity': 0.7} #If Color from the API change 'color' to df.color[0]
)
m.add_layer(geo_json)
#Add draw control
draw_control = DrawControl()
draw_control.polyline = {
"shapeOptions": {
"color": "#6bc2e5",
"weight": 8,
"opacity": 1.0
}
}
draw_control.polygon = {
"shapeOptions": {
"fillColor": "#6be5c3",
"color": "#6be5c3",
"fillOpacity": 1.0
},
"drawError": {
"color": "#dd253b",
"message": "Oups!"
},
"allowIntersection": False
}
draw_control.circle = {
"shapeOptions": {
"fillColor": "#efed69",
"color": "#efed69",
"fillOpacity": 1.0
}
}
draw_control.rectangle = {
"shapeOptions": {
"fillColor": "#fca45d",
"color": "#fca45d",
"fillOpacity": 1.0
}
}
m.add_control(draw_control)
#Measure Control
measure = MeasureControl(
position='bottomleft',
active_color = 'orange',
primary_length_unit = 'kilometers'
)
m.add_control(measure)
measure.completed_color = 'red'
measure.add_length_unit('yards', 1.09361, 4)
measure.secondary_length_unit = 'yards'
measure.add_area_unit('sqyards', 1.19599, 4)
measure.secondary_area_unit = 'sqyards'
m
```
# Northland EQs
The EQs of Northland are displayed using the Circle marker and Popup tools of ipyleaflet.
```
from ipywidgets import HTML
from ipyleaflet import Circle,Popup
import numpy as np
import matplotlib.pyplot as plt
#Load the data for EQs
long,lati,depth,mag=np.loadtxt('EQs_Northland.dat',skiprows=1,unpack=True)
#Create the map
m2 = Map(center=(-35.611896, 174.185050), zoom=7)
circles = []
for i in range(0,len(long)):
    c = Circle(location=(lati[i],long[i]), radius=int(10*depth[i]), name=str(depth[i]))
circles.append(c)
m2.add_layer(c)
#Display values with a popup
def handle_interaction2(**kwargs):
if kwargs.get('type') == 'mousedown': #'mousemove' will update coordinates with every position
label.value = str(kwargs.get('coordinates'))
lt = float(label.value.strip('][').split(', ')[0]) #Break the label into usable strings
lng = float(label.value.strip('][').split(', ')[1]) #Break the label into usable strings
message=HTML()
p=np.where((long==lng)&(lati==lt)) #Find the locations to extract the value
        if len(p[0])>1:
            message.value = str(depth[p].mean())+' km'+' <b>Mw:</b> '+str(mag[p].mean())
        elif len(p[0])==1:
            message.value = str(depth[p][0])+' km'+' <b>Mw:</b> '+str(mag[p][0])
else:
message.value = 'No data'
# Popup with a given location on the map:
popup = Popup(
location=(lt,lng),
child=message,
close_button=False,
auto_close=False,
close_on_escape_key=False
)
m2.add_layer(popup)
label2 = Label()
display(label2)
m2.on_interaction(handle_interaction2)
m2
```
# Epithermal vector data over Northland
The data available can be obtained from the [New Zealand Petroleum and Minerals](https://www.nzpam.govt.nz/) database (datapack MR4343). The shapefiles were reprojected from NZMG (EPSG:27200) into EPSG:4326 as GeoJSON files for visualization in ipyleaflet.
## Toggle Controls for displaying layers
A way of using radio buttons to display the layers.
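The `plot_vector` helper in the next cell branches on the first feature's GeoJSON geometry type (circles for `Point`, filled shapes otherwise); a minimal stand-in shows the structure it expects. The feature content here is made up for illustration.

```python
import json

# A made-up GeoJSON FeatureCollection with the shape plot_vector() reads:
data = json.loads("""
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "geometry": {"type": "Point", "coordinates": [174.6, -35.5]},
      "properties": {"name": "example vent"}
    }
  ]
}
""")

# The notebook checks the first feature's geometry type to decide whether
# to draw circles (Point) or filled polygons (anything else).
typed = data["features"][0]["geometry"]["type"]
print(typed)  # Point
```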
```
from ipyleaflet import Map, ImageOverlay,GeoJSON,json
center = (-35.5, 174.6) #or your desired coordinates
m3 = Map(center=center, zoom=6)
radio_button = widgets.RadioButtons(options=['Epithermal', 'Hydrothermal', 'Calderas', 'Vents', 'Veins', 'Faults',
                                             'Gold Occurrences', 'Basin Gold', 'Basin Silver', 'Basin Mercury'],
                                    value='Epithermal',
                                    description='Northland:')
def toggle_vector(map):
    #Toggle options
    if map == 'Epithermal': plot_vector('epiareas.geojson')
    if map == 'Hydrothermal': plot_vector('hydrothm.geojson')
    if map == 'Calderas': plot_vector('calderap.geojson')
    if map == 'Vents': plot_vector('vents.geojson')
    if map == 'Veins': plot_vector('veins.geojson')
    if map == 'Faults': plot_vector('faults.geojson')
    if map == 'Gold Occurrences': plot_vector('au.geojson')
    if map == 'Basin Gold': plot_vector('ssbasinau.geojson'),plot_maps()
    if map == 'Basin Silver': plot_vector('ssbasinag.geojson'),plot_maps()
    if map == 'Basin Mercury': plot_vector('ssbasinhg.geojson'),plot_maps()
display(m3)
def plot_vector(filename):
m3.clear_layers()
#Add the geojson file
with open('./Vector_data_GeoJSON/'+filename+'','r') as f: #epiareas.geojson
data = json.load(f)
layer = GeoJSON(data=data, style = {'color': 'Blue', 'opacity':1, 'weight':1.9, 'dashArray':'9', 'fillOpacity':0.3})
#Plotting depending on type of data
typed=data['features'][0]['geometry']['type'] #Check type of data (Point or Polygon)
if typed=='Point': #If Point use circles
layer = GeoJSON(data=data,
point_style={'radius': 1, 'color': 'blue', 'fillOpacity': 0.8, 'fillColor': 'blue', 'weight': 3})
m3.add_layer(layer)
else: #Plot polygon shape
m3.add_layer(layer)
#m
widgets.interact(toggle_vector, map=radio_button)#plot_vector('epiareas.geojson')
#Split the map
control = SplitMapControl(left_layer=left, right_layer=right)
m3.add_control(control)
m3.add_control(ScaleControl(position='bottomleft',max_width=200))
m3.add_control(FullScreenControl())
```
## Dropdown options with maps
Example of combining the dropdown menu with some map overlays.
```
from ipyleaflet import Map, ImageOverlay,GeoJSON,json
from ipywidgets import Label,Dropdown,jslink
from ipywidgets import widgets as w
from ipywidgets import Widget
center = (-35.5, 174.6) #or your desired coordinates
m4 = Map(center=center, zoom=6)
def plot_maps():
image = ImageOverlay(
url='Northland_TMI_RTP_reprojected.png',
bounds=((-36.4081355,172.6260946), (-34.3687166,174.6581728)),
opacity=opacity,
name='RTP',
visible=True
)
image2 = ImageOverlay(
url='Ternary_reprojected.png',
bounds=((-36.4422,172.5928), (-34.34,174.7874)),
opacity=opacity,
name='Ternary Map',
visible=True
)
image3 = ImageOverlay(
url='Northland_CB_2.png',
bounds=((-37.0078735, 171.9919065), (-34.0078135, 174.9919665)),
opacity=opacity,
name='Bouguer',
visible=True
)
##Add slider control
#slider = w.FloatSlider(min=0, max=1, value=1, # Opacity is valid in [0,1] range
# orientation='vertical', # Vertical slider is what we want
# readout=False, # No need to show exact value = False
# layout=w.Layout(width='2em')) # Fine tune display layout: make it thinner
# Connect slider value to opacity property of the Image Layer
#w.jslink((slider, 'value'),
# (image, 'opacity') )
#m4.add_control(WidgetControl(widget=slider))
#Add the layer
m4.add_layer(image)
m4.add_layer(image2)
m4.add_layer(image3)
return
dropdown = widgets.Dropdown(options=['Epithermal', 'Hydrothermal', 'Calderas', 'Vents', 'Veins', 'Faults',
                                     'Gold Occurrences', 'Basin Gold', 'Basin Silver', 'Basin Mercury'],
                            value='Epithermal',
                            description='Northland:')
def toggle_vector2(map):
    #Toggle options
    if map == 'Epithermal': plot_vector2('epiareas.geojson'),plot_maps()
    if map == 'Hydrothermal': plot_vector2('hydrothm.geojson'),plot_maps()
    if map == 'Calderas': plot_vector2('calderap.geojson'),plot_maps()
    if map == 'Vents': plot_vector2('vents.geojson'),plot_maps()
    if map == 'Veins': plot_vector2('veins.geojson'),plot_maps()
    if map == 'Faults': plot_vector2('faults.geojson'),plot_maps()
    if map == 'Gold Occurrences': plot_vector2('au.geojson'),plot_maps()
    if map == 'Basin Gold': plot_vector2('ssbasinau.geojson'),plot_maps()
    if map == 'Basin Silver': plot_vector2('ssbasinag.geojson'),plot_maps()
    if map == 'Basin Mercury': plot_vector2('ssbasinhg.geojson'),plot_maps()
display(m4)
def plot_vector2(filename):
m4.clear_layers()
#Add the geojson file
with open('./Vector_data_GeoJSON/'+filename+'','r') as f: #epiareas.geojson
data = json.load(f)
layer = GeoJSON(data=data, style = {'color': 'Blue', 'opacity':1, 'weight':1.9, 'dashArray':'9', 'fillOpacity':0.3})
#Plotting depending on type of data
typed=data['features'][0]['geometry']['type'] #Check type of data (Point or Polygon)
if typed=='Point': #If Point use circles
layer = GeoJSON(data=data,
point_style={'radius': 1, 'color': 'blue', 'fillOpacity': 0.8, 'fillColor': 'blue', 'weight': 3})
m4.add_layer(layer)
else: #Plot polygon shape
m4.add_layer(layer)
#m
widgets.interact(toggle_vector2, map=dropdown)#plot_vector('epiareas.geojson')
#Split the map
scontrol = SplitMapControl(left_layer=left, right_layer=right)
m4.add_control(scontrol)
m4.add_control(ScaleControl(position='bottomleft',max_width=200))
m4.add_control(FullScreenControl())
#Layers Control
control = LayersControl(position='topright')
m4.add_control(control)
```
```
# %gui qt
import numpy as np
import mne
import pickle
import sys
import os
# import matplotlib
from multiprocessing import Pool
from tqdm import tqdm
import matplotlib.pyplot as plt
# import vispy
# print(vispy.sys_info())
# BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# sys.path.append(BASE_DIR)
%matplotlib inline
mne.utils.set_config('MNE_USE_CUDA', 'true')
mne.cuda.init_cuda(verbose=True)
baseFolder='./pickled-avg'
files=[f for f in os.listdir(baseFolder) if not f.startswith('.')]
data=pickle.load(open('pickled-avg/OpenBCISession_2020-02-14_11-09-00-SEVEN', 'rb'))
data[0]
#Naming system for blocks into integers
bloc={
"sync":1,
"baseline":2,
"stressor":3,
"survey":4,
"rest":5,
"slowBreath":6,
"paced":7
}
def createMNEObj(data, name='Empty'):
#Create Metadata
sampling_rate = 125
channel_names = ['Fp1', 'Fp2', 'C3', 'C4', 'P7', 'P8', 'O1', 'O2', 'F7', 'F8', 'F3', 'F4', 'T7', 'T8', 'P3', 'P4',
'time', 'bpm', 'ibi', 'sdnn', 'sdsd', 'rmssd', 'pnn20', 'pnn50', 'hr_mad', 'sd1', 'sd2', 's', 'sd1/sd2', 'breathingrate', 'segment_indices1', 'segment_indices2', 'block']
channel_types = ['eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg',
'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'stim']
n_channels = len(channel_types)
info = mne.create_info(ch_names=channel_names, sfreq=sampling_rate, ch_types=channel_types)
info['description'] = name
print(info)
transformed = []
start=-1.0
for i in range(len(data)):
add=[]
add=data[i][1:17]
# print(data[i][19].keys())
        if start==-1:
            # datetime.microsecond is in microseconds; divide by 1e6 to get seconds
            start=data[i][18].hour*3600 + data[i][18].minute*60 + data[i][18].second + data[i][18].microsecond/1e6
            add.append(0.0)
        else:
            tim=data[i][18].hour*3600 + data[i][18].minute*60 + data[i][18].second + data[i][18].microsecond/1e6
            add.append(tim-start)
# add.append(str(data[i][18].hour)+':'+str(data[i][18].minute)+':'+str(data[i][18].second)+':'+str(int(data[i][18].microsecond/1000)))
# try:
add.append(data[i][19]['bpm'])
# except Exception as e:
# print(e, i)
# print(data[i][19])
# print(len(data))
add.append(data[i][19]['ibi'])
add.append(data[i][19]['sdnn'])
add.append(data[i][19]['sdsd'])
add.append(data[i][19]['rmssd'])
add.append(data[i][19]['pnn20'])
add.append(data[i][19]['pnn50'])
add.append(data[i][19]['hr_mad'])
add.append(data[i][19]['sd1'])
add.append(data[i][19]['sd2'])
add.append(data[i][19]['s'])
add.append(data[i][19]['sd1/sd2'])
add.append(data[i][19]['breathingrate'])
add.append(data[i][19]['segment_indices'][0])
add.append(data[i][19]['segment_indices'][1])
add.append(bloc[data[i][20]])
transformed.append(np.array(add))
transformed=np.array(transformed)
print(transformed[0])
#have to convert rows to columns to fit MNE structure
transformed=transformed.transpose()
print(transformed[0], transformed[1], transformed[2], transformed[3])
print(len(transformed[0]))
loaded=mne.io.RawArray(transformed, info)
return loaded
raw=createMNEObj(data)
raw[1]
data
np.transpose(np.transpose(data))
def filt(ind):
name=files[ind]
data=pickle.load(open('pickled-avg/'+name, 'rb'))
# if ind==1:
# pbar = tqdm(total=len(data), position=ind)
raw=createMNEObj(data)
print('Created object')
montage = mne.channels.make_standard_montage('easycap-M1')
raw.set_montage(montage, raise_if_subset=False)
mne.io.Raw.filter(raw,l_freq=0.5,h_freq=None)
print('Done filtering')
tem=np.transpose(data)
# for i in tqdm(range(len(data))):
# if ind==1:
# pbar.update(1)
# data[i][k+1]=raw[k][0][0][i]
for k in range(0, 16):
tem[k+1]=raw[k][0][0]
data=np.transpose(tem)
pickle.dump(data, open('pickled-high/'+name, "wb" ) )
filt(1)
p = Pool(18)
master=p.map(filt, range(len(files)))
data=pickle.load(open('pickled-filt/OpenBCISession_2020-02-14_11-09-00-SEVEN', 'rb'))
data[0]
```
# Cross Industry Standard Process for Data Mining
In this section we are going to analyze the Boston Airbnb dataset. We want to help people clean datasets and show how to deal with some specific kinds of data. In this post we are going to cover all the subjects below:
1. Business Understanding: Understand the problem
2. Data Understanding: Understand the data to solve your problem
3. Data Preparation: Organizing it in a way that will allow us to answer our questions of interests.
4. Modeling: Building a predictive model
5. Evaluation
6. Insights
```
#first we import the libraries which are going to be used
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
#then we open the suggested datasets
df_calendar = pd.read_csv("calendar.csv")
df_listings = pd.read_csv("listings.csv")  # loaded here because it is used below
df_reviews = pd.read_csv("reviews.csv")
```
# 1. Business Understanding
Understanding the problem.
We want to answer the following question:
* Price: are there any correlations we can use in order to predict it?
# 2. Data Understanding
Understand the data to solve your problem.
Now that we have established our goals, we need to understand our data in order to get there.
### Let's take a look at the data
```
df_calendar.head() #looking calendar
df_listings.head(2) #looking listings
df_reviews.head()# looking reviews
print("We have {} rows(cases) and {} features. Our primary goal is to look into these {} features.".format(df_listings.shape[0],df_listings.shape[1],df_listings.shape[1]))
```
As we have a lot of features, let's select some columns that may correlate with our goal (predicting price).
```
view_df_listings = pd.DataFrame(df_listings.dtypes, columns=["type"])
view_df_listings[:50] #let's take a look at the first 50 columns and decide which ones to pick
view_df_listings[50:]
```
After taking a look at these two ranges of columns, I selected the following columns to be part of our data:
* host_response_time
* host_response_rate
* host_acceptance_rate
* host_is_superhost
* host_total_listings_count
* latitude
* longitude
* property_type
* room_type
* accommodates
* bathrooms
* bedrooms
* beds
* bed_type
* amenities
* square_feet
* security_deposit
* cleaning_fee
* guests_included
* extra_people
* review_scores_rating
* review_scores_accuracy
* review_scores_cleanliness
* review_scores_checkin
* review_scores_communication
* review_scores_location
* review_scores_value
* cancellation_policy
```
df_base = df_listings[["host_response_time","host_response_rate","host_acceptance_rate",
"host_is_superhost","host_total_listings_count","latitude","longitude",
"property_type","room_type","accommodates","bathrooms","bedrooms","beds",
"bed_type","amenities","square_feet","security_deposit","cleaning_fee",
"guests_included","extra_people","review_scores_rating","review_scores_accuracy",
"review_scores_cleanliness","review_scores_checkin","review_scores_communication",
"review_scores_location","review_scores_value","cancellation_policy","price"]]
print("Now we have {} features".format(df_base.shape[1]))
no_nulls = set(df_base.columns[df_base.isnull().mean()==0]) ##selecting only columns fully completed
print('''Of all selected features only the following columns are fully completed, without any NANs.
{}
'''.format(no_nulls))
print(df_base.shape)
df_base.isnull().sum()
```
### Data Conclusion
Conclusions:
1. As we can see, **square_feet** is the column with the most NaN values, so we are going to drop it.
2. For **bathrooms, bedrooms and beds**, we assume the values exist but were not recorded, so we will fill them with the integer mean of each column.
3. In **security_deposit**, all NaN values will be replaced by 0.
4. All the **review_scores** rows with NaN values will be dropped, because we can't invent values that don't exist.
5. In **cleaning_fee**, all NaN values will be replaced by 0.
6. Rows with NaNs in **host_response_time** and **host_is_superhost** will be dropped, because these are categorical features.
7. NaNs in **host_response_rate** and **host_acceptance_rate** will be filled with their mean.
8. We will also drop the rows of **property_type** that have NaN values.
# Data Preparation
Organizing it in a way that will allow us to answer our questions of interests
```
df_base_t1 = df_base.drop(["square_feet"], axis = 1)# 1.Dropping square_feet
#symbols
df_base_t1['host_response_rate'] = df_base_t1['host_response_rate'].str.rstrip('%')#removing symbol
df_base_t1['host_acceptance_rate'] = df_base_t1['host_acceptance_rate'].str.rstrip('%')#removing symbol
df_base_t1['security_deposit'] = df_base_t1['security_deposit'].str.lstrip('$')#removing symbol
df_base_t1['cleaning_fee'] = df_base_t1['cleaning_fee'].str.lstrip('$')#removing symbol
df_base_t1['extra_people'] = df_base_t1['extra_people'].str.lstrip('$')#removing symbol
df_base_t1['price'] = df_base_t1['price'].str.lstrip('$')#removing symbol
mean_1 = {"cleaning_fee": 0, #replacing with 0
          "security_deposit": "0.00", #replacing with 0
          "bathrooms": int(df_base.bathrooms.mean()), #mean of the column
          "bedrooms": int(df_base.bedrooms.mean()), #mean of the column
          "beds": int(df_base.beds.mean()) #mean of the column
         } #dict
df_base_t1 = df_base_t1.fillna(value = mean_1) # 2, 3 and 5. Filling with the mean of the columns (or 0)
#4 and 6. Dropping NaNs on the review_scores
df_base_t1.dropna(subset=['review_scores_rating','review_scores_accuracy','review_scores_cleanliness','review_scores_checkin',
'review_scores_communication','review_scores_location','review_scores_value',
'host_response_time','host_is_superhost'], inplace = True)
#7. Replacing categorical columns or symbols
print(df_base_t1.host_response_time.value_counts())
#creating dict for this column
host_response_time = {"within an hour":1,
"within a few hours":2,
"within a day":3,
"a few days or more":4}
df_base_t1= df_base_t1.replace({"host_response_time":host_response_time}) #replacing categorical
#types
df_base_t1['host_response_rate']= df_base_t1['host_response_rate'].astype(int) #converting type
df_base_t1['host_acceptance_rate']= df_base_t1['host_acceptance_rate'].astype(int) #converting type
df_base_t1['cleaning_fee']= df_base_t1['cleaning_fee'].astype(float)#converting type
df_base_t1['extra_people']= df_base_t1['extra_people'].astype(float)#converting type
#thousands-separator commas interfere with type conversion
# converting price type
df_base_t1.price=df_base_t1.price.str.replace(",","")
df_base_t1['price']= df_base_t1['price'].astype(float)#converting type
# converting security_deposit type
df_base_t1.security_deposit=df_base_t1.security_deposit.str.replace(",","")
df_base_t1['security_deposit']= df_base_t1['security_deposit'].astype(float)#converting type
### categorical
#creating dict for this column
host_is_superhost = {"f":0,
"t":1}
df_base_t1= df_base_t1.replace({"host_is_superhost":host_is_superhost}) #replacing categorical
#creating dict for this column
property_type = {"Apartment":1,
"House":2,
"Condominium":3,
"Townhouse":4,
"Bed & Breakfast":4,
"Loft":4,
"Boat":4,
"Other":4,
"Villa":4,
"Dorm":4,
"Guesthouse":4,
"Entire Floor":4}
df_base_t1= df_base_t1.replace({"property_type":property_type}) #replacing categorical
#creating dict for this column
room_type = {"Entire home/apt":1,
"Private room":2,
"Shared room":3}
df_base_t1= df_base_t1.replace({"room_type":room_type}) #replacing categorical
#creating dict for this column
bed_type = {"Real Bed":1,
"Futon":2,
"Airbed":3,
"Pull-out Sofa":4,
"Couch":5}
df_base_t1= df_base_t1.replace({"bed_type":bed_type}) #replacing categorical
#creating dict for this column
cancellation_policy = {"strict":1,
"moderate":2,
"flexible":3,
"super_strict_30":4}
df_base_t1= df_base_t1.replace({"cancellation_policy":cancellation_policy}) #replacing categorical
```
### Finished? Not yet...
All we did until now was put into numbers what words were showing us.
**property_type**: As there were a lot of property types, I created four categories (1 = Apartment, 2 = House, 3 = Condominium, 4 = Others), each one specified in the dict above. Even though we had more property types, I consider only four, because more would increase the dimensionality of our data.
I did this to almost every variable, but one is missing: **amenities**. I left this one because there were a lot of categories inside it, and even with get_dummies the dimensionality of the data could increase, making future predictions worse. So instead we can count how many amenities are in the house, transforming this information into a number.
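As a minimal stand-alone sketch of that counting idea (the sample amenities string is made up for illustration, and `count_amenities` is a hypothetical helper, not part of this notebook):

```python
# Airbnb exports amenities as one brace-wrapped, comma-separated string.
# We approximate "number of amenities" by counting the entries.
def count_amenities(amenities_str):
    """Return how many comma-separated amenities the raw string contains."""
    stripped = amenities_str.strip("{}")
    if not stripped:
        return 0
    return len(stripped.split(","))

print(count_amenities('{TV,Internet,Kitchen,Heating}'))  # -> 4
print(count_amenities('{}'))                             # -> 0
```

The loop in the cell below applies the same idea directly to the `amenities` column of the dataframe.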
```
df_base_t1 = df_base_t1.reset_index(drop=True) #resetting the index
```
Columns 3, 7, 8, 13, 14, 26
Symbols: 15 = security_deposit, 16 = cleaning_fee, 18 = extra_people, 27 = price
```
amenities = []
for i in df_base_t1.amenities:
a = i.split(",")
amenities.append(len(a))
df_base_t1["amenities"] = pd.DataFrame(amenities)
print('''Almost forgot that we still have some values missing...
'''
,df_base_t1.isnull().sum()>0,df_base_t1.shape)
df_base_t1 = df_base_t1.dropna()
print('''Let's take a look now...
'''
,df_base_t1.isnull().sum()>0,df_base_t1.shape)
```
### Let's take a look at our database
Let's just take a look and try to see if we can get some insights from the data.
```
df_base_t1.describe()
df_base_t1.hist(figsize=(15,15) )
```
### And what about the correlations?
There is a good graphic that will help us see which columns are correlated with each other.
```
fig, ax = plt.subplots(figsize=(15,15))
sns.heatmap(df_base_t1.corr(), annot=True, fmt=".2f",ax=ax)
```
# Modeling, Evaluating and Insights
```
X= df_base_t1[["host_response_time","host_response_rate","host_acceptance_rate",
"host_is_superhost","host_total_listings_count","latitude","longitude",
"property_type","room_type","accommodates","bathrooms","bedrooms","beds",
"bed_type","amenities","security_deposit","cleaning_fee",
"guests_included","extra_people","review_scores_rating","review_scores_accuracy",
"review_scores_cleanliness","review_scores_checkin","review_scores_communication",
"review_scores_location","review_scores_value","cancellation_policy"]]
y = df_base_t1[["price"]]
#Split into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .30, random_state=42)
lm_model = LinearRegression(normalize=True) # Instantiate
lm_model.fit(X_train, y_train) #Fit
#Predict and score the model
y_test_preds = lm_model.predict(X_test)
"The r-squared score for your model was {} on {} values.".format(r2_score(y_test, y_test_preds), len(y_test))
```
# Hyperparameter Optimization [xgboost](https://github.com/dmlc/xgboost)
What options are there for tuning?
* [GridSearch](http://scikit-learn.org/stable/modules/grid_search.html)
* [RandomizedSearch](http://scikit-learn.org/stable/modules/generated/sklearn.grid_search.RandomizedSearchCV.html)
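For intuition, a plain grid search just enumerates the Cartesian product of all candidate values and evaluates each combination. A minimal sketch (the toy parameter grid is made up; a real search would feed each dict into a model):

```python
from itertools import product

# Toy grid: 3 parameters with a few candidate values each.
param_grid = {
    "max_depth": [3, 6, 9],
    "learning_rate": [0.01, 0.1],
    "n_estimators": [100, 500],
}

# Grid search evaluates every combination in the Cartesian product.
names = list(param_grid)
combinations = [dict(zip(names, values))
                for values in product(*param_grid.values())]

print(len(combinations))  # 3 * 2 * 2 = 12 combinations
print(combinations[0])    # {'max_depth': 3, 'learning_rate': 0.01, 'n_estimators': 100}
```

The cost is the product of the per-parameter grid sizes, which is why exhaustive search blows up so quickly.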
All right!
Xgboost has about 20 params:
1. base_score
2. **colsample_bylevel**
3. **colsample_bytree**
4. **gamma**
5. **learning_rate**
6. **max_delta_step**
7. **max_depth**
8. **min_child_weight**
9. missing
10. **n_estimators**
11. nthread
12. **objective**
13. **reg_alpha**
14. **reg_lambda**
15. **scale_pos_weight**
16. **seed**
17. silent
18. **subsample**
Suppose we tune 12 of them, each with 5-10 possible values. The grid then has between 5^12 and 10^12 possible combinations.
Checking one combination every 10 s, 5^12 cases would take roughly **77 years**, and 10^12 about **300K years** :).
This is far too long... but there's a third option - **Bayesian optimization**.
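A quick back-of-envelope check of that estimate (the combination count is values-per-parameter raised to the number of parameters; the 10-seconds-per-case figure is just the assumption from the text):

```python
# Rough wall-clock cost of exhaustive grid search: with `n_params` tuned
# parameters and `values_per_param` candidates each, the grid holds
# values_per_param ** n_params combinations.
SECONDS_PER_CASE = 10  # assumed evaluation time per combination

def grid_search_years(values_per_param, n_params, seconds_per_case=SECONDS_PER_CASE):
    """Rough time, in years, to evaluate every combination in the grid."""
    total_seconds = (values_per_param ** n_params) * seconds_per_case
    return total_seconds / (365.25 * 24 * 3600)

print(round(grid_search_years(5, 12), 1))  # 5**12 combinations: ~77 years
print(round(grid_search_years(10, 12)))    # 10**12 combinations: ~317,000 years
```

Even the optimistic end of the range is hopeless to search exhaustively, which is what motivates a smarter strategy such as Bayesian optimization.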
```
import pandas as pd
import xgboost as xgb
import numpy as np
import seaborn as sns
from hyperopt import hp
from hyperopt import hp, fmin, tpe, STATUS_OK, Trials
%matplotlib inline
train = pd.read_csv('bike.csv')
train['datetime'] = pd.to_datetime( train['datetime'] )
train['day'] = train['datetime'].map(lambda x: x.day)
```
## Modeling
```
def assing_test_samples(data, last_training_day=0.3, seed=1):
days = data.day.unique()
np.random.seed(seed)
np.random.shuffle(days)
    test_days = days[: int(len(days) * last_training_day)]
data['is_test'] = data.day.isin(test_days)
def select_features(data):
    columns = data.columns[ (data.dtypes == np.int64) | (data.dtypes == np.float64) | (data.dtypes == bool) ].values
return [feat for feat in columns if feat not in ['count', 'casual', 'registered'] and 'log' not in feat ]
def get_X_y(data, target_variable):
features = select_features(data)
X = data[features].values
y = data[target_variable].values
return X,y
def train_test_split(train, target_variable):
df_train = train[train.is_test == False]
df_test = train[train.is_test == True]
X_train, y_train = get_X_y(df_train, target_variable)
X_test, y_test = get_X_y(df_test, target_variable)
return X_train, X_test, y_train, y_test
def fit_and_predict(train, model, target_variable):
X_train, X_test, y_train, y_test = train_test_split(train, target_variable)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
return (y_test, y_pred)
def post_pred(y_pred):
y_pred[y_pred < 0] = 0
return y_pred
def rmsle(y_true, y_pred, y_pred_only_positive=True):
if y_pred_only_positive: y_pred = post_pred(y_pred)
diff = np.log(y_pred+1) - np.log(y_true+1)
mean_error = np.square(diff).mean()
return np.sqrt(mean_error)
assing_test_samples(train)
def etl_datetime(df):
df['year'] = df['datetime'].map(lambda x: x.year)
df['month'] = df['datetime'].map(lambda x: x.month)
df['hour'] = df['datetime'].map(lambda x: x.hour)
df['minute'] = df['datetime'].map(lambda x: x.minute)
df['dayofweek'] = df['datetime'].map(lambda x: x.dayofweek)
df['weekend'] = df['datetime'].map(lambda x: x.dayofweek in [5,6])
etl_datetime(train)
train['{0}_log'.format('count')] = train['count'].map(lambda x: np.log2(x) )
for name in ['registered', 'casual']:
train['{0}_log'.format(name)] = train[name].map(lambda x: np.log2(x+1) )
```
## Tuning hyperparameters using Bayesian optimization algorithms
```
def objective(space):
model = xgb.XGBRegressor(
max_depth = space['max_depth'],
n_estimators = int(space['n_estimators']),
subsample = space['subsample'],
colsample_bytree = space['colsample_bytree'],
learning_rate = space['learning_rate'],
reg_alpha = space['reg_alpha']
)
X_train, X_test, y_train, y_test = train_test_split(train, 'count')
eval_set = [( X_train, y_train), ( X_test, y_test)]
(_, registered_pred) = fit_and_predict(train, model, 'registered_log')
(_, casual_pred) = fit_and_predict(train, model, 'casual_log')
y_test = train[train.is_test == True]['count']
y_pred = (np.exp2(registered_pred) - 1) + (np.exp2(casual_pred) -1)
score = rmsle(y_test, y_pred)
    print("SCORE:", score)
return{'loss':score, 'status': STATUS_OK }
space ={
'max_depth': hp.quniform("x_max_depth", 2, 20, 1),
'n_estimators': hp.quniform("n_estimators", 100, 1000, 1),
'subsample': hp.uniform ('x_subsample', 0.8, 1),
'colsample_bytree': hp.uniform ('x_colsample_bytree', 0.1, 1),
'learning_rate': hp.uniform ('x_learning_rate', 0.01, 0.1),
'reg_alpha': hp.uniform ('x_reg_alpha', 0.1, 1)
}
trials = Trials()
best = fmin(fn=objective,
space=space,
algo=tpe.suggest,
max_evals=15,
trials=trials)
print(best)
```
## Links
1. http://hyperopt.github.io/hyperopt/
2. https://districtdatalabs.silvrback.com/parameter-tuning-with-hyperopt
3. http://fastml.com/optimizing-hyperparams-with-hyperopt/
4. https://github.com/Far0n/xgbfi
##### Copyright 2020 The TensorFlow Hub Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/text/solve_glue_tasks_using_bert_on_tpu"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/text/solve_glue_tasks_using_bert_on_tpu.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/solve_glue_tasks_using_bert_on_tpu.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/text/solve_glue_tasks_using_bert_on_tpu.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/collections/bert/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
# Solve GLUE tasks using BERT on TPU
BERT can be used to solve many problems in natural language processing. You will learn how to fine-tune BERT for many tasks from the [GLUE benchmark](https://gluebenchmark.com/):
1. [CoLA](https://nyu-mll.github.io/CoLA/) (Corpus of Linguistic Acceptability): Is the sentence grammatically correct?
1. [SST-2](https://nlp.stanford.edu/sentiment/index.html) (Stanford Sentiment Treebank): The task is to predict the sentiment of a given sentence.
1. [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) (Microsoft Research Paraphrase Corpus): Determine whether a pair of sentences are semantically equivalent.
1. [QQP](https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Quora Question Pairs): Determine whether a pair of questions are semantically equivalent.
1. [MNLI](http://www.nyu.edu/projects/bowman/multinli/) (Multi-Genre Natural Language Inference): Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral).
1. [QNLI](https://rajpurkar.github.io/SQuAD-explorer/) (Question-answering Natural Language Inference): The task is to determine whether the context sentence contains the answer to the question.
1. [RTE](https://aclweb.org/aclwiki/Recognizing_Textual_Entailment) (Recognizing Textual Entailment): Determine if a sentence entails a given hypothesis or not.
1. [WNLI](https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html) (Winograd Natural Language Inference): The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence.
This tutorial contains complete end-to-end code to train these models on a TPU. You can also run this notebook on a GPU, by changing one line (described below).
In this notebook, you will:
- Load a BERT model from TensorFlow Hub
- Choose one of GLUE tasks and download the dataset
- Preprocess the text
- Fine-tune BERT (examples are given for single-sentence and multi-sentence datasets)
- Save the trained model and use it
Key point: The model you develop will be end-to-end. The preprocessing logic will be included in the model itself, making it capable of accepting raw strings as input.
Note: This notebook should be run using a TPU. In Colab, choose **Runtime -> Change runtime type** and verify that a **TPU** is selected.
## Setup
You will use a separate model to preprocess text before using it to fine-tune BERT. This model depends on [tensorflow/text](https://github.com/tensorflow/text), which you will install below.
```
!pip install -q -U tensorflow-text
```
You will use the AdamW optimizer from [tensorflow/models](https://github.com/tensorflow/models) to fine-tune BERT, which you will install as well.
```
!pip install -q -U tf-models-official
!pip install -U tfds-nightly
import os
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
import tensorflow_text as text # A dependency of the preprocessing model
import tensorflow_addons as tfa
from official.nlp import optimization
import numpy as np
tf.get_logger().setLevel('ERROR')
```
Next, configure TFHub to read checkpoints directly from TFHub's Cloud Storage buckets. This is only recommended when running TFHub models on TPU.
Without this setting TFHub would download the compressed file and extract the checkpoint locally. Attempting to load from these local files will fail with the following error:
```
InvalidArgumentError: Unimplemented: File system scheme '[local]' not implemented
```
This is because the [TPU can only read directly from Cloud Storage buckets](https://cloud.google.com/tpu/docs/troubleshooting#cannot_use_local_filesystem).
Note: This setting is automatic in Colab.
```
os.environ["TFHUB_MODEL_LOAD_FORMAT"]="UNCOMPRESSED"
```
### Connect to the TPU worker
The following code connects to the TPU worker and changes TensorFlow's default device to the CPU device on the TPU worker. It also defines a TPU distribution strategy that you will use to distribute model training onto the 8 separate TPU cores available on this one TPU worker. See TensorFlow's [TPU guide](https://www.tensorflow.org/guide/tpu) for more information.
```
import os
if os.environ.get('COLAB_TPU_ADDR'):
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
strategy = tf.distribute.TPUStrategy(cluster_resolver)
print('Using TPU')
elif tf.test.is_gpu_available():
strategy = tf.distribute.MirroredStrategy()
print('Using GPU')
else:
  raise ValueError('Running on CPU is not recommended.')
```
## Loading models from TensorFlow Hub
Here you can choose which BERT model you will load from TensorFlow Hub and fine-tune.
There are multiple BERT models available to choose from.
- [BERT-Base](https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3), [Uncased](https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3) and [seven more models](https://tfhub.dev/google/collections/bert/1) with trained weights released by the original BERT authors.
- [Small BERTs](https://tfhub.dev/google/collections/bert/1) have the same general architecture but fewer and/or smaller Transformer blocks, which lets you explore tradeoffs between speed, size and quality.
- [ALBERT](https://tfhub.dev/google/collections/albert/1): four different sizes of "A Lite BERT" that reduces model size (but not computation time) by sharing parameters between layers.
- [BERT Experts](https://tfhub.dev/google/collections/experts/bert/1): eight models that all have the BERT-base architecture but offer a choice between different pre-training domains, to align more closely with the target task.
- [Electra](https://tfhub.dev/google/collections/electra/1) has the same architecture as BERT (in three different sizes), but gets pre-trained as a discriminator in a set-up that resembles a Generative Adversarial Network (GAN).
- BERT with Talking-Heads Attention and Gated GELU [[base](https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_base/1), [large](https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_large/1)] has two improvements to the core of the Transformer architecture.
See the model documentation linked above for more details.
In this tutorial, you will start with BERT-base. You can use larger and more recent models for higher accuracy, or smaller models for faster training times. To change the model, you only need to switch a single line of code (shown below). All of the differences are encapsulated in the SavedModel you will download from TensorFlow Hub.
```
#@title Choose a BERT model to fine-tune
bert_model_name = 'bert_en_uncased_L-12_H-768_A-12' #@param ["bert_en_uncased_L-12_H-768_A-12", "bert_en_uncased_L-24_H-1024_A-16", "bert_en_wwm_uncased_L-24_H-1024_A-16", "bert_en_cased_L-12_H-768_A-12", "bert_en_cased_L-24_H-1024_A-16", "bert_en_wwm_cased_L-24_H-1024_A-16", "bert_multi_cased_L-12_H-768_A-12", "small_bert/bert_en_uncased_L-2_H-128_A-2", "small_bert/bert_en_uncased_L-2_H-256_A-4", "small_bert/bert_en_uncased_L-2_H-512_A-8", "small_bert/bert_en_uncased_L-2_H-768_A-12", "small_bert/bert_en_uncased_L-4_H-128_A-2", "small_bert/bert_en_uncased_L-4_H-256_A-4", "small_bert/bert_en_uncased_L-4_H-512_A-8", "small_bert/bert_en_uncased_L-4_H-768_A-12", "small_bert/bert_en_uncased_L-6_H-128_A-2", "small_bert/bert_en_uncased_L-6_H-256_A-4", "small_bert/bert_en_uncased_L-6_H-512_A-8", "small_bert/bert_en_uncased_L-6_H-768_A-12", "small_bert/bert_en_uncased_L-8_H-128_A-2", "small_bert/bert_en_uncased_L-8_H-256_A-4", "small_bert/bert_en_uncased_L-8_H-512_A-8", "small_bert/bert_en_uncased_L-8_H-768_A-12", "small_bert/bert_en_uncased_L-10_H-128_A-2", "small_bert/bert_en_uncased_L-10_H-256_A-4", "small_bert/bert_en_uncased_L-10_H-512_A-8", "small_bert/bert_en_uncased_L-10_H-768_A-12", "small_bert/bert_en_uncased_L-12_H-128_A-2", "small_bert/bert_en_uncased_L-12_H-256_A-4", "small_bert/bert_en_uncased_L-12_H-512_A-8", "small_bert/bert_en_uncased_L-12_H-768_A-12", "albert_en_base", "albert_en_large", "albert_en_xlarge", "albert_en_xxlarge", "electra_small", "electra_base", "experts_pubmed", "experts_wiki_books", "talking-heads_base", "talking-heads_large"]
map_name_to_handle = {
'bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3',
'bert_en_uncased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_uncased_L-24_H-1024_A-16/3',
'bert_en_wwm_uncased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_wwm_uncased_L-24_H-1024_A-16/3',
'bert_en_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_cased_L-12_H-768_A-12/3',
'bert_en_cased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_cased_L-24_H-1024_A-16/3',
'bert_en_wwm_cased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_wwm_cased_L-24_H-1024_A-16/3',
'bert_multi_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/3',
'small_bert/bert_en_uncased_L-2_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-128_A-2/1',
'small_bert/bert_en_uncased_L-2_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-256_A-4/1',
'small_bert/bert_en_uncased_L-2_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-512_A-8/1',
'small_bert/bert_en_uncased_L-2_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-768_A-12/1',
'small_bert/bert_en_uncased_L-4_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-128_A-2/1',
'small_bert/bert_en_uncased_L-4_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-256_A-4/1',
'small_bert/bert_en_uncased_L-4_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1',
'small_bert/bert_en_uncased_L-4_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-768_A-12/1',
'small_bert/bert_en_uncased_L-6_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-128_A-2/1',
'small_bert/bert_en_uncased_L-6_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-256_A-4/1',
'small_bert/bert_en_uncased_L-6_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-512_A-8/1',
'small_bert/bert_en_uncased_L-6_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-768_A-12/1',
'small_bert/bert_en_uncased_L-8_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-128_A-2/1',
'small_bert/bert_en_uncased_L-8_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-256_A-4/1',
'small_bert/bert_en_uncased_L-8_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-512_A-8/1',
'small_bert/bert_en_uncased_L-8_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-768_A-12/1',
'small_bert/bert_en_uncased_L-10_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-128_A-2/1',
'small_bert/bert_en_uncased_L-10_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-256_A-4/1',
'small_bert/bert_en_uncased_L-10_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-512_A-8/1',
'small_bert/bert_en_uncased_L-10_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-768_A-12/1',
'small_bert/bert_en_uncased_L-12_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-128_A-2/1',
'small_bert/bert_en_uncased_L-12_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-256_A-4/1',
'small_bert/bert_en_uncased_L-12_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-512_A-8/1',
'small_bert/bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-768_A-12/1',
'albert_en_base':
'https://tfhub.dev/tensorflow/albert_en_base/2',
'albert_en_large':
'https://tfhub.dev/tensorflow/albert_en_large/2',
'albert_en_xlarge':
'https://tfhub.dev/tensorflow/albert_en_xlarge/2',
'albert_en_xxlarge':
'https://tfhub.dev/tensorflow/albert_en_xxlarge/2',
'electra_small':
'https://tfhub.dev/google/electra_small/2',
'electra_base':
'https://tfhub.dev/google/electra_base/2',
'experts_pubmed':
'https://tfhub.dev/google/experts/bert/pubmed/2',
'experts_wiki_books':
'https://tfhub.dev/google/experts/bert/wiki_books/2',
'talking-heads_base':
'https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_base/1',
'talking-heads_large':
'https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_large/1',
}
map_model_to_preprocess = {
'bert_en_uncased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'bert_en_wwm_cased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3',
'bert_en_cased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3',
'bert_en_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3',
'bert_en_wwm_uncased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'bert_multi_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_multi_cased_preprocess/3',
'albert_en_base':
'https://tfhub.dev/tensorflow/albert_en_preprocess/2',
'albert_en_large':
'https://tfhub.dev/tensorflow/albert_en_preprocess/2',
'albert_en_xlarge':
'https://tfhub.dev/tensorflow/albert_en_preprocess/2',
'albert_en_xxlarge':
'https://tfhub.dev/tensorflow/albert_en_preprocess/2',
'electra_small':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'electra_base':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'experts_pubmed':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'experts_wiki_books':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'talking-heads_base':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'talking-heads_large':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
}
tfhub_handle_encoder = map_name_to_handle[bert_model_name]
tfhub_handle_preprocess = map_model_to_preprocess[bert_model_name]
print(f'BERT model selected : {tfhub_handle_encoder}')
print(f'Preprocessing model auto-selected: {tfhub_handle_preprocess}')
```
## Preprocess the text
On the [Classify text with BERT colab](https://www.tensorflow.org/tutorials/text/classify_text_with_bert), the preprocessing model is embedded directly with the BERT encoder.
This tutorial demonstrates how to do preprocessing as part of your input pipeline for training, using Dataset.map, and then merge it into the model that gets exported for inference. That way, both training and inference can work from raw text inputs, although the TPU itself requires numeric inputs.
TPU requirements aside, it can help performance to have preprocessing done asynchronously in an input pipeline (you can learn more in the [tf.data performance guide](https://www.tensorflow.org/guide/data_performance)).
This tutorial also demonstrates how to build multi-input models, and how to adjust the sequence length of the inputs to BERT.
Let's demonstrate the preprocessing model.
```
bert_preprocess = hub.load(tfhub_handle_preprocess)
tok = bert_preprocess.tokenize(tf.constant(['Hello TensorFlow!']))
print(tok)
```
Each preprocessing model also provides a method, `.bert_pack_inputs(tensors, seq_length)`, which takes a list of tokens (like `tok` above) and a sequence length argument. This packs the inputs to create a dictionary of tensors in the format expected by the BERT model.
```
text_preprocessed = bert_preprocess.bert_pack_inputs([tok, tok], tf.constant(20))
print('Shape Word Ids : ', text_preprocessed['input_word_ids'].shape)
print('Word Ids : ', text_preprocessed['input_word_ids'][0, :16])
print('Shape Mask : ', text_preprocessed['input_mask'].shape)
print('Input Mask : ', text_preprocessed['input_mask'][0, :16])
print('Shape Type Ids : ', text_preprocessed['input_type_ids'].shape)
print('Type Ids : ', text_preprocessed['input_type_ids'][0, :16])
```
Here are some details to pay attention to:
- `input_mask` The mask allows the model to cleanly differentiate between the content and the padding. The mask has the same shape as the `input_word_ids`, and contains a 1 anywhere the `input_word_ids` is not padding.
- `input_type_ids` has the same shape as `input_mask`, but inside the non-padded region, contains a 0 or a 1 indicating which sentence the token is a part of.
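As a quick illustration of these two tensors, here is a minimal sketch. It assumes the standard BERT convention that the padding token has id 0, and uses made-up token ids:

```python
# Build an attention mask from padded token ids: 1 for real tokens, 0 for padding.
# (Assumes the BERT convention that the padding token has id 0.)
word_ids = [101, 7592, 2088, 102, 0, 0, 0, 0]
input_mask = [1 if tok != 0 else 0 for tok in word_ids]

# For a single packed sentence, every position gets type id 0;
# a second sentence packed into the same input would get type id 1.
input_type_ids = [0] * len(word_ids)

print(input_mask)  # [1, 1, 1, 1, 0, 0, 0, 0]
```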
Next, you will create a preprocessing model that encapsulates all this logic. Your model will take strings as input, and return appropriately formatted objects which can be passed to BERT.
Each BERT model has a specific preprocessing model; make sure to use the one described in the BERT model's documentation.
Note: BERT adds a "position embedding" to the token embedding of each input, and these come from a fixed-size lookup table. That imposes a max seq length of 512 (which is also a practical limit, due to the quadratic growth of attention computation). For this colab 128 is good enough.
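To see why the sequence length matters, note that the attention score matrix has seq_length × seq_length entries per head, so shortening sequences pays off quadratically. A rough back-of-the-envelope sketch, ignoring constant factors:

```python
# Attention cost grows with the square of the sequence length.
full_len, colab_len = 512, 128
ratio = (full_len ** 2) / (colab_len ** 2)
print(ratio)  # 16.0 -> length 128 cuts attention work ~16x vs. the 512 maximum
```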
```
def make_bert_preprocess_model(sentence_features, seq_length=128):
"""Returns Model mapping string features to BERT inputs.
Args:
sentence_features: a list with the names of string-valued features.
seq_length: an integer that defines the sequence length of BERT inputs.
Returns:
A Keras Model that can be called on a list or dict of string Tensors
(with the order or names, resp., given by sentence_features) and
returns a dict of tensors for input to BERT.
"""
input_segments = [
tf.keras.layers.Input(shape=(), dtype=tf.string, name=ft)
for ft in sentence_features]
# Tokenize the text to word pieces.
bert_preprocess = hub.load(tfhub_handle_preprocess)
tokenizer = hub.KerasLayer(bert_preprocess.tokenize, name='tokenizer')
segments = [tokenizer(s) for s in input_segments]
# Optional: Trim segments in a smart way to fit seq_length.
# Simple cases (like this example) can skip this step and let
# the next step apply a default truncation to approximately equal lengths.
truncated_segments = segments
# Pack inputs. The details (start/end token ids, dict of output tensors)
# are model-dependent, so this gets loaded from the SavedModel.
packer = hub.KerasLayer(bert_preprocess.bert_pack_inputs,
arguments=dict(seq_length=seq_length),
name='packer')
model_inputs = packer(truncated_segments)
return tf.keras.Model(input_segments, model_inputs)
```
Let's demonstrate the preprocessing model. You will create a test with two sentences input (input1 and input2). The output is what a BERT model would expect as input: `input_word_ids`, `input_masks` and `input_type_ids`.
```
test_preprocess_model = make_bert_preprocess_model(['my_input1', 'my_input2'])
test_text = [np.array(['some random test sentence']),
np.array(['another sentence'])]
text_preprocessed = test_preprocess_model(test_text)
print('Keys : ', list(text_preprocessed.keys()))
print('Shape Word Ids : ', text_preprocessed['input_word_ids'].shape)
print('Word Ids : ', text_preprocessed['input_word_ids'][0, :16])
print('Shape Mask : ', text_preprocessed['input_mask'].shape)
print('Input Mask : ', text_preprocessed['input_mask'][0, :16])
print('Shape Type Ids : ', text_preprocessed['input_type_ids'].shape)
print('Type Ids : ', text_preprocessed['input_type_ids'][0, :16])
```
Let's take a look at the model's structure, paying attention to the two inputs you just defined.
```
tf.keras.utils.plot_model(test_preprocess_model)
```
To apply the preprocessing in all the inputs from the dataset, you will use the `map` function from the dataset. The result is then cached for [performance](https://www.tensorflow.org/guide/data_performance#top_of_page).
```
AUTOTUNE = tf.data.AUTOTUNE
def load_dataset_from_tfds(in_memory_ds, info, split, batch_size,
bert_preprocess_model):
is_training = split.startswith('train')
dataset = tf.data.Dataset.from_tensor_slices(in_memory_ds[split])
num_examples = info.splits[split].num_examples
if is_training:
dataset = dataset.shuffle(num_examples)
dataset = dataset.repeat()
dataset = dataset.batch(batch_size)
dataset = dataset.map(lambda ex: (bert_preprocess_model(ex), ex['label']))
dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE)
return dataset, num_examples
```
## Define your model
You are now ready to define your model for sentence or sentence pair classification by feeding the preprocessed inputs through the BERT encoder and putting a linear classifier on top (or other arrangement of layers as you prefer), and using dropout for regularization.
Note: Here the model will be defined using the [Keras functional API](https://www.tensorflow.org/guide/keras/functional)
```
def build_classifier_model(num_classes):
inputs = dict(
input_word_ids=tf.keras.layers.Input(shape=(None,), dtype=tf.int32),
input_mask=tf.keras.layers.Input(shape=(None,), dtype=tf.int32),
input_type_ids=tf.keras.layers.Input(shape=(None,), dtype=tf.int32),
)
encoder = hub.KerasLayer(tfhub_handle_encoder, trainable=True, name='encoder')
net = encoder(inputs)['pooled_output']
net = tf.keras.layers.Dropout(rate=0.1)(net)
net = tf.keras.layers.Dense(num_classes, activation=None, name='classifier')(net)
return tf.keras.Model(inputs, net, name='prediction')
```
Let's try running the model on some preprocessed inputs.
```
test_classifier_model = build_classifier_model(2)
bert_raw_result = test_classifier_model(text_preprocessed)
print(tf.sigmoid(bert_raw_result))
```
Let's take a look at the model's structure. You can see the three BERT expected inputs.
```
tf.keras.utils.plot_model(test_classifier_model)
```
## Choose a task from GLUE
You are going to use a TensorFlow DataSet from the [GLUE](https://www.tensorflow.org/datasets/catalog/glue) benchmark suite.
Colab lets you download these small datasets to the local filesystem, and the code below reads them entirely into memory, because the separate TPU worker host cannot access the local filesystem of the colab runtime.
For bigger datasets, you'll need to create your own [Google Cloud Storage](https://cloud.google.com/storage) bucket and have the TPU worker read the data from there. You can learn more in the [TPU guide](https://www.tensorflow.org/guide/tpu#input_datasets).
It's recommended to start with the CoLA dataset (for single sentences) or MRPC (for sentence pairs), since these are small and don't take long to fine-tune.
```
tfds_name = 'glue/cola' #@param ['glue/cola', 'glue/sst2', 'glue/mrpc', 'glue/qqp', 'glue/mnli', 'glue/qnli', 'glue/rte', 'glue/wnli']
tfds_info = tfds.builder(tfds_name).info
sentence_features = list(tfds_info.features.keys())
sentence_features.remove('idx')
sentence_features.remove('label')
available_splits = list(tfds_info.splits.keys())
train_split = 'train'
validation_split = 'validation'
test_split = 'test'
if tfds_name == 'glue/mnli':
validation_split = 'validation_matched'
test_split = 'test_matched'
num_classes = tfds_info.features['label'].num_classes
num_examples = tfds_info.splits.total_num_examples
print(f'Using {tfds_name} from TFDS')
print(f'This dataset has {num_examples} examples')
print(f'Number of classes: {num_classes}')
print(f'Features {sentence_features}')
print(f'Splits {available_splits}')
with tf.device('/job:localhost'):
# batch_size=-1 is a way to load the dataset into memory
in_memory_ds = tfds.load(tfds_name, batch_size=-1, shuffle_files=True)
# The code below is just to show some samples from the selected dataset
print(f'Here are some sample rows from {tfds_name} dataset')
sample_dataset = tf.data.Dataset.from_tensor_slices(in_memory_ds[train_split])
labels_names = tfds_info.features['label'].names
print(labels_names)
print()
sample_i = 1
for sample_row in sample_dataset.take(5):
samples = [sample_row[feature] for feature in sentence_features]
print(f'sample row {sample_i}')
for sample in samples:
print(sample.numpy())
sample_label = sample_row['label']
print(f'label: {sample_label} ({labels_names[sample_label]})')
print()
sample_i += 1
```
The dataset also determines the problem type (classification or regression) and the appropriate loss function for training.
```
def get_configuration(glue_task):
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
if glue_task == 'glue/cola':  # use ==, not 'is', for string comparison
metrics = tfa.metrics.MatthewsCorrelationCoefficient()
else:
metrics = tf.keras.metrics.SparseCategoricalAccuracy(
'accuracy', dtype=tf.float32)
return metrics, loss
```
## Train your model
Finally, you can train the model end-to-end on the dataset you chose.
### Distribution
Recall the set-up code at the top, which has connected the colab runtime to
a TPU worker with multiple TPU devices. To distribute training onto them, you will create and compile your main Keras model within the scope of the TPU distribution strategy. (For details, see [Distributed training with Keras](https://www.tensorflow.org/tutorials/distribute/keras).)
Preprocessing, on the other hand, runs on the CPU of the worker host, not the TPUs, so the Keras model for preprocessing as well as the training and validation datasets mapped with it are built outside the distribution strategy scope. The call to `Model.fit()` will take care of distributing the passed-in dataset to the model replicas.
Note: The single TPU worker host already has the resource objects (think: a lookup table) needed for tokenization. Scaling up to multiple workers requires use of `Strategy.experimental_distribute_datasets_from_function` with a function that loads the preprocessing model separately onto each worker.
### Optimizer
Fine-tuning follows the optimizer set-up from BERT pre-training (as in [Classify text with BERT](https://www.tensorflow.org/tutorials/text/classify_text_with_bert)): It uses the AdamW optimizer with a linear decay of a notional initial learning rate, prefixed with a linear warm-up phase over the first 10% of training steps (`num_warmup_steps`). In line with the BERT paper, the initial learning rate is smaller for fine-tuning (best of 5e-5, 3e-5, 2e-5).
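The resulting schedule can be sketched as follows. This is an illustrative approximation of the warm-up-then-decay shape that `optimization.create_optimizer` sets up, not its exact implementation:

```python
def lr_at_step(step, init_lr, num_train_steps, num_warmup_steps):
    """Linear warm-up from 0 to init_lr, then linear decay back to 0."""
    if step < num_warmup_steps:
        return init_lr * step / num_warmup_steps
    # Decay linearly from init_lr at the end of warm-up to 0 at the last step.
    return init_lr * (num_train_steps - step) / (num_train_steps - num_warmup_steps)

# Example: 100 training steps with 10 warm-up steps and a 2e-5 peak rate.
print(lr_at_step(5, 2e-5, 100, 10))    # halfway through warm-up: 1e-05
print(lr_at_step(10, 2e-5, 100, 10))   # peak: 2e-05
print(lr_at_step(100, 2e-5, 100, 10))  # final step: 0.0
```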
```
epochs = 3
batch_size = 32
init_lr = 2e-5
print(f'Fine tuning {tfhub_handle_encoder} model')
bert_preprocess_model = make_bert_preprocess_model(sentence_features)
with strategy.scope():
# metrics have to be created inside the strategy scope
metrics, loss = get_configuration(tfds_name)
train_dataset, train_data_size = load_dataset_from_tfds(
in_memory_ds, tfds_info, train_split, batch_size, bert_preprocess_model)
steps_per_epoch = train_data_size // batch_size
num_train_steps = steps_per_epoch * epochs
num_warmup_steps = num_train_steps // 10
validation_dataset, validation_data_size = load_dataset_from_tfds(
in_memory_ds, tfds_info, validation_split, batch_size,
bert_preprocess_model)
validation_steps = validation_data_size // batch_size
classifier_model = build_classifier_model(num_classes)
optimizer = optimization.create_optimizer(
init_lr=init_lr,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
optimizer_type='adamw')
classifier_model.compile(optimizer=optimizer, loss=loss, metrics=[metrics])
classifier_model.fit(
x=train_dataset,
validation_data=validation_dataset,
steps_per_epoch=steps_per_epoch,
epochs=epochs,
validation_steps=validation_steps)
```
## Export for inference
You will create a final model that combines the preprocessing part with the fine-tuned BERT you've just trained.
At inference time, preprocessing needs to be part of the model (because there is no longer a separate input queue as for training data that does it). Preprocessing is not just computation; it has its own resources (the vocab table) that must be attached to the Keras Model that is saved for export.
This final assembly is what will be saved.
You are going to save the model on Colab, and later you can download it to keep for future use (**View -> Table of contents -> Files**).
```
main_save_path = './my_models'
bert_type = tfhub_handle_encoder.split('/')[-2]
saved_model_name = f'{tfds_name.replace("/", "_")}_{bert_type}'
saved_model_path = os.path.join(main_save_path, saved_model_name)
preprocess_inputs = bert_preprocess_model.inputs
bert_encoder_inputs = bert_preprocess_model(preprocess_inputs)
bert_outputs = classifier_model(bert_encoder_inputs)
model_for_export = tf.keras.Model(preprocess_inputs, bert_outputs)
print(f'Saving {saved_model_path}')
# Save everything on the Colab host (even the variables from TPU memory)
save_options = tf.saved_model.SaveOptions(experimental_io_device='/job:localhost')
model_for_export.save(saved_model_path, include_optimizer=False, options=save_options)
```
## Test the model
The final step is testing the results of your exported model.
For comparison, let's reload the model and test it using some inputs from the test split of the dataset.
Note: The test is done on the colab host, not the TPU worker that it has connected to, so it appears below with explicit device placements. You can omit those when loading the SavedModel elsewhere.
```
with tf.device('/job:localhost'):
reloaded_model = tf.saved_model.load(saved_model_path)
#@title Utility methods
def prepare(record):
model_inputs = [[record[ft]] for ft in sentence_features]
return model_inputs
def prepare_serving(record):
model_inputs = {ft: record[ft] for ft in sentence_features}
return model_inputs
def print_bert_results(test, bert_result, dataset_name):
bert_result_class = tf.argmax(bert_result, axis=1)[0]
if dataset_name == 'glue/cola':
print(f'sentence: {test[0].numpy()}')
if bert_result_class == 1:
print(f'This sentence is acceptable')
else:
print(f'This sentence is unacceptable')
elif dataset_name == 'glue/sst2':
print(f'sentence: {test[0]}')
if bert_result_class == 1:
print(f'This sentence has POSITIVE sentiment')
else:
print(f'This sentence has NEGATIVE sentiment')
elif dataset_name == 'glue/mrpc':
print(f'sentence1: {test[0]}')
print(f'sentence2: {test[1]}')
if bert_result_class == 1:
print(f'Are a paraphrase')
else:
print(f'Are NOT a paraphrase')
elif dataset_name == 'glue/qqp':
print(f'question1: {test[0]}')
print(f'question2: {test[1]}')
if bert_result_class == 1:
print(f'Questions are similar')
else:
print(f'Questions are NOT similar')
elif dataset_name == 'glue/mnli':
print(f'premise : {test[0]}')
print(f'hypothesis: {test[1]}')
if bert_result_class == 1:
print(f'This premise is NEUTRAL to the hypothesis')
elif bert_result_class == 2:
print(f'This premise CONTRADICTS the hypothesis')
else:
print(f'This premise ENTAILS the hypothesis')
elif dataset_name == 'glue/qnli':
print(f'question: {test[0]}')
print(f'sentence: {test[1]}')
if bert_result_class == 1:
print(f'The question is NOT answerable by the sentence')
else:
print(f'The question is answerable by the sentence')
elif dataset_name == 'glue/rte':
print(f'sentence1: {test[0]}')
print(f'sentence2: {test[1]}')
if bert_result_class == 1:
print(f'Sentence1 DOES NOT entail sentence2')
else:
print(f'Sentence1 entails sentence2')
elif dataset_name == 'glue/wnli':
print(f'sentence1: {test[0]}')
print(f'sentence2: {test[1]}')
if bert_result_class == 1:
print(f'Sentence1 DOES NOT entail sentence2')
else:
print(f'Sentence1 entails sentence2')
print(f'Bert raw results:{bert_result[0]}')
print()
```
### Test
```
with tf.device('/job:localhost'):
test_dataset = tf.data.Dataset.from_tensor_slices(in_memory_ds[test_split])
for test_row in test_dataset.shuffle(1000).map(prepare).take(5):
if len(sentence_features) == 1:
result = reloaded_model(test_row[0])
else:
result = reloaded_model(list(test_row))
print_bert_results(test_row, result, tfds_name)
```
If you want to use your model on [TF Serving](https://www.tensorflow.org/tfx/guide/serving), remember that it will call your SavedModel through one of its named signatures. Notice there are some small differences in the input. In Python, you can test them as follows:
```
with tf.device('/job:localhost'):
serving_model = reloaded_model.signatures['serving_default']
for test_row in test_dataset.shuffle(1000).map(prepare_serving).take(5):
result = serving_model(**test_row)
# The 'prediction' key is the classifier's defined model name.
print_bert_results(list(test_row.values()), result['prediction'], tfds_name)
```
You did it! Your saved model can be used for serving or for simple in-process inference, with a simpler API, less code, and easier maintenance.
## Next Steps
Now that you've tried one of the base BERT models, you can try other ones to achieve higher accuracy, or smaller model versions for faster training.
You can also try other datasets.
# Open, Re-usable Deep Learning Components on the Web
## Learning objectives
- Use [ImJoy](https://imjoy.io/#/) web-based imaging components
- Create a JavaScript-based ImJoy plugin
- Create a Python-based ImJoy plugin
*See also:* the [I2K 2020 Tutorial: ImJoying Interactive Bioimage Analysis
with Deep Learning, ImageJ.JS &
Friends](https://www.janelia.org/sites/default/files/You%20%2B%20Janelia/Conferences/10.pdf)
ImJoy is a plugin powered hybrid computing platform for deploying deep learning applications such as advanced image analysis tools.
ImJoy runs in mobile and desktop environments across different operating systems; plugins can run in the browser, on localhost, or on remote and cloud servers.
With ImJoy, delivering deep learning tools to end users is simple and easy thanks to
its flexible plugin system and shareable plugin URLs. Developers can easily add rich and interactive web interfaces to existing Python code.
<img src="https://github.com/imjoy-team/ImJoy/raw/master/docs/assets/imjoy-overview.jpg" width="600px"></img>
Checkout the documentation for how to get started and more details
for how to develop ImJoy plugins: [ImJoy Docs](https://imjoy.io/docs)
## Key Features of ImJoy
* Minimal and flexible plugin powered web application
* Server-less progressive web application with offline support
* Support mobile devices
* Rich and interactive user interface powered by web technologies
- use any existing web design libraries
- Rendering multi-dimensional data in 3D with webGL, Three.js etc.
* Easy-to-use workflow composition
* Isolated workspaces for grouping plugins
* Self-contained plugin prototyping and development
- Built-in code editor, no extra IDE is needed for development
* Powerful and extendable computational backends for browser, local and cloud computing
- Support Javascript, native Python and web Python
- Concurrent plugin execution through asynchronous programming
- Run Python plugins in the browser with WebAssembly
- Browser plugins are isolated with secured sandboxes
- Support `async/await` syntax for Python3 and Javascript
- Support Conda virtual environments and pip packages for Python
- Support libraries hosted on Github or CDNs for javascript
- Easy plugin deployment and sharing through GitHub or Gist
- Deploying your own plugin repository to Github
* Native support for n-dimensional arrays and tensors
- Support ndarrays from Numpy for data exchange
**ImJoy greatly accelerates the development and dissemination of new tools.** You can develop plugins in ImJoy, deploy the plugin file to GitHub, and share the plugin URL through social networks. Users can then run it with a single click, even on a mobile phone.
<a href="https://imjoy.io/#/app?p=imjoy-team/example-plugins:Skin-Lesion-Analyzer" target="_blank">
<img src="https://github.com/imjoy-team/ImJoy/raw/master/docs/assets/imjoy-sharing.jpg" width="500px"></img>
</a>
Examine the ImJoy extension in the notebook toolbar

```
#ciskip
# Create an ImJoy plugin in Python that uses itk-vtk-viewer to visualize images
import imageio
import numpy as np
from imjoy_rpc import api
class ImJoyPlugin():
def setup(self):
api.log('plugin initialized')
async def run(self, ctx):
viewer = await api.showDialog(src="https://kitware.github.io/itk-vtk-viewer/app/")
# show a 3D volume
image_array = np.random.randint(0, 255, [10,10,10], dtype='uint8')
# show a 2D image
# image_array = imageio.imread('imageio:chelsea.png')
await viewer.setImage(image_array)
api.export(ImJoyPlugin())
# Create a JavaScript ImJoy plugin
from IPython.display import HTML
my_plugin_source = HTML('''
<docs lang="markdown">
[TODO: write documentation for this plugin.]
</docs>
<config lang="json">
{
"name": "Untitled Plugin",
"type": "window",
"tags": [],
"ui": "",
"version": "0.1.0",
"cover": "",
"description": "[TODO: describe this plugin with one sentence.]",
"icon": "extension",
"inputs": null,
"outputs": null,
"api_version": "0.1.8",
"env": "",
"permissions": [],
"requirements": [],
"dependencies": [],
"defaults": {"w": 20, "h": 10}
}
</config>
<script lang="javascript">
class ImJoyPlugin {
async setup() {
api.log('initialized')
}
async run(ctx) {
}
}
api.export(new ImJoyPlugin())
</script>
<window lang="html">
<div>
<p>
Hello World
</p>
</div>
</window>
<style lang="css">
</style>
''')
#ciskip
# Register the plugin
from imjoy_rpc import api
class ImJoyPlugin():
async def setup(self):
pass
async def run(self, ctx):
# for regular plugin
# p = await api.getPlugin(my_plugin_source)
# or for window plugin
# await api.createWindow(src=my_plugin_source)
await api.showDialog(src=my_plugin_source)
api.export(ImJoyPlugin())
```
## Exercises
Try out plugins from the [ImJoy reference plugin repository](https://imjoy.io/repo/).
# MULTI-LABEL TEXT CLASSIFICATION FOR STACK OVERFLOW TAG PREDICTION
```
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier
url = 'https://raw.githubusercontent.com/laxmimerit/All-CSV-ML-Data-Files-Download/master/stackoverflow.csv'
df = pd.read_csv(url, index_col=0) # We pass index_col=0 because one of the columns is unnamed
df.head(5)
```
# Pre-Processing
```
# tf-idf (term frequency - inverse document frequency) will be used for pre-processing.
# We need to change the Tags column from string to list in order to do one-hot encoding.
# To do that, we parse each string with ast.literal_eval, so
# we have to import the ast library.
import ast
ast.literal_eval(df['Tags'].iloc[0]) # This converts the Tags column from string to list.
df['Tags'] = df['Tags'].apply(lambda x: ast.literal_eval(x))
# To convert all the rows of the Tags to list, we use lambda function
df.head(5)
y = df['Tags'] # We will do One-Hot Encoding and convert y to MultiLabel Binarizer
y
multilabel = MultiLabelBinarizer() # MultiLabelBinarizer object
y = multilabel.fit_transform(df['Tags'])
y # Converted Tags to one hot encoding(All zeroes and ones)
# To check for which classes y got converted
multilabel.classes_
pd.DataFrame(y, columns = multilabel.classes_) # We need to convert all the text to ones and zeroes to train our model
# Now we will use TfidfVectorizer for tokenization. With analyzer='word' it tokenizes
# word by word; with analyzer='char' it tokenizes character by character
# (e.g. "lets" -> l, e, t, s, ...).
# max_features caps the vocabulary (dictionary) size at the given value.
tfidf = TfidfVectorizer(analyzer='word', max_features=10000, ngram_range=(1,3), stop_words='english')
X = tfidf.fit_transform(df['Text'])
X
# tfidf.vocabulary_ (for what are the words the tfidf has done tokenization)
X.shape, y.shape # X has 10,000 columns of Text from stackoverflow and y has 20 columns of Tags from stackoverflow
# total no. of rows are 48976
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
```
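As a sanity check on the encoding step above, here is a minimal, self-contained sketch with made-up tags (note that the classes come out in sorted order):

```python
from sklearn.preprocessing import MultiLabelBinarizer

mlb = MultiLabelBinarizer()
# Each row is the list of tags for one document.
encoded = mlb.fit_transform([['python', 'pandas'], ['java']])
print(list(mlb.classes_))  # ['java', 'pandas', 'python']
print(encoded.tolist())    # [[0, 1, 1], [1, 0, 0]]
```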
# Build Model
```
sgd = SGDClassifier()
lr = LogisticRegression(solver='lbfgs')
svc = LinearSVC()
```
### Jaccard similarity, or Jaccard index, is the size of the intersection of the predicted and true labels divided by the size of their union. It ranges from 0 to 1, and 1 is the perfect score.
```
def j_score(y_true, y_pred):
# minimum = intersection, maximum = union
jaccard = np.minimum(y_true, y_pred).sum(axis = 1)/np.maximum(y_true, y_pred).sum(axis = 1)
return jaccard.mean()*100
def print_score(y_pred, clf):
print("Clf: ", clf.__class__.__name__) # It will tell us which classifier we are using
print('Jaccard score: {}'.format(j_score(y_test, y_pred))) # To print the Jaccard score
print('----')
for classifier in [LinearSVC(C=1.5, penalty = 'l1', dual=False)]:
clf = OneVsRestClassifier(classifier)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print_score(y_pred, classifier)
for classifier in [sgd, lr, svc]: # Iterating the ML algorithms
clf = OneVsRestClassifier(classifier) # From 20 classes, we will select one at a time
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print_score(y_pred, classifier)
```
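To make the metric concrete, here is a tiny worked example using the same formula as `j_score` (recomputed inline so the snippet is self-contained):

```python
import numpy as np

# Row-wise Jaccard on binary label vectors: |intersection| / |union|.
y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0]])
jaccard = np.minimum(y_true, y_pred).sum(axis=1) / np.maximum(y_true, y_pred).sum(axis=1)
print(jaccard.mean() * 100)  # 75.0 (the two rows score 0.5 and 1.0)
```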
## Model Test with Real Data
```
x = [ 'how to write ml code in python and java i have data but do not know how to do it']
xt = tfidf.transform(x) # It will return a sparse matrix
clf.predict(xt)
multilabel.inverse_transform(clf.predict(xt)) # To check which classes has value as 1
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_excel('results fdu.xlsx')
df2 = pd.read_excel('data trait category II 10 Mar 2021 all data - plain text.xlsx')
df2.columns
mean_N = df2.groupby(['Y_category2'])['Code'].count()
mean_N
mean_ = df2.groupby(['Y_category2','design.1'])['Code'].count()
mean_
# data split
indi_effects = df[df['effect'] == 'individual effects']
indi_effects
indi_effects[indi_effects['Y'] == 'fitness']
# individual effects - fitness
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(5,4), dpi= 100)
data1 = indi_effects[indi_effects['Y'] == 'fitness']
plt.errorbar(x= "estimate",y= "factor_n", xerr = "ci_bar", data = data1,
fmt='-o', color='black',linestyle='None',capthick=0.1)
# ax.set_yticks('factor')
# ax.set_yticklabels('factor')
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('Fitness (lnRR++)')
ax.set_title('Individual effects')
# Plot a Vertical Line
plt.axvline(x=0, ymin=0, ymax=1,color = 'b',linewidth=1, linestyle = '--')
# add the sample size of each setting
# save file
plt.savefig(fname='fitness individual effects.png')
plt.show()
# add significance
# individual effects - Size
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(5,4), dpi= 100)
data1 = indi_effects[indi_effects['Y'] == 'Size']
plt.errorbar(x= "estimate",y= "factor_n", xerr = "ci_bar", data = data1,
fmt='-o', color='black',linestyle='None',capthick=0.1)
# ax.set_yticks('factor')
# ax.set_yticklabels('factor')
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('Size (lnRR++)')
ax.set_title('Individual effects')
# Plot a Vertical Line
plt.axvline(x=0, ymin=0, ymax=1,color = 'b',linewidth=1, linestyle = '--')
# add the sample size of each setting
# save file
plt.savefig(fname='Size individual effects.png')
plt.show()
# add significance
```
Fitness 73
Shootallocation 104
Size 240
Size_above 120
Size_below 109
nutrient 150
nutrient_above 108
nutrient_below 43
performance 403
size_above 76
size_below 37
Name: Code, dtype: int64
```
df['Y']
# individual effects - Size aboveground
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(5,4), dpi= 100)
data1 = indi_effects[indi_effects['Y'] == 'Size aboveground']
plt.errorbar(x= "estimate",y= "factor_n", xerr = "ci_bar", data = data1,
fmt='-o', color='black',linestyle='None',capthick=0.1)
# ax.set_yticks('factor')
# ax.set_yticklabels('factor')
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('Size aboveground (lnRR++)')
ax.set_title('Individual effects')
# Plot a Vertical Line
plt.axvline(x=0, ymin=0, ymax=1,color = 'b',linewidth=1, linestyle = '--')
# add the sample size of each setting
# save file
plt.savefig(fname='Size_above.png')
plt.show()
# add significance
# individual effects - Size belowground
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(5,4), dpi= 100)
data1 = indi_effects[indi_effects['Y'] == 'Size belowground']
plt.errorbar(x= "estimate",y= "factor_n", xerr = "ci_bar", data = data1,
fmt='-o', color='black',linestyle='None',capthick=0.1)
# ax.set_yticks('factor')
# ax.set_yticklabels('factor')
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('Size belowground (lnRR++)')
ax.set_title('Individual effects')
# Plot a Vertical Line
plt.axvline(x=0, ymin=0, ymax=1,color = 'b',linewidth=1, linestyle = '--')
# add the sample size of each setting
# save file
plt.savefig(fname='Size_below.png')
plt.show()
# add significance
# individual effects - Nutrient
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(5,4), dpi= 100)
data1 = indi_effects[indi_effects['Y'] == 'Nutrient']
plt.errorbar(x= "estimate",y= "factor_n", xerr = "ci_bar", data = data1,
fmt='-o', color='black',linestyle='None',capthick=0.1)
# ax.set_yticks('factor')
# ax.set_yticklabels('factor')
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('Nutrient (lnRR++)')
ax.set_title('Individual effects')
# Plot a Vertical Line
plt.axvline(x=0, ymin=0, ymax=1,color = 'b',linewidth=1, linestyle = '--')
# add the sample size of each setting
# save file
plt.savefig(fname='Nutrient.png')
plt.show()
# add significance
# individual effects - Nutrient aboveground
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(5,4), dpi= 100)
data1 = indi_effects[indi_effects['Y'] == 'Nutrient above']
plt.errorbar(x= "estimate",y= "factor_n", xerr = "ci_bar", data = data1,
fmt='-o', color='black',linestyle='None',capthick=0.1)
# ax.set_yticks('factor')
# ax.set_yticklabels('factor')
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('Nutrient shoot (lnRR++)')
ax.set_title('Individual effects')
# Plot a Vertical Line
plt.axvline(x=0, ymin=0, ymax=1,color = 'b',linewidth=1, linestyle = '--')
# add the sample size of each setting
# save file
plt.savefig(fname='Nutrient_shoot.png')
plt.show()
# add significance
# individual effects - Nutrient belowground
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(5,4), dpi= 100)
data1 = indi_effects[indi_effects['Y'] == 'Nutrient below']
plt.errorbar(x= "estimate",y= "factor_n", xerr = "ci_bar", data = data1,
fmt='-o', color='black',linestyle='None',capthick=0.1)
# ax.set_yticks('factor')
# ax.set_yticklabels('factor')
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('Nutrient root (lnRR++)')
ax.set_title('Individual effects')
# Plot a Vertical Line
plt.axvline(x=0, ymin=0, ymax=1,color = 'b',linewidth=1, linestyle = '--')
# add the sample size of each setting
# save file
plt.savefig(fname='Nutrient_root.png')
plt.show()
# add significance
# individual effects - Performance
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(5,4), dpi= 100)
data1 = indi_effects[indi_effects['Y'] == 'Overall performance']
plt.errorbar(x= "estimate",y= "factor_n", xerr = "ci_bar", data = data1,
fmt='-o', color='black',linestyle='None',capthick=0.1)
# ax.set_yticks('factor')
# ax.set_yticklabels('factor')
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('Overall performance (lnRR++)')
ax.set_title('Individual effects')
# Plot a Vertical Line
plt.axvline(x=0, ymin=0, ymax=1,color = 'b',linewidth=1, linestyle = '--')
# add the sample size of each setting
# save file
plt.savefig(fname='performance.png')
plt.show()
# add significance
# individual effects - Shoot allocation
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(5,4), dpi= 100)
data1 = indi_effects[indi_effects['Y'] == 'Shoot allocation']
plt.errorbar(x= "estimate",y= "factor_n", xerr = "ci_bar", data = data1,
fmt='-o', color='black',linestyle='None',capthick=0.1)
# ax.set_yticks('factor')
# ax.set_yticklabels('factor')
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('Shoot allocation (lnRR++)')
ax.set_title('Individual effects')
# Plot a Vertical Line
plt.axvline(x=0, ymin=0, ymax=1,color = 'b',linewidth=1, linestyle = '--')
# add the sample size of each setting
# save file
plt.savefig(fname='shoot_allocation.png')
plt.show()
# add significance
# main effect
# size
# nutrient
# fitness
# Performance
# first line
# data
main_effects = df[(df['effect'] == 'Main effect') & (df['Y'] == 'Shoot allocation')]
data2 = main_effects[main_effects['design'] == 'co2']
# create a new figure
fig, ax = plt.subplots(4, 4, figsize=(16, 12), sharex=True, sharey=True, dpi=100)
plt.xticks([])
plt.yticks([])
fig,ax = plt.subplots(4,4,figsize=(16, 12),dpi = 100)
plt.xticks([])
plt.yticks([])
# main effect grid: rows = Size / Nutrient / Fitness / Overall performance,
# columns = co2 / drought / rain / warm
# data
main_effects = df[df['effect'] == 'Main effect']
# create a new figure
fig, ax = plt.subplots(4, 4, figsize=(16, 12), sharex=True, sharey=True, dpi=100)
plt.xticks([])
plt.yticks([])
# the 16 panels differ only in (Y, design) and, for two panels, the y-axis
# scale, so they are drawn in a single loop
rows = ['Size', 'Nutrient', 'fitness', 'Overall performance']
row_labels = ['Size', 'Nutrient', 'Fitness', 'Overall performance']
designs = ['co2', 'drought', 'rain', 'warm']
wide_scale = {('Size', 'rain'), ('Nutrient', 'co2')}  # panels with a wider y range
for r, y_var in enumerate(rows):
    for c, design in enumerate(designs):
        data2 = main_effects[(main_effects['Y'] == y_var) & (main_effects['design'] == design)]
        ax1 = plt.subplot(4, 4, r * 4 + c + 1)  # rows, columns, subplot index (row-major)
        plt.errorbar(y="estimate", x="factor", yerr="ci_bar", data=data2, elinewidth=2,
                     fmt='-o', color='black', linestyle='None', capthick=0.1)
        # show the y-axis scale
        if (y_var, design) in wide_scale:
            plt.yticks(np.arange(-2.0, 6.01, 2))
        else:
            plt.yticks(np.arange(-1.0, 2.01, 1))
        plt.margins(x=0.33)
        if c == 0:  # y-axis label only in the first column
            ax1.set_ylabel(row_labels[r] + '\n (Hedges‘ d++)', labelpad=None)
        ax1.get_yaxis().get_offset_text().set_x(-0.075)
        ax1.get_yaxis().get_offset_text().set_size(10)
        # plot a horizontal reference line at zero
        plt.axhline(y=0, xmin=0, xmax=1, color='b', linewidth=1, linestyle='--')
# content of the first subplot (Shoot allocation, co2)
data2 = main_effects[(main_effects['Y'] == 'Shoot allocation') & (main_effects['design'] == 'co2')]
ax1 = plt.subplot(4, 4, 1)  # rows, columns, subplot index
plt.errorbar(y="estimate", x="factor", yerr="ci_bar", data=data2, elinewidth=2,
             fmt='-o', color='black', linestyle='None', capthick=0.1)
plt.yticks(np.arange(-2.0, 2.01, 1))  # show the y-axis scale
plt.margins(x=0.33)
ax1.set_ylabel('Hedges‘ d++', labelpad=None)
ax1.get_yaxis().get_offset_text().set_x(-0.075)
ax1.get_yaxis().get_offset_text().set_size(10)
# plot a horizontal reference line at zero
plt.axhline(y=0, xmin=0, xmax=1, color='b', linewidth=1, linestyle='--')
# add the sample size of each setting
# save file
plt.savefig(fname='hd.png')
plt.show()
# label
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.stats import sem
sns.pointplot(y="estimate", x="factor", hue="design", data=main_effects)  # pointplot draws its own intervals; it has no yerr parameter
main_effects
# main effect aboveground and belowground size
# above1234
# below1234
# main effect aboveground and belowground nutrient
# above1234
# below1234
mean_N = df.groupby(['Y']).count()
mean_N
# main effect
# size
# nutrient
# fitness
# Performance
# first line
# data
main_effects = df[df['effect'] == 'Main effect']
# create a new figure
fig, ax = plt.subplots(1, 4, figsize=(18, 4), sharex=True, sharey=True, dpi=100)
plt.xticks([])
plt.yticks([])
#fig, ax1 = plt.subplots(figsize=(3,3))
# content of the first subplot (Shoot allocation, co2)
data2 = main_effects[(main_effects['Y'] == 'Shoot allocation') & (main_effects['design'] == 'co2')]
ax1 = plt.subplot(1, 4, 1)  # rows, columns, subplot index
plt.errorbar(y= "estimate",x= "factor", yerr = "ci_bar", data = data2, elinewidth=2,
fmt='-o', color='black',linestyle='None',capthick=0.1)
# ax.set_yticks('factor')
# ax.set_yticklabels('factor')
#plt.xticks(np.arange(3), ['N', 'C', 'N x C'])  # custom x-axis tick labels (disabled)
#ax1.set_yticklabels([])
plt.yticks(np.arange(-1.0, 1.51, 0.5))  # show the y-axis scale
#ax1.set_xlabel('CO2')
ax1.set_title('n = x')
plt.margins(x=0.33)
ax1.set_ylabel('Hedges‘ d++',labelpad=None)
ax1.get_yaxis().get_offset_text().set_x(-0.075)
ax1.get_yaxis().get_offset_text().set_size(10)
#ax.set_title('Main effects')
# Plot a horizontal line
plt.axhline(y=0, xmin=0, xmax=1,color = 'b',linewidth=1, linestyle = '--')
# content of the second subplot (Shoot allocation, drought)
data2 = main_effects[(main_effects['Y'] == 'Shoot allocation') & (main_effects['design'] == 'drought')]
ax1 = plt.subplot(1, 4, 2)  # rows, columns, subplot index
plt.errorbar(y= "estimate",x= "factor", yerr = "ci_bar", data = data2, elinewidth=2,
fmt='-o', color='black',linestyle='None',capthick=0.1)
# ax.set_yticks('factor')
# ax.set_yticklabels('factor')
#plt.xticks(np.arange(3), ['N', 'C', 'N x C'])  # custom x-axis tick labels (disabled)
#ax1.set_yticklabels([])
plt.yticks(np.arange(-1.0, 1.51, 0.5))  # show the y-axis scale
#ax1.set_xlabel('CO2')
ax1.set_title('n = x')
plt.margins(x=0.33)
#ax1.set_ylabel('Hedges‘ d++',labelpad=None)
ax1.get_yaxis().get_offset_text().set_x(-0.075)
ax1.get_yaxis().get_offset_text().set_size(10)
#ax.set_title('Main effects')
# Plot a horizontal line
plt.axhline(y=0, xmin=0, xmax=1,color = 'b',linewidth=1, linestyle = '--')
# content of the third subplot (Shoot allocation, rain)
data2 = main_effects[(main_effects['Y'] == 'Shoot allocation') & (main_effects['design'] == 'rain')]
ax1 = plt.subplot(1, 4, 3)  # rows, columns, subplot index
plt.errorbar(y= "estimate",x= "factor", yerr = "ci_bar", data = data2, elinewidth=2,
fmt='-o', color='black',linestyle='None',capthick=0.1)
# ax.set_yticks('factor')
# ax.set_yticklabels('factor')
#plt.xticks(np.arange(3), ['N', 'C', 'N x C'])  # custom x-axis tick labels (disabled)
#ax1.set_yticklabels([])
plt.yticks(np.arange(-2.0, 6.01, 2))  # show the y-axis scale
#ax1.set_xlabel('CO2')
ax1.set_title('n = x')
plt.margins(x=0.33)
#ax1.set_ylabel('Hedges‘ d++',labelpad=None)
ax1.get_yaxis().get_offset_text().set_x(-0.075)
ax1.get_yaxis().get_offset_text().set_size(10)
#ax.set_title('Main effects')
# Plot a horizontal line
plt.axhline(y=0, xmin=0, xmax=1,color = 'b',linewidth=1, linestyle = '--')
# content of the fourth subplot (Shoot allocation, warm)
data2 = main_effects[(main_effects['Y'] == 'Shoot allocation') & (main_effects['design'] == 'warm')]
ax1 = plt.subplot(1, 4, 4)  # rows, columns, subplot index
plt.errorbar(y= "estimate",x= "factor", yerr = "ci_bar", data = data2, elinewidth=2,
fmt='-o', color='black',linestyle='None',capthick=0.1)
# ax.set_yticks('factor')
# ax.set_yticklabels('factor')
#plt.xticks(np.arange(3), ['N', 'C', 'N x C'])  # custom x-axis tick labels (disabled)
#ax1.set_yticklabels([])
plt.yticks(np.arange(-1.0, 2.51, 0.5))  # show the y-axis scale
#ax1.set_xlabel('CO2')
ax1.set_title('n = x')
plt.margins(x=0.33)
#ax1.set_ylabel('Hedges‘ d++',labelpad=None)
ax1.get_yaxis().get_offset_text().set_x(-0.075)
ax1.get_yaxis().get_offset_text().set_size(10)
plt.axhline(y=0, xmin=0, xmax=1,color = 'b',linewidth=1, linestyle = '--')
#plt.figtext(0.86, 0.81,'n = xx')
plt.figtext(0.86, 0.81, 'n = xx', verticalalignment='baseline', horizontalalignment='left', multialignment='left')
#ax.set_title('Main effects')
# Plot a horizontal line
```
# Autonomous driving - Car detection
Welcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: Redmon et al., 2016 (https://arxiv.org/abs/1506.02640) and Redmon and Farhadi, 2016 (https://arxiv.org/abs/1612.08242).
**You will learn to**:
- Use object detection on a car detection dataset
- Deal with bounding boxes
Run the following cell to load the packages and dependencies that are going to be useful for your journey!
```
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
%matplotlib inline
```
**Important Note**: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: `K.function(...)`.
## 1 - Problem Statement
You are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around.
<center>
<video width="400" height="200" src="nb_images/road_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Pictures taken from a car-mounted camera while driving around Silicon Valley. <br> We would like to especially thank [drive.ai](https://www.drive.ai/) for providing this dataset! Drive.ai is a company building the brains of self-driving vehicles.
</center></caption>
<img src="nb_images/driveai.png" style="width:100px;height:100;">
You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like.
<img src="nb_images/box_label.png" style="width:500px;height:250;">
<caption><center> <u> **Figure 1** </u>: **Definition of a box**<br> </center></caption>
If you have 80 classes that you want YOLO to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step.
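As a small aside (not part of the assignment code), converting between the integer and one-hot representations might look like this, using 0-based class indices for convenience:

```python
import numpy as np

c = 2                                # integer class label (0-based here, 0..79)
one_hot = np.eye(80)[c]              # 80-dimensional vector: 1 at position c, 0 elsewhere
recovered = int(np.argmax(one_hot))  # back to the integer representation

print(recovered)  # 2
```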
In this exercise, you will learn how YOLO works, then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use.
## 2 - YOLO
YOLO ("you only look once") is a popular algorithm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes.
### 2.1 - Model details
First things to know:
- The **input** is a batch of images of shape (m, 608, 608, 3)
- The **output** is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers.
We will use 5 anchor boxes. So you can think of the YOLO architecture as the following: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).
Let's look in greater detail at what this encoding represents.
<img src="nb_images/architecture.png" style="width:700px;height:400;">
<caption><center> <u> **Figure 2** </u>: **Encoding architecture for YOLO**<br> </center></caption>
If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object.
Since we are using 5 anchor boxes, each of the 19x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.
For simplicity, we will flatten the last two dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425).
<img src="nb_images/flatten.png" style="width:700px;height:400;">
<caption><center> <u> **Figure 3** </u>: **Flattening the last two dimensions**<br> </center></caption>
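This flattening is just a reshape of the encoding; a quick NumPy sketch (using random numbers in place of the real network output):

```python
import numpy as np

encoding = np.random.randn(19, 19, 5, 85)      # (grid, grid, anchors, 5 + 80)
flattened = encoding.reshape(19, 19, 5 * 85)   # merge the last two dimensions

print(flattened.shape)  # (19, 19, 425)
```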
Now, for each box (of each cell) we will compute the following elementwise product and extract a probability that the box contains a certain class.
<img src="nb_images/probability_extraction.png" style="width:700px;height:400;">
<caption><center> <u> **Figure 4** </u>: **Find the class detected by each box**<br> </center></caption>
Here's one way to visualize what YOLO is predicting on an image:
- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across both the 5 anchor boxes and across different classes).
- Color that grid cell according to what object that grid cell considers the most likely.
Doing this results in this picture:
<img src="nb_images/proba_map.png" style="width:300px;height:300;">
<caption><center> <u> **Figure 5** </u>: Each of the 19x19 grid cells colored according to which class has the largest predicted probability in that cell.<br> </center></caption>
Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm.
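For intuition, the per-cell reduction described above could be sketched in NumPy as follows (`box_scores` here is a random stand-in for the real per-box class scores):

```python
import numpy as np

rng = np.random.default_rng(0)
box_scores = rng.random((19, 19, 5, 80))       # (grid, grid, anchors, classes)

# Maximum probability score in each cell, across the 5 anchors and 80 classes
cell_max = box_scores.max(axis=(2, 3))         # shape (19, 19)

# Class achieving that maximum in each cell (used to color the cell)
best_class = box_scores.max(axis=2).argmax(axis=-1)  # shape (19, 19)

print(cell_max.shape, best_class.shape)
```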
Another way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this:
<img src="nb_images/anchor_map.png" style="width:200px;height:200;">
<caption><center> <u> **Figure 6** </u>: Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. <br> </center></caption>
In the figure above, we plotted only boxes that the model had assigned a high probability to, but this is still too many boxes. You'd like to filter the algorithm's output down to a much smaller number of detected objects. To do so, you'll use non-max suppression. Specifically, you'll carry out these steps:
- Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class)
- Select only one box when several boxes overlap with each other and detect the same object.
### 2.2 - Filtering with a threshold on class scores
You are going to apply a first filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold.
The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It'll be convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables:
- `box_confidence`: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.
- `boxes`: tensor of shape $(19 \times 19, 5, 4)$ containing $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes per cell.
- `box_class_probs`: tensor of shape $(19 \times 19, 5, 80)$ containing the detection probabilities $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell.
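For intuition, slicing one (19, 19, 5, 85) tensor into these three pieces could look like the NumPy sketch below; the ordering of the 85 numbers ($p_c$ first, then the 4 box coordinates, then the 80 class probabilities) is an assumption made here for illustration:

```python
import numpy as np

raw = np.random.randn(19, 19, 5, 85)   # stand-in for the network encoding

box_confidence  = raw[..., 0:1]        # p_c                  -> (19, 19, 5, 1)
boxes           = raw[..., 1:5]        # (b_x, b_y, b_h, b_w) -> (19, 19, 5, 4)
box_class_probs = raw[..., 5:]         # (c_1, ..., c_80)     -> (19, 19, 5, 80)

print(box_confidence.shape, boxes.shape, box_class_probs.shape)
```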
**Exercise**: Implement `yolo_filter_boxes()`.
1. Compute box scores by doing the elementwise product as described in Figure 4. The following code may help you choose the right operator:
```python
a = np.random.randn(19*19, 5, 1)
b = np.random.randn(19*19, 5, 80)
c = a * b # shape of c will be (19*19, 5, 80)
```
2. For each box, find:
- the index of the class with the maximum box score ([Hint](https://keras.io/backend/#argmax)) (Be careful with what axis you choose; consider using axis=-1)
- the corresponding box score ([Hint](https://keras.io/backend/#max)) (Be careful with what axis you choose; consider using axis=-1)
3. Create a mask by using a threshold. As a reminder: `([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4)` returns: `[False, True, False, False, True]`. The mask should be True for the boxes you want to keep.
4. Use TensorFlow to apply the mask to box_class_scores, boxes and box_classes to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep. ([Hint](https://www.tensorflow.org/api_docs/python/tf/boolean_mask))
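A plain NumPy analogue of steps 3 and 4 (`tf.boolean_mask` behaves much like NumPy boolean indexing here):

```python
import numpy as np

box_class_scores = np.array([0.9, 0.3, 0.4, 0.5, 0.1])
boxes = np.arange(20).reshape(5, 4)    # 5 dummy boxes, 4 coordinates each

mask = box_class_scores >= 0.4         # True for the boxes we keep

kept_scores = box_class_scores[mask]   # [0.9, 0.4, 0.5]
kept_boxes = boxes[mask]               # the 3 corresponding rows

print(mask.tolist())  # [True, False, True, True, False]
```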
Reminder: to call a Keras function, you should use `K.function(...)`.
```
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
"""Filters YOLO boxes by thresholding on object and class confidence.
Arguments:
box_confidence -- tensor of shape (19, 19, 5, 1)
boxes -- tensor of shape (19, 19, 5, 4)
box_class_probs -- tensor of shape (19, 19, 5, 80)
threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
Returns:
scores -- tensor of shape (None,), containing the class probability score for selected boxes
boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes
Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
For example, the actual output size of scores would be (10,) if there are 10 boxes.
"""
# print(box_confidence.get_shape())
# Step 1: Compute box scores
### START CODE HERE ### (≈ 1 line)
box_scores = box_confidence*box_class_probs
#print(box_scores.get_shape())
### END CODE HERE ###
# Step 2: Find the box_classes thanks to the max box_scores, keep track of the corresponding score
### START CODE HERE ### (≈ 2 lines)
box_classes = K.argmax(box_scores,axis = -1)
#print(box_classes.get_shape())
box_class_scores = K.max(box_scores,axis = -1)
#print(box_class_scores.get_shape())
### END CODE HERE ###
# Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
# same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
### START CODE HERE ### (≈ 1 line)
filtering_mask = box_class_scores>=threshold
#print(filtering_mask.get_shape())
### END CODE HERE ###
# Step 4: Apply the mask to scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = tf.boolean_mask(box_class_scores,filtering_mask)
boxes = tf.boolean_mask(boxes,filtering_mask)
classes = tf.boolean_mask(box_classes,filtering_mask)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_a:
box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
10.7506
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 8.42653275 3.27136683 -0.5313437 -4.94137383]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
7
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(?,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(?, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(?,)
</td>
</tr>
</table>
### 2.3 - Non-max suppression ###
Even after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS).
<img src="nb_images/non-max-suppression.png" style="width:500px;height:400;">
<caption><center> <u> **Figure 7** </u>: In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) one of the 3 boxes. <br> </center></caption>
Non-max suppression uses the very important function called **"Intersection over Union"**, or IoU.
<img src="nb_images/iou.png" style="width:500px;height:400;">
<caption><center> <u> **Figure 8** </u>: Definition of "Intersection over Union". <br> </center></caption>
**Exercise**: Implement iou(). Some hints:
- In this exercise only, we define a box using its two corners (upper left and lower right): (x1, y1, x2, y2) rather than the midpoint and height/width.
- To calculate the area of a rectangle you need to multiply its height (y2 - y1) by its width (x2 - x1)
- You'll also need to find the coordinates (xi1, yi1, xi2, yi2) of the intersection of two boxes. Remember that:
- xi1 = maximum of the x1 coordinates of the two boxes
- yi1 = maximum of the y1 coordinates of the two boxes
- xi2 = minimum of the x2 coordinates of the two boxes
- yi2 = minimum of the y2 coordinates of the two boxes
In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) the lower-right corner.
```
# GRADED FUNCTION: iou
def iou(box1, box2):
"""Implement the intersection over union (IoU) between box1 and box2
Arguments:
box1 -- first box, list object with coordinates (x1, y1, x2, y2)
box2 -- second box, list object with coordinates (x1, y1, x2, y2)
"""
# Calculate the (y1, x1, y2, x2) coordinates of the intersection of box1 and box2. Calculate its Area.
### START CODE HERE ### (≈ 5 lines)
# print(type(box1))
#print(box1[0])
xi1 = np.maximum(box1[0],box2[0])
yi1 = np.maximum(box1[1],box2[1])
xi2 = np.minimum(box1[2],box2[2])
yi2 = np.minimum(box1[3],box2[3])
inter_area = max(yi2-yi1, 0)*max(xi2-xi1, 0)  # clamp at 0 so non-overlapping boxes give zero intersection
### END CODE HERE ###
# Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
### START CODE HERE ### (≈ 3 lines)
box1_area = (box1[3]-box1[1])*(box1[2]-box1[0])
box2_area = (box2[3]-box2[1])*(box2[2]-box2[0])
union_area = box1_area + box2_area - inter_area
### END CODE HERE ###
# compute the IoU
### START CODE HERE ### (≈ 1 line)
iou = inter_area/union_area
### END CODE HERE ###
return iou
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou = " + str(iou(box1, box2)))
```
**Expected Output**:
<table>
<tr>
<td>
**iou = **
</td>
<td>
0.14285714285714285
</td>
</tr>
</table>
You are now ready to implement non-max suppression. The key steps are:
1. Select the box that has the highest score.
2. Compute its overlap with all other boxes, and remove boxes that overlap it more than `iou_threshold`.
3. Go back to step 1 and iterate until there are no more boxes with a lower score than the currently selected box.
This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain.
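A minimal pure-NumPy sketch of these three steps, for intuition only (the exercise itself uses TensorFlow's built-in version instead):

```python
import numpy as np

def iou_np(b1, b2):
    # Boxes as (x1, y1, x2, y2); clamp at 0 so disjoint boxes give IoU = 0
    xi1, yi1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    xi2, yi2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    area1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    area2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (area1 + area2 - inter)

def nms_np(scores, boxes, iou_threshold=0.5):
    order = np.argsort(scores)[::-1]          # highest score first
    keep = []
    while order.size > 0:
        best = int(order[0])                  # step 1: take the highest-scoring box
        keep.append(best)
        # step 2: drop remaining boxes that overlap it more than iou_threshold
        order = np.array([i for i in order[1:]
                          if iou_np(boxes[best], boxes[i]) <= iou_threshold])
    return keep                               # step 3: repeat until nothing is left

scores = np.array([0.9, 0.8, 0.3])
boxes = np.array([[0., 0., 2., 2.], [0., 0., 2., 1.9], [5., 5., 7., 7.]])
print(nms_np(scores, boxes))  # [0, 2] -- box 1 overlaps box 0 too much
```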
**Exercise**: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your `iou()` implementation):
- [tf.image.non_max_suppression()](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression)
- [K.gather()](https://www.tensorflow.org/api_docs/python/tf/gather)
```
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
"""
Applies Non-max suppression (NMS) to set of boxes
Arguments:
scores -- tensor of shape (None,), output of yolo_filter_boxes()
boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
classes -- tensor of shape (None,), output of yolo_filter_boxes()
max_boxes -- integer, maximum number of predicted boxes you'd like
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (, None), predicted score for each box
boxes -- tensor of shape (4, None), predicted box coordinates
classes -- tensor of shape (, None), predicted class for each box
Note: The "None" dimension of the output tensors has obviously to be less than max_boxes. Note also that this
function will transpose the shapes of scores, boxes, classes. This is made for convenience.
"""
max_boxes_tensor = K.variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()
K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor
# Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
### START CODE HERE ### (≈ 1 line)
nms_indices = tf.image.non_max_suppression(boxes,scores,max_boxes_tensor,iou_threshold)
### END CODE HERE ###
# Use K.gather() to select only nms_indices from scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = tf.gather(scores,nms_indices)
boxes = tf.gather(boxes,nms_indices)
classes = tf.gather(classes,nms_indices)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
6.9384
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[-5.299932 3.13798141 4.45036697 0.95942086]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
-2.24527
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
### 2.4 Wrapping up the filtering
It's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented.
**Exercise**: Implement `yolo_eval()` which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There's just one last implementational detail you have to know. There're a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided):
```python
boxes = yolo_boxes_to_corners(box_xy, box_wh)
```
which converts the yolo box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of `yolo_filter_boxes`
```python
boxes = scale_boxes(boxes, image_shape)
```
YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image.
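As a rough idea of what this rescaling amounts to — a hedged sketch, not the provided implementation; it assumes normalized corner coordinates in (y1, x1, y2, x2) order and an `image_shape` of (height, width):

```python
import numpy as np

def scale_boxes_sketch(boxes, image_shape):
    # Multiply normalized (y1, x1, y2, x2) corners by the image dimensions
    height, width = image_shape
    return boxes * np.array([height, width, height, width])

boxes = np.array([[0.25, 0.5, 0.75, 1.0]])
# Scaled to a 720x1280 image this gives the corners (180, 640) and (540, 1280)
print(scale_boxes_sketch(boxes, (720., 1280.)))
```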
Don't worry about these two functions; we'll show you where they need to be called.
```
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
"""
Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.
Arguments:
yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
box_confidence: tensor of shape (None, 19, 19, 5, 1)
box_xy: tensor of shape (None, 19, 19, 5, 2)
box_wh: tensor of shape (None, 19, 19, 5, 2)
box_class_probs: tensor of shape (None, 19, 19, 5, 80)
image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) (has to be float32 dtype)
max_boxes -- integer, maximum number of predicted boxes you'd like
score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None, ), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
"""
### START CODE HERE ###
# Retrieve outputs of the YOLO model (≈1 line)
box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs
# Convert boxes to be ready for filtering functions
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
scores, boxes, classes = yolo_filter_boxes(box_confidence,boxes,box_class_probs,score_threshold)
# Scale boxes back to original image shape.
boxes = scale_boxes(boxes, image_shape)
# Use one of the functions you've implemented to perform Non-max suppression with a threshold of iou_threshold (≈1 line)
scores, boxes, classes = yolo_non_max_suppression(scores,boxes,classes,max_boxes,iou_threshold)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
scores, boxes, classes = yolo_eval(yolo_outputs)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
138.791
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
54
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
<font color='blue'>
**Summary for YOLO**:
- Input image (608, 608, 3)
- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output.
- After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):
- Each cell in a 19x19 grid over the input image gives 425 numbers.
- 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture.
    - 85 = 5 + 80 where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect
- You then select only few boxes based on:
- Score-thresholding: throw away boxes that have detected a class with a score less than the threshold
- Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes
- This gives you YOLO's final output.
## 3 - Test YOLO pretrained model on images
In this part, you are going to use a pretrained model and test it on the car detection dataset. As usual, you start by **creating a session to start your graph**. Run the following cell.
```
sess = K.get_session()
```
### 3.1 - Defining classes, anchors and image shape.
Recall that we are trying to detect 80 classes, and are using 5 anchor boxes. We have gathered the information about the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt". Let's load these quantities into the model by running the next cell.
The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
```
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
```
### 3.2 - Loading a pretrained model
Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pretrained Keras YOLO model stored in "yolo.h5". (These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will more simply refer to it as "YOLO" in this notebook.) Run the cell below to load the model from this file.
```
yolo_model = load_model("model_data/yolo.h5")
```
This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
```
yolo_model.summary()
```
**Note**: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine.
**Reminder**: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2).
### 3.3 - Convert output of the model to usable bounding box tensors
The output of `yolo_model` is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.
```
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
```
You added `yolo_outputs` to your graph. This set of 4 tensors is ready to be used as input by your `yolo_eval` function.
### 3.4 - Filtering boxes
`yolo_outputs` gave you all the predicted boxes of `yolo_model` in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call `yolo_eval`, which you had previously implemented, to do this.
```
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
```
### 3.5 - Run the graph on an image
Let the fun begin. You have created a graph (stored in `sess`) that can be summarized as follows:
1. <font color='purple'> yolo_model.input </font> is given to `yolo_model`. The model is used to compute the output <font color='purple'> yolo_model.output </font>
2. <font color='purple'> yolo_model.output </font> is processed by `yolo_head`. It gives you <font color='purple'> yolo_outputs </font>
3. <font color='purple'> yolo_outputs </font> goes through a filtering function, `yolo_eval`. It outputs your predictions: <font color='purple'> scores, boxes, classes </font>
**Exercise**: Implement predict() which runs the graph to test YOLO on an image.
You will need to run a TensorFlow session, to have it compute `scores, boxes, classes`.
The code below also uses the following function:
```python
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
```
which outputs:
- image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it.
- image_data: a numpy-array representing the image. This will be the input to the CNN.
**Important note**: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.
```
def predict(sess, image_file):
"""
Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the predictions.
Arguments:
sess -- your tensorflow/Keras session containing the YOLO graph
image_file -- name of an image stored in the "images" folder.
Returns:
out_scores -- tensor of shape (None, ), scores of the predicted boxes
out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
out_classes -- tensor of shape (None, ), class index of the predicted boxes
Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
"""
# Preprocess your image
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
# Run the session with the correct tensors and choose the correct placeholders in the feed_dict.
# You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0})
### START CODE HERE ### (≈ 1 line)
out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes], feed_dict={yolo_model.input: image_data, K.learning_phase(): 0})
### END CODE HERE ###
# Print predictions info
print('Found {} boxes for {}'.format(len(out_boxes), image_file))
# Generate colors for drawing bounding boxes.
colors = generate_colors(class_names)
# Draw bounding boxes on the image file
draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
# Save the predicted bounding box on the image
image.save(os.path.join("out", image_file), quality=90)
# Display the results in the notebook
output_image = scipy.misc.imread(os.path.join("out", image_file))
imshow(output_image)
return out_scores, out_boxes, out_classes
```
Run the following cell on the "test.jpg" image to verify that your function is correct.
```
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
```
**Expected Output**:
<table>
<tr>
<td>
**Found 7 boxes for test.jpg**
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.60 (925, 285) (1045, 374)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.66 (706, 279) (786, 350)
</td>
</tr>
<tr>
<td>
**bus**
</td>
<td>
0.67 (5, 266) (220, 407)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.70 (947, 324) (1280, 705)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.74 (159, 303) (346, 440)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.80 (761, 282) (942, 412)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.89 (367, 300) (745, 648)
</td>
</tr>
</table>
The model you've just run is actually able to detect 80 different classes listed in "coco_classes.txt". To test the model on your own images:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the code cell above
4. Run the code and see the output of the algorithm!
If you were to run your session in a for loop over all your images, here's what you would get:
<center>
<video width="400" height="200" src="nb_images/pred_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Predictions of the YOLO model on pictures taken from a camera while driving around the Silicon Valley <br> Thanks [drive.ai](https://www.drive.ai/) for providing this dataset! </center></caption>
<font color='blue'>
**What you should remember**:
- YOLO is a state-of-the-art object detection model that is fast and accurate
- It runs an input image through a CNN which outputs a 19x19x5x85 dimensional volume.
- The encoding can be seen as a grid where each of the 19x19 cells contains information about 5 boxes.
- You filter through all the boxes using non-max suppression. Specifically:
- Score thresholding on the probability of detecting a class to keep only accurate (high probability) boxes
- Intersection over Union (IoU) thresholding to eliminate overlapping boxes
- Because training a YOLO model from randomly initialized weights is non-trivial and requires a large dataset as well as a lot of computation, we used previously trained model parameters in this exercise. If you wish, you can also try fine-tuning the YOLO model with your own dataset, though this would be a fairly non-trivial exercise.
**References**: The ideas presented in this notebook came primarily from the two YOLO papers. The implementation here also took significant inspiration and used many components from Allan Zelener's github repository. The pretrained weights used in this exercise came from the official YOLO website.
- Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi - [You Only Look Once: Unified, Real-Time Object Detection](https://arxiv.org/abs/1506.02640) (2015)
- Joseph Redmon, Ali Farhadi - [YOLO9000: Better, Faster, Stronger](https://arxiv.org/abs/1612.08242) (2016)
- Allan Zelener - [YAD2K: Yet Another Darknet 2 Keras](https://github.com/allanzelener/YAD2K)
- The official YOLO website (https://pjreddie.com/darknet/yolo/)
**Car detection dataset**:
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">The Drive.ai Sample Dataset</span> (provided by drive.ai) is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. We are especially grateful to Brody Huval, Chih Hu and Rahul Patel for collecting and providing this dataset.
## Modules
```
import os

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import scikitplot as skplt

from sklearn import linear_model, metrics, preprocessing
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
def clean_my_file(file_name):
# read csv
exp = pd.read_csv(file_name)
# list of freqs
list_of_freqs = exp.columns[12::].to_list()
# index of first real freq
index = list_of_freqs.index('-12')
# normalize
norm_factor = exp[ list_of_freqs[0:index-1] ].iloc[:,-1]
raw_data = exp[ list_of_freqs[index::] ]
# preallocate with zeros
Zspectra = np.zeros(raw_data.shape)
for i in range(raw_data.shape[0]):
Zspectra[i,:] = raw_data.iloc[i,:] / norm_factor[i]
#out[exp.columns[0:11].to_list()] = exp.columns[0:11].copy()
out = exp[ exp.columns[0:11] ].copy()
out['FILE'] = file_name.split('/NewMegaBox1to8_MLData_20200901/')[1]
Z = pd.DataFrame(Zspectra, columns= list_of_freqs[index::])
out[Z.columns] = Z.copy()
return out
```
### Load data
```
%%time
import os
from glob import glob
PATH = "./NewMegaBox1to8_MLData_20200901"
EXT = "*.csv"
all_csv_files = [file
for path, subdir, files in os.walk(PATH)
for file in glob(os.path.join(path, EXT))]
data = pd.DataFrame()
for file in all_csv_files:
exp = clean_my_file(file)
data = pd.concat( (data, exp), sort=False )
data.tail()
```
## Experimental conditions
```
sns.distplot(data['ApproT1(sec)'])
metadata = ['pH', 'Conc(mM)', 'ApproT1(sec)', 'ExpT1(ms)', 'ExpT2(ms)',
'ExpB1(percent)', 'ExpB0(ppm)', 'ExpB0(Hz)', 'Temp',
'SatPower(uT)', 'SatTime(ms)','FILE']
for C in metadata:
print(C)
print(data[C].nunique())
#sns.distplot(data[C])
print('---'*20)
```
# Data
```
%%time
X = data.drop(metadata, axis = 1)
print(len(metadata))
print(X.shape)
pH = data['pH'].copy()
X[data.pH == 6.22].mean(axis = 0).plot()
X[data.pH == 7.30].mean(axis = 0).plot()
pH
```
### PCA -- > Logistic Regression
```
### define function to train model based on cuttoff for pH
def train_logistic_PCA_pipeline(Spectra, pH_observed, min_n=2, max_n= 10, pH_cut_off = 7.0, n_cs=20, ignore_cut = False):
if ignore_cut == False:
# cut off > pH
y = 1*(pH_observed > pH_cut_off)
elif ignore_cut == True:
y = pH_observed.copy()  # with ignore_cut=True, the caller passes already-encoded class labels
# X data
X = Spectra.copy()
# Logistic
logistic = linear_model.LogisticRegression(solver='liblinear',
penalty='l1',max_iter=1000,random_state=42, n_jobs=2
,class_weight='balanced')
# Scaler
scale = StandardScaler()
#PCA
pca = PCA(random_state=42)
# pipeline
pipe = Pipeline(steps=[('scaler', scale ), ('pca', pca), ('logistic', logistic)])
# Training parameters
num_pca_components = np.arange(min_n,max_n,1)
Cs = np.logspace(-3, 2, n_cs)
param_grid ={
'pca__n_components': num_pca_components,
'logistic__C': Cs,
'logistic__fit_intercept':[True,False]
}
estimator = GridSearchCV(pipe, param_grid, verbose = 1, cv = 3, n_jobs=6
, scoring = metrics.make_scorer(metrics.precision_score))
# Split data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)
# Grid Search of Model
estimator.fit(X_train, y_train)
#AUC
y_probas = estimator.predict_proba(X_test)
skplt.metrics.plot_roc(y_test, y_probas)
#Confusion
skplt.metrics.plot_confusion_matrix(y_test, estimator.predict(X_test), normalize=True)
return estimator.best_estimator_, X_train, X_test, y_train, y_test
```
### Training: pH > 7.0
### Classification
### Only Z spectra
```
(data.pH> 7).value_counts(normalize = True).round(2)
pH
%%time
#clf1, X_train, X_test, y_train, y_test
model1= train_logistic_PCA_pipeline(X, pH, min_n=5, max_n= 10, pH_cut_off = 7.0, n_cs=10)
print(metrics.classification_report(model1[4],model1[0].predict(model1[2]) ))
# precision = positive predictive value
# recall = True positive Rate or sensitivity
metrics.plot_precision_recall_curve(model1[0], model1[2], model1[4] )
model2 = train_logistic_PCA_pipeline(X, pH, min_n=5, max_n= 10, pH_cut_off = 6.7, n_cs=20)
# precision = positive predictive value
# recall = True positive Rate or sensitivity
metrics.plot_precision_recall_curve(model2[0], model2[2], model2[4] )
print(metrics.classification_report(model2[4],model2[0].predict(model2[2]) ))
```
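To make the comments above concrete (precision = positive predictive value, recall = true positive rate / sensitivity), here is a small self-contained check on synthetic labels, unrelated to the experiment data:

```python
import numpy as np
from sklearn import metrics

# Synthetic labels, purely for illustration
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1])

# sklearn's confusion matrix for labels {0, 1} ravels as (tn, fp, fn, tp)
tn, fp, fn, tp = metrics.confusion_matrix(y_true, y_pred).ravel()
precision = tp / (tp + fp)  # positive predictive value
recall = tp / (tp + fn)     # true positive rate (sensitivity)
print(precision, recall)
```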
## Multi class
```
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
pHclass = pd.cut(pH,4)
le.fit(pHclass)
pHclass_coded = le.transform(pHclass)
C = pd.DataFrame(pd.Series(pHclass_coded).unique(),columns=['class_code'])
C['pH range'] = le.inverse_transform(C.class_code)
C
model3 = train_logistic_PCA_pipeline(X, pHclass_coded, min_n=5, max_n= 80, pH_cut_off = 6.7, n_cs=30,ignore_cut=True)
print(metrics.classification_report(model3[4],model3[0].predict(model3[2]) ))
```
## All measurable data for cut off at pH = 7.0
```
drop_cols = ['pH','FILE','ApproT1(sec)','Conc(mM)','Temp']
X = data.drop(drop_cols,axis = 1)
y = data.pH.copy()
clf4, X_train, X_test, y_train, y_test = train_logistic_PCA_pipeline(X, y, min_n=20,
max_n= 60, pH_cut_off = 7.0, n_cs=10
,ignore_cut=False)
LR = clf4['logistic']
LR.coef_
clf4
print(metrics.classification_report(y_test, clf4.predict(X_test)) )
for S in data['SatTime(ms)'].unique():
f = X_test['SatTime(ms)'] == S
score = metrics.recall_score(y_test[f], clf4.predict( X_test[f]) )
print(S,score)
Xdata = X.drop('100',axis = 1).copy()
conc_filter = data['Conc(mM)'] > 20
clf_01, _, _, _, _ = train_logistic_PCA_pipeline( Xdata[conc_filter], pH[conc_filter], min_n=2, max_n= 40, pH_cut_off = 7.0, n_cs= 20)
print(clf_01)
```
## Training: pH > 7.0
### At 37 °C
```
f1 = data['Temp'] == 37
for C in metadata:
print(C)
print(data[f1][C].unique())
print('---'*20)
X = data[f1].drop(metadata,axis = 1)
Z = X.apply(foo, axis = 1)
print(X.shape)
print(Z.shape)
pH = data[f1]['pH'].copy()
clf_02, _, _, _, _ = train_logistic_PCA_pipeline( Z, pH, min_n=2, max_n= 40, pH_cut_off = 7.0, n_cs= 20)
print(clf_02)
print( metrics.classification_report(pH > 7, clf_02.predict(Z)) )
```
## Training: pH > 7.0
### At 37 °C & T1 = 3.4 s
```
f1 = data['Temp'] == 37
f2 = data['ApproT1(sec)'] == 3.4
for C in metadata:
print(C)
print(data[f1&f2][C].unique())
print('---'*20)
X = data[f1&f2].drop(metadata,axis = 1)
Z = X.apply(foo, axis = 1)
print(X.shape)
print(Z.shape)
pH = data[f1&f2]['pH'].copy()
clf_04, _, _, _, _ = train_logistic_PCA_pipeline( Z, pH, min_n=2, max_n= 40, pH_cut_off = 7.0, n_cs= 20)
print(clf_04)
print( metrics.classification_report(pH > 7, clf_04.predict(Z)) )
print( metrics.confusion_matrix(pH > 7, clf_04.predict(Z)) )
print( metrics.cohen_kappa_score(pH > 7, clf_04.predict(Z)) )
plt.plot(xdata, Z.mean(),'-k')
plt.plot(xdata, Z.mean() + Z.std(),'--r')
plt.xlim([-12,12])
plt.title('Average Normalized Z-spectra')
```
## Training: pH > 7.0
### At 42 °C & T1 = 0.43 s
```
f1 = data['Temp'] == 42
f2 = data['ApproT1(sec)'] == .43
for C in metadata:
print(C)
print(data[f1&f2][C].unique())
print('---'*20)
X = data[f1&f2].drop(metadata,axis = 1)
Z = X.apply(foo, axis = 1)
print(X.shape)
print(Z.shape)
pH = data[f1&f2]['pH'].copy()
clf_05, _, _, _, _ = train_logistic_PCA_pipeline( Z, pH, min_n=2, max_n= 40, pH_cut_off = 6.5, n_cs= 20)
print(clf_05)
print( metrics.classification_report(pH > 7, clf_05.predict(Z)) )
print( metrics.confusion_matrix(pH > 7, clf_05.predict(Z)) )
print( metrics.cohen_kappa_score(pH > 7, clf_05.predict(Z)) )
pd.Series(pH > 7).value_counts(normalize = 1)
z1 = data[ data.pH == 6.23 ].iloc[:,6::].mean()
z1 = z1 / z1[1]
z2 = data[ data.pH == 7.17 ].iloc[:,6::].mean()
z2 = z2 / z2[1]
z1.plot()
z2.plot()
data.pH.unique()
plt.plot(xdata, Z.mean(),'-k')
plt.plot(xdata, Z.mean() + Z.std(),'--r')
plt.xlim([-12,12])
plt.title('Average Normalized Z-spectra')
```
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-59152712-8"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-59152712-8');
</script>
# BSSN Time-Evolution Equations for the Gauge Fields $\alpha$ and $\beta^i$
## Author: Zach Etienne
### Formatting improvements courtesy Brandon Clark
[comment]: <> (Abstract: TODO, or make the introduction an abstract and additional notes section, and write a new Introduction)
**Module Status:** <font color='green'><b> Validated </b></font>
**Validation Notes:** All expressions generated in this module have been validated against a trusted code (the original NRPy+/SENR code, which itself was validated against [Baumgarte's code](https://arxiv.org/abs/1211.6632)).
### NRPy+ Source Code for this module: [BSSN/BSSN_gauge_RHSs.py](../edit/BSSN/BSSN_gauge_RHSs.py)
## Introduction:
This tutorial notebook constructs SymPy expressions for the right-hand sides of the time-evolution equations for the gauge fields $\alpha$ (the lapse, governing how much proper time elapses at each point between one timestep in a 3+1 solution to Einstein's equations and the next) and $\beta^i$ (the shift, governing how much proper distance numerical grid points move from one timestep in a 3+1 solution to Einstein's equations and the next).
Though we are completely free to choose gauge conditions (i.e., free to choose the form of the right-hand sides of the gauge time evolution equations), very few have been found robust in the presence of (puncture) black holes. So we focus here only on a few of the most stable choices.
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows:
1. [Step 1](#initializenrpy): Initialize needed Python/NRPy+ modules
1. [Step 2](#lapseconditions): Right-hand side of $\partial_t \alpha$
1. [Step 2.a](#onepluslog): $1+\log$ lapse
1. [Step 2.b](#harmonicslicing): Harmonic slicing
1. [Step 2.c](#frozen): Frozen lapse
1. [Step 3](#shiftconditions): Right-hand side of $\partial_t \beta^i$: Second-order Gamma-driving shift conditions
1. [Step 3.a](#origgammadriving): Original, non-covariant Gamma-driving shift condition
1. [Step 3.b](#covgammadriving): [Brown](https://arxiv.org/abs/0902.3652)'s suggested covariant Gamma-driving shift condition
1. [Step 3.b.i](#partial_beta): The right-hand side of the $\partial_t \beta^i$ equation
1. [Step 3.b.ii](#partial_upper_b): The right-hand side of the $\partial_t B^i$ equation
1. [Step 4](#rescale): Rescale right-hand sides of BSSN gauge equations
1. [Step 5](#code_validation): Code Validation against `BSSN.BSSN_gauge_RHSs` NRPy+ module
1. [Step 6](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='initializenrpy'></a>
# Step 1: Initialize needed Python/NRPy+ modules \[Back to [top](#toc)\]
$$\label{initializenrpy}$$
Let's start by importing all the needed modules from Python/NRPy+:
```
# Step 1: Import all needed modules from NRPy+:
import sympy as sp
import NRPy_param_funcs as par
import indexedexp as ixp
import grid as gri
import finite_difference as fin
import reference_metric as rfm
# Step 1.c: Declare/initialize parameters for this module
thismodule = "BSSN_gauge_RHSs"
par.initialize_param(par.glb_param("char", thismodule, "LapseEvolutionOption", "OnePlusLog"))
par.initialize_param(par.glb_param("char", thismodule, "ShiftEvolutionOption", "GammaDriving2ndOrder_Covariant"))
# Step 1.d: Set spatial dimension (must be 3 for BSSN, as BSSN is
# a 3+1-dimensional decomposition of the general
# relativistic field equations)
DIM = 3
# Step 1.e: Given the chosen coordinate system, set up
# corresponding reference metric and needed
# reference metric quantities
# The following function call sets up the reference metric
# and related quantities, including rescaling matrices ReDD,
# ReU, and hatted quantities.
rfm.reference_metric()
# Step 1.f: Define BSSN scalars & tensors (in terms of rescaled BSSN quantities)
import BSSN.BSSN_quantities as Bq
Bq.BSSN_basic_tensors()
Bq.betaU_derivs()
import BSSN.BSSN_RHSs as Brhs
Brhs.BSSN_RHSs()
```
<a id='lapseconditions'></a>
# Step 2: Right-hand side of $\partial_t \alpha$ \[Back to [top](#toc)\]
$$\label{lapseconditions}$$
<a id='onepluslog'></a>
## Step 2.a: $1+\log$ lapse \[Back to [top](#toc)\]
$$\label{onepluslog}$$
The [$1+\log$ lapse condition](https://arxiv.org/abs/gr-qc/0206072) is a member of the [Bona-Masso family of lapse choices](https://arxiv.org/abs/gr-qc/9412071), which has the desirable property of singularity avoidance. As is common (e.g., see [Campanelli *et al* (2005)](https://arxiv.org/abs/gr-qc/0511048)), we make the replacement $\partial_t \to \partial_0 = \partial_t + \beta^i \partial_i$ to ensure lapse characteristics advect with the shift. The bracketed term in the $1+\log$ lapse condition below encodes the shift advection term:
\begin{align}
\partial_0 \alpha &= -2 \alpha K \\
\implies \partial_t \alpha &= \left[\beta^i \partial_i \alpha\right] - 2 \alpha K
\end{align}
```
# Step 2.a: The 1+log lapse condition:
# \partial_t \alpha = \beta^i \alpha_{,i} - 2*\alpha*K
# First import expressions from BSSN_quantities
cf = Bq.cf
trK = Bq.trK
alpha = Bq.alpha
betaU = Bq.betaU
# Implement the 1+log lapse condition
if par.parval_from_str(thismodule+"::LapseEvolutionOption") == "OnePlusLog":
alpha_rhs = -2*alpha*trK
alpha_dupD = ixp.declarerank1("alpha_dupD")
for i in range(DIM):
alpha_rhs += betaU[i]*alpha_dupD[i]
```
<a id='harmonicslicing'></a>
## Step 2.b: Harmonic slicing \[Back to [top](#toc)\]
$$\label{harmonicslicing}$$
As defined on page 2 of https://arxiv.org/pdf/gr-qc/9902024.pdf, this is given by
$$
\partial_t \alpha = \partial_t e^{6 \phi} = 6 e^{6 \phi} \partial_t \phi
$$
If
$$\text{cf} = W = e^{-2 \phi},$$
then
$$
6 e^{6 \phi} \partial_t \phi = 6 W^{-3} \partial_t \phi.
$$
However,
$$
\partial_t \phi = -\partial_t \text{cf} / (2 \text{cf})$$
(as described above), so if `cf`$=W$, then
\begin{align}
\partial_t \alpha &= 6 e^{6 \phi} \partial_t \phi \\
&= 6 W^{-3} \left(-\frac{\partial_t W}{2 W}\right) \\
&= -3 \text{cf}^{-4} \text{cf}\_\text{rhs}
\end{align}
**Exercise to students: Implement Harmonic slicing for `cf`$=\chi$**
```
# Step 2.b: Implement the harmonic slicing lapse condition
if par.parval_from_str(thismodule+"::LapseEvolutionOption") == "HarmonicSlicing":
if par.parval_from_str("BSSN.BSSN_quantities::EvolvedConformalFactor_cf") == "W":
alpha_rhs = -3*cf**(-4)*Brhs.cf_rhs
elif par.parval_from_str("BSSN.BSSN_quantities::EvolvedConformalFactor_cf") == "phi":
alpha_rhs = 6*sp.exp(6*cf)*Brhs.cf_rhs
else:
print("Error LapseEvolutionOption==HarmonicSlicing unsupported for EvolvedConformalFactor_cf!=(W or phi)")
exit(1)
```
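A possible sketch of the student exercise above, under the assumed convention `cf` $= \chi = e^{-4\phi}$ (so that $\partial_t \phi = -\partial_t \chi / (4\chi)$), verified symbolically with SymPy:

```python
import sympy as sp

# Assumed convention: cf = chi = e^{-4 phi}  =>  phi = -(1/4) log(chi)
chi, chi_rhs = sp.symbols('chi chi_rhs', positive=True)
phi = -sp.Rational(1, 4) * sp.log(chi)
phi_rhs = -chi_rhs / (4 * chi)                 # partial_t phi in terms of partial_t chi
alpha_rhs_chi = 6 * sp.exp(6 * phi) * phi_rhs  # partial_t alpha = 6 e^{6 phi} partial_t phi
# Simplifies to -(3/2) * chi**(-5/2) * chi_rhs
print(sp.simplify(alpha_rhs_chi))
```

In the `if/elif` branch above this would correspond to something like `alpha_rhs = -sp.Rational(3,2)*cf**sp.Rational(-5,2)*Brhs.cf_rhs` for an assumed `EvolvedConformalFactor_cf == "chi"` option.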
<a id='frozen'></a>
## Step 2.c: Frozen lapse \[Back to [top](#toc)\]
$$\label{frozen}$$
This slicing condition is given by
$$\partial_t \alpha = 0,$$
which is rarely a stable lapse condition.
```
# Step 2.c: Frozen lapse
# \partial_t \alpha = 0
if par.parval_from_str(thismodule+"::LapseEvolutionOption") == "Frozen":
alpha_rhs = sp.sympify(0)
```
<a id='shiftconditions'></a>
# Step 3: Right-hand side of $\partial_t \beta^i$: Second-order Gamma-driving shift conditions \[Back to [top](#toc)\]
$$\label{shiftconditions}$$
The motivation behind Gamma-driving shift conditions are well documented in the book [*Numerical Relativity* by Baumgarte & Shapiro](https://www.amazon.com/Numerical-Relativity-Einsteins-Equations-Computer/dp/052151407X/).
<a id='origgammadriving'></a>
## Step 3.a: Original, non-covariant Gamma-driving shift condition \[Back to [top](#toc)\]
$$\label{origgammadriving}$$
**Option 1: Non-Covariant, Second-Order Shift**
We adopt the [*shifting (i.e., advecting) shift*](https://arxiv.org/abs/gr-qc/0605030) non-covariant, second-order shift condition:
\begin{align}
\partial_0 \beta^i &= B^{i} \\
\partial_0 B^i &= \frac{3}{4} \partial_{0} \bar{\Lambda}^{i} - \eta B^{i} \\
\implies \partial_t \beta^i &= \left[\beta^j \partial_j \beta^i\right] + B^{i} \\
\partial_t B^i &= \left[\beta^j \partial_j B^i\right] + \frac{3}{4} \partial_{0} \bar{\Lambda}^{i} - \eta B^{i},
\end{align}
where $\eta$ is the shift damping parameter, and $\partial_{0} \bar{\Lambda}^{i}$ in the right-hand side of the $\partial_{0} B^{i}$ equation is computed by adding $\beta^j \partial_j \bar{\Lambda}^i$ to the right-hand side expression given for $\partial_t \bar{\Lambda}^i$ in the BSSN time-evolution equations as listed [here](Tutorial-BSSN_formulation.ipynb), so no explicit time dependence occurs in the right-hand sides of the BSSN evolution equations and the Method of Lines can be applied directly.
```
# Step 3.a: Set \partial_t \beta^i
# First import expressions from BSSN_quantities
BU = Bq.BU
betU = Bq.betU
betaU_dupD = Bq.betaU_dupD
# Define needed quantities
beta_rhsU = ixp.zerorank1()
B_rhsU = ixp.zerorank1()
if par.parval_from_str(thismodule+"::ShiftEvolutionOption") == "GammaDriving2ndOrder_NoCovariant":
# Step 3.a.i: Compute right-hand side of beta^i
# * \partial_t \beta^i = \beta^j \beta^i_{,j} + B^i
for i in range(DIM):
beta_rhsU[i] += BU[i]
for j in range(DIM):
beta_rhsU[i] += betaU[j]*betaU_dupD[i][j]
# Compute right-hand side of B^i:
eta = par.Cparameters("REAL", thismodule, ["eta"],2.0)
# Step 3.a.ii: Compute right-hand side of B^i
# * \partial_t B^i = \beta^j B^i_{,j} + 3/4 * \partial_0 \Lambda^i - eta B^i
# Step 3.a.iii: Define BU_dupD, in terms of derivative of rescaled variable \bet^i
BU_dupD = ixp.zerorank2()
betU_dupD = ixp.declarerank2("betU_dupD","nosym")
for i in range(DIM):
for j in range(DIM):
BU_dupD[i][j] = betU_dupD[i][j]*rfm.ReU[i] + betU[i]*rfm.ReUdD[i][j]
# Step 3.a.iv: Compute \partial_0 \bar{\Lambda}^i = (\partial_t - \beta^j \partial_j) \bar{\Lambda}^i
Lambdabar_partial0 = ixp.zerorank1()
for i in range(DIM):
Lambdabar_partial0[i] = Brhs.Lambdabar_rhsU[i]
for i in range(DIM):
for j in range(DIM):
Lambdabar_partial0[j] += -betaU[i]*Brhs.LambdabarU_dupD[j][i]
# Step 3.a.v: Evaluate RHS of B^i:
for i in range(DIM):
B_rhsU[i] += sp.Rational(3,4)*Lambdabar_partial0[i] - eta*BU[i]
for j in range(DIM):
B_rhsU[i] += betaU[j]*BU_dupD[i][j]
```
<a id='covgammadriving'></a>
## Step 3.b: [Brown](https://arxiv.org/abs/0902.3652)'s suggested covariant Gamma-driving shift condition \[Back to [top](#toc)\]
$$\label{covgammadriving}$$
<a id='partial_beta'></a>
### Step 3.b.i: The right-hand side of the $\partial_t \beta^i$ equation \[Back to [top](#toc)\]
$$\label{partial_beta}$$
This is [Brown's](https://arxiv.org/abs/0902.3652) suggested formulation (Eq. 20b; note that Eq. 20a is the same as our lapse condition, as $\bar{D}_j \alpha = \partial_j \alpha$ for scalar $\alpha$):
$$\partial_t \beta^i = \left[\beta^j \bar{D}_j \beta^i\right] + B^{i}$$
Based on the definition of covariant derivative, we have
$$
\bar{D}_{j} \beta^{i} = \beta^i_{,j} + \bar{\Gamma}^i_{mj} \beta^m,
$$
so the above equation becomes
\begin{align}
\partial_t \beta^i &= \left[\beta^j \left(\beta^i_{,j} + \bar{\Gamma}^i_{mj} \beta^m\right)\right] + B^{i}\\
&= {\underbrace {\textstyle \beta^j \beta^i_{,j}}_{\text{Term 1}}} +
{\underbrace {\textstyle \beta^j \bar{\Gamma}^i_{mj} \beta^m}_{\text{Term 2}}} +
{\underbrace {\textstyle B^i}_{\text{Term 3}}}
\end{align}
```
# Step 3.b: The right-hand side of the \partial_t \beta^i equation
if par.parval_from_str(thismodule+"::ShiftEvolutionOption") == "GammaDriving2ndOrder_Covariant":
# Step 3.b Option 2: \partial_t \beta^i = \left[\beta^j \bar{D}_j \beta^i\right] + B^{i}
# First we need GammabarUDD, defined in Bq.gammabar__inverse_and_derivs()
Bq.gammabar__inverse_and_derivs()
GammabarUDD = Bq.GammabarUDD
# Then compute right-hand side:
# Term 1: \beta^j \beta^i_{,j}
for i in range(DIM):
for j in range(DIM):
beta_rhsU[i] += betaU[j]*betaU_dupD[i][j]
# Term 2: \beta^j \bar{\Gamma}^i_{mj} \beta^m
for i in range(DIM):
for j in range(DIM):
for m in range(DIM):
beta_rhsU[i] += betaU[j]*GammabarUDD[i][m][j]*betaU[m]
# Term 3: B^i
for i in range(DIM):
beta_rhsU[i] += BU[i]
```
<a id='partial_upper_b'></a>
### Step 3.b.ii: The right-hand side of the $\partial_t B^i$ equation \[Back to [top](#toc)\]
$$\label{partial_upper_b}$$
$$\partial_t B^i = \left[\beta^j \bar{D}_j B^i\right] + \frac{3}{4}\left( \partial_t \bar{\Lambda}^{i} - \beta^j \bar{D}_j \bar{\Lambda}^{i} \right) - \eta B^{i}$$
Based on the definition of covariant derivative, we have for vector $V^i$
$$
\bar{D}_{j} V^{i} = V^i_{,j} + \bar{\Gamma}^i_{mj} V^m,
$$
so the above equation becomes
\begin{align}
\partial_t B^i &= \left[\beta^j \left(B^i_{,j} + \bar{\Gamma}^i_{mj} B^m\right)\right] + \frac{3}{4}\left[ \partial_t \bar{\Lambda}^{i} - \beta^j \left(\bar{\Lambda}^i_{,j} + \bar{\Gamma}^i_{mj} \bar{\Lambda}^m\right) \right] - \eta B^{i} \\
&= {\underbrace {\textstyle \beta^j B^i_{,j}}_{\text{Term 1}}} +
{\underbrace {\textstyle \beta^j \bar{\Gamma}^i_{mj} B^m}_{\text{Term 2}}} +
{\underbrace {\textstyle \frac{3}{4}\partial_t \bar{\Lambda}^{i}}_{\text{Term 3}}} -
{\underbrace {\textstyle \frac{3}{4}\beta^j \bar{\Lambda}^i_{,j}}_{\text{Term 4}}} -
{\underbrace {\textstyle \frac{3}{4}\beta^j \bar{\Gamma}^i_{mj} \bar{\Lambda}^m}_{\text{Term 5}}} -
{\underbrace {\textstyle \eta B^i}_{\text{Term 6}}}
\end{align}
```
if par.parval_from_str(thismodule+"::ShiftEvolutionOption") == "GammaDriving2ndOrder_Covariant":
# Step 3.c: Covariant option:
# \partial_t B^i = \beta^j \bar{D}_j B^i
# + \frac{3}{4} ( \partial_t \bar{\Lambda}^{i} - \beta^j \bar{D}_j \bar{\Lambda}^{i} )
# - \eta B^{i}
# = \beta^j B^i_{,j} + \beta^j \bar{\Gamma}^i_{mj} B^m
# + \frac{3}{4}[ \partial_t \bar{\Lambda}^{i}
# - \beta^j (\bar{\Lambda}^i_{,j} + \bar{\Gamma}^i_{mj} \bar{\Lambda}^m)]
# - \eta B^{i}
# Term 1, part a: First compute B^i_{,j} using upwinded derivative
BU_dupD = ixp.zerorank2()
betU_dupD = ixp.declarerank2("betU_dupD","nosym")
for i in range(DIM):
for j in range(DIM):
BU_dupD[i][j] = betU_dupD[i][j]*rfm.ReU[i] + betU[i]*rfm.ReUdD[i][j]
# Term 1: \beta^j B^i_{,j}
for i in range(DIM):
for j in range(DIM):
B_rhsU[i] += betaU[j]*BU_dupD[i][j]
# Term 2: \beta^j \bar{\Gamma}^i_{mj} B^m
for i in range(DIM):
for j in range(DIM):
for m in range(DIM):
B_rhsU[i] += betaU[j]*GammabarUDD[i][m][j]*BU[m]
# Term 3: \frac{3}{4}\partial_t \bar{\Lambda}^{i}
for i in range(DIM):
B_rhsU[i] += sp.Rational(3,4)*Brhs.Lambdabar_rhsU[i]
# Term 4: -\frac{3}{4}\beta^j \bar{\Lambda}^i_{,j}
for i in range(DIM):
for j in range(DIM):
B_rhsU[i] += -sp.Rational(3,4)*betaU[j]*Brhs.LambdabarU_dupD[i][j]
# Term 5: -\frac{3}{4}\beta^j \bar{\Gamma}^i_{mj} \bar{\Lambda}^m
for i in range(DIM):
for j in range(DIM):
for m in range(DIM):
B_rhsU[i] += -sp.Rational(3,4)*betaU[j]*GammabarUDD[i][m][j]*Bq.LambdabarU[m]
# Term 6: - \eta B^i
# eta is a free parameter; we declare it here:
eta = par.Cparameters("REAL", thismodule, ["eta"],2.0)
for i in range(DIM):
B_rhsU[i] += -eta*BU[i]
```
<a id='rescale'></a>
# Step 4: Rescale right-hand sides of BSSN gauge equations \[Back to [top](#toc)\]
$$\label{rescale}$$
Next we rescale the right-hand sides of the BSSN equations so that the evolved variables are $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$
```
# Step 4: Rescale the BSSN gauge RHS quantities so that the evolved
# variables may remain smooth across coord singularities
vet_rhsU = ixp.zerorank1()
bet_rhsU = ixp.zerorank1()
for i in range(DIM):
vet_rhsU[i] = beta_rhsU[i] / rfm.ReU[i]
bet_rhsU[i] = B_rhsU[i] / rfm.ReU[i]
#print(str(Abar_rhsDD[2][2]).replace("**","^").replace("_","").replace("xx","x").replace("sin(x2)","Sin[x2]").replace("sin(2*x2)","Sin[2*x2]").replace("cos(x2)","Cos[x2]").replace("detgbaroverdetghat","detg"))
#print(str(Dbarbetacontraction).replace("**","^").replace("_","").replace("xx","x").replace("sin(x2)","Sin[x2]").replace("detgbaroverdetghat","detg"))
#print(betaU_dD)
#print(str(trK_rhs).replace("xx2","xx3").replace("xx1","xx2").replace("xx0","xx1").replace("**","^").replace("_","").replace("sin(xx2)","Sinx2").replace("xx","x").replace("sin(2*x2)","Sin2x2").replace("cos(x2)","Cosx2").replace("detgbaroverdetghat","detg"))
#print(str(bet_rhsU[0]).replace("xx2","xx3").replace("xx1","xx2").replace("xx0","xx1").replace("**","^").replace("_","").replace("sin(xx2)","Sinx2").replace("xx","x").replace("sin(2*x2)","Sin2x2").replace("cos(x2)","Cosx2").replace("detgbaroverdetghat","detg"))
```
<a id='code_validation'></a>
# Step 5: Code Validation against `BSSN.BSSN_gauge_RHSs` NRPy+ module \[Back to [top](#toc)\]
$$\label{code_validation}$$
Here, as a code validation check, we verify agreement in the SymPy expressions for the RHSs of the BSSN gauge equations between
1. this tutorial and
2. the NRPy+ [BSSN.BSSN_gauge_RHSs](../edit/BSSN/BSSN_gauge_RHSs.py) module.
By default, we analyze the RHSs in Spherical coordinates and with the covariant Gamma-driving second-order shift condition, though other coordinate systems & gauge conditions may be chosen.
```
# Step 5: We already have SymPy expressions for BSSN gauge RHS expressions
# in terms of other SymPy variables. Even if we reset the
# list of NRPy+ gridfunctions, these *SymPy* expressions for
# BSSN RHS variables *will remain unaffected*.
#
# Here, we will use the above-defined BSSN gauge RHS expressions
# to validate against the same expressions in the
# BSSN/BSSN_gauge_RHSs.py file, to ensure consistency between
# this tutorial and the module itself.
#
# Reset the list of gridfunctions, as registering a gridfunction
# twice will spawn an error.
gri.glb_gridfcs_list = []
# Step 5.a: Call the BSSN_gauge_RHSs() function from within the
# BSSN/BSSN_gauge_RHSs.py module,
# which should generate exactly the same expressions as above.
import BSSN.BSSN_gauge_RHSs as Bgrhs
par.set_parval_from_str("BSSN.BSSN_gauge_RHSs::ShiftEvolutionOption","GammaDriving2ndOrder_Covariant")
Bgrhs.BSSN_gauge_RHSs()
print("Consistency check between BSSN.BSSN_gauge_RHSs tutorial and NRPy+ module: ALL SHOULD BE ZERO.")
print("alpha_rhs - bssnrhs.alpha_rhs = " + str(alpha_rhs - Bgrhs.alpha_rhs))
for i in range(DIM):
print("vet_rhsU["+str(i)+"] - bssnrhs.vet_rhsU["+str(i)+"] = " + str(vet_rhsU[i] - Bgrhs.vet_rhsU[i]))
print("bet_rhsU["+str(i)+"] - bssnrhs.bet_rhsU["+str(i)+"] = " + str(bet_rhsU[i] - Bgrhs.bet_rhsU[i]))
```
<a id='latex_pdf_output'></a>
# Step 6: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.pdf](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb
!pdflatex -interaction=batchmode Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.tex
!pdflatex -interaction=batchmode Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.tex
!pdflatex -interaction=batchmode Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.tex
!rm -f Tut*.out Tut*.aux Tut*.log
```
<a href="https://colab.research.google.com/github/Aditya-Singla/Banknote-Authentication/blob/master/Banknote_authentication.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**Importing the libraries**
```
import pandas as pd
import numpy as np
```
**Loading the dataset**
```
dataset = pd.read_csv('Bank note authentication.csv')
X = dataset.iloc[:,:-1].values
y = dataset.iloc[:,-1].values
```
**No missing values** (*as specified by the source* https://archive.ics.uci.edu/ml/datasets/banknote+authentication )
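Even with the source's guarantee, a quick programmatic check is cheap: on the real data it would be `dataset.isnull().sum()`. Here is a sketch on a toy frame (column names are assumptions):

```python
import pandas as pd

# Toy stand-in for the banknote data (column names here are assumptions)
toy = pd.DataFrame({'variance': [3.62, -1.40],
                    'skewness': [8.67, 3.31],
                    'class': [0, 1]})
missing_per_column = toy.isnull().sum()
print(missing_per_column.sum())  # 0 means no missing values anywhere
```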
**Splitting the dataset into training set and test set**
```
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
```
*Checking* training and test sets
```
print(x_train)
print(y_train)
print(x_test)
print(y_test)
```
**Feature Scaling** (applied to the *features* only; **NOT** required for the *dependent variable*)
```
from sklearn.preprocessing import StandardScaler
x_sc = StandardScaler()
x_train_sc = x_sc.fit_transform(x_train)
x_test_sc = x_sc.transform(x_test)  # transform only: reuse the scaler fitted on the training set to avoid leakage
```
*Checking* feature scaling
```
print(x_train_sc)
```
**Logistic Regression**
```
from sklearn.linear_model import LogisticRegression
classifier_lr = LogisticRegression(random_state=0)
classifier_lr.fit(x_train_sc,y_train)
y_predict_lr = classifier_lr.predict(x_test_sc)
```
**Evaluating** Logistic Regression
```
from sklearn.metrics import confusion_matrix, accuracy_score, roc_auc_score
cm_lr = confusion_matrix(y_test,y_predict_lr)
print(cm_lr)
accuracy_score(y_test,y_predict_lr)
auc_score_lr = roc_auc_score(y_test,y_predict_lr)
print(auc_score_lr)
```
**Evaluating** K-Cross Validation Score
```
from sklearn.model_selection import cross_val_score
accuracy_lr = cross_val_score(classifier_lr, x_train_sc, y_train, cv=10 )
print( 'Accuracy:{:.2f}%'.format(accuracy_lr.mean()*100))
print( 'Standard Deviation: {:.2f}%'.format(accuracy_lr.std()*100))
```
**K Nearest Neighbors**
```
from sklearn.neighbors import KNeighborsClassifier
classifier_knn = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski',p=2)
classifier_knn.fit(x_train_sc,y_train)
y_predict_knn = classifier_knn.predict(x_test_sc)
```
**Evaluating** K Nearest Neighbors
```
cm_knn = confusion_matrix(y_test, y_predict_knn)
print(cm_knn)
accuracy_score(y_test,y_predict_knn)
auc_score_knn = roc_auc_score(y_test,y_predict_knn)
print(auc_score_knn)
```
**Evaluating** K-Cross Validation Score
```
from sklearn.model_selection import cross_val_score
accuracy_knn = cross_val_score(classifier_knn, x_train_sc, y_train, cv=10 )
print( 'Accuracy:{:.2f}%'.format(accuracy_knn.mean()*100))
print( 'Standard Deviation: {:.2f}%'.format(accuracy_knn.std()*100))
```
**Support Vector Machines (Kernel SVM)**
```
from sklearn.svm import SVC
classifier_svm = SVC(kernel ='rbf', random_state = 20)
classifier_svm.fit(x_train_sc,y_train)
y_predict_svm = classifier_svm.predict(x_test_sc)
```
**Evaluating** Kernel SVM
```
cm_svm = confusion_matrix(y_test,y_predict_svm)
print(cm_svm)
accuracy_score(y_test,y_predict_svm)
auc_score_svm = roc_auc_score(y_test,y_predict_svm)
print(auc_score_svm)
```
**Evaluating** K-Cross Validation Score
```
accuracy_svm = cross_val_score(classifier_svm, x_train_sc, y_train, cv=10 )
print( 'Accuracy:{:.2f}%'.format(accuracy_svm.mean()*100))
print( 'Standard Deviation: {:.2f}%'.format(accuracy_svm.std()*100))
```
**Naive-Bayes Classification**
```
from sklearn.naive_bayes import GaussianNB
classifier_nb = GaussianNB()
classifier_nb.fit(x_train_sc, y_train)
y_predict_nb = classifier_nb.predict(x_test_sc)
```
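`GaussianNB` fits a mean and variance per feature and class, then picks the class with the highest Gaussian likelihood of the query. A one-feature, two-class sketch with toy numbers, assuming equal class priors:

```python
import math

# Gaussian naive Bayes in miniature (toy data, equal priors assumed).
def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit(xs):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return mean, var

class0 = [1.0, 1.2, 0.8]  # feature values seen for class 0
class1 = [5.0, 5.4, 4.6]  # feature values seen for class 1
m0, v0 = fit(class0)
m1, v1 = fit(class1)

def predict(x):
    # With equal priors, the posterior comparison reduces to the likelihoods.
    return 0 if gaussian_pdf(x, m0, v0) > gaussian_pdf(x, m1, v1) else 1
```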
**Evaluating** Naive Bayes
```
cm_nb = confusion_matrix(y_test, y_predict_nb)
print (cm_nb)
accuracy_score(y_test, y_predict_nb)
auc_score_nb = roc_auc_score(y_test,y_predict_nb)
print(auc_score_nb)
```
**Evaluating** with 10-fold cross-validation
```
accuracy_nb = cross_val_score(classifier_nb, x_train_sc, y_train, cv=10 )
print( 'Accuracy:{:.2f}%'.format(accuracy_nb.mean()*100))
print( 'Standard Deviation: {:.2f}%'.format(accuracy_nb.std()*100))
```
**Decision Tree Classification**
```
from sklearn.tree import DecisionTreeClassifier
classifier_dt = DecisionTreeClassifier(criterion='entropy',random_state=0)
classifier_dt.fit(x_train_sc, y_train)
y_predict_dt = classifier_dt.predict(x_test_sc)
```
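The `criterion='entropy'` setting means the tree chooses splits that maximize information gain, i.e. the drop in Shannon entropy from the parent node to its children. A small stdlib sketch:

```python
import math
from collections import Counter

# Shannon entropy of a label list: the impurity measure behind criterion='entropy'.
def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, left, right):
    # Parent entropy minus the size-weighted entropy of the two children.
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted

parent = [0, 0, 1, 1]
gain = information_gain(parent, [0, 0], [1, 1])  # a perfect split
```

A perfect split of a balanced binary node yields the maximum possible gain of 1 bit.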
**Evaluating** Decision Tree Classifier
```
cm_dt = confusion_matrix(y_test, y_predict_dt)
print (cm_dt)
accuracy_score(y_test, y_predict_dt)
auc_score_dt = roc_auc_score(y_test,y_predict_dt)
print(auc_score_dt)
```
**Evaluating** with 10-fold cross-validation
```
accuracy_dt = cross_val_score(classifier_dt, x_train_sc, y_train, cv=10 )
print( 'Accuracy:{:.2f}%'.format(accuracy_dt.mean()*100))
print( 'Standard Deviation: {:.2f}%'.format(accuracy_dt.std()*100))
```
**Random Forest Classification**
```
from sklearn.ensemble import RandomForestClassifier
classifier_rf = RandomForestClassifier(n_estimators=100, criterion='entropy',random_state=0)
classifier_rf.fit(x_train_sc, y_train)
y_predict_rf = classifier_rf.predict(x_test_sc)
```
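A random forest combines two simple ideas: each tree is trained on a bootstrap sample of the data (drawn with replacement), and the forest's prediction is a majority vote over the trees. A toy sketch of both pieces:

```python
import random
from collections import Counter

# Bootstrap sampling: draw len(data) items from data with replacement.
def bootstrap_sample(data, rng):
    return [rng.choice(data) for _ in data]

# Majority vote over per-tree predictions.
def majority_vote(predictions):
    return Counter(predictions).most_common(1)[0][0]

rng = random.Random(0)
data = list(range(10))
sample = bootstrap_sample(data, rng)
vote = majority_vote([1, 0, 1, 1, 0])  # three trees say 1, two say 0
```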
**Evaluating** Random Forest Classifier
```
cm_rf = confusion_matrix(y_test, y_predict_rf)
print (cm_rf)
accuracy_score(y_test, y_predict_rf)
auc_score_rf = roc_auc_score(y_test,y_predict_rf)
print(auc_score_rf)
```
**Evaluating** with 10-fold cross-validation
```
accuracy_rf = cross_val_score(classifier_rf, x_train_sc, y_train, cv=10 )
print( 'Accuracy:{:.2f}%'.format(accuracy_rf.mean()*100))
print( 'Standard Deviation: {:.2f}%'.format(accuracy_rf.std()*100))
```
**Neural Network Classifier**
P.S. This is just for fun!
```
from sklearn.neural_network import MLPClassifier
classifier_neural_network = MLPClassifier(random_state=0)
classifier_neural_network.fit(x_train_sc,y_train)
y_predict_neural_network = classifier_neural_network.predict(x_test_sc)
```
**Evaluating** Neural Network
```
cm_neural_network = confusion_matrix(y_test, y_predict_neural_network)
print(cm_neural_network)
accuracy_score(y_test, y_predict_neural_network)
auc_score_neural_network = roc_auc_score(y_test,y_predict_neural_network)
print(auc_score_neural_network)
```
**Evaluating** with 10-fold cross-validation
```
accuracy_neural_network = cross_val_score(classifier_neural_network, x_train_sc, y_train, cv=10 )
print( 'Accuracy:{:.2f}%'.format(accuracy_neural_network.mean()*100))
print( 'Standard Deviation: {:.2f}%'.format(accuracy_neural_network.std()*100))
```
The *Neural Network* and *Kernel SVM* models achieved the best overall accuracy of 99.27%.
No further hyperparameter tuning is necessary, as the accuracy is already close to its maximum.
# A Char-RNN Implementation in Tensorflow
*This notebook is slightly modified from https://colab.research.google.com/drive/13Vr3PrDg7cc4OZ3W2-grLSVSf0RJYWzb, with the following changes:*
* Main parameters defined at the start instead of middle
* "Run all" works, because of the added upload_custom_data parameter
* Training time specified in minutes instead of steps, for time-constrained classroom use
---
Char-RNN is a well-known generative text model (a character-level LSTM) created by Andrej Karpathy. It allows easy training and generation of arbitrary text, with many hilarious results:
* Music: abc notation
  <https://highnoongmt.wordpress.com/2015/05/22/lisls-stis-recurrent-neural-networks-for-folk-music-generation/>
* Irish folk music
  <https://soundcloud.com/seaandsailor/sets/char-rnn-composes-irish-folk-music>
* Obama speeches
  <https://medium.com/@samim/obama-rnn-machine-generated-political-speeches-c8abd18a2ea0>
* Eminem lyrics
  <https://soundcloud.com/mrchrisjohnson/recurrent-neural-shady> (NSFW ;-))
* Research awards
  <http://karpathy.github.io/2015/05/21/rnn-effectiveness/#comment-2073825449>
* TED Talks
  <https://medium.com/@samim/ted-rnn-machine-generated-ted-talks-3dd682b894c0>
* Movie Titles <http://www.cs.toronto.edu/~graves/handwriting.html>
This notebook contains a reimplementation in Tensorflow. It will let you input a file containing the text you want your generator to mimic, train your model, see the results, and save it for future use.
To get started, start running the cells in order, following the instructions at each step. You will need a sizable text file (try at least 1 MB of text) when prompted to upload one. For exploration you can also use the provided text corpus taken from Shakespeare's works.
The training cell saves a checkpoint every 30 seconds, so you can check the output of your network and not lose any progress.
## Outline
This notebook will guide you through the following steps. Roughly speaking, these will be our steps:
* Upload some data
* Set some training parameters (you can just use the defaults for now)
* Define our Model, training loss function, and data input manager
* Train on a cloud GPU
* Save out model and use it to generate some new text.
Design of the RNN is inspired by [this github project](https://github.com/sherjilozair/char-rnn-tensorflow) which was based on Andrej Karpathy's [char-rnn](https://github.com/karpathy/char-rnn). If you'd like to learn more, Andrej's [blog post](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) is a great place to start.
### Imports and Values Needed to Run this Code
```
%tensorflow_version 1.x
from __future__ import absolute_import, print_function, division
from google.colab import files
from collections import Counter, defaultdict
from copy import deepcopy
from IPython.display import clear_output
from random import randint
import json
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
CHECKPOINT_DIR = './checkpoints/' #Checkpoints are temporarily kept here.
TEXT_ENCODING = 'utf-8'
```
### Let's define our training parameters.
Feel free to leave these untouched at their default values and just run this cell as is. Later, you can come back here and experiment with these.
These parameters are just for training. Further down at the inference step, we'll define parameters for the text-generation step.
```
#The most common parameters to change
upload_custom_data = False #if false, use the default Shakespeare data
training_time_minutes = 2 #change this depending on how much time you have
#Neural network and optimization default parameters that usually work ok
num_layers = 2
state_size = 256
batch_size = 64
sequence_length = 256
steps_per_epoch = 500
learning_rate = 0.002
learning_rate_decay = 0.95
gradient_clipping = 5.0
```
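The `learning_rate`, `learning_rate_decay`, and `steps_per_epoch` values drive a staircase exponential-decay schedule in the optimizer defined further down. A sketch of the resulting schedule:

```python
# Staircase exponential decay: the learning rate is multiplied by the decay
# factor once per epoch, i.e. every steps_per_epoch global steps.
def decayed_learning_rate(step, lr=0.002, decay=0.95, steps_per_epoch=500):
    return lr * decay ** (step // steps_per_epoch)

# Rate at steps 0 and 499 (epoch 0), 500 (epoch 1), and 1000 (epoch 2).
schedule = [decayed_learning_rate(s) for s in (0, 499, 500, 1000)]
```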
### Get the training data.
We can either download the works of Shakespeare or upload our own plain text file to train on.
```
if not upload_custom_data:
shakespeare_url = "https://ocw.mit.edu/ans7870/6/6.006/s08/lecturenotes/files/t8.shakespeare.txt"
import urllib.request
# urllib.request is the Python 3 location of urlopen; decode the raw bytes
# so file_contents is a regular string.
file_contents = urllib.request.urlopen(shakespeare_url).read().decode(TEXT_ENCODING)
file_name = "shakespeare"
file_contents = file_contents[10501:] # Skip headers and start at content
print("An excerpt: \n", file_contents[:664])
if upload_custom_data:
uploaded = files.upload()
if type(uploaded) is not dict: uploaded = uploaded.files ## Deal with filedit versions
file_name = list(uploaded.keys())[0]  # dict keys are not indexable in Python 3
file_bytes = uploaded[file_name]
utf8_string = file_bytes.decode(TEXT_ENCODING)
file_contents = utf8_string if utf8_string else ''
print("An excerpt: \n", file_contents[:664])
```
## Set up the recurrent LSTM network
Before we can do anything, we have to define what our neural network looks like. This next cell creates a class which will contain the tensorflow graph and training parameters that make up the network.
```
class RNN(object):
"""Represents a Recurrent Neural Network using LSTM cells.
Attributes:
num_layers: The integer number of hidden layers in the RNN.
state_size: The size of the state in each LSTM cell.
num_classes: Number of output classes. (E.g. 256 for Extended ASCII).
batch_size: The number of training sequences to process per step.
sequence_length: The number of chars in a training sequence.
batch_index: Index within the dataset to start the next batch at.
on_gpu_sequences: Generates the training inputs for a single batch.
on_gpu_targets: Generates the training labels for a single batch.
input_symbol: Placeholder for a single label for use during inference.
temperature: Used when sampling outputs. A higher temperature will yield
more variance; a lower one will produce the most likely outputs. Value
should be between 0 and 1.
initial_state: The LSTM State Tuple to initialize the network with. This
will need to be set to the new_state computed by the network each cycle.
logits: Unnormalized probability distribution for the next predicted
label, for each timestep in each sequence.
output_labels: A [batch_size, 1] int32 tensor containing a predicted
label for each sequence in a batch. Only generated in infer mode.
"""
def __init__(self,
rnn_num_layers=1,
rnn_state_size=128,
num_classes=256,
rnn_batch_size=1,
rnn_sequence_length=1):
self.num_layers = rnn_num_layers
self.state_size = rnn_state_size
self.num_classes = num_classes
self.batch_size = rnn_batch_size
self.sequence_length = rnn_sequence_length
self.batch_shape = (self.batch_size, self.sequence_length)
print("Built LSTM: ",
self.num_layers ,self.state_size ,self.num_classes ,
self.batch_size ,self.sequence_length ,self.batch_shape)
def build_training_model(self, dropout_rate, data_to_load):
"""Sets up an RNN model for running a training job.
Args:
dropout_rate: The rate at which weights may be forgotten during training.
data_to_load: A numpy array containing the training data, with each
element in data_to_load being an integer representing a label. For
example, for Extended ASCII, values may be 0 through 255.
Raises:
ValueError: If data_to_load is None.
"""
if data_to_load is None:
raise ValueError('To continue, you must upload training data.')
inputs = self._set_up_training_inputs(data_to_load)
self._build_rnn(inputs, dropout_rate)
def build_inference_model(self):
"""Sets up an RNN model for generating a sequence element by element.
"""
self.input_symbol = tf.placeholder(shape=[1, 1], dtype=tf.int32)
self.temperature = tf.placeholder(shape=(), dtype=tf.float32,
name='temperature')
self.num_options = tf.placeholder(shape=(), dtype=tf.int32,
name='num_options')
self._build_rnn(self.input_symbol, 0.0)
self.temperature_modified_logits = tf.squeeze(
self.logits, 0) / self.temperature
#for beam search
self.normalized_probs = tf.nn.softmax(self.logits)
self.output_labels = tf.multinomial(self.temperature_modified_logits,
self.num_options)
def _set_up_training_inputs(self, data):
self.batch_index = tf.placeholder(shape=(), dtype=tf.int32)
batch_input_length = self.batch_size * self.sequence_length
input_window = tf.slice(tf.constant(data, dtype=tf.int32),
[self.batch_index],
[batch_input_length + 1])
self.on_gpu_sequences = tf.reshape(
tf.slice(input_window, [0], [batch_input_length]), self.batch_shape)
self.on_gpu_targets = tf.reshape(
tf.slice(input_window, [1], [batch_input_length]), self.batch_shape)
return self.on_gpu_sequences
def _build_rnn(self, inputs, dropout_rate):
"""Generates an RNN model using the passed functions.
Args:
inputs: int32 Tensor with shape [batch_size, sequence_length] containing
input labels.
dropout_rate: A floating point value determining the chance that a weight
is forgotten during evaluation.
"""
# Alias some commonly used functions
dropout_wrapper = tf.contrib.rnn.DropoutWrapper
lstm_cell = tf.contrib.rnn.LSTMCell
multi_rnn_cell = tf.contrib.rnn.MultiRNNCell
self._cell = multi_rnn_cell(
[dropout_wrapper(lstm_cell(self.state_size), 1.0, 1.0 - dropout_rate)
for _ in range(self.num_layers)])
self.initial_state = self._cell.zero_state(self.batch_size, tf.float32)
embedding = tf.get_variable('embedding',
[self.num_classes, self.state_size])
embedding_input = tf.nn.embedding_lookup(embedding, inputs)
output, self.new_state = tf.nn.dynamic_rnn(self._cell, embedding_input,
initial_state=self.initial_state)
self.logits = tf.contrib.layers.fully_connected(output, self.num_classes,
activation_fn=None)
```
### Define your loss function
Loss is a measure of how well the neural network is modeling the data distribution.
Pass in your logits and the targets you're training against. In this case, target_weights is a set of multipliers that will put higher emphasis on certain outputs. In this notebook, we'll give all outputs equal importance.
```
def get_loss(logits, targets, target_weights):
with tf.name_scope('loss'):
return tf.contrib.seq2seq.sequence_loss(
logits,
targets,
target_weights,
average_across_timesteps=True)
```
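`sequence_loss` computes per-timestep softmax cross-entropy against the target label, then averages across timesteps (our equal target weights leave that average unchanged). A stdlib sketch; with uniform logits the loss is exactly the natural log of the number of classes:

```python
import math

# Per-timestep softmax cross-entropy, averaged across timesteps.
def softmax(logits):
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sequence_loss(logits_per_step, targets):
    losses = [-math.log(softmax(logits)[t])
              for logits, t in zip(logits_per_step, targets)]
    return sum(losses) / len(losses)

# Uniform logits over 4 classes give a loss of ln(4) at every timestep.
loss = sequence_loss([[0.0] * 4, [0.0] * 4], [1, 3])
```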
### Define your optimizer
This tells Tensorflow how to reduce the loss. We will use the popular [ADAM algorithm](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer)
```
def get_optimizer(loss, initial_learning_rate, gradient_clipping, global_step,
decay_steps, decay_rate):
with tf.name_scope('optimizer'):
computed_learning_rate = tf.train.exponential_decay(
initial_learning_rate,
global_step,
decay_steps,
decay_rate,
staircase=True)
optimizer = tf.train.AdamOptimizer(computed_learning_rate)
trained_vars = tf.trainable_variables()
gradients, _ = tf.clip_by_global_norm(
tf.gradients(loss, trained_vars),
gradient_clipping)
training_op = optimizer.apply_gradients(
zip(gradients, trained_vars),
global_step=global_step)
return training_op, computed_learning_rate
```
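`tf.clip_by_global_norm` treats all gradients as one long vector: if that vector's norm exceeds the threshold, every gradient is scaled down by the same factor, preserving the update's direction. A stdlib sketch:

```python
import math

# Gradient clipping by global norm: rescale everything by clip_norm/global_norm
# when the combined norm exceeds the threshold.
def clip_by_global_norm(gradients, clip_norm):
    global_norm = math.sqrt(sum(g * g for g in gradients))
    if global_norm <= clip_norm:
        return gradients
    scale = clip_norm / global_norm
    return [g * scale for g in gradients]

clipped = clip_by_global_norm([3.0, 4.0], clip_norm=1.0)  # norm 5 -> scale 0.2
```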
### This class will let us view the progress of our training as it progresses.
```
class LossPlotter(object):
def __init__(self, history_length):
self.global_steps = []
self.losses = []
self.averaged_loss_x = []
self.averaged_loss_y = []
self.history_length = history_length
def draw_plots(self):
self._update_averages(self.global_steps, self.losses,
self.averaged_loss_x, self.averaged_loss_y)
plt.title('Average Loss Over Time')
plt.xlabel('Global Step')
plt.ylabel('Loss')
plt.plot(self.averaged_loss_x, self.averaged_loss_y, label='Loss/Time (Avg)')
plt.plot()
plt.plot(self.global_steps, self.losses,
label='Loss/Time (Last %d)' % self.history_length,
alpha=.1, color='r')
plt.plot()
plt.legend()
plt.show()
plt.title('Loss for the last 100 Steps')
plt.xlabel('Global Step')
plt.ylabel('Loss')
plt.plot(self.global_steps, self.losses,
label='Loss/Time (Last %d)' % self.history_length, color='r')
plt.plot()
plt.legend()
plt.show()
# The notebook will be slowed down at the end of training if we plot the
# entire history of raw data. Plot only the last 100 steps of raw data,
# and the average of each 100 batches. Don't keep unused data.
self.global_steps = []
self.losses = []
self.learning_rates = []
def log_step(self, global_step, loss):
self.global_steps.append(global_step)
self.losses.append(loss)
def _update_averages(self, x_list, y_list,
averaged_data_x, averaged_data_y):
averaged_data_x.append(x_list[-1])
averaged_data_y.append(sum(y_list) / self.history_length)
```
## Now, we're going to start training our model.
This could take a while, so you might want to grab a coffee. Every 30 seconds of training, we're going to save a checkpoint to make sure we don't lose our progress. To monitor the progress of your training, feel free to stop the training every once in a while and run the inference cell to generate text with your model!
First, we will need to turn the plain text file into arrays of tokens (and, later, back). To do this we will use this token mapper helper class:
```
import string
from operator import itemgetter  # used by alphabet() and print() below
class TokenMapper(object):
def __init__(self):
self.token_mapping = {}
self.reverse_token_mapping = {}
def buildFromData(self, utf8_string, limit=0.00004):
print("Build token dictionary.")
total_num = len(utf8_string)
sorted_tokens = sorted(Counter(utf8_string).items(),  # utf8_string is already decoded
key=lambda x: -x[1])
# Filter tokens: Only allow printable characters (not control chars) and
# limit to ones that are reasonably common, i.e. skip strange esoteric
# characters in order to reduce the dictionary size.
filtered_tokens = filter(lambda t: t[0] in string.printable or
float(t[1])/total_num > limit, sorted_tokens)
tokens, counts = zip(*filtered_tokens)
self.token_mapping = dict(zip(tokens, range(len(tokens))))
for c in string.printable:
if c not in self.token_mapping:
print("Skipped token for: ", c)
self.reverse_token_mapping = {
val: key for key, val in self.token_mapping.items()}
print("Created dictionary: %d tokens"%len(self.token_mapping))
def mapchar(self, char):
if char in self.token_mapping:
return self.token_mapping[char]
else:
return self.token_mapping[' ']
def mapstring(self, utf8_string):
return [self.mapchar(c) for c in utf8_string]
def maptoken(self, token):
return self.reverse_token_mapping[token]
def maptokens(self, int_array):
return ''.join([self.reverse_token_mapping[c] for c in int_array])
def size(self):
return len(self.token_mapping)
def alphabet(self):
return ''.join([k for k,v in sorted(self.token_mapping.items(),key=itemgetter(1))])
def print(self):
for k,v in sorted(self.token_mapping.items(),key=itemgetter(1)): print(k, v)
def save(self, path):
with open(path, 'w') as json_file:  # json.dump writes text in Python 3
json.dump(self.token_mapping, json_file)
def restore(self, path):
with open(path, 'r') as json_file:
self.token_mapping = {}
self.token_mapping.update(json.load(json_file))
self.reverse_token_mapping = {val: key for key, val in self.token_mapping.items()}
```
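The core of `TokenMapper` is a frequency-ordered character-to-integer dictionary plus its inverse, which gives a lossless encode/decode round trip. In miniature:

```python
from collections import Counter

# Map each distinct character to an integer, most frequent first, and keep
# the reverse mapping for decoding.
text = "to be or not to be"
tokens = [t for t, _ in Counter(text).most_common()]
char_to_id = {c: i for i, c in enumerate(tokens)}
id_to_char = {i: c for c, i in char_to_id.items()}

encoded = [char_to_id[c] for c in text]
decoded = ''.join(id_to_char[i] for i in encoded)
```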
Now convert the raw input into a list of tokens.
```
# Clean the checkpoint directory and make a fresh one
!rm -rf {CHECKPOINT_DIR}
!mkdir {CHECKPOINT_DIR}
!ls -lt
chars_in_batch = (sequence_length * batch_size)
file_len = len(file_contents)
unique_sequential_batches = file_len // chars_in_batch
mapper = TokenMapper()
mapper.buildFromData(file_contents)
mapper.save(''.join([CHECKPOINT_DIR, 'token_mapping.json']))
input_values = mapper.mapstring(file_contents)
```
### First, we'll build our neural network and add our training operations to the Tensorflow graph.
If you're continuing training after testing your generator, run the next three cells.
```
tf.reset_default_graph()
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
print('Constructing model...')
model = RNN(
rnn_num_layers=num_layers,
rnn_state_size=state_size,
num_classes=mapper.size(),
rnn_batch_size=batch_size,
rnn_sequence_length=sequence_length)
model.build_training_model(0.05, np.asarray(input_values))
print('Constructed model successfully.')
print('Setting up training session...')
neutral_target_weights = tf.constant(
np.ones(model.batch_shape),
tf.float32
)
loss = get_loss(model.logits, model.on_gpu_targets, neutral_target_weights)
global_step = tf.get_variable('global_step', shape=(), trainable=False,
dtype=tf.int32)
training_step, computed_learning_rate = get_optimizer(
loss,
learning_rate,
gradient_clipping,
global_step,
steps_per_epoch,
learning_rate_decay
)
```
The supervisor will manage the training flow and checkpointing.
```
# Create a supervisor that will checkpoint the model in the CHECKPOINT_DIR
sv = tf.train.Supervisor(
logdir=CHECKPOINT_DIR,
global_step=global_step,
save_model_secs=30)
print('Training session ready.')
```
### This next cell will begin the training cycle.
First, we will attempt to pick up training where we left off, if a previous checkpoint exists, then continue the training process.
```
from datetime import datetime
start_time = datetime.now()
with sv.managed_session(config=config) as sess:
print('Training supervisor successfully initialized all variables.')
if not file_len:
raise ValueError('To continue, you must upload training data.')
elif file_len < chars_in_batch:
raise ValueError('To continue, you must upload a larger set of data.')
plotter = LossPlotter(100)
step_number = sess.run(global_step)
zero_state = sess.run([model.initial_state])
max_batch_index = (unique_sequential_batches - 1) * chars_in_batch
while not sv.should_stop() and (datetime.now()-start_time).seconds/60 < training_time_minutes:
feed_dict = {
model.batch_index: randint(0, max_batch_index),
model.initial_state: zero_state
}
[_, _, training_loss, step_number, current_learning_rate, _] = sess.run(
[model.on_gpu_sequences,
model.on_gpu_targets,
loss,
global_step,
computed_learning_rate,
training_step],
feed_dict)
plotter.log_step(step_number, training_loss)
if step_number % 100 == 0:
clear_output(True)
plotter.draw_plots()
print('Latest checkpoint is: %s' %
tf.train.latest_checkpoint(CHECKPOINT_DIR))
print('Learning Rate is: %f' %
current_learning_rate)
if step_number % 10 == 0:
print('global step %d, loss=%f' % (step_number, training_loss))
clear_output(True)
print('Training completed in HH:MM:SS = ', datetime.now()-start_time)
print('Latest checkpoint is: %s' %
tf.train.latest_checkpoint(CHECKPOINT_DIR))
```
## Now, we're going to generate some text!
Here, we'll use the **Beam Search** algorithm to generate some text with our trained model. Beam Search picks N possible next options from each of the current options at every step. This way, if the generator picks an item leading to a bad decision down the line, it can toss the bad result out and keep going with a more likely one.
```
class BeamSearchCandidate(object):
"""Represents a node within the search space during Beam Search.
Attributes:
state: The resulting RNN state after the given sequence has been generated.
sequence: The sequence of selections leading to this node.
probability: The probability of the sequence occurring, computed as the sum
of the probability of each character in the sequence at its respective
step.
"""
def __init__(self, init_state, sequence, probability):
self.state = init_state
self.sequence = sequence
self.probability = probability
def search_from(self, tf_sess, rnn_model, temperature, num_options):
"""Expands the num_options most likely next elements in the sequence.
Args:
tf_sess: The Tensorflow session containing the rnn_model.
rnn_model: The RNN to use to generate the next element in the sequence.
temperature: Modifies the probabilities of each character, placing
more emphasis on higher probabilities as the value approaches 0.
num_options: How many potential next options to expand from this one.
Returns: A list of BeamSearchCandidate objects descended from this node.
"""
expanded_set = []
feed = {rnn_model.input_symbol: np.array([[self.sequence[-1]]]),
rnn_model.initial_state: self.state,
rnn_model.temperature: temperature,
rnn_model.num_options: num_options}
[predictions, probabilities, new_state] = tf_sess.run(
[rnn_model.output_labels,
rnn_model.normalized_probs,
rnn_model.new_state], feed)
# Get the indices of the num_beams next picks
picks = [predictions[0][x] for x in range(len(predictions[0]))]
for new_char in picks:
new_seq = deepcopy(self.sequence)
new_seq.append(new_char)
expanded_set.append(
BeamSearchCandidate(new_state, new_seq,
probabilities[0][0][new_char] + self.probability))
return expanded_set
def __eq__(self, other):
return self.sequence == other.sequence
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(tuple(self.sequence))  # sequence is a list, so hash a tuple copy
def beam_search_generate_sequence(tf_sess, rnn_model, primer, temperature=0.85,
termination_condition=None, num_beams=5):
"""Implements a sequence generator using Beam Search.
Args:
tf_sess: The Tensorflow session containing the rnn_model.
rnn_model: The RNN to use to generate the next element in the sequence.
temperature: Controls how 'Creative' the generated sequence is. Values
close to 0 tend to generate the most likely sequence, while values
closer to 1 generate more original sequences. Acceptable values are
within (0, 1].
termination_condition: A function taking one parameter, a list of
integers, that returns True when a condition is met that signals to the
RNN to return what it has generated so far.
num_beams: The number of possible sequences to keep at each step of the
generation process.
Returns: A list of at most num_beams BeamSearchCandidate objects.
"""
candidates = []
rnn_current_state = tf_sess.run([rnn_model.initial_state])  # use the passed-in session, not a global
#Initialize the state for the primer
for primer_val in primer[:-1]:
feed = {rnn_model.input_symbol: np.array([[primer_val]]),
rnn_model.initial_state: rnn_current_state
}
[rnn_current_state] = tf_sess.run([rnn_model.new_state], feed)
candidates.append(BeamSearchCandidate(rnn_current_state, primer, num_beams))
while True not in [termination_condition(x.sequence) for x in candidates]:
new_candidates = []
for candidate in candidates:
expanded_candidates = candidate.search_from(
tf_sess, rnn_model, temperature, num_beams)
for new in expanded_candidates:
if new not in new_candidates:
#do not reevaluate duplicates
new_candidates.append(new)
candidates = sorted(new_candidates,
key=lambda x: x.probability, reverse=True)[:num_beams]
return [c for c in candidates if termination_condition(c.sequence)]
```
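The `temperature` placeholder divides the logits before the softmax, so low temperatures concentrate probability on the most likely character while values near 1 keep the distribution flatter. A stdlib sketch:

```python
import math

# Temperature-scaled softmax: divide logits by the temperature first.
def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    top = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - top) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
sharp = softmax_with_temperature(logits, 0.2)   # close to one-hot
smooth = softmax_with_temperature(logits, 1.0)  # closer to uniform
```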
Input something to start your generated text with, and set how many characters long you want the text to be.
"Creativity" refers to how much emphasis your neural network puts on matching a pattern. If you notice looping in the output, try raising this value. If your output seems too random, try lowering it a bit.
If the results don't look too great in general, run the three training cells again for a bit longer. The lower your loss, the more closely your generated text will match the training data.
```
tf.reset_default_graph()
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.InteractiveSession(config=config)
model = RNN(
rnn_num_layers=num_layers,
rnn_state_size=state_size,
num_classes=mapper.size(),
rnn_batch_size=1,
rnn_sequence_length=1)
model.build_inference_model()
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver(tf.global_variables())
ckpt = tf.train.latest_checkpoint(CHECKPOINT_DIR)
saver.restore(sess, ckpt)
def gen(start_with, pred, creativity):
int_array = mapper.mapstring(start_with)
candidates = beam_search_generate_sequence(
sess, model, int_array, temperature=creativity,
termination_condition=pred,
num_beams=1)
gentext = mapper.maptokens(candidates[0].sequence)
return gentext
def lengthlimit(n):
return lambda text: len(text)>n
def sentences(n):
return lambda text: mapper.maptokens(text).count(".")>=n
def paragraph():
return lambda text: mapper.maptokens(text).count("\n")>0
length_of_generated_text = 2000
creativity = 0.85 # Should be greater than 0 but less than 1
print(gen(" ANTONIO: Who is it ?", lengthlimit(length_of_generated_text), creativity))
```
## Let's save a copy of our trained RNN so we can do all kinds of cool things with it later.
```
save_model_to_drive = False ## Set this to true to save directly to Google Drive.
def save_model_hyperparameters(path):
with open(path, 'w') as json_file:
model_params = {
'num_layers': model.num_layers,
'state_size': model.state_size,
'num_classes': model.num_classes
}
json.dump(model_params, json_file)
def save_to_drive(title, content):
# Install the PyDrive wrapper & import libraries.
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
newfile = drive.CreateFile({'title': title})
newfile.SetContentFile(content)
newfile.Upload()
print('Uploaded file with ID %s as %s'% (newfile.get('id'),
archive_name))
archive_name = ''.join([file_name,'_seedbank_char-rnn.zip'])
latest_model = tf.train.latest_checkpoint(CHECKPOINT_DIR).split('/')[2]
checkpoints_archive_path = ''.join(['./exports/',archive_name])
if not latest_model:
raise ValueError('You must train a model before you can export one.')
%system mkdir exports
%rm -f {checkpoints_archive_path}
mapper.save(''.join([CHECKPOINT_DIR, 'token_mapping.json']))
save_model_hyperparameters(''.join([CHECKPOINT_DIR, 'model_attributes.json']))
%system zip '{checkpoints_archive_path}' -@ '{CHECKPOINT_DIR}checkpoint' \
'{CHECKPOINT_DIR}token_mapping.json' \
'{CHECKPOINT_DIR}model_attributes.json' \
'{CHECKPOINT_DIR}{latest_model}.'*
if save_model_to_drive:
save_to_drive(archive_name, checkpoints_archive_path)
else:
files.download(checkpoints_archive_path)
```
# Simulations
In this notebook we will show four methods for incorporating new simulations into Coba in order of easy to hard:
1. From an Openml.org dataset with **OpenmlSimulation**
2. From local data sets with **CsvSimulation**, **ArffSimulation**, **LibsvmSimulation**, and **ManikSimulation**.
3. From Python function definitions with **LambdaSimulation**
4. From your own class that implements the **Simulation** interface
## Simulations From Openml.org
Perhaps the easiest way to incorporate new Simulations is to load them from Openml.org. Openml.org is an online repository of machine learning data sets which currently hosts over 21,000 datasets. Using dataset ids, Coba can tap into this repository and download these datasets to create Simulations.
To get a sense of how this works let's say we want to build a simulation from the Covertype data set. We can [do a dataset search](https://www.openml.org/search?type=data) on Openml.org to see if this data set is hosted. [This search](https://www.openml.org/search?q=covertype&type=data) finds several data sets and we simply pick [the first one](https://www.openml.org/d/180). On the dataset's landing page we can look at the URL -- https://www.openml.org/d/180 -- to get the dataset's id of 180. Now, all we have to do to run an experiment with the Covertype data set is:
```
from coba.simulations import OpenmlSimulation
from coba.learners import RandomLearner, VowpalLearner
from coba.benchmarks import Benchmark
Benchmark([OpenmlSimulation(180)], take=1000).evaluate([RandomLearner(), VowpalLearner(epsilon=0.1)]).plot_learners()
```
This same procedure can be repeated for any dataset on Openml.org.
## Simulations From Local Datasets
The next easiest way to incorporate new Simulations is to load them from a local dataset. Coba can create simulations from datasets in the following formats:
* CSV
* ARFF (i.e., https://waikato.github.io/weka-wiki/formats_and_processing/arff_stable/)
* Libsvm (e.g., https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass.html)
* Manik (e.g., http://manikvarma.org/downloads/XC/XMLRepository.html)
For example, we may want to test against the mnist dataset. This dataset can be downloaded from Libsvm [here](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass.html#mnist). Once downloaded, we can use it like this:
```python
from coba.simulations import LibsvmSimulation
from coba.learners import RandomLearner, VowpalLearner
from coba.benchmarks import Benchmark
mnist = [LibsvmSimulation(<path to downloaded mnist>)]
Benchmark(mnist, take=1000).evaluate([RandomLearner(), VowpalLearner(epsilon=0.1)]).plot_learners()
```
The complete list of classes for local simulations is:
* `CsvSimulation(source:str, label_col:Union[str,int], with_header:bool=True)`
* `ArffSimulation(source:str, label_col:Union[str,int])`
* `LibsvmSimulation(source:str)`
* `ManikSimulation(source:str)`
## Simulations from Function Definitions
A third method for creating simulations for use in experiments is via function definitions.
This can be done with **LambdaSimulation** which takes three function definitions -- describing how to generate contexts, actions and rewards -- and the number of interactions you'd like the simulation to have. An example of a **LambdaSimulation** generating random contexts and actions with a linear reward function in [0,1] is provided:
```
from typing import Sequence
from coba.random import CobaRandom
from coba.simulations import LambdaSimulation, Context, Action
from coba.learners import RandomLearner, VowpalLearner
from coba.benchmarks import Benchmark
r = CobaRandom()
n_interactions = 1000
def context(index: int) -> Context:
return tuple(r.randoms(5))
def actions(index: int, context: Context) -> Sequence[Action]:
actions = [ r.randoms(5) for _ in range(3) ]
return [ tuple(a/sum(action) for a in action) for action in actions ]
def rewards(index: int, context: Context, action: Action) -> float:
return sum(c*a for c,a in zip(context,action))
simulations = [LambdaSimulation(n_interactions, context, actions, rewards)]
Benchmark(simulations).evaluate([RandomLearner(), VowpalLearner()]).plot_learners()
```
## Simulations from Scratch
The final, and most involved, method for creating new simulations in Coba is to write your own from scratch. This might be necessary if you need to ingest a format Coba doesn't already support, or if your simulation must track internal state between interactions. By creating your own Simulation there really is no limit to the functionality you can employ. To make your own simulation you'll first need to know a few simple classes/interfaces. We'll start with the Simulation interface.
### Simulation Interface
A Simulation in Coba is any class with the following interface:
```python
class Simulation:
@property
@abstractmethod
def interactions(self) -> Sequence[Interaction]:
...
@property
@abstractmethod
def reward(self) -> Reward:
...
```
So long as your class satisfies this interface it should be completely interoperable with Coba. However, if you have access to Coba's classes, there is rarely a reason to implement this interface yourself; in practice it should always suffice to use MemorySimulation (more on this soon).
### Interaction Interface
As seen above the Simulation interface relies on the Interaction interface:
```python
class Interaction:
@property
@abstractmethod
def key(self) -> Key:
...
@property
@abstractmethod
def context(self) -> Context:
...
@property
@abstractmethod
def actions(self) -> Sequence[Action]:
...
```
Once again, while one can satisfy this interface from scratch, we recommend developers simply use Coba's Interaction class. The type hints Key, Context, and Action impose no constraints; we provide them purely for semantic interpretation, and you are free to return anything you'd like as a key, context, or collection of actions.
### Reward Interface
The final interface that Simulations depend upon is the Reward interface:
```python
class Reward:
@abstractmethod
def observe(choices: Sequence[Tuple[Key,Context,Action]]) -> Sequence[float]:
...
```
Out of the box, Coba provides a ClassificationReward which returns 1 or 0 depending on whether the Action is the true label associated with the observation uniquely identified by Key. For example, if a simulation is loaded from a CSV file, then Key will be the row number of the observation we are trying to label. Coba also provides MemoryReward, which is useful when Key and Action are hashable and all rewards can be computed up front.
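As a rough illustration of the pattern, a MemoryReward-style lookup might be sketched as follows. This is an assumption about the behavior, not Coba's actual implementation, and the class name `SimpleMemoryReward` is hypothetical:

```python
from typing import Any, Sequence, Tuple

# Hedged sketch: a table of precomputed rewards keyed on (key, action) pairs.
class SimpleMemoryReward:
    def __init__(self, triples: Sequence[Tuple[Any, Any, float]]):
        # triples: an iterable of (key, action, reward)
        self._table = {(k, a): r for k, a, r in triples}

    def observe(self, choices) -> Sequence[float]:
        # choices: a sequence of (key, context, action); context is unused here
        return [self._table[(key, action)] for key, _context, action in choices]

reward = SimpleMemoryReward([(1, "a", 1.0), (1, "b", 0.0), (2, "a", 0.5)])
print(reward.observe([(1, None, "a"), (2, None, "a")]))  # [1.0, 0.5]
```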
### Source Interface
Once you have created your custom Simulation there is one more interface to contend with, the Source interface:
```python
class Source[Simulation]:
@abstractmethod
def read(self) -> Simulation:
...
def __repr__(self) -> str:
...
```
When performing experiments Coba's benchmark actually expects to be given a Source that produces a Simulation rather than an actual Simulation. All standard simulations such as OpenmlSimulation, CsvSimulation, and LambdaSimulation are actually Sources.
The source pattern allows Simulations to be lazy loaded in background processes thereby saving time and resources. Converting a custom Simulation to a Source is fairly easy. Below is an example pattern that could be followed:
```
from coba.simulations import Interaction, MemoryReward
from coba.learners import RandomLearner, VowpalLearner
from coba.benchmarks import Benchmark
class MySimulation:
class MyLoadedSimulation:
@property
def interactions(self):
return [ Interaction(1, (1,1), [1,2,3]), Interaction(2, (2,2), [1,2,3]) ]
@property
def reward(self):
return MemoryReward([ (1,1,1), (1,2,2), (1,3,3), (2,1,2), (2,2,100), (2,3,-100) ])
def read(self):
return MySimulation.MyLoadedSimulation()
def __repr__(self):
return "MySimulation"
Benchmark([MySimulation()]).evaluate([RandomLearner(), VowpalLearner()]).plot_learners()
```
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Recurrent Neural Networks (RNN) with Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/keras/rnn"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/snapshot-keras/site/en/guide/keras/rnn.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/keras-team/keras-io/blob/master/guides/working_with_rnns.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/keras/rnn.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Introduction
Recurrent neural networks (RNN) are a class of neural networks that is powerful for
modeling sequence data such as time series or natural language.
Schematically, a RNN layer uses a `for` loop to iterate over the timesteps of a
sequence, while maintaining an internal state that encodes information about the
timesteps it has seen so far.
The Keras RNN API is designed with a focus on:
- **Ease of use**: the built-in `keras.layers.RNN`, `keras.layers.LSTM`,
`keras.layers.GRU` layers enable you to quickly build recurrent models without
having to make difficult configuration choices.
- **Ease of customization**: You can also define your own RNN cell layer (the inner
part of the `for` loop) with custom behavior, and use it with the generic
`keras.layers.RNN` layer (the `for` loop itself). This allows you to quickly
prototype different research ideas in a flexible way with minimal code.
## Setup
```
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
```
## Built-in RNN layers: a simple example
There are three built-in RNN layers in Keras:
1. `keras.layers.SimpleRNN`, a fully-connected RNN where the output from previous
timestep is to be fed to next timestep.
2. `keras.layers.GRU`, first proposed in
[Cho et al., 2014](https://arxiv.org/abs/1406.1078).
3. `keras.layers.LSTM`, first proposed in
[Hochreiter & Schmidhuber, 1997](https://www.bioinf.jku.at/publications/older/2604.pdf).
In early 2015, Keras had the first reusable open-source Python implementations of LSTM
and GRU.
Here is a simple example of a `Sequential` model that processes sequences of integers,
embeds each integer into a 64-dimensional vector, then processes the sequence of
vectors using a `LSTM` layer.
```
model = keras.Sequential()
# Add an Embedding layer expecting input vocab of size 1000, and
# output embedding dimension of size 64.
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# Add a LSTM layer with 128 internal units.
model.add(layers.LSTM(128))
# Add a Dense layer with 10 units.
model.add(layers.Dense(10))
model.summary()
```
Built-in RNNs support a number of useful features:
- Recurrent dropout, via the `dropout` and `recurrent_dropout` arguments
- Ability to process an input sequence in reverse, via the `go_backwards` argument
- Loop unrolling (which can lead to a large speedup when processing short sequences on
CPU), via the `unroll` argument
- ...and more.
For more information, see the
[RNN API documentation](https://keras.io/api/layers/recurrent_layers/).
## Outputs and states
By default, the output of a RNN layer contains a single vector per sample. This vector
is the RNN cell output corresponding to the last timestep, containing information
about the entire input sequence. The shape of this output is `(batch_size, units)`
where `units` corresponds to the `units` argument passed to the layer's constructor.
A RNN layer can also return the entire sequence of outputs for each sample (one vector
per timestep per sample), if you set `return_sequences=True`. The shape of this output
is `(batch_size, timesteps, units)`.
```
model = keras.Sequential()
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# The output of GRU will be a 3D tensor of shape (batch_size, timesteps, 256)
model.add(layers.GRU(256, return_sequences=True))
# The output of SimpleRNN will be a 2D tensor of shape (batch_size, 128)
model.add(layers.SimpleRNN(128))
model.add(layers.Dense(10))
model.summary()
```
In addition, a RNN layer can return its final internal state(s). The returned states
can be used to resume the RNN execution later, or
[to initialize another RNN](https://arxiv.org/abs/1409.3215).
This setting is commonly used in the
encoder-decoder sequence-to-sequence model, where the encoder final state is used as
the initial state of the decoder.
To configure a RNN layer to return its internal state, set the `return_state` parameter
to `True` when creating the layer. Note that `LSTM` has 2 state tensors, but `GRU`
only has one.
To configure the initial state of the layer, just call the layer with additional
keyword argument `initial_state`.
Note that the shape of the state needs to match the unit size of the layer, like in the
example below.
```
encoder_vocab = 1000
decoder_vocab = 2000
encoder_input = layers.Input(shape=(None,))
encoder_embedded = layers.Embedding(input_dim=encoder_vocab, output_dim=64)(
encoder_input
)
# Return states in addition to output
output, state_h, state_c = layers.LSTM(64, return_state=True, name="encoder")(
encoder_embedded
)
encoder_state = [state_h, state_c]
decoder_input = layers.Input(shape=(None,))
decoder_embedded = layers.Embedding(input_dim=decoder_vocab, output_dim=64)(
decoder_input
)
# Pass the 2 states to a new LSTM layer, as initial state
decoder_output = layers.LSTM(64, name="decoder")(
decoder_embedded, initial_state=encoder_state
)
output = layers.Dense(10)(decoder_output)
model = keras.Model([encoder_input, decoder_input], output)
model.summary()
```
## RNN layers and RNN cells
In addition to the built-in RNN layers, the RNN API also provides cell-level APIs.
Unlike RNN layers, which process whole batches of input sequences, the RNN cell only
processes a single timestep.
The cell is the inside of the `for` loop of a RNN layer. Wrapping a cell inside a
`keras.layers.RNN` layer gives you a layer capable of processing batches of
sequences, e.g. `RNN(LSTMCell(10))`.
Mathematically, `RNN(LSTMCell(10))` produces the same result as `LSTM(10)`. In fact,
the implementation of this layer in TF v1.x was just creating the corresponding RNN
cell and wrapping it in a RNN layer. However, using the built-in `GRU` and `LSTM`
layers enables the use of CuDNN, and you may see better performance.
There are three built-in RNN cells, each of them corresponding to the matching RNN
layer.
- `keras.layers.SimpleRNNCell` corresponds to the `SimpleRNN` layer.
- `keras.layers.GRUCell` corresponds to the `GRU` layer.
- `keras.layers.LSTMCell` corresponds to the `LSTM` layer.
The cell abstraction, together with the generic `keras.layers.RNN` class, make it
very easy to implement custom RNN architectures for your research.
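As a quick, hedged sketch of this equivalence (assuming the TensorFlow environment set up earlier), a cell wrapped in `keras.layers.RNN` produces outputs of the same shape as the corresponding built-in layer:

```python
import numpy as np
from tensorflow.keras import layers

# LSTM(units) and RNN(LSTMCell(units)) are mathematically equivalent,
# but only the built-in layer can dispatch to the fused CuDNN kernel.
built_in = layers.LSTM(8)
cell_based = layers.RNN(layers.LSTMCell(8))

x = np.random.random((4, 10, 16)).astype(np.float32)  # (batch, timesteps, features)
y1, y2 = built_in(x), cell_based(x)
assert tuple(y1.shape) == tuple(y2.shape) == (4, 8)
```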
## Cross-batch statefulness
When processing very long sequences (possibly infinite), you may want to use the
pattern of **cross-batch statefulness**.
Normally, the internal state of a RNN layer is reset every time it sees a new batch
(i.e. every sample seen by the layer is assumed to be independent of the past). The
layer will only maintain a state while processing a given sample.
If you have very long sequences though, it is useful to break them into shorter
sequences, and to feed these shorter sequences sequentially into a RNN layer without
resetting the layer's state. That way, the layer can retain information about the
entirety of the sequence, even though it's only seeing one sub-sequence at a time.
You can do this by setting `stateful=True` in the constructor.
If you have a sequence `s = [t0, t1, ... t1546, t1547]`, you would split it into e.g.
```
s1 = [t0, t1, ... t100]
s2 = [t101, ... t201]
...
s16 = [t1501, ... t1547]
```
Then you would process it via:
```python
lstm_layer = layers.LSTM(64, stateful=True)
for s in sub_sequences:
output = lstm_layer(s)
```
When you want to clear the state, you can use `layer.reset_states()`.
> Note: In this setup, sample `i` in a given batch is assumed to be the continuation of
sample `i` in the previous batch. This means that all batches should contain the same
number of samples (batch size). E.g. if a batch contains `[sequence_A_from_t0_to_t100,
sequence_B_from_t0_to_t100]`, the next batch should contain
`[sequence_A_from_t101_to_t200, sequence_B_from_t101_to_t200]`.
Here is a complete example:
```
paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)
lstm_layer = layers.LSTM(64, stateful=True)
output = lstm_layer(paragraph1)
output = lstm_layer(paragraph2)
output = lstm_layer(paragraph3)
# reset_states() will reset the cached state to the original initial_state.
# If no initial_state was provided, zero-states will be used by default.
lstm_layer.reset_states()
```
### RNN State Reuse
<a id="rnn_state_reuse"></a>
The recorded states of the RNN layer are not included in the `layer.weights()`. If you
would like to reuse the state from a RNN layer, you can retrieve the states value by
`layer.states` and use it as the
initial state for a new layer via the Keras functional API like `new_layer(inputs,
initial_state=layer.states)`, or model subclassing.
Note also that a Sequential model cannot be used in this case, since it only
supports layers with a single input and output; the extra initial-state input makes
it impossible to use here.
```
paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)
lstm_layer = layers.LSTM(64, stateful=True)
output = lstm_layer(paragraph1)
output = lstm_layer(paragraph2)
existing_state = lstm_layer.states
new_lstm_layer = layers.LSTM(64)
new_output = new_lstm_layer(paragraph3, initial_state=existing_state)
```
## Bidirectional RNNs
For sequences other than time series (e.g. text), it is often the case that a RNN model
can perform better if it not only processes sequence from start to end, but also
backwards. For example, to predict the next word in a sentence, it is often useful to
have the context around the word, not only just the words that come before it.
Keras provides an easy API for you to build such bidirectional RNNs: the
`keras.layers.Bidirectional` wrapper.
```
model = keras.Sequential()
model.add(
layers.Bidirectional(layers.LSTM(64, return_sequences=True), input_shape=(5, 10))
)
model.add(layers.Bidirectional(layers.LSTM(32)))
model.add(layers.Dense(10))
model.summary()
```
Under the hood, `Bidirectional` will copy the RNN layer passed in, and flip the
`go_backwards` field of the newly copied layer, so that it will process the inputs in
reverse order.
The output of the `Bidirectional` RNN will be, by default, the concatenation of the
forward layer output and the backward layer output. If you need a different merging
behavior, e.g. summation, change the `merge_mode` parameter in the `Bidirectional` wrapper
constructor. For more details about `Bidirectional`, please check
[the API docs](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Bidirectional/).
## Performance optimization and CuDNN kernels
In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN
kernels by default when a GPU is available. With this change, the prior
`keras.layers.CuDNNLSTM/CuDNNGRU` layers have been deprecated, and you can build your
model without worrying about the hardware it will run on.
Since the CuDNN kernel is built with certain assumptions, this means the layer **will
not be able to use the CuDNN kernel if you change the defaults of the built-in LSTM or
GRU layers**. E.g.:
- Changing the `activation` function from `tanh` to something else.
- Changing the `recurrent_activation` function from `sigmoid` to something else.
- Using `recurrent_dropout` > 0.
- Setting `unroll` to True, which forces LSTM/GRU to decompose the inner
`tf.while_loop` into an unrolled `for` loop.
- Setting `use_bias` to False.
- Using masking when the input data is not strictly right padded (if the mask
corresponds to strictly right padded data, CuDNN can still be used. This is the most
common case).
For the detailed list of constraints, please see the documentation for the
[LSTM](https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTM/) and
[GRU](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GRU/) layers.
### Using CuDNN kernels when available
Let's build a simple LSTM model to demonstrate the performance difference.
We'll use as input sequences the sequence of rows of MNIST digits (treating each row of
pixels as a timestep), and we'll predict the digit's label.
```
batch_size = 64
# Each MNIST image batch is a tensor of shape (batch_size, 28, 28).
# Each input sequence will be of size (28, 28) (height is treated like time).
input_dim = 28
units = 64
output_size = 10 # labels are from 0 to 9
# Build the RNN model
def build_model(allow_cudnn_kernel=True):
# CuDNN is only available at the layer level, and not at the cell level.
# This means `LSTM(units)` will use the CuDNN kernel,
# while RNN(LSTMCell(units)) will run on non-CuDNN kernel.
if allow_cudnn_kernel:
# The LSTM layer with default options uses CuDNN.
lstm_layer = keras.layers.LSTM(units, input_shape=(None, input_dim))
else:
# Wrapping a LSTMCell in a RNN layer will not use CuDNN.
lstm_layer = keras.layers.RNN(
keras.layers.LSTMCell(units), input_shape=(None, input_dim)
)
model = keras.models.Sequential(
[
lstm_layer,
keras.layers.BatchNormalization(),
keras.layers.Dense(output_size),
]
)
return model
```
Let's load the MNIST dataset:
```
mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
sample, sample_label = x_train[0], y_train[0]
```
Let's create a model instance and train it.
We choose `sparse_categorical_crossentropy` as the loss function for the model. The
output of the model has a shape of `[batch_size, 10]`. The target for the model is an
integer vector, where each integer is in the range 0 to 9.
```
model = build_model(allow_cudnn_kernel=True)
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer="sgd",
metrics=["accuracy"],
)
model.fit(
x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1
)
```
Now, let's compare to a model that does not use the CuDNN kernel:
```
noncudnn_model = build_model(allow_cudnn_kernel=False)
noncudnn_model.set_weights(model.get_weights())
noncudnn_model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer="sgd",
metrics=["accuracy"],
)
noncudnn_model.fit(
x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1
)
```
When running on a machine with a NVIDIA GPU and CuDNN installed,
the model built with CuDNN is much faster to train compared to the
model that uses the regular TensorFlow kernel.
The same CuDNN-enabled model can also be used to run inference in a CPU-only
environment. The `tf.device` annotation below is just forcing the device placement.
The model will run on CPU by default if no GPU is available.
You simply don't have to worry about the hardware you're running on anymore. Isn't that
pretty cool?
```
import matplotlib.pyplot as plt
with tf.device("CPU:0"):
cpu_model = build_model(allow_cudnn_kernel=True)
cpu_model.set_weights(model.get_weights())
result = tf.argmax(cpu_model.predict_on_batch(tf.expand_dims(sample, 0)), axis=1)
print(
"Predicted result is: %s, target result is: %s" % (result.numpy(), sample_label)
)
plt.imshow(sample, cmap=plt.get_cmap("gray"))
```
## RNNs with list/dict inputs, or nested inputs
Nested structures allow implementers to include more information within a single
timestep. For example, a video frame could have audio and video input at the same
time. The data shape in this case could be:
`[batch, timestep, {"video": [height, width, channel], "audio": [frequency]}]`
In another example, handwriting data could have both coordinates x and y for the
current position of the pen, as well as pressure information. So the data
representation could be:
`[batch, timestep, {"location": [x, y], "pressure": [force]}]`
The following code provides an example of how to build a custom RNN cell that accepts
such structured inputs.
### Define a custom cell that supports nested input/output
See [Making new Layers & Models via subclassing](https://www.tensorflow.org/guide/keras/custom_layers_and_models/)
for details on writing your own layers.
```
class NestedCell(keras.layers.Layer):
def __init__(self, unit_1, unit_2, unit_3, **kwargs):
self.unit_1 = unit_1
self.unit_2 = unit_2
self.unit_3 = unit_3
self.state_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]
self.output_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]
super(NestedCell, self).__init__(**kwargs)
def build(self, input_shapes):
# expect input_shape to contain 2 items, [(batch, i1), (batch, i2, i3)]
i1 = input_shapes[0][1]
i2 = input_shapes[1][1]
i3 = input_shapes[1][2]
self.kernel_1 = self.add_weight(
shape=(i1, self.unit_1), initializer="uniform", name="kernel_1"
)
self.kernel_2_3 = self.add_weight(
shape=(i2, i3, self.unit_2, self.unit_3),
initializer="uniform",
name="kernel_2_3",
)
def call(self, inputs, states):
# inputs should be in [(batch, input_1), (batch, input_2, input_3)]
# state should be in shape [(batch, unit_1), (batch, unit_2, unit_3)]
input_1, input_2 = tf.nest.flatten(inputs)
s1, s2 = states
output_1 = tf.matmul(input_1, self.kernel_1)
output_2_3 = tf.einsum("bij,ijkl->bkl", input_2, self.kernel_2_3)
state_1 = s1 + output_1
state_2_3 = s2 + output_2_3
output = (output_1, output_2_3)
new_states = (state_1, state_2_3)
return output, new_states
def get_config(self):
return {"unit_1": self.unit_1, "unit_2": self.unit_2, "unit_3": self.unit_3}
```
### Build a RNN model with nested input/output
Let's build a Keras model that uses a `keras.layers.RNN` layer and the custom cell
we just defined.
```
unit_1 = 10
unit_2 = 20
unit_3 = 30
i1 = 32
i2 = 64
i3 = 32
batch_size = 64
num_batches = 10
timestep = 50
cell = NestedCell(unit_1, unit_2, unit_3)
rnn = keras.layers.RNN(cell)
input_1 = keras.Input((None, i1))
input_2 = keras.Input((None, i2, i3))
outputs = rnn((input_1, input_2))
model = keras.models.Model([input_1, input_2], outputs)
model.compile(optimizer="adam", loss="mse", metrics=["accuracy"])
```
### Train the model with randomly generated data
Since there isn't a good candidate dataset for this model, we use random Numpy data for
demonstration.
```
input_1_data = np.random.random((batch_size * num_batches, timestep, i1))
input_2_data = np.random.random((batch_size * num_batches, timestep, i2, i3))
target_1_data = np.random.random((batch_size * num_batches, unit_1))
target_2_data = np.random.random((batch_size * num_batches, unit_2, unit_3))
input_data = [input_1_data, input_2_data]
target_data = [target_1_data, target_2_data]
model.fit(input_data, target_data, batch_size=batch_size)
```
With the Keras `keras.layers.RNN` layer, you are only expected to define the math
logic for an individual step within the sequence, and the `keras.layers.RNN` layer
will handle the sequence iteration for you. It's an incredibly powerful way to quickly
prototype new kinds of RNNs (e.g. a LSTM variant).
For more details, please visit the [API docs](https://www.tensorflow.org/api_docs/python/tf/keras/layers/RNN/).
(tune-mnist-keras)=
# Using Keras & TensorFlow with Tune
```{image} /images/tf_keras_logo.jpeg
:align: center
:alt: Keras & TensorFlow Logo
:height: 120px
:target: https://keras.io
```
```{contents}
:backlinks: none
:local: true
```
## Example
```
import argparse
import os
from filelock import FileLock
from tensorflow.keras.datasets import mnist
import ray
from ray import tune
from ray.tune.schedulers import AsyncHyperBandScheduler
from ray.tune.integration.keras import TuneReportCallback
def train_mnist(config):
# https://github.com/tensorflow/tensorflow/issues/32159
import tensorflow as tf
batch_size = 128
num_classes = 10
epochs = 12
with FileLock(os.path.expanduser("~/.data.lock")):
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential(
[
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(config["hidden"], activation="relu"),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(num_classes, activation="softmax"),
]
)
model.compile(
loss="sparse_categorical_crossentropy",
optimizer=tf.keras.optimizers.SGD(lr=config["lr"], momentum=config["momentum"]),
metrics=["accuracy"],
)
model.fit(
x_train,
y_train,
batch_size=batch_size,
epochs=epochs,
verbose=0,
validation_data=(x_test, y_test),
callbacks=[TuneReportCallback({"mean_accuracy": "accuracy"})],
)
def tune_mnist(num_training_iterations):
sched = AsyncHyperBandScheduler(
time_attr="training_iteration", max_t=400, grace_period=20
)
analysis = tune.run(
train_mnist,
name="exp",
scheduler=sched,
metric="mean_accuracy",
mode="max",
stop={"mean_accuracy": 0.99, "training_iteration": num_training_iterations},
num_samples=10,
resources_per_trial={"cpu": 2, "gpu": 0},
config={
"threads": 2,
"lr": tune.uniform(0.001, 0.1),
"momentum": tune.uniform(0.1, 0.9),
"hidden": tune.randint(32, 512),
},
)
print("Best hyperparameters found were: ", analysis.best_config)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--smoke-test", action="store_true", help="Finish quickly for testing"
)
parser.add_argument(
"--server-address",
type=str,
default=None,
required=False,
help="The address of server to connect to if using " "Ray Client.",
)
args, _ = parser.parse_known_args()
if args.smoke_test:
ray.init(num_cpus=4)
elif args.server_address:
ray.init(f"ray://{args.server_address}")
tune_mnist(num_training_iterations=5 if args.smoke_test else 300)
```
## More Keras and TensorFlow Examples
- {doc}`/tune/examples/includes/pbt_memnn_example`: Example of training a Memory NN on bAbI with Keras using PBT.
- {doc}`/tune/examples/includes/tf_mnist_example`: Converts the Advanced TF2.0 MNIST example to use Tune
with the Trainable. This uses `tf.function`.
Original code from tensorflow: https://www.tensorflow.org/tutorials/quickstart/advanced
- {doc}`/tune/examples/includes/pbt_tune_cifar10_with_keras`:
A contributed example of tuning a Keras model on CIFAR10 with the PopulationBasedTraining scheduler.
# Proper randomization is important...
I changed the MPI calibration code to do a global randomization (and not a randomization within each operation).
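To make the difference concrete, here is a hedged sketch with hypothetical operation labels (not the actual calibration code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten hypothetical calls for each of two operations.
calls = [("Send", i) for i in range(10)] + [("Recv", i) for i in range(10)]

# Randomization *within* each operation: all Send calls still precede all
# Recv calls, so systematic ordering effects between operations remain.
within = ([("Send", int(i)) for i in rng.permutation(10)]
          + [("Recv", int(i)) for i in rng.permutation(10)])

# Global randomization: the two operations are interleaved.
global_order = [calls[int(i)] for i in rng.permutation(len(calls))]

# Same set of calls either way; only the execution order differs.
assert sorted(within) == sorted(global_order) == sorted(calls)
```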
```
import os
import numpy
import pandas
from extract_archive import extract_zip, aggregate_dataframe
archive_names = {'nancy_2018-07-24_1621460.zip' : 'none', 'nancy_2018-07-27_1625117.zip' : 'half', 'nancy_2018-08-03_1645238.zip': 'full'}
alldf = []
aggr = []
for name, shuffled in archive_names.items():
archive = extract_zip(name)
df = archive['exp/exp_Recv.csv']
df['batch_index'] = numpy.floor(df.index / 10).astype(int)
df['batch_index_mod'] = df.batch_index % 50 # 50 batches of 10 calls
info = archive['info.yaml']
deployment = str(info['deployment'])
df['deployment'] = deployment
mpi_version = set([info[key]['mpi'] for key in info.keys() if 'grid5000' in key])
assert len(mpi_version) == 1
mpi_version = mpi_version.pop()
df['mpi'] = mpi_version
df['exp_type'] = mpi_version + ' | ' + deployment
df['shuffled'] = shuffled
alldf.append(df)
aggr.append(aggregate_dataframe(df))
df = pandas.concat(alldf)
df_aggr = pandas.concat(aggr)
print(df.exp_type.unique())
df.head()
df[['index', 'start', 'msg_size', 'duration', 'shuffled']].to_csv('/tmp/mpi_calibration_order.csv', index=False)
```
## Checking if the proper randomization gives better results
```
%matplotlib inline
from plotnine import *
import plotnine; plotnine.options.figure_size = 12, 8
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning) # removing annoying Pandas warning
ggplot(df, aes(x='msg_size', y='duration', color='shuffled')) + geom_point(alpha=0.1) + scale_x_log10() + scale_y_log10() + ggtitle('Durations of MPI_Recv')
ggplot(df_aggr, aes(x='msg_size', y='duration', color='shuffled')) + geom_point(alpha=0.3) + scale_x_log10() + scale_y_log10() + ggtitle('Durations of MPI_Recv aggregated per message size')
```
Let's restrict ourselves to calls with a message size between 100 and 1000 (i.e., one of the places that was problematic).
```
df = df[(df.msg_size >= 1e2) & (df.msg_size < 1e3)].copy() # gosh, I hate pandas' default
df_aggr = df_aggr[(df_aggr.msg_size >= 1e2) & (df_aggr.msg_size < 1e3)].copy()
df = df[df.duration < df.duration.quantile(0.99)] # removing the extreme outliers...
ggplot(df, aes(x='index', y='duration')) + geom_point(alpha=0.1) + ggtitle('Durations of MPI_Recv (temporal evolution)') + facet_wrap('shuffled')
ggplot(df, aes('duration', group='msg_size')) + stat_ecdf() + ggtitle('Distributions of MPI_Recv durations (grouped by message size, with and without shuffling)') + facet_wrap('shuffled')
```
Weird: the bimodality is less pronounced (but still here), and most of the operations have a low duration. Not sure if this is due to the better randomization, or to an external factor (e.g., the G5K team improved the performance tunings).
#### Implementation of Distributional paper for 1-dimensional games, such as Cartpole.
- https://arxiv.org/abs/1707.06887
<br>
Please note: the 2-dimensional image state requires a lot of memory capacity (~50 GB) due to the buffer size of 1,000,000 used in the DQN paper.
So, one might want to train an agent with a smaller buffer (this may cause lower performance).
#### Please NOTE,
The code lines that differ from vanilla DQN are annotated with '*/*/*/'.
So, by searching for '*/*/*/', you can find these lines.
```
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import gym
import numpy as np
import time
import os
import cv2
import matplotlib.pyplot as plt
from IPython.display import clear_output
class QNetwork(nn.Module):
def __init__(self, input_dim, action_dim, rand_seed=False,
conv_channel_1=32, conv_channel_2=64, conv_channel_3=128,
kernel_1=3, kernel_2=3, kernel_3=3,
stride_1=2, stride_2=2, stride_3=1, n_atoms=51):
super(QNetwork, self).__init__()
self.action_dim = action_dim
self.n_atoms = n_atoms
self.Conv1 = nn.Conv2d(input_dim[0], conv_channel_1, (kernel_1,kernel_1), stride=stride_1)
self.Conv2 = nn.Conv2d(conv_channel_1, conv_channel_2, (kernel_2,kernel_2), stride=stride_2)
self.Conv3 = nn.Conv2d(conv_channel_2, conv_channel_3, (kernel_3,kernel_3), stride=stride_3)
def calculate_conv2d_size(size, kernel_size, stride):
return (size - (kernel_size - 1) - 1) // stride + 1
w, h = input_dim[1], input_dim[2]
convw = calculate_conv2d_size(calculate_conv2d_size(calculate_conv2d_size(w,kernel_1,stride_1),
kernel_2,stride_2),
kernel_3,stride_3)
convh = calculate_conv2d_size(calculate_conv2d_size(calculate_conv2d_size(h,kernel_1,stride_1),
kernel_2,stride_2),
kernel_3,stride_3)
linear_input_size = convw * convh * conv_channel_3
# */*/*/
self.fc1 = nn.Linear(linear_input_size, 512)
self.fc2 = nn.Linear(512, action_dim*n_atoms)
self.relu = nn.ReLU()
# */*/*/
def forward(self, x):
x = self.relu(self.Conv1(x))
x = self.relu(self.Conv2(x))
x = self.relu(self.Conv3(x))
x = x.reshape(x.shape[0], -1)
# */*/*/
Q = self.fc2(self.relu(self.fc1(x))).view(-1, self.action_dim, self.n_atoms)
return F.softmax(Q, dim=2) # Shape: (batch_size, action_dim, n_atoms)
# */*/*/
if __name__ == '__main__':
state_size = (4, 84, 84)
action_size = 10
net = QNetwork(state_size, action_size,
conv_channel_1=32, conv_channel_2=64, conv_channel_3=64)
test = torch.randn(size=(64, 4, 84, 84))
print(net)
print("Network output: ", net(test).shape)
class ReplayBuffer:
""" Experience Replay Buffer in DQN paper """
def __init__(self,
buffer_size: ('int: total size of the Replay Buffer'),
input_dim: ('tuple: a dimension of input data. Ex) (3, 84, 84)'),
batch_size: ('int: a batch size when updating')):
# To check if input image has 3 channels
assert len(input_dim)==3, "The state dimension should be 3-dim! (CHxWxH). Please check if input_dim is right"
self.batch_size = batch_size
self.buffer_size = buffer_size
self.save_count, self.current_size = 0, 0
# One can use either np.zeros or np.ones here.
# np.ones is used so that the buffer's full memory occupancy is allocated up front and easy to check.
self.state_buffer = np.ones((buffer_size, input_dim[0], input_dim[1], input_dim[2]),
dtype=np.uint8) # data type is np.uint8 to save memory
self.action_buffer = np.ones(buffer_size, dtype=np.uint8)
self.reward_buffer = np.ones(buffer_size, dtype=np.float32)
self.next_state_buffer = np.ones((buffer_size, input_dim[0], input_dim[1], input_dim[2]),
dtype=np.uint8)
self.done_buffer = np.ones(buffer_size, dtype=np.uint8)
def __len__(self):
return self.current_size
def store(self,
state: np.ndarray,
action: int,
reward: float,
next_state: np.ndarray,
done: int):
self.state_buffer[self.save_count] = state
self.action_buffer[self.save_count] = action
self.reward_buffer[self.save_count] = reward
self.next_state_buffer[self.save_count] = next_state
self.done_buffer[self.save_count] = done
# self.save_count is an index when storing transitions into the replay buffer
self.save_count = (self.save_count + 1) % self.buffer_size
# self.current_size indicates how many transitions are stored
self.current_size = min(self.current_size+1, self.buffer_size)
def batch_load(self):
# Selecting samples randomly with a size of self.batch_size
indices = np.random.randint(self.current_size, size=self.batch_size)
return dict(
states=self.state_buffer[indices],
actions=self.action_buffer[indices],
rewards=self.reward_buffer[indices],
next_states=self.next_state_buffer[indices],
dones=self.done_buffer[indices])
class Agent:
def __init__(self,
env: 'Environment',
input_frame: ('int: The number of channels of input image'),
input_dim: ('int: The width and height of pre-processed input image'),
training_frames: ('int: The total number of training frames'),
skipped_frame: ('int: The number of skipped frames in the environment'),
eps_decay: ('float: Epsilon Decay_rate'),
gamma: ('float: Discount Factor'),
update_freq: ('int: Behavior Network Update Frequency'),
target_update_freq: ('int: Target Network Update Frequency'),
update_type: ('str: Update type for target network. Hard or Soft')='hard',
soft_update_tau: ('float: Soft update ratio')=None,
batch_size: ('int: Update batch size')=32,
buffer_size: ('int: Replay buffer size')=1000000,
update_start_buffer_size: ('int: Update starting buffer size')=50000,
learning_rate: ('float: Learning rate')=0.0004,
eps_min: ('float: Epsilon Min')=0.1,
eps_max: ('float: Epsilon Max')=1.0,
device_num: ('int: GPU device number')=0,
rand_seed: ('int: Random seed')=None,
plot_option: ('str: Plotting option')=False,
model_path: ('str: Model saving path')='./',
trained_model_path: ('str: Trained model path')='',
# */*/*/
n_atoms: ('int: The number of atoms')=51,
Vmax: ('int: The maximum Q value')=10,
Vmin: ('int: The minimum Q value')=-10):
# */*/*/
self.action_dim = env.action_space.n
self.device = torch.device(f'cuda:{device_num}' if torch.cuda.is_available() else 'cpu')
self.model_path = model_path
self.env = env
self.input_frames = input_frame
self.input_dim = input_dim
self.training_frames = training_frames
self.skipped_frame = skipped_frame
self.epsilon = eps_max
self.eps_decay = eps_decay
self.eps_min = eps_min
self.gamma = gamma
self.update_freq = update_freq
self.target_update_freq = target_update_freq
self.update_cnt = 0
self.update_type = update_type
self.tau = soft_update_tau
self.batch_size = batch_size
self.buffer_size = buffer_size
self.update_start = update_start_buffer_size
self.seed = rand_seed
self.plot_option = plot_option
# */*/*/
self.n_atoms = n_atoms
self.Vmin = Vmin
self.Vmax = Vmax
self.dz = (Vmax - Vmin) / (n_atoms - 1)
self.support = torch.linspace(Vmin, Vmax, n_atoms).to(self.device)
self.expanded_support = self.support.expand((batch_size, self.action_dim, n_atoms)).to(self.device)
self.q_behave = QNetwork((self.input_frames, self.input_dim, self.input_dim), self.action_dim, n_atoms=self.n_atoms).to(self.device)
self.q_target = QNetwork((self.input_frames, self.input_dim, self.input_dim), self.action_dim, n_atoms=self.n_atoms).to(self.device)
# */*/*/
if trained_model_path: # load a trained model if existing
self.q_behave.load_state_dict(torch.load(trained_model_path))
print("Trained model is loaded successfully.")
# Initialize target network parameters with behavior network parameters
self.q_target.load_state_dict(self.q_behave.state_dict())
self.q_target.eval()
self.optimizer = optim.Adam(self.q_behave.parameters(), lr=learning_rate)
self.memory = ReplayBuffer(self.buffer_size, (self.input_frames, self.input_dim, self.input_dim), self.batch_size)
def select_action(self, state: 'Must be pre-processed in the same way as updating current Q network. See def _compute_loss'):
if np.random.random() < self.epsilon:
return np.zeros(self.action_dim), self.env.action_space.sample()
else:
# If normalization (e.g., division by 255) is applied to the image, it MUST also be applied here, as in 'state/255' below.
with torch.no_grad():
state = torch.FloatTensor(state).to(self.device).unsqueeze(0)/255
# */*/*/
Qs = self.q_behave(state)*self.expanded_support[0]
Expected_Qs = Qs.sum(2)
# */*/*/
action = Expected_Qs.argmax(1)
# return Q-values and action (the Q-values are not required by the algorithm; they are returned only for inspecting each state)
return Expected_Qs.detach().cpu().numpy()[0], action.detach().item()
def processing_resize_and_gray(self, frame):
''' Convert images to gray scale and resize '''
frame = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
frame = cv2.resize(frame, dsize=(self.input_dim, self.input_dim)).reshape(self.input_dim, self.input_dim).astype(np.uint8)
return frame
def get_init_state(self):
''' return an initial state with a dimension of (self.input_frames, self.input_dim, self.input_dim) '''
init_state = np.zeros((self.input_frames, self.input_dim, self.input_dim))
init_frame = self.env.reset()
init_state[0] = self.processing_resize_and_gray(init_frame)
for i in range(1, self.input_frames):
action = self.env.action_space.sample()
for j in range(self.skipped_frame-1):
state, _, _, _ = self.env.step(action)
state, _, _, _ = self.env.step(action)
init_state[i] = self.processing_resize_and_gray(state)
return init_state
def get_state(self, state, action, skipped_frame=0):
''' return reward, next_state, done '''
next_state = np.zeros((self.input_frames, self.input_dim, self.input_dim))
for i in range(len(state)-1):
next_state[i] = state[i+1]
rewards = 0
dones = 0
for _ in range(skipped_frame-1):
state, reward, done, _ = self.env.step(action)
rewards += reward # accumulate any rewards that occur during the skipped frames
dones += int(done)
state, reward, done, _ = self.env.step(action)
next_state[-1] = self.processing_resize_and_gray(state)
rewards += reward
dones += int(done)
return rewards, next_state, dones
def store(self, state, action, reward, next_state, done):
self.memory.store(state, action, reward, next_state, done)
def update_behavior_q_net(self):
# update behavior q network with a batch
batch = self.memory.batch_load()
loss = self._compute_loss(batch)
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
return loss.item()
def target_soft_update(self):
''' target network is updated with Soft Update. tau is a hyperparameter for the update ratio between the target and behavior networks '''
for target_param, current_param in zip(self.q_target.parameters(), self.q_behave.parameters()):
target_param.data.copy_(self.tau*current_param.data + (1.0-self.tau)*target_param.data)
def target_hard_update(self):
''' target network is updated with Hard Update '''
self.update_cnt = (self.update_cnt+1) % self.target_update_freq
if self.update_cnt==0:
self.q_target.load_state_dict(self.q_behave.state_dict())
def train(self):
tic = time.time()
losses = []
scores = []
epsilons = []
avg_scores = [-10000] # an arbitrarily low initial score, so the first 10-episode average triggers a model save
score = 0
print("Storing initial buffer..")
state = self.get_init_state()
for frame_idx in range(1, self.update_start+1):
# Store transitions into the buffer until 'self.update_start' transitions have been stored
_, action = self.select_action(state)
reward, next_state, done = self.get_state(state, action, skipped_frame=self.skipped_frame)
self.store(state, action, reward, next_state, done)
state = next_state
if done: state = self.get_init_state()
print("Done. Start learning..")
history_store = []
for frame_idx in range(1, self.training_frames+1):
Qs, action = self.select_action(state)
reward, next_state, done = self.get_state(state, action, skipped_frame=self.skipped_frame)
self.store(state, action, reward, next_state, done)
history_store.append([state, Qs, action, reward, next_state, done]) # history_store records the episode for later inspection (optional)
if (frame_idx % self.update_freq) == 0:
loss = self.update_behavior_q_net()
score += reward
losses.append(loss)
if self.update_type=='hard': self.target_hard_update()
elif self.update_type=='soft': self.target_soft_update()
if done:
# For saving and plotting when an episode is done.
scores.append(score)
if np.mean(scores[-10:]) > max(avg_scores):
torch.save(self.q_behave.state_dict(), self.model_path+'{}_Score:{}.pt'.format(frame_idx, np.mean(scores[-10:])))
training_time = round((time.time()-tic)/3600, 1)
np.save(self.model_path+'{}_history_Score_{}_{}hrs.npy'.format(frame_idx, score, training_time), np.array(history_store))
print(" | Model saved. Recent scores: {}, Training time: {}hrs".format(scores[-10:], training_time), ' /'.join(os.getcwd().split('/')[-3:]))
avg_scores.append(np.mean(scores[-10:]))
if self.plot_option=='inline':
epsilons.append(self.epsilon)
self._plot(frame_idx, scores, losses, epsilons)
else:
print(score, end='\r')
score=0
state = self.get_init_state()
history_store = []
else: state = next_state
self._epsilon_step()
print("Total training time: {}(hrs)".format((time.time()-tic)/3600))
def _epsilon_step(self):
''' Epsilon decay control: linear decay, as in the DQN paper. '''
self.epsilon = max(self.epsilon-self.eps_decay, self.eps_min)
def _compute_loss(self, batch: "Dictionary (S, A, R', S', Dones)"):
''' Compute loss. If normalization is used, it must be applied to both 'state' and 'next_state'. ex) state/255 '''
states = torch.FloatTensor(batch['states']).to(self.device) / 255
next_states = torch.FloatTensor(batch['next_states']).to(self.device) / 255
actions = torch.LongTensor(batch['actions']).to(self.device)
rewards = torch.FloatTensor(batch['rewards'].reshape(-1, 1)).to(self.device)
dones = torch.FloatTensor(batch['dones'].reshape(-1, 1)).to(self.device)
# */*/*/
log_behave_Q_dist = self.q_behave(states)[range(self.batch_size), actions].log()
with torch.no_grad():
# Computing projected distribution for a categorical loss
behave_next_Q_dist = self.q_behave(next_states)
next_actions = torch.sum(behave_next_Q_dist*self.expanded_support, 2).argmax(1)
target_next_Q_dist = self.q_target(next_states)[range(self.batch_size), next_actions] # Double DQN.
Tz = rewards + self.gamma*(1 - dones)*self.expanded_support[:,0]
Tz.clamp_(self.Vmin, self.Vmax)
b = (Tz - self.Vmin) / self.dz
l = b.floor().long()
u = b.ceil().long()
l[(l==u) & (u>0)] -= 1 # avoiding the case when floor index and ceil index have the same values
u[(u==0) & (l==0)] += 1 # (because it causes target_next_Q_dist's value to be counted as zero)
batch_init_indices = torch.linspace(0, (self.batch_size-1)*self.n_atoms, self.batch_size).long().unsqueeze(1).expand(self.batch_size, self.n_atoms).to(self.device)
proj_dist = torch.zeros(self.batch_size, self.n_atoms).to(self.device)
proj_dist.view(-1).index_add_(0, (l+batch_init_indices).view(-1), (target_next_Q_dist*(u-b)).view(-1))
proj_dist.view(-1).index_add_(0, (u+batch_init_indices).view(-1), (target_next_Q_dist*(b-l)).view(-1))
# Compute KL divergence between two distributions
loss = torch.sum(-proj_dist*log_behave_Q_dist, 1).mean()
# */*/*/
return loss
def _plot(self, frame_idx, scores, losses, epsilons):
clear_output(True)
plt.figure(figsize=(20, 5), facecolor='w')
plt.subplot(131)
plt.title('frame %s. score: %s' % (frame_idx, np.mean(scores[-10:])))
plt.plot(scores)
plt.subplot(132)
plt.title('loss')
plt.plot(losses)
plt.subplot(133)
plt.title('epsilons')
plt.plot(epsilons)
plt.show()
```
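To make the projection step in `_compute_loss` concrete, here is a hedged, minimal NumPy sketch of the C51 categorical projection: the shifted support `Tz = r + γz` is clipped to `[Vmin, Vmax]` and its probability mass is split between the floor and ceiling atoms by linear interpolation. This is an illustrative loop version, not the vectorized `index_add_` code above.

```python
import numpy as np

n_atoms, v_min, v_max = 51, -10.0, 10.0
dz = (v_max - v_min) / (n_atoms - 1)
support = np.linspace(v_min, v_max, n_atoms)

def project(next_dist, reward, gamma, done):
    # shift-and-clip the support, then find fractional atom positions
    Tz = np.clip(reward + gamma * (1 - done) * support, v_min, v_max)
    b = (Tz - v_min) / dz
    l = np.floor(b).astype(int)
    u = np.ceil(b).astype(int)
    proj = np.zeros(n_atoms)
    for i in range(n_atoms):
        if l[i] == u[i]:                     # Tz landed exactly on an atom
            proj[l[i]] += next_dist[i]
        else:                                # split mass between neighbors
            proj[l[i]] += next_dist[i] * (u[i] - b[i])
            proj[u[i]] += next_dist[i] * (b[i] - l[i])
    return proj

uniform = np.full(n_atoms, 1.0 / n_atoms)
proj = project(uniform, reward=1.0, gamma=0.99, done=0)
print(proj.sum())  # the projection conserves probability mass (sums to ~1)
```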
#### Configurations


```
env_list = {
0: "CartPole-v0",
1: "CartPole-v1",
2: "LunarLander-v2",
3: "Breakout-v4",
4: "BreakoutDeterministic-v4",
5: "BreakoutNoFrameskip-v4",
6: "BoxingDeterministic-v4",
7: "PongDeterministic-v4",
}
env_name = env_list[6]
env = gym.make(env_name)
# Same input size as in DQN paper.
input_dim = 84
input_frame = 4
print("env_name", env_name)
print(env.unwrapped.get_action_meanings(), env.action_space.n)
# starting to update Q-network until ReplayBuffer is filled with the number of samples = update_start_buffer_size
update_start_buffer_size = 10000
# total training frames
training_frames = 10000000
# epsilon for exploration
eps_max = 1.0
eps_min = 0.1
eps_decay = 1/1000000
# gamma (discount factor for future rewards)
gamma = 0.99
# size of ReplayBuffer
buffer_size = int(1e6) # the same size as in the paper
# buffer_size = int(1.5e5) # if you don't have enough memory, lower the value like this (it may hurt training performance)
# update batch size
batch_size = 32
learning_rate = 0.0001 # the paper uses RMSProp with lr 0.00025; this notebook uses Adam with lr 0.0001
# updating Q-network with 'soft' or 'hard' updating method
update_freq = 4
update_type = 'hard'
soft_update_tau = 0.002
# target network update frequency (applies when using the 'hard' update).
# 10000 means the target network is updated once for every 10000 behavior-network updates.
target_update_freq = 10000
# assign skipped_frame to be 0 here,
# because the word 'Deterministic' in a name like 'BoxingDeterministic' means the environment already skips 4 frames internally.
# for games such as "BreakoutNoFrameskip", which do no internal skipping, assign skipped_frame a non-zero value (e.g., 4).
skipped_frame = 0
# cuda device
device_num = 0
# choose a plotting option:
# 'inline' - plots training status in the Jupyter notebook
# False - prints only the reward of each episode
plot_options = {1: 'inline', 2: False}
plot_option = plot_options[2]
# */*/*/
n_atoms = 51
Vmax = 10
Vmin = -10
# */*/*/
# The path for saving a trained model.
rand_seed = None
rand_name = ('').join(map(str, np.random.randint(10, size=(3,))))
folder_name = os.getcwd().split('/')[-1]
model_name = 'Test'
model_save_path = f'./model_save/{model_name}/'
if not os.path.exists('./model_save/'):
os.mkdir('./model_save/')
if not os.path.exists(model_save_path):
os.mkdir(model_save_path)
print("model_save_path:", model_save_path)
trained_model_path = ''
agent = Agent(
env,
input_frame,
input_dim,
training_frames,
skipped_frame,
eps_decay,
gamma,
update_freq,
target_update_freq,
update_type,
soft_update_tau,
batch_size,
buffer_size,
update_start_buffer_size,
learning_rate,
eps_min,
eps_max,
device_num,
rand_seed,
plot_option,
model_save_path,
trained_model_path,
n_atoms,
Vmax,
Vmin
)
agent.train()
```
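As a quick illustrative check of the exploration schedule configured above: with a linear decay rate of `eps_decay = 1/1,000,000`, epsilon anneals from `eps_max` to `eps_min` over roughly the first 900k training frames, then stays at `eps_min`.

```python
# how many frames the linear epsilon anneal takes (illustration only)
eps_max, eps_min, eps_decay = 1.0, 0.1, 1 / 1_000_000
frames_to_min = (eps_max - eps_min) / eps_decay
print(round(frames_to_min))  # 900000
```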
#### An example of results
Storing initial buffer..
Done. Start learning..
| Model saved. Recent scores: [1.0], Training time: 0.0hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [1.0, -1.0, 2.0, -2.0, 5.0, 2.0], Training time: 0.0hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [2.0, -2.0, 5.0, 2.0, 0.0, 0.0, -2.0, 3.0, 2.0, 6.0], Training time: 0.0hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [3.0, 3.0, -2.0, -4.0, 6.0, -1.0, -5.0, 4.0, 6.0, 7.0], Training time: 0.0hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [4.0, 6.0, 7.0, -4.0, -2.0, -6.0, 1.0, 3.0, 4.0, 6.0], Training time: 0.1hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [6.0, 7.0, -4.0, -2.0, -6.0, 1.0, 3.0, 4.0, 6.0, 9.0], Training time: 0.1hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [7.0, 1.0, 6.0, 5.0, 5.0, 0.0, -2.0, -1.0, 2.0, 5.0], Training time: 0.1hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [-4.0, 10.0, 9.0, -10.0, 9.0, -2.0, -5.0, 6.0, 7.0, 11.0], Training time: 0.3hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [10.0, 9.0, -10.0, 9.0, -2.0, -5.0, 6.0, 7.0, 11.0, 1.0], Training time: 0.3hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [6.0, 1.0, 8.0, -1.0, 2.0, 3.0, 1.0, 7.0, 6.0, 14.0], Training time: 0.3hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [7.0, 6.0, 14.0, 1.0, 3.0, -1.0, 8.0, 4.0, -4.0, 14.0], Training time: 0.3hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [6.0, 14.0, 1.0, 3.0, -1.0, 8.0, 4.0, -4.0, 14.0, 9.0], Training time: 0.3hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [6.0, -4.0, -2.0, 27.0, 1.0, 4.0, 5.0, 1.0, 13.0, 10.0], Training time: 0.7hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [27.0, 1.0, 4.0, 5.0, 1.0, 13.0, 10.0, 1.0, 1.0, 16.0], Training time: 0.7hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [1.0, 10.0, 13.0, 19.0, 1.0, 6.0, 4.0, 8.0, 12.0, 13.0], Training time: 1.1hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [10.0, 13.0, 19.0, 1.0, 6.0, 4.0, 8.0, 12.0, 13.0, 10.0], Training time: 1.1hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [5.0, 3.0, 7.0, 18.0, -1.0, 13.0, 9.0, 10.0, 29.0, 8.0], Training time: 1.3hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [3.0, 7.0, 18.0, -1.0, 13.0, 9.0, 10.0, 29.0, 8.0, 18.0], Training time: 1.3hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [7.0, 18.0, -1.0, 13.0, 9.0, 10.0, 29.0, 8.0, 18.0, 8.0], Training time: 1.3hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [13.0, 9.0, 10.0, 29.0, 8.0, 18.0, 8.0, -1.0, 16.0, 27.0], Training time: 1.3hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [16.0, 27.0, 8.0, 11.0, 2.0, 19.0, 13.0, 19.0, 12.0, 15.0], Training time: 1.3hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [14.0, 11.0, 9.0, 11.0, 20.0, 16.0, 7.0, 13.0, 13.0, 37.0], Training time: 1.4hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [18.0, 7.0, 19.0, 15.0, 5.0, 9.0, 18.0, 29.0, 18.0, 18.0], Training time: 1.6hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [15.0, 11.0, 9.0, 33.0, 5.0, 30.0, 12.0, 17.0, 23.0, 15.0], Training time: 1.7hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [14.0, 22.0, 6.0, 13.0, 16.0, 15.0, 24.0, 28.0, 8.0, 29.0], Training time: 1.9hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [22.0, 6.0, 13.0, 16.0, 15.0, 24.0, 28.0, 8.0, 29.0, 18.0], Training time: 1.9hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [20.0, 16.0, 31.0, 23.0, 24.0, 18.0, 8.0, 15.0, 12.0, 14.0], Training time: 2.5hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [27.0, 5.0, 27.0, 2.0, 11.0, 19.0, 17.0, 20.0, 23.0, 31.0], Training time: 2.5hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [19.0, 20.0, 20.0, 18.0, 10.0, 37.0, 12.0, 9.0, 25.0, 15.0], Training time: 2.7hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [27.0, 8.0, 34.0, 22.0, 17.0, 2.0, 31.0, 13.0, 7.0, 25.0], Training time: 2.8hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [14.0, 18.0, 27.0, 21.0, 22.0, 9.0, -2.0, 28.0, 30.0, 26.0], Training time: 2.8hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [17.0, 23.0, 9.0, 40.0, 9.0, 26.0, 10.0, 26.0, 10.0, 29.0], Training time: 3.0hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [23.0, 9.0, 40.0, 9.0, 26.0, 10.0, 26.0, 10.0, 29.0, 19.0], Training time: 3.0hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [11.0, 23.0, 17.0, 13.0, 19.0, 37.0, 21.0, 26.0, 20.0, 16.0], Training time: 3.0hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [23.0, 17.0, 13.0, 19.0, 37.0, 21.0, 26.0, 20.0, 16.0, 25.0], Training time: 3.0hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [8.0, 25.0, 19.0, 10.0, 27.0, 14.0, 26.0, 39.0, 22.0, 35.0], Training time: 3.2hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [25.0, 19.0, 10.0, 27.0, 14.0, 26.0, 39.0, 22.0, 35.0, 37.0], Training time: 3.2hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [19.0, 10.0, 27.0, 14.0, 26.0, 39.0, 22.0, 35.0, 37.0, 26.0], Training time: 3.2hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [10.0, 27.0, 14.0, 26.0, 39.0, 22.0, 35.0, 37.0, 26.0, 33.0], Training time: 3.2hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [27.0, 14.0, 26.0, 39.0, 22.0, 35.0, 37.0, 26.0, 33.0, 12.0], Training time: 3.2hrs MacaronRL /Value_Based /C51
| Model saved. Recent scores: [39.0, 22.0, 35.0, 37.0, 26.0, 33.0, 12.0, 6.0, 26.0, 39.0], Training time: 3.2hrs MacaronRL /Value_Based /C51
# Document retrieval from Wikipedia data
# Fire up GraphLab Create
```
import graphlab
```
# Load some text data - from Wikipedia, pages on people
```
people = graphlab.SFrame('people_wiki.gl/')
```
The data contains: a link to the Wikipedia article, the person's name, and the text of the article.
```
people.head()
len(people)
```
# Explore the dataset and check out the text it contains
## Exploring the entry for President Obama
```
obama = people[people['name'] == 'Barack Obama']
obama
obama['text']
```
## Exploring the entry for actor George Clooney
```
clooney = people[people['name'] == 'George Clooney']
clooney['text']
```
# Get the word counts for the Obama article
```
obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])
elton = people[people['name'] == 'Elton John']
print(elton['word_count'])
print(obama['word_count'])
```
## Sort the word counts for the Obama article
### Turning the dictionary of word counts into a table
```
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
elton_word_count_table = elton[['word_count']].stack('word_count', new_column_name = ['word','count'])
elton_word_count_table.sort('count',ascending=False)
```
### Sorting the word counts to show the most common words at the top
```
obama_word_count_table.head()
obama_word_count_table.sort('count',ascending=False)
```
Most common words include uninformative words like "the", "in", "and",...
# Compute TF-IDF for the corpus
To give more weight to informative words, we weight them by their TF-IDF scores.
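Before calling GraphLab, here is a minimal, hedged sketch of what TF-IDF weighting does (the exact normalization used by `graphlab.text_analytics.tf_idf` may differ; the toy documents below are made up for illustration):

```python
import math

# tf_idf(word, doc) = count(word, doc) * log(N / doc_freq(word))
docs = [
    {"the": 3, "senate": 1},   # toy word-count dicts, not the wiki data
    {"the": 2, "goal": 1},
    {"the": 1, "vote": 2},
]
N = len(docs)

def tf_idf(doc):
    # words appearing in many documents get a small (or zero) weight
    return {w: c * math.log(N / sum(1 for d in docs if w in d))
            for w, c in doc.items()}

weights = tf_idf(docs[0])
print(weights)  # "the" occurs in every document, so its weight is 0
```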
```
people['word_count'] = graphlab.text_analytics.count_words(people['text'])
people.head()
tfidf = graphlab.text_analytics.tf_idf(people['word_count'])
tfidf
people['tfidf'] = tfidf['docs']
```
## Examine the TF-IDF for the Obama article
```
obama = people[people['name'] == 'Barack Obama']
elton = people[people['name'] == 'Elton John']
obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
elton[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
```
Words with highest TF-IDF are much more informative.
# Manually compute distances between a few people
Let's manually compare the distances between the articles for a few famous people.
```
clinton = people[people['name'] == 'Bill Clinton']
beckham = people[people['name'] == 'David Beckham']
```
## Is Obama closer to Clinton than to Beckham?
We will use cosine distance, which is given by
(1 - cosine_similarity)
and find that the article about President Obama is closer to the one about former President Clinton than to the one about footballer David Beckham.
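As a hedged sketch of what `graphlab.distances.cosine` computes on the sparse word-weight dictionaries (illustrative toy vectors, not the real TF-IDF data):

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity between two sparse word-weight dicts
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / (norm_a * norm_b)

u = {"president": 2.0, "senate": 1.0}   # hypothetical weights
v = {"president": 1.0, "football": 3.0}
print(cosine_distance(u, u))  # ~0: identical direction
print(cosine_distance(u, v))  # larger: mostly different words
```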
```
graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])
graphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])
victoria = people[people['name'] == 'Victoria Beckham']
```
# Build a nearest-neighbors model for document retrieval
We now create a nearest-neighbors model and apply it to document retrieval.
```
paul = people[people['name'] == 'Paul McCartney']
graphlab.distances.cosine(elton['tfidf'][0],victoria['tfidf'][0])
graphlab.distances.cosine(elton['tfidf'][0],paul['tfidf'][0])
knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name')
word_count_model=graphlab.nearest_neighbors.create(people,features=['word_count'],label='name',distance='cosine')
TF_IDF_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name', distance='cosine')
word_count_model.query(elton)
TF_IDF_model.query(elton)
word_count_model.query(victoria)
TF_IDF_model.query(victoria)
```
# Applying the nearest-neighbors model for retrieval
## Who is closest to Obama?
```
knn_model.query(obama)
```
As we can see, President Obama's article is closest to the one about his vice president Biden, and to those of other politicians.
## Other examples of document retrieval
```
swift = people[people['name'] == 'Taylor Swift']
knn_model.query(swift)
jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)
arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
```
# Bootstrap
## Import and settings
In this example, we need to import `numpy`, `pandas`, and `graphviz` in addition to `lingam`.
```
import numpy as np
import pandas as pd
import graphviz
import lingam
from lingam.utils import print_causal_directions, print_dagc, make_dot
print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])
np.set_printoptions(precision=3, suppress=True)
np.random.seed(0)
```
## Test data
We create test data consisting of 6 variables.
```
x3 = np.random.uniform(size=10000)
x0 = 3.0*x3 + np.random.uniform(size=10000)
x2 = 6.0*x3 + np.random.uniform(size=10000)
x1 = 3.0*x0 + 2.0*x2 + np.random.uniform(size=10000)
x5 = 4.0*x0 + np.random.uniform(size=10000)
x4 = 8.0*x0 - 1.0*x2 + np.random.uniform(size=10000)
X = pd.DataFrame(np.array([x0, x1, x2, x3, x4, x5]).T ,columns=['x0', 'x1', 'x2', 'x3', 'x4', 'x5'])
X.head()
m = np.array([[0.0, 0.0, 0.0, 3.0, 0.0, 0.0],
[3.0, 0.0, 2.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 6.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[8.0, 0.0,-1.0, 0.0, 0.0, 0.0],
[4.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
make_dot(m)
```
## Bootstrapping
We call the `bootstrap()` method instead of `fit()`. Here, the second argument specifies the number of bootstrap samples.
```
model = lingam.DirectLiNGAM()
result = model.bootstrap(X, 100)
```
Since a `BootstrapResult` object is returned, we can get the ranking of the extracted causal directions with the `get_causal_direction_counts()` method. In the following sample code, the `n_directions` option limits the output to the top 8 causal directions, and the `min_causal_effect` option restricts it to causal directions with a coefficient of at least 0.01.
```
cdc = result.get_causal_direction_counts(n_directions=8, min_causal_effect=0.01, split_by_causal_effect_sign=True)
```
We can check the result with a utility function.
```
print_causal_directions(cdc, 100)
```
Also, using the `get_directed_acyclic_graph_counts()` method, we can get the ranking of the extracted DAGs. In the following sample code, the `n_dags` option limits the output to the top 3 DAGs, and the `min_causal_effect` option restricts it to causal directions with a coefficient of at least 0.01.
```
dagc = result.get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01, split_by_causal_effect_sign=True)
```
We can check the result with a utility function.
```
print_dagc(dagc, 100)
```
Using the `get_probabilities()` method, we can get the bootstrap probability of each causal direction.
```
prob = result.get_probabilities(min_causal_effect=0.01)
print(prob)
```
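To make these probabilities concrete: an edge probability is simply the fraction of bootstrap runs in which that causal edge was detected with a coefficient magnitude above `min_causal_effect`. A hedged illustration with made-up adjacency matrices (not lingam's internal code):

```python
import numpy as np

# three hypothetical bootstrap runs, each producing a 2x2 adjacency matrix
adjacency_samples = np.array([
    [[0.0, 3.1], [0.0, 0.0]],   # run 1: edge x0 <- x1
    [[0.0, 2.9], [0.0, 0.0]],   # run 2: edge x0 <- x1
    [[0.0, 0.0], [0.0, 0.0]],   # run 3: no edge found
])
min_causal_effect = 0.01
# fraction of runs in which each |coefficient| exceeds the threshold
prob = (np.abs(adjacency_samples) > min_causal_effect).mean(axis=0)
print(prob)  # entry [0, 1] is 2/3: the edge appeared in 2 of 3 runs
```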
```
# HIDDEN
from datascience import *
from prob140 import *
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
%matplotlib inline
from scipy import stats
```
## Sums of IID Samples ##
After the dry, algebraic discussion of the previous section it is a relief to finally be able to compute some variances.
Let $X_1, X_2, \ldots X_n$ be random variables with sum
$$
S_n = \sum_{i=1}^n X_i
$$
The variance of the sum is
$$
\begin{align*}
Var(S_n) &= Cov(S_n, S_n) \\
&= \sum_{i=1}^n\sum_{j=1}^n Cov(X_i, X_j) ~~~~ \text{(bilinearity)} \\
&= \sum_{i=1}^n Var(X_i) + \mathop{\sum \sum}_{1 \le i \ne j \le n} Cov(X_i, X_j)
\end{align*}
$$
We say that the variance of the sum is the sum of all the variances and all the covariances.
If $X_1, X_2 \ldots , X_n$ are independent, then all the covariance terms in the formula above are 0.
Therefore if $X_1, X_2, \ldots, X_n$ are independent then
$$
Var(S_n) = \sum_{i=1}^n Var(X_i)
$$
Thus for independent random variables $X_1, X_2, \ldots, X_n$, both the expectation and the variance add up nicely:
$$
E(S_n) = \sum_{i=1}^n E(X_i), ~~~~~~ Var(S_n) = \sum_{i=1}^n Var(X_i)
$$
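As a quick numerical check of this additivity, we can simulate sums of independent uniform $(0, 1)$ variables, each of which has variance $1/12$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 200_000
# each row is one replication of n independent uniform(0, 1) draws
S = rng.random((reps, n)).sum(axis=1)
print(S.var(), n / 12)  # the empirical variance is close to n * (1/12)
```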
When the random variables are i.i.d., this simplifies even further.
### Sum of an IID Sample ###
Let $X_1, X_2, \ldots, X_n$ be i.i.d., each with mean $\mu$ and SD $\sigma$. You can think of $X_1, X_2, \ldots, X_n$ as draws at random with replacement from a population, or as the results of independent replications of the same experiment.
Let $S_n$ be the sample sum, as above. Then
$$
E(S_n) = n\mu ~~~~~~~~~~ Var(S_n) = n\sigma^2 ~~~~~~~~~~ SD(S_n) = \sqrt{n}\sigma
$$
This implies that as the sample size $n$ increases, the distribution of the sum $S_n$ shifts to the right and is more spread out.
Here is one of the most important applications of these results.
### Variance of the Binomial ###
Let $X$ have the binomial $(n, p)$ distribution. We know that
$$
X = \sum_{i=1}^n I_i
$$
where $I_1, I_2, \ldots, I_n$ are i.i.d. indicators, each taking the value 1 with probability $p$. Each of these indicators has expectation $p$ and variance $pq = p(1-p)$. Therefore
$$
E(X) = np ~~~~~~~~~~ Var(X) = npq ~~~~~~~~~~ SD(X) = \sqrt{npq}
$$
For example, if $X$ is the number of heads in 100 tosses of a coin, then
$$
E(X) = 100 \times 0.5 = 50 ~~~~~~~~~~ SD(X) = \sqrt{100 \times 0.5 \times 0.5} = 5
$$
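These two values can be confirmed with `scipy.stats`:

```python
from scipy import stats

# E(X) = np = 50 and SD(X) = sqrt(npq) = 5 for 100 fair coin tosses
X = stats.binom(100, 0.5)
print(X.mean(), X.std())  # 50.0 5.0
```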
Here is the distribution of $X$. You can see that there is almost no probability outside the range $E(X) \pm 3SD(X)$.
```
k = np.arange(25, 75, 1)
binom_probs = stats.binom.pmf(k, 100, 0.5)
binom_dist = Table().values(k).probability(binom_probs)
Plot(binom_dist, show_ev=True, show_sd=True)
```
# RadarCOVID-Report
## Data Extraction
```
import datetime
import logging
import os
import shutil
import tempfile
import textwrap
import uuid
import dataframe_image as dfi
import matplotlib.ticker
import numpy as np
import pandas as pd
import seaborn as sns
%matplotlib inline
sns.set()
matplotlib.rcParams['figure.figsize'] = (15, 6)
extraction_datetime = datetime.datetime.utcnow()
extraction_date = extraction_datetime.strftime("%Y-%m-%d")
extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)
extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d")
extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H")
```
### COVID-19 Cases
```
confirmed_df = pd.read_csv("https://covid19tracking.narrativa.com/csv/confirmed.csv")
radar_covid_countries = {"Spain"}
# radar_covid_regions = { ... }
confirmed_df = confirmed_df[confirmed_df["Country_EN"].isin(radar_covid_countries)]
# confirmed_df = confirmed_df[confirmed_df["Region"].isin(radar_covid_regions)]
# set(confirmed_df.Region.tolist()) == radar_covid_regions
confirmed_country_columns = list(filter(lambda x: x.startswith("Country_"), confirmed_df.columns))
confirmed_regional_columns = confirmed_country_columns + ["Region"]
confirmed_df.drop(columns=confirmed_regional_columns, inplace=True)
confirmed_df = confirmed_df.sum().to_frame()
confirmed_df.tail()
confirmed_df.reset_index(inplace=True)
confirmed_df.columns = ["sample_date_string", "cumulative_cases"]
confirmed_df.sort_values("sample_date_string", inplace=True)
confirmed_df["new_cases"] = confirmed_df.cumulative_cases.diff()
confirmed_df["rolling_mean_new_cases"] = confirmed_df.new_cases.rolling(7).mean()
confirmed_df.tail()
extraction_date_confirmed_df = \
    confirmed_df[confirmed_df.sample_date_string == extraction_date]
extraction_previous_date_confirmed_df = \
    confirmed_df[confirmed_df.sample_date_string == extraction_previous_date].copy()
if extraction_date_confirmed_df.empty and \
        not extraction_previous_date_confirmed_df.empty:
    extraction_previous_date_confirmed_df["sample_date_string"] = extraction_date
    extraction_previous_date_confirmed_df["new_cases"] = \
        extraction_previous_date_confirmed_df.rolling_mean_new_cases
    extraction_previous_date_confirmed_df["cumulative_cases"] = \
        extraction_previous_date_confirmed_df.new_cases + \
        extraction_previous_date_confirmed_df.cumulative_cases
    confirmed_df = confirmed_df.append(extraction_previous_date_confirmed_df)
confirmed_df.tail()
confirmed_df[["new_cases", "rolling_mean_new_cases"]].plot()
```
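The cell above also imputes a missing extraction-day row: when today's confirmed data has not been published yet, today's new cases are estimated from yesterday's 7-day rolling mean. A minimal sketch of that carry-forward step, with hypothetical numbers:

```python
# Hypothetical values standing in for the previous day's row.
previous_day = {"cumulative_cases": 1000, "rolling_mean_new_cases": 40.0}

# Estimate today's new cases from yesterday's 7-day rolling mean
# and extend the cumulative count accordingly.
estimated_today = {
    "new_cases": previous_day["rolling_mean_new_cases"],
    "cumulative_cases": previous_day["cumulative_cases"]
    + previous_day["rolling_mean_new_cases"],
}

print(estimated_today["cumulative_cases"])  # 1040.0
```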
### Extract API TEKs
```
from Modules.RadarCOVID import radar_covid
exposure_keys_df = radar_covid.download_last_radar_covid_exposure_keys(days=14)
exposure_keys_df[[
"sample_date_string", "source_url", "region", "key_data"]].head()
exposure_keys_summary_df = \
exposure_keys_df.groupby(["sample_date_string"]).key_data.nunique().to_frame()
exposure_keys_summary_df.sort_index(ascending=False, inplace=True)
exposure_keys_summary_df.rename(columns={"key_data": "tek_count"}, inplace=True)
exposure_keys_summary_df.head()
```
### Dump API TEKs
```
tek_list_df = exposure_keys_df[["sample_date_string", "key_data"]].copy()
tek_list_df["key_data"] = tek_list_df["key_data"].apply(str)
tek_list_df.rename(columns={
"sample_date_string": "sample_date",
"key_data": "tek_list"}, inplace=True)
tek_list_df = tek_list_df.groupby(
"sample_date").tek_list.unique().reset_index()
tek_list_df["extraction_date"] = extraction_date
tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour
tek_list_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json(
"Data/TEKs/Current/RadarCOVID-TEKs.json",
lines=True, orient="records")
tek_list_df.drop(columns=["extraction_date_with_hour"]).to_json(
"Data/TEKs/Daily/RadarCOVID-TEKs-" + extraction_date + ".json",
lines=True, orient="records")
tek_list_df.to_json(
"Data/TEKs/Hourly/RadarCOVID-TEKs-" + extraction_date_with_hour + ".json",
lines=True, orient="records")
tek_list_df.head()
```
### Load TEK Dumps
```
import glob
def load_extracted_teks(mode, limit=None) -> pd.DataFrame:
    extracted_teks_df = pd.DataFrame()
    paths = list(reversed(sorted(glob.glob(f"Data/TEKs/{mode}/RadarCOVID-TEKs-*.json"))))
    if limit:
        paths = paths[:limit]
    for path in paths:
        logging.info(f"Loading TEKs from '{path}'...")
        iteration_extracted_teks_df = pd.read_json(path, lines=True)
        extracted_teks_df = extracted_teks_df.append(
            iteration_extracted_teks_df, sort=False)
    return extracted_teks_df
```
### Daily New TEKs
```
daily_extracted_teks_df = load_extracted_teks(mode="Daily", limit=14)
daily_extracted_teks_df.head()
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply(
lambda x: set(sum(x, []))).reset_index()
tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True)
tek_list_df.head()
new_tek_df = tek_list_df.diff().tek_list.apply(
lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()
new_tek_df.rename(columns={
"tek_list": "new_tek_count",
"extraction_date": "sample_date_string",}, inplace=True)
new_tek_df.head()
new_tek_devices_df = daily_extracted_teks_df.copy()
new_tek_devices_df["new_sample_extraction_date"] = \
pd.to_datetime(new_tek_devices_df.sample_date) + datetime.timedelta(1)
new_tek_devices_df["extraction_date"] = pd.to_datetime(new_tek_devices_df.extraction_date)
new_tek_devices_df = new_tek_devices_df[
new_tek_devices_df.new_sample_extraction_date == new_tek_devices_df.extraction_date]
new_tek_devices_df.head()
new_tek_devices_df.set_index("extraction_date", inplace=True)
new_tek_devices_df = new_tek_devices_df.tek_list.apply(lambda x: len(set(x))).to_frame()
new_tek_devices_df.reset_index(inplace=True)
new_tek_devices_df.rename(columns={
"extraction_date": "sample_date_string",
"tek_list": "new_tek_devices"}, inplace=True)
new_tek_devices_df["sample_date_string"] = new_tek_devices_df.sample_date_string.dt.strftime("%Y-%m-%d")
new_tek_devices_df.head()
```
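The daily new-TEK count above comes from a set difference between consecutive extraction dates. A minimal sketch of that idea, with hypothetical keys:

```python
# Hypothetical TEK sets observed on two consecutive extraction dates.
daily_teks = {
    "2020-09-01": {"tek-a", "tek-b"},
    "2020-09-02": {"tek-a", "tek-b", "tek-c", "tek-d"},
}

new_tek_counts = {}
previous = None
for date in sorted(daily_teks):
    current = daily_teks[date]
    # No baseline for the first date; afterwards, count keys not seen the day before.
    new_tek_counts[date] = None if previous is None else len(current - previous)
    previous = current

print(new_tek_counts)  # {'2020-09-01': None, '2020-09-02': 2}
```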
### Hourly New TEKs
```
hourly_extracted_teks_df = load_extracted_teks(mode="Hourly", limit=24)
hourly_extracted_teks_df.head()
hourly_tek_list_df = hourly_extracted_teks_df.groupby("extraction_date_with_hour").tek_list.apply(
lambda x: set(sum(x, []))).reset_index()
hourly_tek_list_df = hourly_tek_list_df.set_index("extraction_date_with_hour").sort_index(ascending=True)
hourly_new_tek_df = hourly_tek_list_df.diff().tek_list.apply(
lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()
hourly_new_tek_df.rename(columns={
"tek_list": "new_tek_count"}, inplace=True)
hourly_new_tek_df.tail()
hourly_new_tek_devices_df = hourly_extracted_teks_df.copy()
hourly_new_tek_devices_df["new_sample_extraction_date"] = \
pd.to_datetime(hourly_new_tek_devices_df.sample_date) + datetime.timedelta(1)
hourly_new_tek_devices_df["extraction_date"] = pd.to_datetime(hourly_new_tek_devices_df.extraction_date)
hourly_new_tek_devices_df = hourly_new_tek_devices_df[
hourly_new_tek_devices_df.new_sample_extraction_date == hourly_new_tek_devices_df.extraction_date]
hourly_new_tek_devices_df.set_index("extraction_date_with_hour", inplace=True)
hourly_new_tek_devices_df_ = pd.DataFrame()
for i, chunk_df in hourly_new_tek_devices_df.groupby("extraction_date"):
    chunk_df = chunk_df.copy()
    chunk_df.sort_index(inplace=True)
    chunk_tek_count_df = chunk_df.tek_list.apply(lambda x: len(set(x)))
    chunk_df = chunk_tek_count_df.diff().fillna(chunk_tek_count_df).to_frame()
    hourly_new_tek_devices_df_ = hourly_new_tek_devices_df_.append(chunk_df)
hourly_new_tek_devices_df = hourly_new_tek_devices_df_
hourly_new_tek_devices_df.reset_index(inplace=True)
hourly_new_tek_devices_df.rename(columns={
"tek_list": "new_tek_devices"}, inplace=True)
hourly_new_tek_devices_df.tail()
hourly_summary_df = hourly_new_tek_df.merge(
hourly_new_tek_devices_df, on=["extraction_date_with_hour"], how="outer")
hourly_summary_df["datetime_utc"] = pd.to_datetime(
hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H")
hourly_summary_df.set_index("datetime_utc", inplace=True)
hourly_summary_df.tail()
```
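Inside each extraction day, the cell above turns hourly cumulative unique-device counts into per-hour increments: a `diff()` whose first (undefined) value is filled back in from the cumulative count. Sketched with a plain list and hypothetical counts:

```python
# Hypothetical cumulative unique-device counts for the hours of one extraction day.
cumulative = [3, 5, 5, 9]

# Per-hour increments: the first hour keeps its cumulative value,
# every later hour gets the difference to the previous hour.
increments = [cumulative[0]] + [b - a for a, b in zip(cumulative, cumulative[1:])]

print(increments)  # [3, 2, 0, 4]
```

The increments sum back to the final cumulative count, which is a handy sanity check.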
### Data Merge
```
result_summary_df = exposure_keys_summary_df.merge(new_tek_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(new_tek_devices_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(confirmed_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df["tek_count_per_new_case"] = \
result_summary_df.tek_count / result_summary_df.rolling_mean_new_cases
result_summary_df["new_tek_count_per_new_case"] = \
result_summary_df.new_tek_count / result_summary_df.rolling_mean_new_cases
result_summary_df["new_tek_devices_per_new_case"] = \
result_summary_df.new_tek_devices / result_summary_df.rolling_mean_new_cases
result_summary_df["new_tek_count_per_new_tek_device"] = \
result_summary_df.new_tek_count / result_summary_df.new_tek_devices
result_summary_df.head()
result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string)
result_summary_df.set_index("sample_date", inplace=True)
result_summary_df = result_summary_df.sort_index(ascending=False)
```
## Report Results
### Summary Table
```
result_summary_df_ = result_summary_df.copy()
result_summary_df = result_summary_df[[
"tek_count",
"new_tek_count",
"new_cases",
"rolling_mean_new_cases",
"tek_count_per_new_case",
"new_tek_count_per_new_case",
"new_tek_devices",
"new_tek_devices_per_new_case",
"new_tek_count_per_new_tek_device"]]
result_summary_df
```
### Summary Plots
```
summary_ax_list = result_summary_df[[
"rolling_mean_new_cases",
"tek_count",
"new_tek_count",
"new_tek_devices",
"new_tek_count_per_new_tek_device",
"new_tek_devices_per_new_case"
]].sort_index(ascending=True).plot.bar(
title="Summary", rot=45, subplots=True, figsize=(15, 22))
summary_ax_list[-1].yaxis.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))
```
### Hourly Summary Plots
```
hourly_summary_ax_list = hourly_summary_df.plot.bar(
title="Last 24h Summary", rot=45, subplots=True)
```
### Publish Results
```
def get_temporary_image_path() -> str:
    return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png")

def save_temporary_plot_image(ax):
    if isinstance(ax, np.ndarray):
        ax = ax[0]
    media_path = get_temporary_image_path()
    ax.get_figure().savefig(media_path)
    return media_path

def save_temporary_dataframe_image(df):
    media_path = get_temporary_image_path()
    dfi.export(df, media_path)
    return media_path
summary_plots_image_path = save_temporary_plot_image(ax=summary_ax_list)
summary_table_image_path = save_temporary_dataframe_image(df=result_summary_df)
hourly_summary_plots_image_path = save_temporary_plot_image(ax=hourly_summary_ax_list)
```
### Save Results
```
report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-"
result_summary_df.to_csv(report_resources_path_prefix + "Summary-Table.csv")
result_summary_df.to_html(report_resources_path_prefix + "Summary-Table.html")
_ = shutil.copyfile(summary_plots_image_path, report_resources_path_prefix + "Summary-Plots.png")
_ = shutil.copyfile(summary_table_image_path, report_resources_path_prefix + "Summary-Table.png")
_ = shutil.copyfile(hourly_summary_plots_image_path, report_resources_path_prefix + "Hourly-Summary-Plots.png")
report_daily_url_pattern = \
"https://github.com/pvieito/RadarCOVID-Report/blob/master/Notebooks/" \
"RadarCOVID-Report/{report_type}/RadarCOVID-Report-{report_date}.ipynb"
report_daily_url = report_daily_url_pattern.format(
report_type="Daily", report_date=extraction_date)
report_hourly_url = report_daily_url_pattern.format(
report_type="Hourly", report_date=extraction_date_with_hour)
```
### Publish on README
```
with open("Data/Templates/README.md", "r") as f:
    readme_contents = f.read()
summary_table_html = result_summary_df.to_html()
readme_contents = readme_contents.format(
    summary_table_html=summary_table_html,
    report_url_with_hour=report_hourly_url,
    extraction_date_with_hour=extraction_date_with_hour)
with open("README.md", "w") as f:
    f.write(readme_contents)
```
### Publish on Twitter
```
enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER")
github_event_name = os.environ.get("GITHUB_EVENT_NAME")
if enable_share_to_twitter and github_event_name == "schedule":
    import tweepy

    twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"]
    twitter_api_auth_keys = twitter_api_auth_keys.split(":")
    auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1])
    auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3])
    api = tweepy.API(auth)

    summary_plots_media = api.media_upload(summary_plots_image_path)
    summary_table_media = api.media_upload(summary_table_image_path)
    hourly_summary_plots_media = api.media_upload(hourly_summary_plots_image_path)
    media_ids = [
        summary_plots_media.media_id,
        summary_table_media.media_id,
        hourly_summary_plots_media.media_id,
    ]

    extraction_date_result_summary_df = \
        result_summary_df[result_summary_df.index == extraction_date]
    extraction_date_result_hourly_summary_df = \
        hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour]
    new_teks = extraction_date_result_summary_df.new_tek_count.sum().astype(int)
    new_teks_last_hour = extraction_date_result_hourly_summary_df.new_tek_count.sum().astype(int)
    new_devices = extraction_date_result_summary_df.new_tek_devices.sum().astype(int)
    new_devices_last_hour = extraction_date_result_hourly_summary_df.new_tek_devices.sum().astype(int)
    new_tek_count_per_new_tek_device = \
        extraction_date_result_summary_df.new_tek_count_per_new_tek_device.sum()
    new_tek_devices_per_new_case = \
        extraction_date_result_summary_df.new_tek_devices_per_new_case.sum()

    status = textwrap.dedent(f"""
        Report Update – {extraction_date_with_hour}
        #ExposureNotification #RadarCOVID
        Shared Diagnoses Day Summary:
        - New TEKs: {new_teks} ({new_teks_last_hour:+d} last hour)
        - New Devices: {new_devices} ({new_devices_last_hour:+d} last hour, {new_tek_count_per_new_tek_device:.2} TEKs/device)
        - Usage Ratio: {new_tek_devices_per_new_case:.2%} devices/case
        Report Link: {report_hourly_url}
        """)
    status = status.encode(encoding="utf-8")
    api.update_status(status=status, media_ids=media_ids)
```
```
import tensorflow as tf
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cv2
print("Tensorflow Version:",tf.__version__)
```
### INDEX
[1.- LOAD DATA USING IMAGE GENERATORS](#1)
[2.- Create a CNN From Scratch](#2)
<a id='1'></a>
## 1.- LOAD DATA USING IMAGE GENERATORS
```
### Initiate an instance of ImageDataGenerator ###
## More in Data Augmentation:
## https://towardsdatascience.com/exploring-image-data-augmentation-with-keras-and-tensorflow-a8162d89b844
path = "Data_set_eye" # Dataset Path
###################################### Create Data Generator ##############################################################
image_generator = tf.keras.preprocessing.image.ImageDataGenerator(validation_split=0.2, # Split for Test/Validation
height_shift_range=0.2, # Height Shift
brightness_range=(0.5, 1.), # Brightness
rescale=0.9, # Rescale
)
############################################## TRAINING DATASET ##############################################################
train_dataset = image_generator.flow_from_directory(batch_size=32, # Batch Size
directory=path, # Directory
shuffle=True, # Shuffle images
target_size=(100, 100), # Resize to 100x100
color_mode="rgb", # Set RGB as default
subset="training", # Set Subset to Training
class_mode='categorical' # Set Data to Categorical
)
############################################## TESTING DATASET ##############################################################
validation_dataset = image_generator.flow_from_directory(batch_size=32, # Batch Size
directory=path, # Directory
shuffle=True, # Shuffle images
target_size=(100, 100), # Resize to 100x100
subset="validation", # Set Subset to Validation
color_mode="rgb", # Set RGB as default
class_mode='categorical') # Set Data to Categorical
```
### 1.1.- Calculate Steps that have to be taken every epoch
```
val_steps = validation_dataset.n // validation_dataset.batch_size   # Steps in an epoch for Validation Data
train_steps = train_dataset.n // train_dataset.batch_size           # Steps in an epoch for Training Data
###################################### INFORM THE USER ABOUT THE STEPS #####################################################
print(f"Train steps per epoch: {train_steps}")            # Steps in an epoch for Training Data
print(f"Validation steps per epoch: {val_steps}")         # Steps in an epoch for Validation Data
```
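The floor division drops the final partial batch. For instance, with a hypothetical 800 training images and 160 validation images at batch size 32:

```python
# Hypothetical dataset sizes; floor division drops the final partial batch.
n_train, n_val, batch = 800, 160, 32

example_train_steps = n_train // batch  # 25
example_val_steps = n_val // batch      # 5

print(example_train_steps, example_val_steps)  # 25 5
```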
### 1.2.- Get the labels for each class
```
#### All the labels are stored in Lables.txt file ######
path = "Data_set_eye/Labels.txt" # Path for Label txt file
with open(path,"r") as handler:                # Open txt file
    labels = handler.read().splitlines()       # Create a list based on every new line
print(labels) # Show the labels
```
<br>
<br>
<a id='2'></a>
# 2.- Create a CNN From Scratch
```
def get_new_model(rate=0.5):
    """
    Convolutional Neural Network with Dropout
    """
    ############################### NEURAL NETWORK ARCHITECTURE ############################################
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(100, 100, 3)))
    model.add(tf.keras.layers.Conv2D(filters=16,kernel_size=(3,3),activation="relu",padding="same",name="conv_1"))
    model.add(tf.keras.layers.Dropout(rate))
    model.add(tf.keras.layers.Conv2D(filters=16,kernel_size=(3,3),activation="relu",padding="same",name="conv_2"))
    model.add(tf.keras.layers.Dropout(rate))
    model.add(tf.keras.layers.Conv2D(filters=8,kernel_size=(3,3),activation="relu",padding="same",name="conv_3"))
    model.add(tf.keras.layers.Dropout(rate))
    model.add(tf.keras.layers.MaxPooling2D(pool_size=(8,8),name="pool_1"))
    model.add(tf.keras.layers.Flatten(name="flatten"))
    model.add(tf.keras.layers.Dense(units=64,activation="relu",name="dense_1"))
    model.add(tf.keras.layers.Dense(units=64,activation="relu",name="dense_2"))
    model.add(tf.keras.layers.Dense(units=64,activation="relu",name="dense_3"))
    model.add(tf.keras.layers.Dense(units=4,activation="softmax",name="dense_4"))
    ########################### Compilation of CNN ########################################################
    model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
    return model
def get_test_accuracy(model, data, steps, **kwargs):
    """Test model classification accuracy"""
    test_loss, test_acc = model.evaluate(data, steps=steps, **kwargs)
    print('accuracy: {acc:0.3f}'.format(acc=test_acc))
def get_checkpoint_best_only():
    """
    - saves only the weights that generate the highest validation (testing) accuracy
    """
    path = r'C:\Users\Eduardo\Documents\CARRERA\8vo_semestre\BIO_4\Lab\3_Silla_de_ruedas\Python\Weights_eyes\weights'  # path to save model
    checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath=path, monitor='val_accuracy', mode='max',
                                                    save_best_only=True, save_weights_only=True, verbose=2)
    return checkpoint
def get_early_stopping():
    """
    This function should return an EarlyStopping callback that stops training when
    the validation (testing) accuracy has not improved in the last 5 epochs.
    EarlyStopping callback with the correct 'monitor' and 'patience'
    """
    return tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', min_delta=0.01, patience=5, mode="max")
def plot_learning(history):
    """PLOT LEARNING CURVES"""
    figure, ax = plt.subplots(1, 2, figsize=(15, 6))                          # Create Figure
    ax[0].set_title("Loss Vs Epochs")                                         # Set Title
    ax[0].plot(history.history['loss'], label="Training Loss")                # Plot Training Loss
    ax[0].plot(history.history['val_loss'], label="Validation Loss")          # Plot Validation Loss
    ax[0].legend()                                                            # Print Labels in plot
    ax[1].set_title("Accuracy Vs Epochs")                                     # Set Title
    ax[1].plot(history.history['accuracy'], label="Training Accuracy")        # Plot Training Accuracy
    ax[1].plot(history.history['val_accuracy'], label="Validation Accuracy")  # Plot Validation Accuracy
    ax[1].legend()                                                            # Print Labels in plot
    plt.show()                                                                # Show plot
    ## THERE IS NOTHING TO RETURN ##

model = get_new_model()                                # Initiate Model
get_test_accuracy(model, validation_dataset, val_steps)  # Test initial accuracy (without training)
model.summary()                                        # Get Model Architecture
```
### 2.1 Train Model
```
checkpoint_best_only = get_checkpoint_best_only() # Get best only save
early_stopping = get_early_stopping() # Get Early stopping
callbacks = [checkpoint_best_only, early_stopping] # Put callbacks in a list
### Train model using the callbacks ##
history = model.fit(train_dataset, # Data generator for Training
steps_per_epoch =train_steps, # Steps in an epoch of Training Data
validation_data = validation_dataset, # Data Generator for Validation
validation_steps=val_steps, # Steps in an epoch of Validation Data
epochs=40,callbacks=callbacks # Callbacks
)
plot_learning(history) # Plot learning curves at the end
img = cv2.imread('Data_Set/Back/Back112.jpg')  # Get Back (overwritten below)
img = cv2.imread('Data_Set/Right/Right68.jpg') # Get Right (overwritten below)
img = cv2.imread('Data_Set/Left/Left59.jpg')   # Get Left (only this one is used)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB )
prediction = model.predict(img[np.newaxis,...]) # Make Prediction
y_predict = np.argmax(prediction) # Get Maximum Probability
print(labels[y_predict])
```
### 2.2 Test Model in Video
```
## This is just an example to illustrate how to use Haar Cascades in order to detect objects (LIVE) ##
face = cv2.CascadeClassifier('Haarcascade/haarcascade_frontalface_default.xml')  # Face Haar Cascade loading
eye = cv2.CascadeClassifier('Haarcascade/haarcascade_eye.xml')                   # Eye Haar Cascade loading
vid = cv2.VideoCapture(0)     # Define a video capture object
status = True                 # Initialize status
width = 100                   # Width
height = 100                  # Height
dimensions = (width, height)  # Dimensions
font = cv2.FONT_HERSHEY_SIMPLEX
while status:
    status, frame = vid.read()                              # Capture the video frame by frame
    frame2 = np.copy(frame)                                 # Copy frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)          # Convert to gray scale
    face_info = face.detectMultiScale(gray, 1.3, 5)         # Get face information
    if len(face_info) > 0:                                  # If a face was captured
        (x, y, w, h) = face_info[0]                         # Unpack information
        cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 255, 0), 1)  # Draw rectangle
    eye_info = eye.detectMultiScale(gray)                   # Eye info
    if len(eye_info) > 0:
        (x, y, w, h) = eye_info[0]                          # Unpack information
        cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 255), 1)
        cropped_face_color = frame2[y:y+h, x:x+w]           # Crop eye region (color)
        res = cv2.resize(cropped_face_color, dimensions, interpolation=cv2.INTER_AREA)  # Resize
        res = cv2.cvtColor(res, cv2.COLOR_BGR2RGB)          # Convert to RGB
        prediction = model.predict(res[np.newaxis, ...])    # Make prediction
        y_predict = np.argmax(prediction)                   # Get maximum probability
        y_prediction_text = labels[y_predict]               # Get text of prediction
        cv2.putText(frame, y_prediction_text, (20, 20), font, 1, (255, 255, 0), 2)
    cv2.imshow('frame', frame)                              # Display the resulting frame
    wait_key = cv2.waitKey(1) & 0xFF                        # Store waitKey result
    if wait_key == ord('q'):                                # If q is pressed
        break                                               # Break while loop
vid.release()                                               # After the loop release the cap object
cv2.destroyAllWindows()                                     # Destroy all the windows
```
<br>
<br>
<a id='3'></a>
# 3.- Use Transfer Learning

```
from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input
image_size = [100,100,3] # Add image size we wish to train Data with
### Instantiate VGG16 ###
vgg = VGG16(input_shape=image_size, # Input Shape
weights='imagenet', # Dataset used to train weights
include_top=False # Do not include Top
)
##### FREEZE THE FIRST LAYERS (make them untrainable) ###
maximum = 7
i = 0
for layer in vgg.layers:         # Iterate over layers
    if i < maximum:              # If layer index is less than the one we specified
        layer.trainable = False  # Make layer untrainable
    i += 1
vgg.layers # Print VGG Layers
vgg.layers[6].trainable
x = Flatten()(vgg.output)
prediction = Dense(4,activation="softmax")(x)
model2 = tf.keras.models.Model(inputs=vgg.input,outputs=prediction)
model2.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
model2.summary()
#history = model2.fit(train_dataset,steps_per_epoch=train_steps,epochs=4)
history = model2.fit(train_dataset, # Data generator for Training
steps_per_epoch =train_steps, # Steps in an epoch of Training Data
validation_data = validation_dataset, # Data Generator for Validation
validation_steps=val_steps, # Steps in an epoch of Validation Data
epochs=10)
plot_learning(history)
## This is just an example to illustrate how to use Haar Cascades in order to detect objects (LIVE) ##
face = cv2.CascadeClassifier('Haarcascade/haarcascade_frontalface_default.xml')  # Face Haar Cascade loading
eye = cv2.CascadeClassifier('Haarcascade/haarcascade_eye.xml')                   # Eye Haar Cascade loading
vid = cv2.VideoCapture(0)     # Define a video capture object
status = True                 # Initialize status
width = 100                   # Width
height = 100                  # Height
dimensions = (width, height)  # Dimensions
font = cv2.FONT_HERSHEY_SIMPLEX
while status:                                               # Iterate while status is true
    status, frame = vid.read()                              # Capture the video frame by frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)          # Convert to gray scale
    face_info = face.detectMultiScale(gray, 1.3, 5)         # Get face information
    for (x, y, w, h) in face_info:                          # Iterate over this information
        cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 255, 0), 1)  # Draw rectangle
        cropped_face_color = frame[y:y+h, x:x+w]            # Crop face (color)
    if len(face_info) > 0:                                  # If a face was captured
        res = cv2.resize(cropped_face_color, dimensions, interpolation=cv2.INTER_AREA)  # Resize
        prediction = model2.predict(res[np.newaxis, ...])   # Make prediction
        y_predict = np.argmax(prediction)                   # Get maximum probability
        y_prediction_text = labels[y_predict]               # Get text of prediction
        cv2.putText(frame, y_prediction_text, (20, 20), font, 1, (255, 255, 0), 2)
    cv2.imshow('frame', frame)                              # Display the resulting frame
    wait_key = cv2.waitKey(1) & 0xFF                        # Store waitKey result
    if wait_key == ord('q'):                                # If q is pressed
        break                                               # Break while loop
vid.release()                                               # After the loop release the cap object
cv2.destroyAllWindows()                                     # Destroy all the windows
```
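The counter-based freezing loop in the cell above can be written as a slice over the first `maximum` layers. A self-contained sketch with stub layer objects (in the real notebook you would slice `vgg.layers` instead):

```python
# Stub standing in for Keras layers, which expose a mutable `trainable` flag.
class StubLayer:
    def __init__(self, name):
        self.name = name
        self.trainable = True

layers = [StubLayer(f"layer_{i}") for i in range(10)]
maximum = 7

# Freeze the first `maximum` layers with a slice instead of a manual counter.
for layer in layers[:maximum]:
    layer.trainable = False

print([layer.trainable for layer in layers])
# [False, False, False, False, False, False, False, True, True, True]
```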
<img align="right" src="images/tf-small.png" width="128"/>
<img align="right" src="images/phblogo.png" width="128"/>
<img align="right" src="images/dans.png"/>
---
Start with [convert](https://nbviewer.jupyter.org/github/annotation/banks/blob/master/programs/convert.ipynb)
---
# Getting data from online repos
We show the various automatic ways by which you can get data that is out there on GitHub to your computer.
The workhorse is the function `checkoutRepo()` in `tf.advanced.repo`.
Text-Fabric uses this function for all operations where data flows from GitHub to your computer.
There are quite a few options; here we explain all the values of the `checkout` option, i.e. how
data is selected from the history.
See also the [documentation](https://annotation.github.io/text-fabric/tf/advanced/repo.html).
```
%load_ext autoreload
%autoreload 2
```
## Leading example
We use markdown display from IPython purely for presentation.
It is not needed to run `checkoutRepo()`.
```
from tf.advanced.helpers import dm
from tf.advanced.repo import checkoutRepo
```
We work with our tiny example TF app: `banks`.
```
ORG = "annotation"
REPO = "banks"
MAIN = "tf"
MOD = "sim/tf"
```
`MAIN` points to the main data, `MOD` points to a module of data: the similarity feature.
## Presenting the results
The function `do()` just formats the results of a `checkoutRepo()` run.
The result of such a run, after the progress messages, is a tuple.
For the explanation of the tuple, read the [docs](https://annotation.github.io/text-fabric/tf/advanced/repo.html).
```
def do(task):
    md = f"""
commit | release | local | base | subdir
--- | --- | --- | --- | ---
`{task[0]}` | `{task[1]}` | `{task[2]}` | `{task[3]}` | `{task[4]}`
"""
    dm(md)
```
## All the checkout options
We discuss the meaning and effects of the values you can pass to the `checkout` option.
### `clone`
> Look whether the appropriate folder exists under your `~/github` directory.
This is merely a check whether your data exists in the expected location.
* No online checks take place.
* No data is moved or copied.
**NB**: you cannot select releases and commits in your *local* GitHub clone.
The data will be used as it is found on your file system.
**When to use**
> If you are developing new feature data.
When you develop your data in a repository, your development is private as long as you
do not push to GitHub.
You can test your data, even without locally committing your data.
But, if you are ready to share your data, everything is in place, and you only
have to commit and push, and pass the location on github to others, like
```
myorg/myrepo/subfolder
```
```
do(checkoutRepo(org=ORG, repo=REPO, folder=MAIN, version="0.2", checkout="clone"))
```
We show what happens if you do not have a local github clone in `~/github`.
```
%%sh
mv ~/github/annotation/banks/tf ~/github/annotation/banks/tfxxx
do(checkoutRepo(org=ORG, repo=REPO, folder=MAIN, version="0.2", checkout="clone"))
```
Note that no attempt is made to retrieve online data.
```
%%sh
mv ~/github/annotation/banks/tfxxx ~/github/annotation/banks/tf
```
### `local`
> Look whether the appropriate folder exists under your `~/text-fabric-data` directory.
This is merely a check whether your data exists in the expected location.
* No online checks take place.
* No data is moved or copied.
**When to use**
> If you are using data created and shared by others, and if the data
is already on your system.
You can be sure that no updates are downloaded, and that everything works the same as the last time
you ran your program.
If you do not already have the data, you have to pass `latest` or `hot` or `''` which will be discussed below.
```
do(checkoutRepo(org=ORG, repo=REPO, folder=MAIN, version="0.2", checkout="local"))
```
You see this data because earlier I downloaded release `v2.0`, which is a tag for
the commit with hash `9713e71c18fd296cf1860d6411312f9127710ba7`.
If you do not have any corresponding data in your `~/text-fabric-data`, you get this:
```
%%sh
mv ~/text-fabric-data/annotation/banks/tf ~/text-fabric-data/annotation/banks/tfxxx
do(checkoutRepo(org=ORG, repo=REPO, folder=MAIN, version="0.2", checkout="local"))
%%sh
mv ~/text-fabric-data/annotation/banks/tfxxx ~/text-fabric-data/annotation/banks/tf
```
### `''` (default)
This is about when you omit the `checkout` parameter, or pass `''` to it.
The destination for local data is your `~/text-fabric-data` folder.
If you have already a local copy of the data, that will be used.
If not, the latest online copy will be downloaded.
> Note that if your local data is outdated, no new data will be downloaded.
You need `latest` or `hot` for that.
But what is the latest online copy? In this case we mean:
* the latest *release*, and from that release an appropriate attached zip file
* but if there is no such zip file, we take the files from the corresponding commit
* but if there is no release at all, we take the files from the *latest commit*.
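In pseudocode, this fallback order looks roughly like the sketch below; `releases` and `commits` are hypothetical newest-first lists, and the field names are illustrative, not the actual Text-Fabric internals:

```python
def resolve_latest(releases, commits):
    """Hypothetical sketch of the download source chosen by checkout=''."""
    if releases:
        newest = releases[0]
        if newest.get("zip_assets"):
            return ("release-zip", newest["tag"])    # zip attached to the release
        return ("release-commit", newest["commit"])  # files at the release's commit
    return ("latest-commit", commits[0])             # no releases at all

# Example: one release without an attached zip file.
print(resolve_latest(
    [{"tag": "v2.0", "commit": "9713e71", "zip_assets": []}],
    ["9713e71"],
))  # ('release-commit', '9713e71')
```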
**When to use**
> If you need data created/shared by other people and you want to be sure that you always have the
same copy that you initially downloaded.
* If the data provider makes releases after important modifications, you will get those.
* If the data provider is experimenting after the latest release, and commits them to GitHub,
you do not get those.
However, with `hot`, you *can* get the latest commit, as discussed below.
```
do(checkoutRepo(org=ORG, repo=REPO, folder=MAIN, version="0.2", checkout=""))
```
Note that no data has been downloaded, because it has detected that there is already local data on your computer.
If you do not have any checkout of this data on your computer, the data will be downloaded.
```
%%sh
rm -rf ~/text-fabric-data/annotation/banks/tf
do(checkoutRepo(org=ORG, repo=REPO, folder=MAIN, version="0.2", checkout=""))
```
#### Note about versions and releases
The **version** of the data is not necessarily the same concept as the **release** of it.
It is possible to keep the versions and the releases strictly parallel,
but in text conversion workflows it can be handy to make a distinction between them,
e.g. as follows:
> the version is a property of the input data
> the release is a property of the output data
When you create data from sources using conversion algorithms,
you want to increase the version if you get new input data, e.g. as a result of corrections
made by the author.
But if you modify your conversion algorithm, while still running it on the same input data,
you may release the new output data as a **new release** of the **same version**.
Likewise, when the input data stays the same, but you have corrected typos in the metadata,
you can make a **new release** of the **same version** of the data.
The conversion delivers the features under a specific version,
and Text-Fabric supports those versions: users of TF can select the version they work with.
Releases are made in the version control system (git and GitHub).
The part of Text-Fabric that auto-downloads data is aware of releases.
But once the data has been downloaded in place, there is no machinery in Text-Fabric to handle
different releases.
Yet the release tag and commit hash are passed on to the point where it comes to recording
the provenance of the data.
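A minimal sketch of what such a provenance record could look like (the field names here are illustrative, not Text-Fabric's real structure):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Provenance:
    org: str
    repo: str
    version: str   # data version, chosen by the conversion workflow
    release: str   # release tag in the version control system, "" if none
    commit: str    # commit hash the data was taken from

    def label(self):
        # prefer the release tag; fall back to an abbreviated commit hash
        ref = self.release or self.commit[:7]
        return f"{self.org}/{self.repo} {self.version} ({ref})"
```

The point is that the version and the release/commit are recorded side by side, so a dataset can be cited precisely even when releases and versions do not move in lockstep.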
#### Download a different version
We download version `0.1` of the data.
```
do(checkoutRepo(org=ORG, repo=REPO, folder=MAIN, version="0.1", checkout=""))
```
Several observations:
* we obtained the older version from the *latest* release, which is still release `v2.0`
* the download looks different from when we downloaded version `0.2`;
this is because the data producer has zipped the `0.2` data and has attached it to release `v2.0`,
but he forgot, or deliberately refused, to attach version `0.1` to that release;
so it has been retrieved directly from the files in the corresponding commit, which is
`9713e71c18fd296cf1860d6411312f9127710ba7`.
For the verification, an online check is needed. The verification consists of checking the release tag and/or commit hash.
If there is no online connection, you get this:
```
%%sh
networksetup -setairportpower en0 off
```
```
do(checkoutRepo(org=ORG, repo=REPO, folder=MAIN, version="0.1", checkout="latest"))
```
or if you do not have local data:
```
%%sh
mv ~/text-fabric-data/annotation/banks/tf/0.1 ~/text-fabric-data/annotation/banks/tf/0.1xxx
```
```
do(checkoutRepo(org=ORG, repo=REPO, folder=MAIN, version="0.1", checkout="latest"))
```
```
%%sh
mv ~/text-fabric-data/annotation/banks/tf/0.1xxx ~/text-fabric-data/annotation/banks/tf/0.1
```
```
%%sh
networksetup -setairportpower en0 on
```
### `latest`
> The latest online release will be identified,
and if you do not have that copy locally, it will be downloaded.
**When to use**
> If you need data created/shared by other people and you want to be sure that you always have the
latest *stable* version of that data; unreleased data is not good enough.
One of the differences with `checkout=''` is that if there are no releases, you will not get data.
```
do(checkoutRepo(org=ORG, repo=REPO, folder=MAIN, version="0.2", checkout="latest"))
```
There is no sim/tf data in any release commit, so if we look it up, it should fail.
```
do(checkoutRepo(org=ORG, repo=REPO, folder=MOD, version="0.2", checkout="latest"))
```
But with `checkout=''` it will only be found if you do not have local data already:
```
do(checkoutRepo(org=ORG, repo=REPO, folder=MOD, version="0.2", checkout=""))
```
In that case there is only one way: `hot`:
```
do(checkoutRepo(org=ORG, repo=REPO, folder=MOD, version="0.2", checkout="hot"))
```
### `hot`
> The latest online commit will be identified,
and if you do not have that copy locally, it will be downloaded.
**When to use**
> If you need data created/shared by other people and you want to be sure that you always have the
latest version of that data, whether released or not.
The difference with `checkout=''` is that if there are releases,
you will now get data that may be newer than the latest release.
```
do(checkoutRepo(org=ORG, repo=REPO, folder=MAIN, version="0.2", checkout="hot"))
```
Observe that data has been downloaded, and that we have now data corresponding to a different commit hash,
and not corresponding to a release.
If we now ask for the latest *stable* data, the data will be downloaded anew.
```
do(checkoutRepo(org=ORG, repo=REPO, folder=MAIN, version="0.2", checkout="latest"))
```
### `v1.0` a specific release
> Look for a specific online release to get data from.
**When to use**
> When you want to replicate something, and need data from an earlier point in the history.
```
do(checkoutRepo(org=ORG, repo=REPO, folder=MAIN, version="0.1", checkout="v1.0"))
```
We might try to get version `0.2` from this release.
```
do(checkoutRepo(org=ORG, repo=REPO, folder=MAIN, version="0.2", checkout="v1.0"))
```
At that early point in the history there is not yet a version `0.2` of the data.
### `a81746c` a specific commit
> Look for a specific online commit to get data from.
**When to use**
> When you want to replicate something, and need data from an earlier point in the history, and there is no
release for that commit.
```
do(
checkoutRepo(
org=ORG,
repo=REPO,
folder=MAIN,
version="0.1",
checkout="a81746c5f9627637db4dae04c2d5348bda9e511a",
)
)
```
## *source* and *dest*: an alternative for `~/github` and `~/text-fabric-data`
Everything so far uses the hard-wired `~/github` and `~/text-fabric-data` directories.
But you can change that:
* pass *source* as a replacement for `~/github`.
* pass *dest* as a replacement for `~/text-fabric-data`.
**When to use**
> if you do not want to interfere with the `~/text-fabric-data` directory.
Text-Fabric manages the `~/text-fabric-data` directory,
and if you are experimenting outside Text-Fabric
you may not want to touch its data directory.
> if you want to clone data into your `~/github` directory.
Normally, TF uses your `~/github` directory as a source of information,
and never writes into it.
But if you explicitly pass `dest=~/github`, things change: downloads will
arrive under `~/github`. Use this with care.
> if you work with cloned data outside your `~/github` directory,
you can let the system look in *source* instead of `~/github`.
We customize source and destination directories:
* we put them both under `~/Downloads`
* we give them different names
```
MY_GH = "~/Downloads/repoclones"
MY_TFD = "~/Downloads/textbase"
```
Download a fresh copy of the data to `~/Downloads/textbase` instead.
```
do(
checkoutRepo(
org=ORG,
repo=REPO,
folder=MAIN,
version="0.2",
checkout="",
source=MY_GH,
dest=MY_TFD,
)
)
```
Look up the same data locally.
```
do(
checkoutRepo(
org=ORG,
repo=REPO,
folder=MAIN,
version="0.2",
checkout="",
source=MY_GH,
dest=MY_TFD,
)
)
```
We copy the local github data to the custom location:
```
%%sh
mkdir -p ~/Downloads/repoclones/annotation
cp -R ~/github/annotation/banks ~/Downloads/repoclones/annotation/banks
```
Look up the data in this alternative directory.
```
do(
checkoutRepo(
org=ORG,
repo=REPO,
folder=MAIN,
version="0.2",
checkout="clone",
source=MY_GH,
dest=MY_TFD,
)
)
```
Note that the directory trees under the customised *source* and *dest* locations have exactly the same shape as before.
## Conclusion
With the help of `checkoutRepo()` you will be able to make local copies of online data in an organized way.
This will help you when you
* use other people's data
* develop your own data
* share and publish your data
* go back in history.
---
All chapters:
* [use](use.ipynb)
* [share](share.ipynb)
* [app](app.ipynb)
* *repo*
* [compose](compose.ipynb)
---
```
#hide
from fastscript.core import *
```
# fastscript
> A fast way to turn your python function into a script.
Part of [fast.ai](https://www.fast.ai)'s toolkit for delightful developer experiences. Written by Jeremy Howard.
## Install
`pip install fastscript`
## Overview
Sometimes, you want to create a quick script, either for yourself, or for others. But in Python, that involves a whole lot of boilerplate and ceremony, especially if you want to support command line arguments, provide help, and other niceties. You can use [argparse](https://docs.python.org/3/library/argparse.html) for this purpose, which comes with Python, but it's complex and verbose.
`fastscript` makes life easier. There are much fancier modules to help you write scripts (we recommend [Python Fire](https://github.com/google/python-fire), and [Click](https://click.palletsprojects.com/en/7.x/) is also popular), but fastscript is very fast and very simple. In fact, it's <50 lines of code! Basically, it's just a little wrapper around `argparse` that uses modern Python features and some thoughtful defaults to get rid of the boilerplate.
## Example
Here's a complete example - it's provided in the fastscript repo as `examples/test_fastscript.py`:
```python
from fastscript import *
@call_parse
def main(msg:Param("The message", str),
upper:Param("Convert to uppercase?", bool_arg)=False):
print(msg.upper() if upper else msg)
```
When you run this script, you'll see:
```
$ python examples/test_fastscript.py
usage: test_fastscript.py [-h] [--upper UPPER] msg
test_fastscript.py: error: the following arguments are required: msg
```
As you see, we didn't need any `if __name__ == "__main__"`, we didn't have to parse arguments, we just wrote a function, added a decorator to it, and added some annotations to our function's parameters. As a bonus, we can also use this function directly from a REPL such as Jupyter Notebook - it's not just for command line scripts!
## Param
Each parameter in your function should have an annotation `Param(...)` (as in the example above). You can pass the following when calling `Param`: `help`,`type`,`opt`,`action`,`nargs`,`const`,`choices`,`required` . Except for `opt`, all of these are just passed directly to `argparse`, so you have all the power of that module at your disposal. Generally you'll want to pass at least `help` (since this is provided as the help string for that parameter) and `type` (to ensure that you get the type of data you expect). `opt` is a bool that defines whether a param is optional or required (positional) - but you'll generally not need to set this manually, because fastscript will set it for you automatically based on *default* values.
You should provide a default (after the `=`) for any *optional* parameters. If you don't provide a default for a parameter, then it will be a *positional* parameter.
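Conceptually, a default value is what turns a parameter into an optional `--flag`. Here is a minimal stdlib-only sketch of that mapping onto `argparse`; this is an illustration, not fastscript's actual implementation, and `Param` metadata such as `bool_arg` coercion is omitted:

```python
import argparse
import inspect

def make_parser(func):
    """Build an argparse parser from a function signature:
    parameters with a default become --options, the rest positional."""
    parser = argparse.ArgumentParser(description=func.__doc__)
    for name, param in inspect.signature(func).parameters.items():
        if param.default is inspect.Parameter.empty:
            parser.add_argument(name)                      # required positional
        else:
            parser.add_argument(f"--{name}", default=param.default)
    return parser

def main(msg, upper=False):
    "Print a message, optionally uppercased."
    return msg.upper() if upper else msg
```

Calling `make_parser(main).parse_args(["hello"])` gives a namespace with `msg="hello"` and `upper` left at its default, mirroring how fastscript infers positional vs. optional from defaults.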
## setuptools scripts
There's a really nice feature of pip/setuptools that lets you create commandline scripts directly from functions, makes them available in the `PATH`, and even makes your scripts cross-platform (e.g. in Windows it creates an exe). fastscript supports this feature too. To use it, follow [this example](fastscript/test_cli.py) from `fastscript/test_cli.py` in the repo. As you see, it's basically identical to the script example above, except that we can treat it as a module. The trick to making this available as a script is to add a `console_scripts` section to your setup file, of the form: `script_name=module:function_name`. E.g. in this case we use: `test_fastscript=fastscript.test_cli:main`. With this, you can then just type `test_fastscript` at any time, from any directory, and your script will be called (once it's installed using one of the methods below).
You don't actually have to write a `setup.py` yourself. Instead, just copy the setup.py we have in the fastscript repo, and copy `settings.ini` as well. Then modify `settings.ini` as appropriate for your module/script. Then, to install your script directly, you can type `pip install -e .`. Your script, when installed this way (it's called an [editable install](http://codumentary.blogspot.com/2014/11/python-tip-of-year-pip-install-editable.html)), will automatically be up to date even if you edit it - there's no need to reinstall it after editing.
You can even make your module and script available for installation directly from pip by running `make`. There shouldn't be anything else to edit - you just need to make sure you have an account on [pypi](https://pypi.org/) and have set up a [.pypirc file](https://docs.python.org/3.3/distutils/packageindex.html#the-pypirc-file).
# Text Mining DocSouth Slave Narrative Archive
---
*Note:* This is the first in [a series of documents and notebooks](https://jeddobson.github.io/textmining-docsouth/) that will document and evaluate various machine learning and text mining tools for use in literary studies. These notebooks form the practical and critical archive of my book-in-progress, _Digital Humanities and the Search for a Method_. I have published a critique of some existing methods (Dobson 2015) that takes up some of these concerns and provides some theoretical background for my account of computational methods as used within the humanities. Each notebook displays code, data, results, interpretation, and critique. I attempt to provide as much explanation of the individual steps and documentation (along with citations of related papers) of the concepts and justification of choices made.
### Revision Date and Notes:
- 05/10/2017: Initial version (james.e.dobson@dartmouth.edu)
- 08/29/2017: Updated to automatically assign labels and reduced to two classes/periods.
### KNearest Neighbor (kNN) period classification of texts
The following Jupyter cells show a very basic classification task using the [kNN](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) algorithm as implemented in Python and with the [scikit-learn](http://scikit-learn.org/) package.
A simple check to see if the dates in the table of contents ("toc.csv") for the DocSouth ["North American Slave Narratives"](http://docsouth.unc.edu/neh/) can be converted to an integer (date as year) is used to assign one of these two classes:
- antebellum: prior to 1865
- postbellum: after 1865
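The assignment rule can be written out directly. This is a small illustrative helper mirroring the filtering done in the notebook code further below; texts dated exactly 1865, or without a usable four-digit year, fall into neither class:

```python
def period_label(year):
    """Map a publication year to a period class, or None if unusable.
    The archive stores False for unknown/ambiguous dates (and bool is a
    subclass of int in Python, so it must be checked first)."""
    if isinstance(year, bool) or not isinstance(year, int):
        return None
    if year < 1865:
        return "antebellum"
    if year > 1865:
        return "postbellum"
    return None  # exactly 1865 belongs to neither class
```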
These period categories are rough and by no means perfect. Publication year may have little relation to the content of the text, the source for the vectorizing process and eventual categorization. These dates are what Matthew Jockers calls, within the digital humanities context, catalog metadata (Jockers 2013, 35-62). Recently, critics have challenged such divisions (Marrs 2015) that are central to the understanding of the field of nineteenth-century American literary studies with concepts like "transbellum" that might be capable of helping to better understand works that address the Civil War and its attendant anxieties through the "long nineteenth century." The majority of the texts included in the DocSouth archive are first-person autobiographical narratives of lives lived during the antebellum and Civil War years and published in the years leading up to, including, and after the war.
### Complete (Labeled) Dataset
|class|count|
|---|---|
|antebellum|143|
|postbellum|109|
|unknown or ambiguous|40|
There are 252 texts with four digit years and eighteen texts with ambiguous or unknown publication dates. This script will attempt to classify these texts into one of these two periods following the "fitting" of the labeled training texts. I split the 252 texts with known and certain publication dates into two groups: a training set and a testing test. After "fitting" the training set and establishing the neighbors, the code attempts to categorize the testing set. Many questions can and should be asked about the creation of the training set and the labeling of the data. This labeling practice introduces many subjective decisions into what is perceived as an objective (machine and algorithmically generated) process (Dobson 2015, Gillespie 2016).
### Training Data Set
The training set (the first 252 texts, preserving the order in "toc.csv") over-represents the antebellum period and may account for the ability of the classifier to make good predictions for this class.
|class|count|
|---|---|
|antebellum|96|
|postbellum|81|
### Test Data Set
The "testing" dataset is used to validate the classifier. This dataset contains seventy-five texts with known year of publication. This dataset, like the training dataset, overrepresents the antebellum period.
|class|count|
|---|---|
|antebellum|47|
|postbellum|28|
#### Text Pre-processing
The texts are all used/imported as found in the zip file provided by the DocSouth ["North American Slave Narratives"](http://docsouth.unc.edu/neh/) collection. The texts have been encoded in a combination of UTF-8 Unicode and ASCII. Scikit-learn's HashingVectorizer performs some additional pre-processing and that will be examined in the sections below.
#### kNN Background
The kNN algorithm is a non-parametric algorithm, meaning that it does not require detailed knowledge of the input data and its distribution (Cover and Hart 1967). This algorithm is known as reliable and it is quite simple to implement and understand, especially when compared to some of the more complex machine learning algorithms in use at present. It was originally conceived of as a response to what is called a “discrimination problem”: the categorization of a large number of input points into discrete "boxes." Data are eventually organized into categories, in the case of this script the two categories of antebellum and postbellum.
The algorithm operates in a feature space: it treats each input text as a "neighbor" and has each text "vote" for membership in parcellated neighborhoods. Cover and Hart explain: "If the number of samples is large it makes good sense to use, instead of the single nearest neighbor, the majority vote of the nearest k neighbors" (22). The following code uses a value of "13" for the number of neighbors, the 'k' of kNN.
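The voting step Cover and Hart describe can be sketched in a few lines; this is a pure-Python toy on one-dimensional points, not the scikit-learn implementation used below:

```python
from collections import Counter

def knn_predict(train, query, k):
    """train: list of (point, label) pairs; query: a 1-D point."""
    # sort the labelled points by distance to the query point
    neighbours = sorted(train, key=lambda pl: abs(pl[0] - query))[:k]
    # majority vote among the k nearest labels
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]
```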
The kNN algorithm may give better results for smaller numbers of classes. The performance of this particular implementation of kNN and the feature selection algorithm (HashingVectorizer) was better with just the antebellum and postbellum classes. Alternative boundaries for the classes (year markers) might also improve results.
#### Feature Selection
While it is non-parametric, the kNN algorithm does require a set of features in order to categorize the input data, the texts. This script operates according to the _"bag of words"_ method, in which each text is treated not as a narrative but as a collection of unordered and otherwise undifferentiated words. This means that multiple-word phrases (aka ngrams) are ignored and much meaning will be removed from the comparative method because of a loss of context.
In order to select the features by which a text can be compared to another, we need some sort of method that can produce numerical data. I have selected the HashingVectorizer, which is a fast method to generate a list of words/tokens from a file. This returns a numpy compressed sparse row (CSR) matrix that scikit-learn will use in the creation of the neighborhood "map."
The HashingVectorizer removes a standard 318 English-language stop words and by default does not alter or remove any accents or accented characters in the encoded (UTF-8) format. It also converts all words to lowercase, potentially introducing false positives.
**Issues with HashingVectorizer** This vectorizer works well, but it limits the questions we can ask after it has been run. We cannot, for example, interrogate why a certain text might have been misclassified by examining the words/tokens returned by the vectorizer. This is because the HashingVectorizer returns only indices to features and does not keep the string representation of specific words.
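This index-only behavior follows directly from the hashing trick: each token is mapped to a column index by a hash function, and only indices and counts are stored, never the strings. A minimal sketch (here `zlib.crc32` stands in for the MurmurHash that scikit-learn uses, and the `norm='l2'` normalization is omitted):

```python
import zlib

def hash_vectorize(text, n_features=16):
    """Return a dense count vector; tokens are bucketed by a hash.
    The mapping is one-way: the token strings cannot be recovered."""
    vec = [0] * n_features
    for token in text.lower().split():
        vec[zlib.crc32(token.encode()) % n_features] += 1
    return vec
```

With a small `n_features`, distinct tokens can collide into the same bucket, which is the trade-off the vectorizer makes for speed and bounded memory.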
```
# load required packages
import sys, os
import re
import operator
import nltk
from nltk import pos_tag, ne_chunk
from nltk.tokenize import wordpunct_tokenize
import seaborn as sn
%matplotlib inline
# load local library
sys.path.append("lib")
import docsouth_utils
# each dictionary entry in the 'list' object returned by load_narratives
# contains the following keys:
# 'author' = Author of the text (first name, last name)
# 'title' = Title of the text
# 'year' = Year published as integer or False if not simple four-digit year
# 'file' = Filename of text
# 'text' = NLTK Text object
neh_slave_archive = docsouth_utils.load_narratives()
# establish two simple classes for kNN classification
# the "date" field has already been converted to an integer
# all texts published before 1865, we'll call "antebellum"
# "postbellum" for those after.
period_classes=list()
for entry in neh_slave_archive:
file = ' '.join(entry['text'])
if entry['year'] != False and entry['year'] < 1865:
period_classes.append([file,"antebellum"])
if entry['year'] != False and entry['year'] > 1865:
period_classes.append([file,"postbellum"])
# create labels and filenames
labels=[i[1] for i in period_classes]
files=[i[0] for i in period_classes]
# create training and test datasets by leaving out the
# last 100 files with integer dates from the toc for testing.
test_size=100
train_labels=labels[:-test_size]
train_files=files[:-test_size]
# the last set of texts (test_size) are the "test" dataset (for validation)
test_labels=labels[-test_size:]
test_files=files[-test_size:]
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
# initialize the vectorizer using occurrence counts normalized as
# token frequencies: norm=l2
vectorizer = HashingVectorizer(lowercase=True,
stop_words='english',
norm='l2',
non_negative=True)
training_data = vectorizer.fit_transform(train_files)
test_data=vectorizer.transform(test_files)
# display file counts
print("training data:")
for period in ['postbellum', 'antebellum']:
print(" ",period,":",train_labels.count(period))
print("test data:")
for period in ['postbellum', 'antebellum']:
print(" ",period,":",test_labels.count(period))
# run kNN and fit training data
knn = KNeighborsClassifier(n_neighbors=13)
knn.fit(training_data,train_labels)
# Predict results from the test data and check accuracy
pred = knn.predict(test_data)
score = metrics.accuracy_score(test_labels, pred)
print("accuracy: %0.3f" % score)
print(metrics.classification_report(test_labels, pred))
print("confusion matrix:")
print(metrics.confusion_matrix(test_labels, pred))
# Produce visualization of confusion matrix
sn.heatmap(metrics.confusion_matrix(test_labels, pred),annot=True,cmap='Blues',fmt='g')
```
## Prediction of unclassified data
The following cell loads and vectorizes (using the above HashingVectorizing method, with the exact same parameters used for the training set) and tests against the trained classifier, all the algorithmically uncategorized and ambiguously dated (in the toc.csv) input files.
### Partial list of Unspecified or Ambiguous Publication Dates
|File|Date|
|---|---|
|church-hatcher-hatcher.txt|c1908|
|fpn-jacobs-jacobs.txt|1861,c1860|
|neh-aga-aga.txt|[1846]|
|neh-anderson-anderson.txt|1854?|
|neh-brownj-brownj.txt|1856,c1865|
|neh-carolinatwin-carolinatwin.txt|[between 1902 and 1912]|
|neh-delaney-delaney.txt|[189?]|
|neh-equiano1-equiano1.txt|[1789]|
|neh-equiano2-equiano2.txt|[1789]|
|neh-henry-henry.txt|[1872]|
|neh-jonestom-jones.txt|[185-?]|
|neh-latta-latta.txt|[1903]|
|neh-leewilliam-lee.txt|c1918|
|neh-millie-christine-millie-christine.txt|[18--?]|
|neh-parkerh-parkerh.txt|186?|
|neh-pomp-pomp.txt|1795|
|neh-washstory-washin.txt|c1901|
|neh-white-white.txt|[c1849]|
```
# predict class or period membership for all texts without
# four digit years
for entry in neh_slave_archive:
if entry['year'] == False:
print(entry['author'],", ",entry['title'])
print(" ",knn.predict(vectorizer.transform([' '.join(entry['text'])])))
```
## Works Cited
Cover T.M. and P. E. Hart. 1967. "Nearest Neighbor Pattern Classification." _IEEE Transactions on Information Theory_ 13, no. 1: 21-27.
Dobson, James E. 2015. [“Can an Algorithm be Disturbed? Machine Learning, Intrinsic Criticism, and the Digital Humanities.”](https://mla.hcommons.org/deposits/item/mla:313/) _College Literature_ 42, no. 4: 543-564.
Gillespie, Tarleton. 2016. “Algorithm.” In _Digital Keywords: A Vocabulary of Information Society and Culture_. Edited by Benjamin Peters. Princeton: Princeton University Press.
Jockers, Matthew. 2013. _Macroanalysis: Digital Methods & Literary History_ Urbana: University of Illinois Press.
Marrs, Cody. 2015. _Nineteenth-Century American Literature and the Long Civil War_. New York: Cambridge University Press.
This dataset is derived from [Kaggle Website](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews/downloads/imdb-dataset-of-50k-movie-reviews.zip/1)!
-------------------------------------------
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.tokenize.toktok import ToktokTokenizer
from nltk.stem import LancasterStemmer,WordNetLemmatizer
from nltk.tokenize import word_tokenize,sent_tokenize
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import LabelBinarizer
from sklearn.linear_model import LogisticRegression,SGDClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.metrics import classification_report,confusion_matrix,accuracy_score
from wordcloud import WordCloud,STOPWORDS
from bs4 import BeautifulSoup
import spacy
import re,string,unicodedata
from textblob import TextBlob
from textblob import Word
import time
#Tokenization of text
tokenizer=ToktokTokenizer()
#Setting English stopwords
stopword_list=nltk.corpus.stopwords.words('english')
Data = pd.read_csv("imdb-dataset-of-50k-movie-reviews/IMDB Dataset.csv")
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_colwidth', -1)
data = Data
print(data.shape)
data.head()
data.describe()
data['sentiment'].value_counts()
def train_test_split(data,percentage):
train_size= int(len(data)*percentage)
dataB=data
train_data = pd.DataFrame()
test_data = pd.DataFrame()
random_index = np.random.choice(len(dataB),train_size,replace=False)
random_index = np.sort(random_index)
random_index = random_index[::-1]
# print(len(dataB), '\n',random_index)
for i in random_index:
train_data = train_data.append(dataB.iloc[i],ignore_index=True)
dataB.drop(dataB.index[i], inplace=True)
test_data = dataB
return train_data , test_data
#Removing the html strips
def strip_html(text):
soup = BeautifulSoup(text, "html.parser")
return soup.get_text()
#Removing the square brackets
def remove_between_square_brackets(text):
return re.sub('\[[^]]*\]', '', text)
#Removing the noisy text
def denoise_text(text):
text = strip_html(text)
text = remove_between_square_brackets(text)
return text
#Apply function on review column
data['review']=data['review'].apply(denoise_text)
data.head()
#Define function for removing special characters
def remove_special_characters(text, remove_digits=True):
pattern=r'[^a-zA-Z0-9\s]'
text=re.sub(pattern,'',text)
return text
#Apply function on review column
data['review']=data['review'].apply(remove_special_characters)
data.head()
#Stemming the text
def simple_stemmer(text):
ps=nltk.porter.PorterStemmer()
text= ' '.join([ps.stem(word) for word in text.split()])
return text
#Apply function on review column
data['review']=data['review'].apply(simple_stemmer)
data.head(1)
#set stopwords to english
stop=set(stopwords.words('english'))
print(stop)
#removing the stopwords
def remove_stopwords(text, is_lower_case=False):
tokens = tokenizer.tokenize(text)
tokens = [token.strip() for token in tokens]
if is_lower_case:
filtered_tokens = [token for token in tokens if token not in stopword_list]
else:
filtered_tokens = [token for token in tokens if token.lower() not in stopword_list]
filtered_text = ' '.join(filtered_tokens)
return filtered_text
#Apply function on review column
data['review']=data['review'].apply(remove_stopwords)
data.head(1)
train, test = train_test_split(data , 0.8)
train.head()
train.to_csv('IMDB_50k_train_data.csv',index=False)
test.to_csv('IMDB_50k_test_data.csv',index=False)
#normalized train reviews
# norm_train_reviews=imdb_data.review[:40000]
# norm_train_reviews[0]
#convert dataframe to string
#norm_train_string=norm_train_reviews.to_string()
#Spelling correction using Textblob
#norm_train_spelling=TextBlob(norm_train_string)
#norm_train_spelling.correct()
#Tokenization using Textblob
#norm_train_words=norm_train_spelling.words
#norm_train_words
#Normalized test reviews
# norm_test_reviews=imdb_data.review[40000:]
# norm_test_reviews[45005]
##convert dataframe to string
#norm_test_string=norm_test_reviews.to_string()
#spelling correction using Textblob
#norm_test_spelling=TextBlob(norm_test_string)
#print(norm_test_spelling.correct())
#Tokenization using Textblob
#norm_test_words=norm_test_spelling.words
#norm_test_words
x_train = train.review
x_test = test.review
y_train = train.sentiment
y_test = test.sentiment
#Count vectorizer for bag of words
cv=CountVectorizer(min_df=0,max_df=1.0,binary=False,ngram_range=(1,3))
#transformed train reviews
cv_train_reviews=cv.fit_transform(x_train)
#transformed test reviews
cv_test_reviews=cv.transform(x_test)
print('BOW_cv_train:',cv_train_reviews.shape)
print('BOW_cv_test:',cv_test_reviews.shape)
#vocab=cv.get_feature_names()-toget feature names
#Tfidf vectorizer
tv=TfidfVectorizer(min_df=0,max_df=1.0,use_idf=True,ngram_range=(1,3))
#transformed train reviews
tv_train_reviews=tv.fit_transform(x_train)
#transformed test reviews
tv_test_reviews=tv.transform(x_test)
print('Tfidf_train:',tv_train_reviews.shape)
print('Tfidf_test:',tv_test_reviews.shape)
#labeling the sentiment data
lb=LabelBinarizer()
#transformed sentiment data
sentiment_y_train_data=lb.fit_transform(y_train)
sentiment_y_test_data=lb.transform(y_test)
print(sentiment_y_train_data.shape)
print(sentiment_y_test_data.shape)
sentiment_y_test_data
#train one model per feature set (re-using a single estimator would
#overwrite the bag-of-words fit when fitting on the tfidf features)
lr_bow=LogisticRegression(penalty='l2',max_iter=500,C=1,random_state=42)
lr_tfidf=LogisticRegression(penalty='l2',max_iter=500,C=1,random_state=42)
#Fitting the model for Bag of words
lr_bow.fit(cv_train_reviews,sentiment_y_train_data)
print(lr_bow)
#Fitting the model for tfidf features
lr_tfidf.fit(tv_train_reviews,sentiment_y_train_data)
print(lr_tfidf)
#Predicting the model for bag of words
lr_bow_predict=lr_bow.predict(cv_test_reviews)
print(lr_bow_predict)
#Predicting the model for tfidf features
lr_tfidf_predict=lr_tfidf.predict(tv_test_reviews)
print(lr_tfidf_predict)
#Accuracy score for bag of words
lr_bow_score=accuracy_score(sentiment_y_test_data,lr_bow_predict)
print("lr_bow_score :",lr_bow_score)
#Accuracy score for tfidf features
lr_tfidf_score=accuracy_score(sentiment_y_test_data,lr_tfidf_predict)
print("lr_tfidf_score :",lr_tfidf_score)
```
# Predict Fraud transaction
1. In this document, I will use the transaction data, which includes details about each transaction, to predict whether a transaction is fraudulent or not. The model can then be applied to future transaction data.
2. I will visualize the data and perform data cleaning before building the fraud detection classifier model. Data cleaning is of vital importance and must be done before applying any model; to guide the cleaning decisions, a data visualization step is performed first.
3. The structure of the document is: <br>
a. Data Visualization <br>
b. Data cleaning <br>
c. Feature engineering <br>
d. Apply model and cross-validation <br>
e. Optimize model and run on testing set <br>
f. conclusion part <br>
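Step (d), cross-validation, splits the rows into k folds and rotates which fold is held out. The notebook imports `KFold` from scikit-learn for this, but the index bookkeeping can be sketched in pure Python:

```python
def kfold_indices(n_rows, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    # earlier folds absorb the remainder, as in sklearn's KFold
    fold_sizes = [n_rows // k + (1 if i < n_rows % k else 0) for i in range(k)]
    indices = list(range(n_rows))
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size
```

Each row appears in exactly one test fold, so every observation gets scored by a model that never saw it during fitting.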
```
# import packages
import xgboost
from scipy.stats import chisquare
from scipy.stats import pearsonr
import pickle
import pandas as pd
import datetime
import matplotlib
import tensorflow as tf
import sklearn
import math
import matplotlib.pyplot as plt
from xgboost import XGBClassifier
from xgboost import plot_importance
import numpy as np
from sklearn.model_selection import train_test_split
import sklearn
from sklearn.preprocessing import LabelEncoder
import copy
import scipy
import datetime
import time
from sklearn.model_selection import KFold
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
import os
# data preparation
root_path = "Data/"
print(os.listdir(root_path))
train_identity = pd.read_csv(root_path+'train_identity.csv')
train_transaction = pd.read_csv(root_path+"train_transaction.csv")
test_identity = pd.read_csv(root_path+'test_identity.csv')
test_transaction = pd.read_csv(root_path+"test_transaction.csv")
print("finish loading data")
### A few notes:
# The TransactionDT feature is a timedelta from a given reference datetime (not an actual timestamp).
print("There are %d rows and %d columns"%(train_transaction.shape[0],train_transaction.shape[1]))
# print("The column names are %s"%str(df.keys()))
y_fraud = train_transaction["isFraud"]
```
# The following part is for one-hot encoder
```
# fill missing value and one-hot encoder:
def fill_missing_values(df):
''' This function imputes missing values with median for numeric columns
and most frequent value for categorical columns'''
missing = df.isnull().sum()
# select missing data
missing = missing[missing > 0]
for column in list(missing.index):
if df[column].dtype == 'object':
# if it's an object, fill that with the *most common* object in that column
df[column].fillna(df[column].value_counts().index[0], inplace=True)
        elif df[column].dtype in ('int64', 'float64', 'int16', 'float16'):
            df[column].fillna(df[column].median(), inplace=True)
def impute_cats(df):
'''This function converts categorical and non-numeric
columns into numeric columns to feed into a ML algorithm'''
# Find the columns of object type along with their column index
    # only select columns with objects
object_cols = list(df.select_dtypes(exclude=[np.number]).columns)
# return the index for columns with object
object_cols_ind = []
for col in object_cols:
object_cols_ind.append(df.columns.get_loc(col))
# Encode the categorical columns with numbers
# https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html
# It's still an object but this time with index from 0 to num_features-1
    ##!! I will modify this part later since I want to rank order these categorical features by their fraud rate.
label_enc = LabelEncoder()
for i in object_cols_ind:
df.iloc[:, i] = label_enc.fit_transform(df.iloc[:, i])
# combine transaction and identity files:
df = pd.concat([train_transaction.set_index('TransactionID'), train_identity.set_index('TransactionID')], axis=1, sort=False)
```
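The imputation strategy in `fill_missing_values` (median for numeric columns, most frequent value for object columns) can be sketched on a toy frame; the column names and values here are illustrative, not the dataset's actual schema:

```python
import numpy as np
import pandas as pd

# Toy frame with one gap per column (illustrative data)
df_toy = pd.DataFrame({
    "amount":   [10.0, np.nan, 30.0],
    "category": ["a", None, "a"],
})

# Numeric column: fill with the median of the observed values
df_toy["amount"] = df_toy["amount"].fillna(df_toy["amount"].median())
# Object column: fill with the most frequent value
df_toy["category"] = df_toy["category"].fillna(df_toy["category"].value_counts().index[0])
print(df_toy)
```

The median of [10, 30] is 20, so the numeric gap becomes 20.0 and the object gap becomes "a".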
# In the next part, I will apply a customized encoder (similar to a one sum encoder) for categorical data, and perform the filter method for feature engineering.
For encoding, I use a customized encoder (similar to a one sum encoder) rather than an ordinal or one-hot encoder, and I will show that this method gives better results than one-hot encoding.
This is how it works: find each categorical column and calculate the fraction of fraudulent transactions for each category value in that column. This yields a hash table mapping each category name to its fraction of fraudulent transactions. Finally, replace each category with that fraction. This is more meaningful than an arbitrary ordinal encoding, and it saves space compared to one-hot encoding (which creates many new columns when a categorical column has many categories).
The method is similar to a one sum encoder, but slightly different. I will show below that it works better than one-hot encoding: it gives a higher AUROC, which matters more than accuracy on a highly imbalanced dataset.
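The steps above can be sketched with pandas on a toy frame (column name and category values are made up for illustration; the notebook's `helper` builds the same mapping by hand from a groupby size table):

```python
import pandas as pd

# Toy stand-in for the transaction data (illustrative schema)
toy = pd.DataFrame({
    "card_type": ["visa", "visa", "visa", "amex", "amex"],
    "isFraud":   [1,      0,      0,      0,      1],
})

# Fraction of fraudulent rows per category -> hash table {category: fraud rate}
mapping = toy.groupby("card_type")["isFraud"].mean().to_dict()

# Replace each category with its fraud fraction
toy["card_type_enc"] = toy["card_type"].map(mapping)
print(mapping)  # visa -> 1/3, amex -> 0.5
```

The encoded column is numeric, so it can feed directly into Pearson correlation or a tree model without adding new columns.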
```
# Let's do encoding method that is different from One-hot encoder:
# Encode them with fraction of fraud transaction:
def helper(name):
temp = df.groupby([name,'isFraud']).size()
mapping = {}
fraud_array = {}
nofraud_array = {}
for i in range(len(temp.index)):
name_i = temp.index[i][0]
fraud_array[name_i] = 0
nofraud_array[name_i] = 0
for i in range(len(temp.index)):
name_i = temp.index[i][0]
if temp.index[i][1]==True:
fraud_array[name_i] = temp[i]
if temp.index[i][1]==False:
nofraud_array[name_i] = temp[i]
mapping = {x:fraud_array[x]/(fraud_array[x]+nofraud_array[x]) for x in fraud_array.keys()}
return mapping
# Deal with NaN data: for categorical columns, replace with the most frequent value;
# for numerical columns, replace with the median (not perfect, since it shrinks the variance)
fill_missing_values(df)
# Feature engineering: Filter method. Use Pearsons r
X = df.drop(['isFraud','TransactionDT'],axis=1)
y = df['isFraud']
label_names_part = X.keys()
y_pearsons_array = []
name_select = []
for i in range(X.shape[1]):
if i%50==0:
print("Doing %d of %d for pearsons correlation"%(i,X.shape[1]))
try:
corr, _ = pearsonr(X[label_names_part[i]], y)
y_pearsons_array.append(corr)
name_select.append(label_names_part[i])
except:
# our own encoder:
print("%s is a categorical data"%(label_names_part[i]))
mapping = helper(label_names_part[i])
#print("Grouping Finished")
X[label_names_part[i]] = X[label_names_part[i]].apply(lambda x:mapping[x])
corr, _ = pearsonr(X[label_names_part[i]], y)
y_pearsons_array.append(corr)
name_select.append(label_names_part[i])
y_pearsons_array = np.array(y_pearsons_array)
#plt.hist(y_pearsons_array)
name_select = np.array(name_select)
# plot part of cols with highest pearsons r:
font = {'family': 'normal','weight': 'bold',
'size': 25}
matplotlib.rc('font', **font)
mask_temp = abs(y_pearsons_array)>np.nanpercentile(abs(y_pearsons_array),70)
plt.plot(label_names_part[mask_temp],y_pearsons_array[mask_temp],"ko",markersize=10)
for i in range(len(label_names_part[mask_temp])):
plt.plot((i,i),(0,y_pearsons_array[mask_temp][i]),"k")
plt.plot((-1,len(label_names_part[mask_temp])+1),(0,0),"k--")
plt.xlabel("Name")
plt.ylabel("Pearsons r")
plt.xticks(rotation=90)
fig = plt.gcf()
fig.set_size_inches(39,20)
plt.show()
# make a Pie Chart:
font = {'family': 'normal','weight': 'bold',
'size': 15}
matplotlib.rc('font', **font)
color_array = ["r","c"]
fig, axs = plt.subplots(1, 2)
# Fraud transaction
labels = 'Fraud transaction', "no Fraud transaction"
f = len(df["isFraud"][df["isFraud"]==True])/len(df["isFraud"])
sizes = [f,1-f]
explode = (0, 0.1)
axs[0].pie(sizes,colors=color_array, explode=explode, autopct='%1.1f%%',shadow=True, startangle=90)
axs[0].axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.
axs[0].set_title("Fraction of fraud transaction")
axs[0].legend(labels)
## Transaction amount
labels = 'Fraud transaction amount',"no Fraud transaction amount"
fm = df[df["isFraud"]==True]['TransactionAmt'].sum()/df['TransactionAmt'].sum()
sizes = [fm,1-fm]
explode = (0, 0.1)
axs[1].pie(sizes,colors=color_array, explode=explode, autopct='%1.1f%%',shadow=True, startangle=90)
axs[1].axis('equal')
axs[1].set_title("Fraction of fraud transaction amount")
axs[1].legend(labels)
fig = plt.gcf()
fig.set_size_inches(15,9)
plt.show()
```
# Question 2: Plot
1. Plot the histogram of the transaction amount.
2. My assumption is that the transaction amount is related to fraud, which I will test below.
```
#transaction value distribution + log[amount] distribution
def log10(x):
if x > 0:
return math.log10(x)
else:
return np.nan
log10 = np.vectorize(log10)
font = {'family': 'normal','weight': 'bold',
'size': 20}
matplotlib.rc('font', **font)
plt.hist(log10(df[df["isFraud"]==True]['TransactionAmt']),density=True,label="Fraud",alpha=0.3,bins=np.linspace(-1,3,51))
plt.hist(log10(df[df["isFraud"]==False]['TransactionAmt']),density=True,label="noFraud",alpha=0.3,bins=np.linspace(-1,3,51))
plt.ylabel("Probability")
plt.suptitle("Transaction Amount (Log) Distribution")
axes = plt.gca()
axes.set_xlim([0,3])
plt.legend()
fig = plt.gcf()
fig.set_size_inches(9,9)
plt.show()
# This is the percentiles for fraud and nofraud transaction amount
print(pd.concat([df[df['isFraud'] == True]['TransactionAmt']\
.quantile([.01, .1, .25, .5, .75, .9, .99])\
.reset_index(),
df[df['isFraud'] == 0]['TransactionAmt']\
.quantile([.01, .1, .25, .5, .75, .9, .99])\
.reset_index()],
axis=1, keys=['Fraud', "No Fraud"]))
```
# I will do a KS test to check whether fraud and non-fraud transactions have the same transaction amount distribution.
```
scipy.stats.ks_2samp(data1=df[df['isFraud'] == True]['TransactionAmt'],data2=df[df['isFraud'] == False]['TransactionAmt'])
# A very small p-value and a relatively large statistic mean we can reject the null hypothesis:
# the two distributions are different, i.e. fraud is related to the transaction amount
# Let's plot the distribution of "hours in a day" for fraud and nofraud
# There seems to be only a small trend in month
# 'TransactionDT' is in seconds, covering about 6 months of data
day_hour = df['TransactionDT']
df["month"] = df['TransactionDT']//(3600*24*30)
df["day"] = df['TransactionDT']//(3600*24)%30
df["hour"] = df['TransactionDT']//3600%24
df["weekday"] = df['TransactionDT']//(3600*24)%7
feature = "hour"
max_val = max(df[feature])
plt.hist(df[df["isFraud"]==0][feature],label="noFraud",density=True,alpha=0.3,bins=np.arange(1,max_val))
plt.hist(df[df["isFraud"]==1][feature],label="Fraud",density=True,alpha=0.3,bins=np.arange(1,max_val))
plt.xticks(np.arange(1, max_val, step=1))
plt.xlabel("%s"%feature)
plt.ylabel("Probability")
plt.suptitle("Transaction %s Distribution"%feature)
plt.legend()
fig = plt.gcf()
fig.set_size_inches(15,9)
plt.show()
## There is a clear trend in hour distribution, which means the fraud happens at some specific hours
```
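The integer arithmetic above that derives month/day/hour/weekday from the seconds-based `TransactionDT` can be sanity-checked on a hand-picked value (the 30-day "month" is the same approximation the notebook uses):

```python
# 31 days and 5 hours after the reference time, in seconds
t = 3600 * 24 * 31 + 3600 * 5

month   = t // (3600 * 24 * 30)   # 30-day blocks since the reference
day     = t // (3600 * 24) % 30   # day within the 30-day block
hour    = t // 3600 % 24          # hour of day
weekday = t // (3600 * 24) % 7    # day-of-week offset from the reference day

print(month, day, hour, weekday)  # 1 1 5 3
```

Note that `weekday` is only an offset modulo 7 from whatever weekday the reference time falls on, not an absolute Monday-indexed value.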
# Question 4: Model
# The model I use is a Gradient Boost tree model
1. The reason I chose a gradient-boosted tree classifier is that it handles classification problems with many features well. I did not choose K-means/K-medoids because K-means is not expressive enough to use all the features here. Also, I want to tune hyperparameters to improve model performance, and K-means has fewer hyperparameters to tune.
2. I did not choose SVM either, which does not mean SVM cannot handle this problem. Both SVMs and tree-based algorithms (e.g. random forest) are good at avoiding overfitting, and both can handle non-linear problems. I chose a gradient-boosted tree because it trains faster to reach the same performance as an SVM. Since the training data is large, and new data will be added in the future, it is better to choose a model that trains quickly. SVMs scale poorly and it is hard to speed up SVM training with more computational resources, while XGBoost (my choice here) scales well.
3. I use AUROC for evaluation since this is a highly imbalanced dataset. Accuracy is also reported, but AUROC is more important.
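Why AUROC rather than accuracy on an imbalanced dataset can be seen in a tiny sketch: a classifier that always predicts "not fraud" looks excellent by accuracy but is no better than chance by AUROC. The 1% positive rate below is illustrative, not the dataset's actual rate:

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# 1000 samples, 1% fraud
y_true = np.zeros(1000, dtype=int)
y_true[:10] = 1

# Degenerate classifier: always predicts "not fraud" with score 0
preds = np.zeros(1000, dtype=int)
scores = np.zeros(1000)

acc = accuracy_score(y_true, preds)    # 0.99 -- looks great
auc = roc_auc_score(y_true, scores)    # 0.5  -- no better than chance
print(acc, auc)
```

Accuracy rewards the majority class; AUROC measures ranking quality and is unaffected by the class ratio, which is why it is the headline metric here.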
# Our encoder (one sum encoder) (Best AUROC)
As described above, I use a customized encoder (similar to a one sum encoder) that replaces each category with its fraction of fraudulent transactions. It gives a higher AUROC than one-hot encoding, which matters more than accuracy on a highly imbalanced dataset.
```
# The categorical columns were already replaced by our own encoder above, so
# get_dummies only one-hot encodes any remaining object columns
encoder_X = pd.get_dummies(X[name_select])
# Use GPU to do calculation, use XgBoost (gradient boost tree) as classifier
time_start = time.time()
params = {}
params['booster'] = "gbtree"
params['learning_rate'] = 0.01
params['max_depth'] = 12
params['gpu_id'] = 0
params['max_bin'] = 512
params['tree_method'] = 'gpu_hist'
model = XGBClassifier(n_estimators=1000, verbose=2, n_jobs=-1, **params)
X_train, X_test, y_train, y_test = train_test_split(encoder_X, y, test_size=0.2,shuffle=True)
model.fit(X_train, y_train)
# predict:
Y_predict_test = model.predict(X_test).ravel()
mask_good = Y_predict_test == y_test
acc_i = len(Y_predict_test[mask_good]) / len(Y_predict_test)
print("Finish training, time we use =%.3f s"%(time.time()-time_start))
# AUROC:
prob = model.predict_proba(X_test)
AUROC_i = roc_auc_score(y_test, prob[:, 1])
print("Results from our encoder ACC=%.4f AUROC=%.4f"%(acc_i,AUROC_i))
```
# Important information about sample imbalance!
If you care only about the ranking order (AUROC) of your predictions:
balance the positive and negative weights via scale_pos_weight,
and use AUROC for evaluation.
If you care about predicting the right probability:
in that case you cannot re-balance the dataset;
instead, setting the parameter max_delta_step to a finite number (say 1, 2, 3) will help convergence.
Here I want to address the imbalance, so I use scale_pos_weight to re-weight the dataset.
The theoretical scale_pos_weight is noFraud/Fraud, but it depends on the goal. Here I want to find a balance point between AUROC and accuracy: achieve a high AUROC without sacrificing too much accuracy.
scale_pos_weight multiplies the weight of the positive (fraud) cases, which has an effect similar to resampling them.
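The rule of thumb mentioned above (scale_pos_weight ≈ noFraud/Fraud) is a one-liner; the class counts below are illustrative, not the dataset's actual counts:

```python
# Illustrative class counts; in practice take them from y.value_counts()
n_negative, n_positive = 96_500, 3_500

# Theoretical starting point for the weight of the positive class
theoretical_spw = n_negative / n_positive
print(round(theoretical_spw, 1))  # 27.6
```

The grid search that follows then scans values around this starting point, since the best trade-off between AUROC and accuracy need not be exactly the theoretical ratio.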
```
### grid search for best scale_pos_weight
# A grid search to get the best scale_pos_weight value
time_start = time.time()
target_array = []
auroc_array = []
acc_array = []
for i in range(10):
target = 10*i+1
params = {}
params['booster'] = "gbtree"
params['learning_rate'] = 0.01
params['max_depth'] = 12
# control imbalance: Control the balance of positive and negative weights, useful for unbalanced classes
params['scale_pos_weight'] = target
params['gpu_id'] = 0
params['max_bin'] = 512
params['tree_method'] = 'gpu_hist'
model = XGBClassifier(n_estimators=300, verbose=2, n_jobs=-1, **params)
X_train, X_test, y_train, y_test = train_test_split(encoder_X, y, test_size=0.2,shuffle=True)
model.fit(X_train, y_train)
# predict:
Y_predict_test = model.predict(X_test).ravel()
mask_good = Y_predict_test == y_test
acc_i = len(Y_predict_test[mask_good]) / len(Y_predict_test)
acc_array.append(acc_i)
# AUROC:
prob = model.predict_proba(X_test)
AUROC_i = roc_auc_score(y_test, prob[:, 1])
target_array.append(target)
auroc_array.append(AUROC_i)
# print("ACC=%.4f AUROC=%.4f target=%d"%(acc_i,AUROC_i,target))
print("Finish training, time we use =%.3f s"%(time.time()-time_start))
plt.plot(target_array,auroc_array,"k",label="AUROC")
plt.plot(target_array,acc_array,"r",label="Accuracy")
plt.xlabel('Scaling pos weight')
plt.ylabel('Values')
plt.title('AUROC/accuracy vs scaling pos weight')
plt.legend()
fig = plt.gcf()
fig.set_size_inches(9,9)
plt.show()
```
# Now I run a model with our one sum encoder and the best re-weighting rate (scale_pos_weight) and do k-fold cross-validation to avoid overfitting.
```
## Run the best fitting results with K-fold:
### Do K fold to avoid overfitting
params = {}
params['subsample'] = 1
params['reg_alpha'] = 0.1
params['reg_lambda'] = 0.9
params['max_depth'] = 6
params['colsample_bytree'] = 1
params['learning_rate'] = 0.01
params['booster'] = "gbtree"
params['scale_pos_weight'] = 20
params['gpu_id'] = 0
params['max_bin'] = 512
params['tree_method'] = 'gpu_hist'
time_start = time.time()
accuracy_array = []
AUROC_array = []
n_k_fold=3
model_dic = {}
kf = KFold(n_splits=n_k_fold)
kf.get_n_splits(encoder_X)
count=0
for train_index, test_index in kf.split(encoder_X):
print("Doing k fold %d of %d"%(count+1,n_k_fold))
model = XGBClassifier(n_estimators=100, verbose=2, n_jobs=-1, **params)
X_train, X_test = encoder_X.iloc[train_index,:], encoder_X.iloc[test_index,:]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
model.fit(X_train, y_train)
# predict:
Y_predict_test = model.predict(X_test).ravel()
mask_good = Y_predict_test == y_test
acc_i = len(Y_predict_test[mask_good]) / len(Y_predict_test)
# AUROC:
prob = model.predict_proba(X_test)
AUROC_i = roc_auc_score(y_test, prob[:, 1])
print("ACC=%.4f AUROC=%.4f"%(acc_i,AUROC_i))
accuracy_array.append(acc_i)
AUROC_array.append(AUROC_i)
# save model from each k-fold
model_dic[str(count)] = model
# save testing results from each k-fold
if count==0:
prob_all=list(prob[:,1])
y_predict_all=list(Y_predict_test)
y_test_all = list(y_test)
X_test_all = X_test
else:
prob_all.extend(prob[:,1])
y_predict_all.extend(Y_predict_test)
y_test_all.extend(y_test)
X_test_all = np.r_[X_test_all,X_test]
count+=1
accuracy_array = np.array(accuracy_array)
AUROC_array = np.array(AUROC_array)
fusion = np.c_[accuracy_array,AUROC_array]
print("K_fold_results from best fitting")
print("Mean ACC=%.4f , error= %.4f"%(np.nanmedian(accuracy_array),np.nanstd(accuracy_array)))
print("Mean AUROC=%.4f , error= %.4f"%(np.nanmedian(AUROC_array),np.nanstd(AUROC_array)))
print("Time it takes using GPU=%.2f s"%(time.time()-time_start))
```
# Plot confusion matrix
```
def helper_confusion_matrix(y_pred, y_true):
    TP = len(y_pred[(y_pred == 1) & (y_true == 1)])
    TN = len(y_pred[(y_pred == 0) & (y_true == 0)])
    # type 1 error: false alarm
    FP = len(y_pred[(y_pred == 1) & (y_true == 0)])
    # type 2 error: failing to raise an alarm
    FN = len(y_pred[(y_pred == 0) & (y_true == 1)])
recall = TP / (TP + FN)
precision = TP / (TP + FP)
accuracy = (TP + TN) / len(y_pred)
print(recall,precision,accuracy)
f1_score = 2 / (1 / precision + 1 / recall)
#return TP, TN, FP, FN, recall, precision, accuracy, f1_score
return f1_score
y_test_all = np.array([not x for x in y_test_all],dtype=int)
y_predict_all = np.array([not x for x in y_predict_all],dtype=int)
print("f1 score=%.3f"%(helper_confusion_matrix(y_predict_all,y_test_all)))
# Confusion matrix and f1_score
from sklearn.metrics import confusion_matrix
import seaborn as sns
labels = ["fraud transaction","no fraud transaction"]
cm = confusion_matrix(y_test_all, y_predict_all)
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax,cmap=plt.cm.Blues)
# labels, title and ticks
ax.set_xlabel('Predicted labels')
ax.set_ylabel('True labels')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(labels)
ax.yaxis.set_ticklabels(labels)
fig = plt.gcf()
fig.set_size_inches(14,14)
plt.show()
```
# Plot ROC curve from best fitting
```
# plot ROC curve using the K-fold results:
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
prob_all = np.array(prob_all)
y_test_all = np.array([not x for x in y_test_all],dtype=int)
fpr, tpr, thresholds = roc_curve(y_test_all, prob_all)
plt.plot(fpr, tpr, color='r', label='AUROC=%.4f'%roc_auc_score(y_test_all, prob_all))
plt.plot([0, 1], [0, 1], color='k',linewidth=4)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve')
plt.legend()
fig = plt.gcf()
fig.set_size_inches(9,9)
plt.show()
```
# (Optional) Do a grid search or random search to find the best hyper parameters
```
### grid search + K fold:
if False:
    # "booster": ["gbtree", "dart", "gblinear"]
    params = {"learning_rate": [0.01, 0.03, 0.05, 0.1, 0.15], "colsample_bytree": [1, 0.85, 0.7],
              "max_depth": [6, 9, 12],
              "booster": ["gbtree", "gblinear"],
              "subsample": [1, 0.85, 0.7], "reg_alpha": [0.05, 0.1, 0.15, 0.2],
              "reg_lambda": [1, 0.85, 0.75], 'scale_pos_weight': [1, 11, 21, 31, 41, 61, 81]}
accuracy_array = []
AUROC_array = []
count = 0
# grid search
keys, values = zip(*params.items())
import itertools
best_model = 0
best_parameteres = 0
for v in itertools.product(*values):
print("Doing %d" % count)
experiment = dict(zip(keys, v))
print(experiment)
experiment['gpu_id'] = 0
experiment['max_bin'] = 512
experiment['tree_method'] = 'gpu_hist'
acc_i = []
auroc_i = []
for i in range(n_k_fold):
X_train, X_test, y_train, y_test = train_test_split(encoder_X, y, test_size=0.2, shuffle=True)
model = XGBClassifier(n_estimators=300, verbose=2, n_jobs=-1, **experiment)
model.fit(X_train, y_train)
# predict:
# test
Y_predict_test = model.predict(X_test)
mask_good = Y_predict_test == y_test
acc_i.append(len(Y_predict_test[mask_good]) / len(Y_predict_test))
prob = model.predict_proba(X_test)
probs = prob[:, 1]
auroc_i.append(roc_auc_score(y_test, probs))
print("Accuracy=%.4f for testing set AUROC=%.4f" % (np.nanmean(acc_i), np.nanmean(auroc_i)))
print(experiment)
accuracy_array.append(np.nanmean(acc_i))
AUROC_array.append(np.nanmean(auroc_i))
if count == 0:
best_model = model
best_parameteres = experiment
# elif acc_i>np.nanmax(accuracy_array):
elif np.nanmean(auroc_i) > np.nanmax(AUROC_array):
best_model = model
best_parameteres = experiment
count += 1
accuracy_array = np.array(accuracy_array)
AUROC_array = np.array(AUROC_array)
np.savetxt("accuracy.txt", accuracy_array)
np.savetxt("AUROC.txt", AUROC_array)
```
# Conclusion
1. By visualizing the data, the correlation between fraud and different parameters can be seen, e.g. there is a high correlation between transaction amount and fraud activity.
2. The average fraudulent transaction amount is higher than the non-fraudulent amount, which helps identify fraud when a transaction amount is extremely high.
3. There seems to be little trend in the fraction of fraudulent transactions as a function of month, although there are more transactions in December, which is reasonable since it is the holiday season.
4. Parameters that are highly correlated with fraud (by Pearson's r): transaction amount, account open date, date of last address change, and merchant category.
a. The transaction amount is analysed in point 2.
b. The account open date is related to fraud, meaning accounts opened on specific dates may have a higher fraud fraction than others; the same holds for the date of last address change, which is reasonable since address changes are associated with fraud.
c. As for merchant category, different merchants have different transaction security levels, e.g. there is no fraud in the Play Store since its transaction security level is high.
5. I find the one sum encoder works better than an ordinal or one-hot encoder, and I apply it when building the fraud prediction model.
6. I build a gradient-boosted tree model using XGBoost. Since the dataset is highly imbalanced, I apply re-weighting to address the imbalance and find the best rate by grid search; the best value is 11, which gives a high AUROC without losing too much accuracy.
7. To improve the model, I can run a grid search or random search to further tune the hyperparameters, which is included at the end of the document.
8. The baseline model reaches AUROC = 0.851 ± 0.006 and accuracy = 0.9717 ± 0.003. Since this is a highly imbalanced dataset, AUROC is more important than accuracy.
9. In the future, I want to try neural-network-based models. With more features, a neural network may outperform tree-based algorithms, but with only 34 columns here, sticking with gradient-boosted trees is a good idea.
| github_jupyter |
**Training an RNN to synthesize English text character by character**
Here I train a vanilla RNN with outputs on the text of the book The Goblet of Fire by J.K. Rowling.
The following implementation trains a recurrent neural network (RNN) and shows how the synthesized text evolves during training by including a 200-character sample before the first update step and before every 10,000th update step while training for 100,000 updates. Furthermore, I also present 1000 characters from the best implementation.
```
#@title Installers
!pip install texttable
#@title Import libraries
from texttable import Texttable
from collections import OrderedDict
from keras import applications
from keras.models import Sequential
from keras.layers import Flatten
from keras.layers import Input
from keras.models import Model
from keras.layers import Dense
from keras.layers import Dropout
from keras.utils import to_categorical
from keras.preprocessing.image import ImageDataGenerator
from google.colab import drive
import matplotlib.pyplot as plt
import numpy as np
import random
drive.mount('/content/drive/')
#@title Functions: Read file from drive
def LoadData():
data = open("/content/../goblet_book.txt", "r", encoding='utf8').read()
chars = list(set(data))
data = {"data": data, "chars": chars,
"vocLen": len(chars), "getIndFromChar": OrderedDict(
(char, ix) for ix, char in enumerate(chars)),
"getCharFromInd": OrderedDict((ix, char) for ix, char in
enumerate(chars))}
return data
#@title Functions: Initialization
class RecurrentNeuralNetwork():
def __init__(self, data, m=100, eta=.1, seq_length=25):
self.m, self.eta, self.N = m, eta, seq_length
for k, v in data.items():
setattr(self, k, v)
self.b, self.c, self.U, self.V, self.W = self._init_parameters(self.m, self.vocLen)
@staticmethod
def _init_parameters(m, K, sig=0.01):
c = np.zeros((K, 1))
b = np.zeros((m, 1))
V = np.random.normal(0, sig, size=(K, m))
W = np.random.normal(0, sig, size=(m, m))
U = np.random.normal(0, sig, size=(m, K))
return b, c, U, V, W
#@title Functions: Softmax, EvaluateClassifier
class RNN_With_Functions(RecurrentNeuralNetwork):
@staticmethod
def SoftMax(x):
s = np.exp(x - np.max(x, axis=0)) / np.exp(x - np.max(x, axis=0)).sum(axis=0)
return s
def EvaluateClassifier(self, h, x):
a = self.W@h + self.U@x + self.b
h = np.tanh(a)
o = self.V@h + self.c
p = self.SoftMax(o)
return a, h, o, p
#@title Functions: Synthesize Text
class RNN_Synthesizer(RNN_With_Functions):
def SynthesizeText(self, h, aax, n):
text = ''
nxt = np.zeros((self.vocLen, 1))
nxt[aax] = 1
for s in range(n):
_, h, _, p = self.EvaluateClassifier(h, nxt)
aax = np.random.choice(range(self.vocLen), p=p.flat)
nxt = np.zeros((self.vocLen, 1))
nxt[aax] = 1
text += self.getCharFromInd[aax]
return text
```
The following functions compute the gradients analytically and numerically, and check them against each other. The difference between the two computations is printed iteratively while training is ongoing.
```
#@title Functions: Compute and Check Gradients
class RNN_Gradients(RNN_Synthesizer):
def ComputeGradientsAnalytically(self, inputs, targets, hp):
loss = 0
aa, bb, cc, dd, ee = {}, {}, {}, {}, {}
cc[-1] = np.copy(hp)
nt = len(inputs)
for t in range(nt):
bb[t] = np.zeros((self.vocLen, 1))
bb[t][inputs[t]] = 1
aa[t], cc[t], dd[t], ee[t] = self.EvaluateClassifier(cc[t-1], bb[t])
loss += -np.log(ee[t][targets[t]][0])
gradients = {"W": np.zeros_like(self.W), "U": np.zeros_like(self.U),
"V": np.zeros_like(self.V), "b": np.zeros_like(self.b),
"c": np.zeros_like(self.c), "o": np.zeros_like(ee[0]),
"h": np.zeros_like(cc[0]), "hnxt": np.zeros_like(cc[0]),
"a": np.zeros_like(aa[0])}
for t in reversed(range(nt)):
gradients["o"] = np.copy(ee[t])
gradients["o"][targets[t]] -= 1
gradients["V"] += gradients["o"]@cc[t].T
gradients["c"] += gradients["o"]
gradients["h"] = self.V.T@gradients["o"] + gradients["hnxt"]
gradients["a"] = np.multiply(gradients["h"], (1 - np.square(cc[t])))
gradients["U"] += gradients["a"]@bb[t].T
gradients["W"] += gradients["a"]@cc[t-1].T
gradients["b"] += gradients["a"]
gradients["hnxt"] = self.W.T@gradients["a"]
gradients = {k: gradients[k] for k in gradients if k not in ["o", "h", "hnxt", "a"]}
for gr in gradients: gradients[gr] = np.clip(gradients[gr], -5, 5)
he = cc[nt-1]
return gradients, loss, he
def ComputeGradientsNumerically(self, inputs, targets, hp, h, nc=20):
network_params = {"W": self.W, "U": self.U, "V": self.V, "b": self.b, "c": self.c}
num_gradients = {"W": np.zeros_like(self.W), "U": np.zeros_like(self.U),
"V": np.zeros_like(self.V), "b": np.zeros_like(self.b),
"c": np.zeros_like(self.c)}
for key in network_params:
for i in range(nc):
prevpar = network_params[key].flat[i]
network_params[key].flat[i] = prevpar + h
_, l1, _ = self.ComputeGradientsAnalytically(inputs, targets, hp)
network_params[key].flat[i] = prevpar - h
_, l2, _ = self.ComputeGradientsAnalytically(inputs, targets, hp)
network_params[key].flat[i] = prevpar
num_gradients[key].flat[i] = (l1 - l2) / (2*h)
return num_gradients
def CheckGradients(self, inputs, targets, hp, nc=20):
analytical_gr, _, _ = self.ComputeGradientsAnalytically(inputs, targets, hp)
numerical_gr = self.ComputeGradientsNumerically(inputs, targets, hp, 1e-5)
err = Texttable()
err_data = []
# Compare accurate numerical method with analytical estimation of gradient
        err_data.append(['Gradient', 'Method', 'Max Rel Error [1e-05]'])
print("Gradient checks:")
for grad in analytical_gr:
num = abs(analytical_gr[grad].flat[:nc] - numerical_gr[grad].flat[:nc])
denom = np.asarray([max(abs(a), abs(b)) + 1e-10 for a,b in zip(analytical_gr[grad].flat[:nc],numerical_gr[grad].flat[:nc])])
            err_data.append([grad, "ANL vs NUM", str(max(num / denom) * 1e5)])
err.add_rows(err_data)
print("Method Comparison: Analytical vs Numerical")
print(err.draw())
#@title Functions: Check Gradients
def CompareGradients():
e=0
data = LoadData()
network = RNN_Gradients(data)
hp = np.zeros((network.m, 1))
inputs = [network.getIndFromChar[char] for char in network.data[e:e+network.N]]
targets = [network.getIndFromChar[char] for char in network.data[e+1:e+network.N+1]]
gradients, loss, hp = network.ComputeGradientsAnalytically(inputs, targets, hp)
# Check gradients
network.CheckGradients(inputs, targets, hp)
```
i) The following runs the gradient check (max relative error), showing that the implemented analytical gradients are close enough to the numerical ones to be regarded as accurate. For this check a zero initial hidden state was used, with hyperparameters m=100 (hidden state dimensionality), eta=.1 (learning rate), seq_length=25 and sig=.01. Note that only the first few entries of each gradient matrix are checked for the resulting max relative errors below.
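The numerical check above follows the standard central-difference recipe. The same idea on a toy function (f(w) = Σw², analytic gradient 2w, which is not the RNN loss) shows the expected size of the relative error:

```python
import numpy as np

def f(w):
    # Toy scalar function with a known gradient: grad f = 2*w
    return np.sum(w ** 2)

w = np.array([0.3, -1.2, 2.0])
analytic = 2 * w

# Central differences: (f(w+h) - f(w-h)) / (2h), one coordinate at a time
h = 1e-5
numeric = np.zeros_like(w)
for i in range(w.size):
    wp, wm = w.copy(), w.copy()
    wp[i] += h
    wm[i] -= h
    numeric[i] = (f(wp) - f(wm)) / (2 * h)

# Max relative error, guarding the denominator against zero
rel_err = np.max(np.abs(analytic - numeric) /
                 (np.maximum(np.abs(analytic), np.abs(numeric)) + 1e-10))
print(rel_err)
```

For a quadratic function the central difference is essentially exact, so the relative error is dominated by floating-point noise and should be far below 1e-6.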
```
CompareGradients()
#@title Functions: Run Training
losses = []
def RunTraining():
    num_epochs = 9  # change to 3 after the best run is done
e, n, epoch = 0, 0, 0
data = LoadData()
network = RNN_Gradients(data)
network_params = {"W": network.W, "U": network.U, "V": network.V, "b": network.b, "c": network.c}
params = {"W": np.zeros_like(network.W), "U": np.zeros_like(network.U), "V": np.zeros_like(network.V), "b": np.zeros_like(network.b),"c": np.zeros_like(network.c)}
    while epoch <= num_epochs and n <= 600000:  # change to 100,000 when this run is done
if n == 0 or e >= (len(network.data) - network.N - 1):
hp = np.zeros((network.m, 1))
epoch += 1
e = 0
inputs = [network.getIndFromChar[char] for char in network.data[e:e+network.N]]
targets = [network.getIndFromChar[char] for char in network.data[e+1:e+network.N+1]]
gradients, loss, hp = network.ComputeGradientsAnalytically(inputs, targets, hp)
if n == 0 and epoch == 1: smoothloss = loss
smoothloss = 0.999 * smoothloss + 0.001 * loss
if n % 10000 == 0:
text = network.SynthesizeText(hp, inputs[0], 200)
print('\nIterations %i, smooth loss: %f \n %s\n' % (n, smoothloss, text))
for k in network_params:
params[k] += gradients[k] * gradients[k]
network_params[k] -= network.eta / np.sqrt(params[k] + np.finfo(float).eps) * gradients[k]
e += network.N
n += 1
losses.append(smoothloss)
text = network.SynthesizeText(hp, inputs[0], 1000)
print('\nBest performance')
print('\n %s\n' % (text))
```
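The per-parameter update inside the training loop above is AdaGrad: accumulate squared gradients, then divide the learning rate by the square root of the accumulator. A minimal sketch with toy numbers shows the effect — coordinates with large gradients get proportionally smaller effective steps:

```python
import numpy as np

eta = 0.1
theta = np.array([1.0, 1.0])          # two toy parameters
accum = np.zeros_like(theta)          # per-parameter squared-gradient accumulator
grad = np.array([0.5, 2.0])           # one coordinate has a 4x larger gradient

# One AdaGrad step, matching the loop's update rule
accum += grad * grad
theta -= eta / np.sqrt(accum + np.finfo(float).eps) * grad
print(theta)  # both coordinates move by ~eta despite different gradient sizes
```

On the first step the update is eta * grad / |grad| per coordinate, so both parameters move by about eta; over many steps the accumulator keeps shrinking the step for persistently large-gradient parameters, which is what makes the character-RNN training stable without a learning-rate schedule.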
iii) Next follow the 200 characters of synthesized text before the first and before every 10,000th update step when training for 100,000 update steps. The smooth loss is also displayed.
iv) The best performance over 1000 characters is also presented (390,000 iterations).
```
RunTraining()
```
ii) A graph of the smooth loss for a longer training run (3 epochs)
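The smooth loss plotted here is an exponential moving average of the raw loss (the `0.999/0.001` update in the training loop). A standalone sketch, with an exaggerated `alpha` for clarity:

```python
def smooth_losses(losses, alpha=0.001):
    # exponential moving average, seeded with the first raw loss
    s = losses[0]
    out = []
    for x in losses:
        s = (1 - alpha) * s + alpha * x
        out.append(s)
    return out

last = smooth_losses([4.0, 2.0, 2.0], alpha=0.5)[-1]
print(last)
```

With the training loop's `alpha=0.001` the curve reacts much more slowly, which is what makes it readable over hundreds of thousands of updates.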
```
#@title Functions: Smooth Loss Plot
def plot():
loss_plot = plt.plot(losses, label="training loss")
plt.xlabel('update step')
plt.ylabel('loss')
plt.legend()
plt.show()
plot()
```
```
%matplotlib inline
```
# Use source space morphing
This example shows how to use source space morphing (as opposed to
SourceEstimate morphing) to create data that can be compared between
subjects.
<div class="alert alert-danger"><h4>Warning</h4><p>Source space morphing will likely lead to source spaces that are
less evenly sampled than source spaces created for individual
subjects. Use with caution and check effects on localization
before use.</p></div>
```
# Authors: Denis A. Engemann <denis.engemann@gmail.com>
#          Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import mne
data_path = mne.datasets.sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
fname_trans = op.join(data_path, 'MEG', 'sample',
'sample_audvis_raw-trans.fif')
fname_bem = op.join(subjects_dir, 'sample', 'bem',
'sample-5120-bem-sol.fif')
fname_src_fs = op.join(subjects_dir, 'fsaverage', 'bem',
'fsaverage-ico-5-src.fif')
raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
# Get relevant channel information
info = mne.io.read_info(raw_fname)
info = mne.pick_info(info, mne.pick_types(info, meg=True, eeg=False,
exclude=[]))
# Morph fsaverage's source space to sample
src_fs = mne.read_source_spaces(fname_src_fs)
src_morph = mne.morph_source_spaces(src_fs, subject_to='sample',
subjects_dir=subjects_dir)
# Compute the forward with our morphed source space
fwd = mne.make_forward_solution(info, trans=fname_trans,
src=src_morph, bem=fname_bem)
mag_map = mne.sensitivity_map(fwd, ch_type='mag')
# Return this SourceEstimate (on sample's surfaces) to fsaverage's surfaces
mag_map_fs = mag_map.to_original_src(src_fs, subjects_dir=subjects_dir)
# Plot the result, which tracks the sulcal-gyral folding
# outliers may occur, so we'll place the cutoff at 99 percent.
kwargs = dict(clim=dict(kind='percent', lims=[0, 50, 99]),
# no smoothing, let's see the dipoles on the cortex.
smoothing_steps=1, hemi='rh', views=['lat'])
# Now note that the dipoles on fsaverage are almost equidistant while
# morphing will distribute the dipoles unevenly across the given subject's
# cortical surface to achieve the closest approximation to the average brain.
# Our testing code suggests a correlation of higher than 0.99.
brain_subject = mag_map.plot( # plot forward in subject source space (morphed)
time_label=None, subjects_dir=subjects_dir, **kwargs)
brain_fs = mag_map_fs.plot( # plot forward in original source space (remapped)
time_label=None, subjects_dir=subjects_dir, **kwargs)
```
# MLP Classification with SUBJ Dataset
<hr>
We will build a text classification model using an MLP model on the SUBJ Dataset. Since there is no standard train/test split for this dataset, we will use 10-Fold Cross Validation (CV).
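As a quick sanity check of what 10-fold CV produces, here is a minimal sketch with scikit-learn's `KFold` (the data array and seed are illustrative):

```python
import numpy as np
from sklearn.model_selection import KFold

data = np.arange(20)  # toy stand-in for the sentence list
kfold = KFold(n_splits=10, shuffle=True, random_state=0)
# each split yields disjoint train/test index arrays covering all samples
fold_sizes = [len(test) for _, test in kfold.split(data)]
print(fold_sizes)
```

Each of the 10 folds serves as the test set exactly once, so every sentence is tested on exactly one trained model.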
## Load the library
```
import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import re
import nltk
import random
from nltk.corpus import stopwords, twitter_samples
# from nltk.tokenize import TweetTokenizer
from sklearn.model_selection import KFold
from nltk.stem import PorterStemmer
from string import punctuation
from sklearn.preprocessing import OneHotEncoder
from tensorflow.keras.preprocessing.text import Tokenizer
import time
%config IPCompleter.greedy=True
%config IPCompleter.use_jedi=False
# nltk.download('twitter_samples')
tf.config.experimental.list_physical_devices('GPU')
```
## Load the Dataset
```
corpus = pd.read_pickle('../../0_data/SUBJ/SUBJ.pkl')
corpus.label = corpus.label.astype(int)
print(corpus.shape)
corpus
corpus.info()
corpus.groupby( by='label').count()
# Separate the sentences and the labels
sentences, labels = list(corpus.sentence), list(corpus.label)
```
## Raw Number of Vocabulary
```
# Build the raw vocabulary for first inspection
tokenizer = Tokenizer()
tokenizer.fit_on_texts(sentences)
vocab_raw = tokenizer.word_index
print('\nThe vocabulary size: {}\n'.format(len(vocab_raw)))
print(vocab_raw)
```
<!--## Split Dataset-->
# Data Preprocessing
<hr>
## Define `clean_doc` function
```
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
stemmer = PorterStemmer()
def clean_doc(doc):
# split into tokens by white space
tokens = doc.split()
# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(punctuation))
# remove punctuation from each word
tokens = [re_punc.sub('', w) for w in tokens]
# remove remaining tokens that are not alphabetic
# tokens = [word for word in tokens if word.isalpha()]
# filter out stop words
tokens = [w for w in tokens if not w in stopwords]
# filter out short tokens
tokens = [word for word in tokens if len(word) >= 1]
# Stem the token
tokens = [stemmer.stem(token) for token in tokens]
return tokens
```
## Develop Vocabulary
A part of preparing text for text classification involves defining and tailoring the vocabulary of words supported by the model. **We can do this by loading all of the documents in the dataset and building a set of words.**
The larger the vocabulary, the more sparse the representation of each word or document. So, we may decide to support all of these words, or perhaps discard some. The final chosen vocabulary can then be saved to a file for later use, such as filtering words in new documents in the future.
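Discarding rare words can be done with a minimum-occurrence cutoff; the threshold and tokens below are illustrative:

```python
from collections import Counter

tokens = "the cat sat on the mat the cat ran".split()
vocab = Counter(tokens)
min_occurrence = 2  # illustrative cutoff
kept = sorted(w for w, c in vocab.items() if c >= min_occurrence)
print(kept)
```

Raising the cutoff shrinks the vocabulary and therefore the length of every document vector.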
We can use `Counter` class and create an instance called `vocab` as follows:
```
from collections import Counter
vocab = Counter()
def add_doc_to_vocab(docs, vocab):
'''
input:
docs: a list of sentences (docs)
vocab: a vocabulary dictionary
output:
return an updated vocabulary
'''
for doc in docs:
tokens = clean_doc(doc)
vocab.update(tokens)
return vocab
# Example
add_doc_to_vocab(sentences, vocab)
print(len(vocab))
vocab
vocab.items()
# #########################
# # Define the vocabulary #
# #########################
# from collections import Counter
# from nltk.corpus import stopwords
# stopwords = stopwords.words('english')
# stemmer = PorterStemmer()
# def clean_doc(doc):
# # split into tokens by white space
# tokens = doc.split()
# # prepare regex for char filtering
# re_punc = re.compile('[%s]' % re.escape(punctuation))
# # remove punctuation from each word
# tokens = [re_punc.sub('', w) for w in tokens]
# # filter out stop words
# tokens = [w for w in tokens if not w in stopwords]
# # filter out short tokens
# tokens = [word for word in tokens if len(word) >= 1]
# # Stem the token
# tokens = [stemmer.stem(token) for token in tokens]
# return tokens
# def add_doc_to_vocab(docs, vocab):
# '''
# input:
# docs: a list of sentences (docs)
# vocab: a vocabulary dictionary
# output:
# return an updated vocabulary
# '''
# for doc in docs:
# tokens = clean_doc(doc)
# vocab.update(tokens)
# return vocab
# # prepare cross validation with 10 splits and shuffle = True
# kfold = KFold(10, True)
# # Separate the sentences and the labels
# sentences, labels = list(corpus.sentence), list(corpus.label)
# acc_list = []
# # kfold.split() will return set indices for each split
# for train, test in kfold.split(sentences):
# # Instantiate a vocab object
# vocab = Counter()
# train_x, test_x = [], []
# train_y, test_y = [], []
# for i in train:
# train_x.append(sentences[i])
# train_y.append(labels[i])
# for i in test:
# test_x.append(sentences[i])
# test_y.append(labels[i])
# vocab = add_doc_to_vocab(train_x, vocab)
# print(len(train_x), len(test_x))
# print(len(vocab))
```
# Bag-of-Words Representation
<hr>
Once we define our vocab obtained from the training data, we need to **convert each sentence into a representation that we can feed to a Multilayer Perceptron Model.**
As a reminder, here is a summary of what we will do:
- extract features from the text so the text input can be used with ML algorithms like neural networks
- we do this by converting the text into a vector representation. The larger the vocab, the longer the representation.
- we will score the words in a document inside the vector. These scores are placed in the corresponding location in the vector representation.
```
def doc_to_line(doc):
tokens = clean_doc(doc)
# filter by vocab
tokens = [token for token in tokens if token in vocab]
line = ' '.join([token for token in tokens])
return line
def clean_docs(docs):
lines = []
for doc in docs:
line = doc_to_line(doc)
lines.append(line)
return lines
print(sentences[:5])
clean_sentences = clean_docs(sentences[:5])
print()
print( clean_sentences)
```
## Bag-of-Words Vectors
We will use the **Keras API** to **convert sentences to encoded document vectors**. Although the `Tokenizer` class from TF Keras provides cleaning and vocab definition, it's better we do this ourselves so that we know exactly what we are doing.
```
def create_tokenizer(sentences):
tokenizer = Tokenizer()
tokenizer.fit_on_texts(sentences)
return tokenizer
```
This process determines a consistent way to **convert the vocabulary to a fixed-length vector**, which is the total number of words in the vocabulary `vocab`.
Next, documents can then be encoded using the Tokenizer by calling `texts_to_matrix()`.
The function takes both a list of documents to encode and an encoding mode, which is the method used to score words in the document. Here we specify **freq** to score words based on their frequency in the document.
This can be used to encode the loaded training and test data, for example:
`Xtrain = tokenizer.texts_to_matrix(train_docs, mode='freq')`
`Xtest = tokenizer.texts_to_matrix(test_docs, mode='freq')`
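Conceptually, `mode='freq'` scores each vocabulary word by its relative frequency in the document. A pure-Python sketch of the idea (not Keras's exact implementation):

```python
def freq_vector(doc_tokens, vocab):
    # term count divided by document length, one slot per vocab word
    n = len(doc_tokens)
    return [doc_tokens.count(w) / n for w in vocab]

vec = freq_vector('the cat sat on the mat the cat'.split(), ['cat', 'dog', 'mat'])
print(vec)
```

Normalizing by document length keeps long and short documents on a comparable scale.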
```
# #########################
# # Define the vocabulary #
# #########################
# from collections import Counter
# from nltk.corpus import stopwords
# stopwords = stopwords.words('english')
# stemmer = PorterStemmer()
# def clean_doc(doc):
# # split into tokens by white space
# tokens = doc.split()
# # prepare regex for char filtering
# re_punc = re.compile('[%s]' % re.escape(punctuation))
# # remove punctuation from each word
# tokens = [re_punc.sub('', w) for w in tokens]
# # filter out stop words
# tokens = [w for w in tokens if not w in stopwords]
# # filter out short tokens
# tokens = [word for word in tokens if len(word) >= 1]
# # Stem the token
# tokens = [stemmer.stem(token) for token in tokens]
# return tokens
# def add_doc_to_vocab(docs, vocab):
# '''
# input:
# docs: a list of sentences (docs)
# vocab: a vocabulary dictionary
# output:
# return an updated vocabulary
# '''
# for doc in docs:
# tokens = clean_doc(doc)
# vocab.update(tokens)
# return vocab
# def doc_to_line(doc, vocab):
# tokens = clean_doc(doc)
# # filter by vocab
# tokens = [token for token in tokens if token in vocab]
# line = ' '.join(tokens)
# return line
# def clean_docs(docs, vocab):
# lines = []
# for doc in docs:
# line = doc_to_line(doc, vocab)
# lines.append(line)
# return lines
# def create_tokenizer(sentences):
# tokenizer = Tokenizer()
# tokenizer.fit_on_texts(sentences)
# return tokenizer
# # prepare cross validation with 10 splits and shuffle = True
# kfold = KFold(10, True)
# # Separate the sentences and the labels
# sentences, labels = list(corpus.sentence), list(corpus.label)
# acc_list = []
# # kfold.split() will return set indices for each split
# for train, test in kfold.split(sentences):
# # Instantiate a vocab object
# vocab = Counter()
# train_x, test_x = [], []
# train_y, test_y = [], []
# for i in train:
# train_x.append(sentences[i])
# train_y.append(labels[i])
# for i in test:
# test_x.append(sentences[i])
# test_y.append(labels[i])
# # Turn the labels into a numpy array
# train_y = np.array(train_y)
# test_y = np.array(test_y)
# # Define a vocabulary for each fold
# vocab = add_doc_to_vocab(train_x, vocab)
# print('The number of vocab: ', len(vocab))
# # Clean the sentences
# train_x = clean_docs(train_x, vocab)
# test_x = clean_docs(test_x, vocab)
# # Define the tokenizer
# tokenizer = create_tokenizer(train_x)
# # encode data using freq mode
# Xtrain = tokenizer.texts_to_matrix(train_x, mode='freq')
# Xtest = tokenizer.texts_to_matrix(test_x, mode='freq')
# print(Xtrain.shape)
# print(train_x[0])
# print(Xtrain[0])
# print(Xtest.shape)
# print(test_x[0])
# print(Xtest[0])
```
# Training and Testing the Model 3
## MLP Model 3
Now, we will build Multilayer Perceptron (MLP) models to classify encoded documents as either subjective or objective.
As you might have expected, the models are simply feedforward networks with fully connected layers, called `Dense` in the `Keras` library.
We will define our MLP neural network with very little trial and error, so it cannot be considered tuned for this problem. The configuration is as follows:
- First hidden layer with 100 neurons and Relu activation function
- Second hidden layer with 50 neurons and Relu activation function
- Dropout Layer for each fully connected layer with p = 0.5
- Output layer with Sigmoid activation function
- Optimizer: Adam (The best learning algorithm so far)
- Loss function: binary cross-entropy (suited for binary classification problem)
```
def train_mlp_3(train_x, train_y, batch_size = 50, epochs = 10, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense( units=100, activation='relu', input_shape=(n_words,)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=50, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=1, activation='sigmoid')
])
model.compile( loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(train_x, train_y, batch_size, epochs, verbose)
return model
callbacks = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', min_delta=0,
patience=5, verbose=2,
mode='auto', restore_best_weights=True)
```
## Train and Test the Model
```
#########################
# Define the vocabulary #
#########################
from collections import Counter
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
stemmer = PorterStemmer()
def clean_doc(doc):
# split into tokens by white space
tokens = doc.split()
# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(punctuation))
# remove punctuation from each word
tokens = [re_punc.sub('', w) for w in tokens]
# filter out stop words
tokens = [w for w in tokens if not w in stopwords]
# filter out short tokens
tokens = [word for word in tokens if len(word) >= 1]
# Stem the token
tokens = [stemmer.stem(token) for token in tokens]
return tokens
def add_doc_to_vocab(docs, vocab):
'''
input:
docs: a list of sentences (docs)
vocab: a vocabulary dictionary
output:
return an updated vocabulary
'''
for doc in docs:
tokens = clean_doc(doc)
vocab.update(tokens)
return vocab
def doc_to_line(doc, vocab):
tokens = clean_doc(doc)
# filter by vocab
tokens = [token for token in tokens if token in vocab]
line = ' '.join(tokens)
return line
def clean_docs(docs, vocab):
lines = []
for doc in docs:
line = doc_to_line(doc, vocab)
lines.append(line)
return lines
def create_tokenizer(sentences):
tokenizer = Tokenizer()
tokenizer.fit_on_texts(sentences)
return tokenizer
def train_mlp_3(train_x, train_y, test_x, test_y, batch_size = 50, epochs = 20, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense( units=100, activation='relu', input_shape=(n_words,)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=50, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=1, activation='sigmoid')
])
model.compile( loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(train_x, train_y, batch_size, epochs, verbose, callbacks = [callbacks], validation_data=(test_x, test_y))
return model
# prepare cross validation with 10 splits and shuffle = True
kfold = KFold(n_splits=10, shuffle=True)
# Separate the sentences and the labels
sentences, labels = list(corpus.sentence), list(corpus.label)
acc_list = []
# kfold.split() will return set indices for each split
for train, test in kfold.split(sentences):
# Instantiate a vocab object
vocab = Counter()
train_x, test_x = [], []
train_y, test_y = [], []
for i in train:
train_x.append(sentences[i])
train_y.append(labels[i])
for i in test:
test_x.append(sentences[i])
test_y.append(labels[i])
# Turn the labels into a numpy array
train_y = np.array(train_y)
test_y = np.array(test_y)
# Define a vocabulary for each fold
vocab = add_doc_to_vocab(train_x, vocab)
# print('The number of vocab: ', len(vocab))
# Clean the sentences
train_x = clean_docs(train_x, vocab)
test_x = clean_docs(test_x, vocab)
# Define the tokenizer
tokenizer = create_tokenizer(train_x)
# encode data using freq mode
Xtrain = tokenizer.texts_to_matrix(train_x, mode='freq')
Xtest = tokenizer.texts_to_matrix(test_x, mode='freq')
# train the model
model = train_mlp_3(Xtrain, train_y, Xtest, test_y)
# evaluate the model
loss, acc = model.evaluate(Xtest, test_y, verbose=0)
print('Test Accuracy: {}'.format(acc*100))
acc_list.append(acc)
acc_list = np.array(acc_list)
print()
print('The test accuracy for each training:\n{}'.format(acc_list))
print('The mean of the test accuracy: ', acc_list.mean())
model.summary()
```
## Comparing the Word Scoring Methods
When we use the `texts_to_matrix()` function, we are given 4 different methods for scoring words:
- `binary`: words are marked as 1 (present) or 0 (absent)
- `count`: words are counted based on their occurrence (integer)
- `tfidf`: words are scored based on their frequency of occurrence in their own document, but also are being penalized if they are common across all documents
- `freq`: words are scored based on their frequency of occurrence in their own document
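The first three modes can be mimicked in a few lines of plain Python (toy formulas, not Keras's exact ones; `tfidf` additionally multiplies counts by an inverse-document-frequency penalty):

```python
def score(doc, vocab, mode):
    # toy versions of the Keras word-scoring modes
    if mode == 'binary':
        return [1 if w in doc else 0 for w in vocab]
    if mode == 'count':
        return [doc.count(w) for w in vocab]
    if mode == 'freq':
        return [doc.count(w) / len(doc) for w in vocab]
    raise ValueError(mode)

doc = ['cat', 'cat', 'mat']
vocab = ['cat', 'dog', 'mat']
print(score(doc, vocab, 'binary'), score(doc, vocab, 'count'))
```

Comparing the modes on the same split, as done below, shows how sensitive the MLP is to the choice of scoring.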
```
# prepare bag-of-words encoding of docs
def prepare_data(train_docs, test_docs, mode):
# create the tokenizer
tokenizer = Tokenizer()
# fit the tokenizer on the documents
tokenizer.fit_on_texts(train_docs)
# encode training data set
Xtrain = tokenizer.texts_to_matrix(train_docs, mode=mode)
# encode test data set
Xtest = tokenizer.texts_to_matrix(test_docs, mode=mode)
return Xtrain, Xtest
#########################
# Define the vocabulary #
#########################
from collections import Counter
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
stemmer = PorterStemmer()
def clean_doc(doc):
# split into tokens by white space
tokens = doc.split()
# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(punctuation))
# remove punctuation from each word
tokens = [re_punc.sub('', w) for w in tokens]
# filter out stop words
tokens = [w for w in tokens if not w in stopwords]
# filter out short tokens
tokens = [word for word in tokens if len(word) >= 1]
# Stem the token
tokens = [stemmer.stem(token) for token in tokens]
return tokens
def add_doc_to_vocab(docs, vocab):
'''
input:
docs: a list of sentences (docs)
vocab: a vocabulary dictionary
output:
return an updated vocabulary
'''
for doc in docs:
tokens = clean_doc(doc)
vocab.update(tokens)
return vocab
def doc_to_line(doc, vocab):
tokens = clean_doc(doc)
# filter by vocab
tokens = [token for token in tokens if token in vocab]
line = ' '.join(tokens)
return line
def clean_docs(docs, vocab):
lines = []
for doc in docs:
line = doc_to_line(doc, vocab)
lines.append(line)
return lines
# prepare bag-of-words encoding of docs
def prepare_data(train_docs, test_docs, mode):
# create the tokenizer
tokenizer = Tokenizer()
# fit the tokenizer on the documents
tokenizer.fit_on_texts(train_docs)
# encode training data set
Xtrain = tokenizer.texts_to_matrix(train_docs, mode=mode)
# encode test data set
Xtest = tokenizer.texts_to_matrix(test_docs, mode=mode)
return Xtrain, Xtest
def train_mlp_3(train_x, train_y, test_x, test_y, batch_size = 50, epochs = 20, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense( units=100, activation='relu', input_shape=(n_words,)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=50, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=1, activation='sigmoid')
])
model.compile( loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(train_x, train_y, batch_size, epochs, verbose, callbacks = [callbacks], validation_data=(test_x, test_y))
return model
# prepare cross validation with 10 splits and shuffle = True
kfold = KFold(n_splits=10, shuffle=True)
# Separate the sentences and the labels
sentences, labels = list(corpus.sentence), list(corpus.label)
# Run Experiment of 4 different modes
modes = ['binary', 'count', 'tfidf', 'freq']
results = pd.DataFrame()
for mode in modes:
print('mode: ', mode)
acc_list = []
# kfold.split() will return set indices for each split
for train, test in kfold.split(sentences):
# Instantiate a vocab object
vocab = Counter()
train_x, test_x = [], []
train_y, test_y = [], []
for i in train:
train_x.append(sentences[i])
train_y.append(labels[i])
for i in test:
test_x.append(sentences[i])
test_y.append(labels[i])
# Turn the labels into a numpy array
train_y = np.array(train_y)
test_y = np.array(test_y)
# Define a vocabulary for each fold
vocab = add_doc_to_vocab(train_x, vocab)
# print('The number of vocab: ', len(vocab))
# Clean the sentences
train_x = clean_docs(train_x, vocab)
test_x = clean_docs(test_x, vocab)
# encode data using freq mode
Xtrain, Xtest = prepare_data(train_x, test_x, mode)
# train the model
model = train_mlp_3(Xtrain, train_y, Xtest, test_y, verbose=0)
# evaluate the model
loss, acc = model.evaluate(Xtest, test_y, verbose=0)
print('Test Accuracy: {}'.format(acc*100))
acc_list.append(acc)
results[mode] = acc_list
acc_list = np.array(acc_list)
print('The test accuracy for each training:\n{}'.format(acc_list))
print('The mean of the test accuracy: ', acc_list.mean())
print()
print(results)
import seaborn as sns
results.boxplot()
plt.show()
```
## Summary
```
results
results.describe()
report = results
report.to_excel('BoW_MLP_SUBJ_3.xlsx', sheet_name='model_3')
```
# Training and Testing the Model 2
## MLP Model 2
Now, we will build Multilayer Perceptron (MLP) models to classify encoded documents as either subjective or objective.
As you might have expected, the models are simply feedforward networks with fully connected layers, called `Dense` in the `Keras` library.
We will define our MLP neural network with very little trial and error, so it cannot be considered tuned for this problem. The configuration is as follows:
- First hidden layer with 100 neurons and Relu activation function
- Dropout layer with p = 0.5
- Output layer with Sigmoid activation function
- Optimizer: Adam (The best learning algorithm so far)
- Loss function: binary cross-entropy (suited for binary classification problem)
```
def train_mlp_2(train_x, train_y, batch_size = 50, epochs = 10, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense( units=100, activation='relu', input_shape=(n_words,)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=1, activation='sigmoid')
])
model.compile( loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(train_x, train_y, batch_size, epochs, verbose)
return model
```
## Train and Test the Model
```
#########################
# Define the vocabulary #
#########################
from collections import Counter
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
stemmer = PorterStemmer()
def clean_doc(doc):
# split into tokens by white space
tokens = doc.split()
# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(punctuation))
# remove punctuation from each word
tokens = [re_punc.sub('', w) for w in tokens]
# filter out stop words
tokens = [w for w in tokens if not w in stopwords]
# filter out short tokens
tokens = [word for word in tokens if len(word) >= 1]
# Stem the token
tokens = [stemmer.stem(token) for token in tokens]
return tokens
def add_doc_to_vocab(docs, vocab):
'''
input:
docs: a list of sentences (docs)
vocab: a vocabulary dictionary
output:
return an updated vocabulary
'''
for doc in docs:
tokens = clean_doc(doc)
vocab.update(tokens)
return vocab
def doc_to_line(doc, vocab):
tokens = clean_doc(doc)
# filter by vocab
tokens = [token for token in tokens if token in vocab]
line = ' '.join(tokens)
return line
def clean_docs(docs, vocab):
lines = []
for doc in docs:
line = doc_to_line(doc, vocab)
lines.append(line)
return lines
def create_tokenizer(sentences):
tokenizer = Tokenizer()
tokenizer.fit_on_texts(sentences)
return tokenizer
def train_mlp_2(train_x, train_y, test_x, test_y, batch_size = 50, epochs = 20, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense( units=100, activation='relu', input_shape=(n_words,)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=1, activation='sigmoid')
])
model.compile( loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(train_x, train_y, batch_size, epochs, verbose, callbacks = [callbacks], validation_data=(test_x, test_y))
return model
# prepare cross validation with 10 splits and shuffle = True
kfold = KFold(n_splits=10, shuffle=True)
# Separate the sentences and the labels
sentences, labels = list(corpus.sentence), list(corpus.label)
acc_list = []
# kfold.split() will return set indices for each split
for train, test in kfold.split(sentences):
# Instantiate a vocab object
vocab = Counter()
train_x, test_x = [], []
train_y, test_y = [], []
for i in train:
train_x.append(sentences[i])
train_y.append(labels[i])
for i in test:
test_x.append(sentences[i])
test_y.append(labels[i])
# Turn the labels into a numpy array
train_y = np.array(train_y)
test_y = np.array(test_y)
# Define a vocabulary for each fold
vocab = add_doc_to_vocab(train_x, vocab)
# print('The number of vocab: ', len(vocab))
# Clean the sentences
train_x = clean_docs(train_x, vocab)
test_x = clean_docs(test_x, vocab)
# Define the tokenizer
tokenizer = create_tokenizer(train_x)
# encode data using freq mode
Xtrain = tokenizer.texts_to_matrix(train_x, mode='freq')
Xtest = tokenizer.texts_to_matrix(test_x, mode='freq')
# train the model
model = train_mlp_2(Xtrain, train_y, Xtest, test_y)
# evaluate the model
loss, acc = model.evaluate(Xtest, test_y, verbose=0)
print('Test Accuracy: {}'.format(acc*100))
acc_list.append(acc)
acc_list = np.array(acc_list)
print()
print('The test accuracy for each training:\n{}'.format(acc_list))
print('The mean of the test accuracy: ', acc_list.mean())
```
## Comparing the Word Scoring Methods
When we use the `texts_to_matrix()` function, we are given 4 different methods for scoring words:
- `binary`: words are marked as 1 (present) or 0 (absent)
- `count`: words are counted based on their occurrence (integer)
- `tfidf`: words are scored based on their frequency of occurrence in their own document, but also are being penalized if they are common across all documents
- `freq`: words are scored based on their frequency of occurrence in their own document
```
#########################
# Define the vocabulary #
#########################
from collections import Counter
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
stemmer = PorterStemmer()
def clean_doc(doc):
# split into tokens by white space
tokens = doc.split()
# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(punctuation))
# remove punctuation from each word
tokens = [re_punc.sub('', w) for w in tokens]
# filter out stop words
tokens = [w for w in tokens if not w in stopwords]
# filter out short tokens
tokens = [word for word in tokens if len(word) >= 1]
# Stem the token
tokens = [stemmer.stem(token) for token in tokens]
return tokens
def add_doc_to_vocab(docs, vocab):
'''
input:
docs: a list of sentences (docs)
vocab: a vocabulary dictionary
output:
return an updated vocabulary
'''
for doc in docs:
tokens = clean_doc(doc)
vocab.update(tokens)
return vocab
def doc_to_line(doc, vocab):
tokens = clean_doc(doc)
# filter by vocab
tokens = [token for token in tokens if token in vocab]
line = ' '.join(tokens)
return line
def clean_docs(docs, vocab):
lines = []
for doc in docs:
line = doc_to_line(doc, vocab)
lines.append(line)
return lines
# prepare bag-of-words encoding of docs
def prepare_data(train_docs, test_docs, mode):
# create the tokenizer
tokenizer = Tokenizer()
# fit the tokenizer on the documents
tokenizer.fit_on_texts(train_docs)
# encode training data set
Xtrain = tokenizer.texts_to_matrix(train_docs, mode=mode)
# encode test data set
Xtest = tokenizer.texts_to_matrix(test_docs, mode=mode)
return Xtrain, Xtest
def train_mlp_2(train_x, train_y, test_x, test_y, batch_size = 50, epochs = 20, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense( units=100, activation='relu', input_shape=(n_words,)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=1, activation='sigmoid')
])
model.compile( loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(train_x, train_y, batch_size, epochs, verbose, callbacks = [callbacks], validation_data=(test_x, test_y))
return model
# prepare cross validation with 10 splits and shuffle = True
kfold = KFold(n_splits=10, shuffle=True)
# Separate the sentences and the labels
sentences, labels = list(corpus.sentence), list(corpus.label)
# Run Experiment of 4 different modes
modes = ['binary', 'count', 'tfidf', 'freq']
results = pd.DataFrame()
for mode in modes:
print('mode: ', mode)
acc_list = []
# kfold.split() will return set indices for each split
for train, test in kfold.split(sentences):
# Instantiate a vocab object
vocab = Counter()
train_x, test_x = [], []
train_y, test_y = [], []
for i in train:
train_x.append(sentences[i])
train_y.append(labels[i])
for i in test:
test_x.append(sentences[i])
test_y.append(labels[i])
# Turn the labels into a numpy array
train_y = np.array(train_y)
test_y = np.array(test_y)
# Define a vocabulary for each fold
vocab = add_doc_to_vocab(train_x, vocab)
# print('The number of vocab: ', len(vocab))
# Clean the sentences
train_x = clean_docs(train_x, vocab)
test_x = clean_docs(test_x, vocab)
# encode data using freq mode
Xtrain, Xtest = prepare_data(train_x, test_x, mode)
# train the model
model = train_mlp_2(Xtrain, train_y, Xtest, test_y, verbose=0)
# evaluate the model
loss, acc = model.evaluate(Xtest, test_y, verbose=0)
print('Test Accuracy: {}'.format(acc*100))
acc_list.append(acc)
results[mode] = acc_list
acc_list = np.array(acc_list)
print('The test accuracy for each training:\n{}'.format(acc_list))
print('The mean of the test accuracy: ', acc_list.mean())
print()
print(results)
results.boxplot()
plt.show()
```
## Summary
```
results
results.describe()
report = results
report.to_excel('BoW_MLP_SUBJ_2.xlsx', sheet_name='model_2')
```
# Training and Testing the Model 1
## MLP Model 1
Now, we will build Multilayer Perceptron (MLP) models to classify encoded documents as either subjective or objective.
As you might have expected, the models are simply feedforward networks with fully connected layers, called `Dense` in the `Keras` library.
We will define our MLP neural network with very little trial and error, so it cannot be considered tuned for this problem. The configuration is as follows:
- First hidden layer with 50 neurons and Relu activation function
- Dropout layer with p = 0.5
- Output layer with Sigmoid activation function
- Optimizer: Adam (The best learning algorithm so far)
- Loss function: binary cross-entropy (suited for binary classification problem)
```
def train_mlp_1(train_x, train_y, batch_size = 50, epochs = 10, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense( units=50, activation='relu', input_shape=(n_words,)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=1, activation='sigmoid')
])
model.compile( loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(train_x, train_y, batch_size=batch_size, epochs=epochs, verbose=verbose)
return model
```
## Train and Test the Model
```
#########################
# Define the vocabulary #
#########################
from collections import Counter
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
stemmer = PorterStemmer()
def clean_doc(doc):
# split into tokens by white space
tokens = doc.split()
# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(punctuation))
# remove punctuation from each word
tokens = [re_punc.sub('', w) for w in tokens]
# filter out stop words
tokens = [w for w in tokens if w not in stopwords]
# filter out empty tokens
tokens = [word for word in tokens if len(word) >= 1]
# Stem the token
tokens = [stemmer.stem(token) for token in tokens]
return tokens
def add_doc_to_vocab(docs, vocab):
'''
input:
docs: a list of sentences (docs)
vocab: a vocabulary dictionary
output:
return an updated vocabulary
'''
for doc in docs:
tokens = clean_doc(doc)
vocab.update(tokens)
return vocab
def doc_to_line(doc, vocab):
tokens = clean_doc(doc)
# filter by vocab
tokens = [token for token in tokens if token in vocab]
line = ' '.join(tokens)
return line
def clean_docs(docs, vocab):
lines = []
for doc in docs:
line = doc_to_line(doc, vocab)
lines.append(line)
return lines
def create_tokenizer(sentences):
tokenizer = Tokenizer()
tokenizer.fit_on_texts(sentences)
return tokenizer
def train_mlp_1(train_x, train_y, test_x, test_y, batch_size = 50, epochs = 20, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense( units=50, activation='relu', input_shape=(n_words,)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=1, activation='sigmoid')
])
model.compile( loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(train_x, train_y, batch_size=batch_size, epochs=epochs, verbose=verbose, callbacks=[callbacks], validation_data=(test_x, test_y))
return model
# prepare cross validation with 10 splits and shuffle = True
kfold = KFold(n_splits=10, shuffle=True)
# Separate the sentences and the labels
sentences, labels = list(corpus.sentence), list(corpus.label)
acc_list = []
# kfold.split() will return set indices for each split
for train, test in kfold.split(sentences):
# Instantiate a vocab object
vocab = Counter()
train_x, test_x = [], []
train_y, test_y = [], []
for i in train:
train_x.append(sentences[i])
train_y.append(labels[i])
for i in test:
test_x.append(sentences[i])
test_y.append(labels[i])
# Turn the labels into a numpy array
train_y = np.array(train_y)
test_y = np.array(test_y)
# Define a vocabulary for each fold
vocab = add_doc_to_vocab(train_x, vocab)
# print('The number of vocab: ', len(vocab))
# Clean the sentences
train_x = clean_docs(train_x, vocab)
test_x = clean_docs(test_x, vocab)
# Define the tokenizer
tokenizer = create_tokenizer(train_x)
# encode data using freq mode
Xtrain = tokenizer.texts_to_matrix(train_x, mode='freq')
Xtest = tokenizer.texts_to_matrix(test_x, mode='freq')
# train the model
model = train_mlp_1(Xtrain, train_y, Xtest, test_y)
# evaluate the model
loss, acc = model.evaluate(Xtest, test_y, verbose=0)
print('Test Accuracy: {}'.format(acc*100))
acc_list.append(acc)
acc_list = np.array(acc_list)
print()
print('The test accuracy for each training:\n{}'.format(acc_list))
print('The mean of the test accuracy: ', acc_list.mean())
```
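The cross-validation loop above leans on `KFold` to hand out index arrays. Here is a minimal illustration of its contract (assuming scikit-learn is installed; newer versions require the keyword arguments shown):

```python
# Quick illustration of what KFold(...).split() yields: disjoint
# train/test index arrays that together cover every sample exactly
# once per pass over the folds.
import numpy as np
from sklearn.model_selection import KFold

data = np.arange(10)
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
all_test = []
for train_idx, test_idx in kfold.split(data):
    assert set(train_idx).isdisjoint(test_idx)  # no leakage within a fold
    all_test.extend(int(i) for i in test_idx)

# every sample lands in exactly one test fold
assert sorted(all_test) == list(range(10))
```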
## Comparing the Word Scoring Methods
When we use the `texts_to_matrix()` function, we can choose between 4 different methods for scoring words:
- `binary`: words are marked as 1 (present) or 0 (absent)
- `count`: words are counted based on their occurrence (integer)
- `tfidf`: words are scored based on their frequency of occurrence in their own document, but penalized if they are common across all documents
- `freq`: words are scored based on their frequency of occurrence in their own document
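A hand-rolled sketch of these four scoring schemes on a toy two-document corpus may make the differences concrete (all names here are illustrative; the exact tf-idf weighting Keras applies differs slightly from this textbook form):

```python
# Toy re-implementation of the four word-scoring modes that
# Tokenizer.texts_to_matrix offers, on a two-document corpus.
import math
from collections import Counter

docs = ["good movie", "bad bad movie"]
vocab = sorted({w for d in docs for w in d.split()})  # ['bad', 'good', 'movie']

def score(doc, mode):
    counts = Counter(doc.split())
    n = sum(counts.values())
    row = []
    for w in vocab:
        c = counts[w]
        if mode == "binary":
            row.append(1.0 if c else 0.0)           # present / absent
        elif mode == "count":
            row.append(float(c))                    # raw occurrence count
        elif mode == "freq":
            row.append(c / n)                       # within-document frequency
        elif mode == "tfidf":
            df = sum(1 for d in docs if w in d.split())
            row.append(c * math.log(1 + len(docs) / (1 + df)))
    return row

print(score("bad bad movie", "binary"))  # [1.0, 0.0, 1.0]
print(score("bad bad movie", "count"))   # [2.0, 0.0, 1.0]
```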
```
#########################
# Define the vocabulary #
#########################
from collections import Counter
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
stemmer = PorterStemmer()
def clean_doc(doc):
# split into tokens by white space
tokens = doc.split()
# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(punctuation))
# remove punctuation from each word
tokens = [re_punc.sub('', w) for w in tokens]
# filter out stop words
tokens = [w for w in tokens if w not in stopwords]
# filter out empty tokens
tokens = [word for word in tokens if len(word) >= 1]
# Stem the token
tokens = [stemmer.stem(token) for token in tokens]
return tokens
def add_doc_to_vocab(docs, vocab):
'''
input:
docs: a list of sentences (docs)
vocab: a vocabulary dictionary
output:
return an updated vocabulary
'''
for doc in docs:
tokens = clean_doc(doc)
vocab.update(tokens)
return vocab
def doc_to_line(doc, vocab):
tokens = clean_doc(doc)
# filter by vocab
tokens = [token for token in tokens if token in vocab]
line = ' '.join(tokens)
return line
def clean_docs(docs, vocab):
lines = []
for doc in docs:
line = doc_to_line(doc, vocab)
lines.append(line)
return lines
# prepare bag-of-words encoding of docs
def prepare_data(train_docs, test_docs, mode):
# create the tokenizer
tokenizer = Tokenizer()
# fit the tokenizer on the documents
tokenizer.fit_on_texts(train_docs)
# encode training data set
Xtrain = tokenizer.texts_to_matrix(train_docs, mode=mode)
# encode test data set
Xtest = tokenizer.texts_to_matrix(test_docs, mode=mode)
return Xtrain, Xtest
def train_mlp_1(train_x, train_y, test_x, test_y, batch_size = 50, epochs = 20, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense( units=50, activation='relu', input_shape=(n_words,)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=1, activation='sigmoid')
])
model.compile( loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(train_x, train_y, batch_size=batch_size, epochs=epochs, verbose=verbose, callbacks=[callbacks], validation_data=(test_x, test_y))
return model
# prepare cross validation with 10 splits and shuffle = True
kfold = KFold(n_splits=10, shuffle=True)
# Separate the sentences and the labels
sentences, labels = list(corpus.sentence), list(corpus.label)
# Run Experiment of 4 different modes
modes = ['binary', 'count', 'tfidf', 'freq']
results = pd.DataFrame()
for mode in modes:
print('mode: ', mode)
acc_list = []
# kfold.split() will return set indices for each split
for train, test in kfold.split(sentences):
# Instantiate a vocab object
vocab = Counter()
train_x, test_x = [], []
train_y, test_y = [], []
for i in train:
train_x.append(sentences[i])
train_y.append(labels[i])
for i in test:
test_x.append(sentences[i])
test_y.append(labels[i])
# Turn the labels into a numpy array
train_y = np.array(train_y)
test_y = np.array(test_y)
# Define a vocabulary for each fold
vocab = add_doc_to_vocab(train_x, vocab)
# print('The number of vocab: ', len(vocab))
# Clean the sentences
train_x = clean_docs(train_x, vocab)
test_x = clean_docs(test_x, vocab)
# encode data using freq mode
Xtrain, Xtest = prepare_data(train_x, test_x, mode)
# train the model
model = train_mlp_1(Xtrain, train_y, Xtest, test_y, verbose=0)
# evaluate the model
loss, acc = model.evaluate(Xtest, test_y, verbose=0)
print('Test Accuracy: {}'.format(acc*100))
acc_list.append(acc)
results[mode] = acc_list
acc_list = np.array(acc_list)
print('The test accuracy for each training:\n{}'.format(acc_list))
print('The mean of the test accuracy: ', acc_list.mean())
print()
print(results)
results.boxplot()
plt.show()
```
## Summary
```
results
results.describe()
report = results
report = report.to_excel('BoW_MLP_SUBJ_1.xlsx', sheet_name='model_1')
```
## Install prerequisites
```
from utility.preprocessing1 import processing,load_pickle,get_augmentaion,train_test_split
from models.model1 import padding,train_model,load_model,infer,DiagnosisDataset
DATA_SIZE=10000
BASE_PATH=f'data/{DATA_SIZE}'
FILE = f"{BASE_PATH}/AdmissionsDiagnosesCorePopulatedTable.txt"
```
## Run if you want to train your model with new data
```
# processing(FILE,DATA_SIZE)
```
## Load data from pickle
```
data_int,int2token,token2int=load_pickle(DATA_SIZE)
```
## If you need augmentation
```
data_aug=get_augmentaion(data_int)
```
## Train test split
```
train, val = train_test_split(data_aug,ratio=0.05,random_seed=10)
print(f"train: {len(train)}, val: {len(val)}")
```
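For readers without the project's utility module, here is a generic sketch of what a ratio-based split like the helper above typically does (`simple_split` is a stand-in; the project's `train_test_split` implementation may differ):

```python
# Reproducible ratio-based train/validation split: shuffle indices
# with a seeded RNG, then carve off the first `ratio` fraction as val.
import random

def simple_split(data, ratio=0.05, random_seed=10):
    rng = random.Random(random_seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)                      # deterministic given the seed
    n_val = int(len(data) * ratio)
    val = [data[i] for i in idx[:n_val]]
    train = [data[i] for i in idx[n_val:]]
    return train, val

train_demo, val_demo = simple_split(list(range(100)), ratio=0.05)
print(f"train: {len(train_demo)}, val: {len(val_demo)}")  # train: 95, val: 5
```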
# Train model
```
n_feature=16
n_hidden=128
n_layer=1
drop_prob=0.10
batch_size=32
input_size=11
num_epoch = 150
pad_value=2625
save_path=f"save_model/latest-b{batch_size}-e{num_epoch}_model.pth"
train_model(
n_feature=n_feature,
n_hidden=n_hidden,
n_layer=n_layer,
drop_prob=drop_prob,
batch_size=batch_size,
input_size=input_size,
num_epoch=num_epoch,
pad_value=pad_value,
train=train,
val=val,
save_path=save_path
)
```
## Load model
```
DATA_SIZE=10000
data_int,int2token,token2int=load_pickle(DATA_SIZE)
train, val = train_test_split(data_int,ratio=0.0,random_seed=10)
n_feature=16
n_hidden=128
n_layer=1
drop_prob=0.10
pad_value=2625
input_size=11
save_path="save_model/latest_32_model.pth"
model=load_model(n_feature,n_hidden,n_layer,drop_prob,save_path)
```
## Evaluation
```
import random
import torch
import torch.nn.functional as F
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
eval_data = DiagnosisDataset(train,11,pad_value=2625)
random.seed(1)
list_rand = random.sample(range(8000),8000)
correct = []
wrong = []
for idx in list_rand:
ip = torch.from_numpy(eval_data[idx][0]).view(1,-1)
gt = eval_data[idx][1]
with torch.no_grad():
y_hat, _ = model(ip.to(device))
y_hat = F.softmax(y_hat,1).cpu()
_, indx = torch.max(y_hat,1)
if indx.item() == gt:
correct.append(gt)
else:
wrong.append({"true":gt, "predicted":indx.item(),"index":idx})
total = len(correct) + len(wrong)
print(f"accuracy: {len(correct)/total}")
```
## Test model
```
def predict():
num=int(input("How many diagnosis codes do you have for the next code prediction?\n"))
x_test=[]
for x in range(num):
x_code=input(f"Enter diagnosis code {x+1} = ").strip().upper()
try:
x_test.append(token2int[x_code])
except:
print("Embedding not present")
x=[int2token[x] for x in x_test]
x_test=padding(x_test,input_size,pad_value)
idy=infer(x_test,model)
y=int2token[idy]
print("\n........Prediction........\n")
print(f"{x} --> {y}")
predict()
[int2token[x] for x in train[73]]
# p=['F32.4', 'Q27.1', 'H05.321', 'M31.30']
```
```
!pip install exoplanet
import exoplanet as xo
xo.utils.docs_setup()
print(f"exoplanet.__version__ = '{xo.__version__}'")
!pip install lightkurve
import numpy as np
import lightkurve as lk
import matplotlib.pyplot as plt
from astropy.io import fits
#1 Download TPF
lc_file = lk.search_lightcurve('WASP-33', mission='TESS').download(quality_bitmask="hardest", flux_column="pdcsap_flux")
lc = lc_file.remove_nans().normalize().remove_outliers()
time = lc.time.value
flux = lc.flux
# For the purposes of this example, we'll discard some of the data
m = (lc.quality == 0) & (
np.random.default_rng(261136679).uniform(size=len(time)) < 0.3
)
with fits.open(lc_file.filename) as hdu:
hdr = hdu[1].header
texp = hdr["FRAMETIM"] * hdr["NUM_FRM"]
texp /= 60.0 * 60.0 * 24.0
ref_time = 0.5 * (np.min(time) + np.max(time))
x = np.ascontiguousarray(time[m] - ref_time, dtype=np.float64)
y = np.ascontiguousarray(1e3 * (flux[m] - 1.0), dtype=np.float64)
plt.plot(x, y, ".k")
plt.xlabel("time [days]")
plt.ylabel("relative flux [ppt]")
_ = plt.xlim(x.min(), x.max())
```
# Transit Search
We use the box least squares (BLS) periodogram from AstroPy to estimate the period, phase, and depth of the transit.
```
from astropy.timeseries import BoxLeastSquares
period_grid = np.exp(np.linspace(np.log(1), np.log(15), 50000))
bls = BoxLeastSquares(x, y)
bls_power = bls.power(period_grid, 0.1, oversample=20)
# Save the highest peak as the planet candidate
index = np.argmax(bls_power.power)
bls_period = bls_power.period[index]
bls_t0 = bls_power.transit_time[index]
bls_depth = bls_power.depth[index]
transit_mask = bls.transit_mask(x, bls_period, 0.2, bls_t0)
fig, axes = plt.subplots(2, 1, figsize=(10, 10))
# Plot the periodogram
ax = axes[0]
ax.axvline(np.log10(bls_period), color="C1", lw=5, alpha=0.8)
ax.plot(np.log10(bls_power.period), bls_power.power, "k")
ax.annotate(
"period = {0:.4f} d".format(bls_period),
(0, 1),
xycoords="axes fraction",
xytext=(5, -5),
textcoords="offset points",
va="top",
ha="left",
fontsize=12,
)
ax.set_ylabel("bls power")
ax.set_yticks([])
ax.set_xlim(np.log10(period_grid.min()), np.log10(period_grid.max()))
ax.set_xlabel("log10(period)")
# Plot the folded transit
ax = axes[1]
x_fold = (x - bls_t0 + 0.5 * bls_period) % bls_period - 0.5 * bls_period
m = np.abs(x_fold) < 0.4
ax.plot(x_fold[m], y[m], ".k")
# Overplot the phase binned light curve
bins = np.linspace(-0.41, 0.41, 32)
denom, _ = np.histogram(x_fold, bins)
num, _ = np.histogram(x_fold, bins, weights=y)
denom[num == 0] = 1.0
ax.plot(0.5 * (bins[1:] + bins[:-1]), num / denom, color="C1")
ax.set_xlim(-0.3, 0.3)
ax.set_ylabel("de-trended flux [ppt]")
_ = ax.set_xlabel("time since transit")
```
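The folding expression in the cell above is worth a numeric sanity check: `(t - t0 + P/2) % P - P/2` maps each observation time to its offset from the nearest transit, in the half-open interval `[-P/2, P/2)`.

```python
# Verify the phase-folding formula on hand-picked times.
import numpy as np

period, t0 = 2.0, 0.3        # illustrative values, not the fitted ones
t = np.array([0.3, 2.3, 4.2, 1.3])
x_fold = (t - t0 + 0.5 * period) % period - 0.5 * period
# 0.3 and 2.3 sit exactly on a transit; 4.2 is 0.1 d before the transit
# at 4.3; 1.3 is exactly between transits, so it folds to -P/2.
assert np.allclose(x_fold, [0.0, 0.0, -0.1, -1.0])
```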
# The transit model in PyMC3
```
!pip install pymc3_ext
!pip install celerite2
import pymc3 as pm
import aesara_theano_fallback.tensor as tt
import pymc3_ext as pmx
from celerite2.theano import terms, GaussianProcess
phase_lc = np.linspace(-0.3, 0.3, 100)
def build_model(mask=None, start=None):
if mask is None:
mask = np.ones(len(x), dtype=bool)
with pm.Model() as model:
# Parameters for the stellar properties
mean = pm.Normal("mean", mu=0.0, sd=10.0)
u_star = xo.QuadLimbDark("u_star")
star = xo.LimbDarkLightCurve(u_star)
# Stellar parameters from exo.mast
M_star = 1.50, 0.03
R_star = 1.44, 0.03
BoundedNormal = pm.Bound(pm.Normal, lower=0, upper=3)
m_star = BoundedNormal(
"m_star", mu=M_star[0], sd=M_star[1]
)
r_star = BoundedNormal(
"r_star", mu=R_star[0], sd=R_star[1]
)
# Orbital parameters for the planets
t0 = pm.Normal("t0", mu=bls_t0, sd=1)
log_period = pm.Normal("log_period", mu=np.log(bls_period), sd=1)
period = pm.Deterministic("period", tt.exp(log_period))
# Fit in terms of transit depth (assuming b<1)
b = pm.Uniform("b", lower=0, upper=1)
log_depth = pm.Normal("log_depth", mu=np.log(bls_depth), sigma=2.0)
ror = pm.Deterministic(
"ror",
star.get_ror_from_approx_transit_depth(
1e-3 * tt.exp(log_depth), b
),
)
r_pl = pm.Deterministic("r_pl", ror * r_star)
# log_r_pl = pm.Normal(
# "log_r_pl",
# sd=1.0,
# mu=0.5 * np.log(1e-3 * np.array(bls_depth))
# + np.log(R_star_huang[0]),
# )
# r_pl = pm.Deterministic("r_pl", tt.exp(log_r_pl))
# ror = pm.Deterministic("ror", r_pl / r_star)
# b = xo.distributions.ImpactParameter("b", ror=ror)
ecs = pmx.UnitDisk("ecs", testval=np.array([0.01, 0.0]))
ecc = pm.Deterministic("ecc", tt.sum(ecs ** 2))
omega = pm.Deterministic("omega", tt.arctan2(ecs[1], ecs[0]))
xo.eccentricity.kipping13("ecc_prior", fixed=True, observed=ecc)
# Transit jitter & GP parameters
log_sigma_lc = pm.Normal(
"log_sigma_lc", mu=np.log(np.std(y[mask])), sd=10
)
log_rho_gp = pm.Normal("log_rho_gp", mu=0, sd=10)
log_sigma_gp = pm.Normal(
"log_sigma_gp", mu=np.log(np.std(y[mask])), sd=10
)
# Orbit model
orbit = xo.orbits.KeplerianOrbit(
r_star=r_star,
m_star=m_star,
period=period,
t0=t0,
b=b,
ecc=ecc,
omega=omega,
)
# Compute the model light curve
light_curves = (
star.get_light_curve(orbit=orbit, r=r_pl, t=x[mask], texp=texp)
* 1e3
)
light_curve = tt.sum(light_curves, axis=-1) + mean
resid = y[mask] - light_curve
# GP model for the light curve
kernel = terms.SHOTerm(
sigma=tt.exp(log_sigma_gp),
rho=tt.exp(log_rho_gp),
Q=1 / np.sqrt(2),
)
gp = GaussianProcess(kernel, t=x[mask], yerr=tt.exp(log_sigma_lc))
gp.marginal("gp", observed=resid)
# pm.Deterministic("gp_pred", gp.predict(resid))
# Compute and save the phased light curve models
pm.Deterministic(
"lc_pred",
1e3
* star.get_light_curve(
orbit=orbit, r=r_pl, t=t0 + phase_lc, texp=texp
)[..., 0],
)
# Fit for the maximum a posteriori parameters, I've found that I can get
# a better solution by trying different combinations of parameters in turn
if start is None:
start = model.test_point
map_soln = pmx.optimize(
start=start, vars=[log_sigma_lc, log_sigma_gp, log_rho_gp]
)
map_soln = pmx.optimize(start=map_soln, vars=[log_depth])
map_soln = pmx.optimize(start=map_soln, vars=[b])
map_soln = pmx.optimize(start=map_soln, vars=[log_period, t0])
map_soln = pmx.optimize(start=map_soln, vars=[u_star])
map_soln = pmx.optimize(start=map_soln, vars=[log_depth])
map_soln = pmx.optimize(start=map_soln, vars=[b])
map_soln = pmx.optimize(start=map_soln, vars=[ecs])
map_soln = pmx.optimize(start=map_soln, vars=[mean])
map_soln = pmx.optimize(
start=map_soln, vars=[log_sigma_lc, log_sigma_gp, log_rho_gp]
)
map_soln = pmx.optimize(start=map_soln)
extras = dict(
zip(
["light_curves", "gp_pred"],
pmx.eval_in_model([light_curves, gp.predict(resid)], map_soln),
)
)
return model, map_soln, extras
model0, map_soln0, extras0 = build_model()
```
Plot the initial light curve model.
```
def plot_light_curve(soln, extras, mask=None):
if mask is None:
mask = np.ones(len(x), dtype=bool)
fig, axes = plt.subplots(3, 1, figsize=(10, 7), sharex=True)
ax = axes[0]
ax.plot(x[mask], y[mask], "k", label="data")
gp_mod = extras["gp_pred"] + soln["mean"]
ax.plot(x[mask], gp_mod, color="C2", label="gp model")
ax.legend(fontsize=10)
ax.set_ylabel("relative flux [ppt]")
ax = axes[1]
ax.plot(x[mask], y[mask] - gp_mod, "k", label="de-trended data")
for i, l in enumerate("b"):
mod = extras["light_curves"][:, i]
ax.plot(x[mask], mod, label="planet {0}".format(l))
ax.legend(fontsize=10, loc=3)
ax.set_ylabel("de-trended flux [ppt]")
ax = axes[2]
mod = gp_mod + np.sum(extras["light_curves"], axis=-1)
ax.plot(x[mask], y[mask] - mod, "k")
ax.axhline(0, color="#aaaaaa", lw=1)
ax.set_ylabel("residuals [ppt]")
ax.set_xlim(x[mask].min(), x[mask].max())
ax.set_xlabel("time [days]")
return fig
_ = plot_light_curve(map_soln0, extras0)
```
Apply sigma clipping to remove significant outliers.
```
mod = (
extras0["gp_pred"]
+ map_soln0["mean"]
+ np.sum(extras0["light_curves"], axis=-1)
)
resid = y - mod
rms = np.sqrt(np.median(resid ** 2))
mask = np.abs(resid) < 5 * rms
plt.figure(figsize=(10, 5))
plt.plot(x, resid, "k", label="data")
plt.plot(x[~mask], resid[~mask], "xr", label="outliers")
plt.axhline(0, color="#aaaaaa", lw=1)
plt.ylabel("residuals [ppt]")
plt.xlabel("time [days]")
plt.legend(fontsize=12, loc=3)
_ = plt.xlim(x.min(), x.max())
```
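The clip above is robust because the rms is built from the *median* of the squared residuals, which barely moves when a few huge outliers are injected; a small self-contained check:

```python
# Median-based rms stays near the bulk noise level even with extreme
# outliers present, so the 5*rms threshold still flags them.
import numpy as np

rng = np.random.default_rng(0)
resid = np.concatenate([rng.normal(0.0, 1.0, 1000), [50.0, -40.0]])
rms = np.sqrt(np.median(resid ** 2))
mask = np.abs(resid) < 5 * rms
assert not mask[-1] and not mask[-2]   # both injected outliers rejected
assert mask[:1000].mean() > 0.99       # almost all well-behaved points kept
```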
Re-build the model using the data without the outliers.
```
model, map_soln, extras = build_model(mask, map_soln0)
_ = plot_light_curve(map_soln, extras, mask)
import platform
with model:
trace = pm.sample(
tune=1500,
draws=1000,
start=map_soln,
# Parallel sampling runs poorly or crashes on macos
cores=1 if platform.system() == "Darwin" else 2,
chains=2,
target_accept=0.95,
return_inferencedata=True,
random_seed=[261136679, 261136680],
init="adapt_full",
)
import arviz as az
az.summary(
trace,
var_names=[
"omega",
"ecc",
"r_pl",
"b",
"t0",
"period",
"r_star",
"m_star",
"u_star",
"mean",
],
)
flat_samps = trace.posterior.stack(sample=("chain", "draw"))
# Compute the GP prediction
gp_mod = extras["gp_pred"] + map_soln["mean"] # np.median(
# flat_samps["gp_pred"].values + flat_samps["mean"].values[None, :], axis=-1
# )
# Get the posterior median orbital parameters
p = np.median(flat_samps["period"])
t0 = np.median(flat_samps["t0"])
# Plot the folded data
x_fold = (x[mask] - t0 + 0.5 * p) % p - 0.5 * p
plt.plot(x_fold, y[mask] - gp_mod, ".k", label="data", zorder=-1000)
# Overplot the phase binned light curve
bins = np.linspace(-0.41, 0.41, 50)
denom, _ = np.histogram(x_fold, bins)
num, _ = np.histogram(x_fold, bins, weights=y[mask])
denom[num == 0] = 1.0
plt.plot(
0.5 * (bins[1:] + bins[:-1]), num / denom, "o", color="C2", label="binned"
)
# Plot the folded model
pred = np.percentile(flat_samps["lc_pred"], [16, 50, 84], axis=-1)
plt.plot(phase_lc, pred[1], color="C1", label="model")
art = plt.fill_between(
phase_lc, pred[0], pred[2], color="C1", alpha=0.5, zorder=1000
)
art.set_edgecolor("none")
# Annotate the plot with the planet's period
txt = "period = {0:.5f} +/- {1:.5f} d".format(
np.mean(flat_samps["period"].values), np.std(flat_samps["period"].values)
)
plt.annotate(
txt,
(0, 0),
xycoords="axes fraction",
xytext=(5, 5),
textcoords="offset points",
ha="left",
va="bottom",
fontsize=12,
)
plt.legend(fontsize=10, loc=4)
plt.xlim(-0.5 * p, 0.5 * p)
plt.xlabel("time since transit [days]")
plt.ylabel("de-trended flux")
_ = plt.xlim(-0.15, 0.15)
```
## 1. The NIST Special Publication 800-63B
<p>If you – 50 years ago – needed to come up with a secret password you were probably part of a secret espionage organization or (more likely) you were pretending to be a spy when playing as a kid. Today, many of us are forced to come up with new passwords <em>all the time</em> when signing into sites and apps. As a password <em>inventeur</em> it is your responsibility to come up with good, hard-to-crack passwords. But it is also in the interest of sites and apps to make sure that you use good passwords. The problem is that it's really hard to define what makes a good password. However, <em>the National Institute of Standards and Technology</em> (NIST) knows what the second best thing is: To make sure you're at least not using a <em>bad</em> password. </p>
<p>In this notebook, we will go through the rules in <a href="https://pages.nist.gov/800-63-3/sp800-63b.html">NIST Special Publication 800-63B</a> which details what checks a <em>verifier</em> (what the NIST calls a second party responsible for storing and verifying passwords) should perform to make sure users don't pick bad passwords. We will go through the passwords of users from a fictional company and use python to flag the users with bad passwords. But us being able to do this already means the fictional company is breaking one of the rules of 800-63B:</p>
<blockquote>
<p>Verifiers SHALL store memorized secrets in a form that is resistant to offline attacks. Memorized secrets SHALL be salted and hashed using a suitable one-way key derivation function.</p>
</blockquote>
<p>That is, never save users' passwords in plaintext, always encrypt the passwords! Keeping this in mind for the next time we're building a password management system, let's load in the data.</p>
<p><em>Warning: The list of passwords and the fictional user database both contain <strong>real</strong> passwords leaked from <strong>real</strong> websites. These passwords have not been filtered in any way and include words that are explicit, derogatory and offensive.</em></p>
```
# Importing the pandas module
import pandas as pd
# Loading in datasets/users.csv
users = pd.read_csv('datasets/users.csv')
# Printing out how many users we've got
print(len(users))
# Taking a look at the 12 first users
users.head(12)
```
## 2. Passwords should not be too short
<p>If we take a look at the first 12 users above we already see some bad passwords. But let's not get ahead of ourselves and start flagging passwords <em>manually</em>. What is the first thing we should check according to the NIST Special Publication 800-63B?</p>
<blockquote>
<p>Verifiers SHALL require subscriber-chosen memorized secrets to be at least 8 characters in length.</p>
</blockquote>
<p>Ok, so the passwords of our users shouldn't be too short. Let's start by checking that!</p>
```
# Calculating the lengths of users' passwords
import pandas as pd
users = pd.read_csv('datasets/users.csv')
users['length'] = users.password.str.len()
users['too_short'] = users['length'] < 8
print(users['too_short'].sum())
# Taking a look at the 12 first rows
users.head(12)
```
## 3. Common passwords people use
<p>Already this simple rule flagged a couple of offenders among the first 12 users. Next up in Special Publication 800-63B is the rule that</p>
<blockquote>
<p>verifiers SHALL compare the prospective secrets against a list that contains values known to be commonly-used, expected, or compromised.</p>
<ul>
<li>Passwords obtained from previous breach corpuses.</li>
<li>Dictionary words.</li>
<li>Repetitive or sequential characters (e.g. ‘aaaaaa’, ‘1234abcd’).</li>
<li>Context-specific words, such as the name of the service, the username, and derivatives thereof.</li>
</ul>
</blockquote>
<p>We're going to check these in order and start with <em>Passwords obtained from previous breach corpuses</em>, that is, websites where hackers have leaked all the users' passwords. As many websites don't follow the NIST guidelines and encrypt passwords there now exist large lists of the most popular passwords. Let's start by loading in the 10,000 most common passwords which I've taken from <a href="https://github.com/danielmiessler/SecLists/tree/master/Passwords">here</a>.</p>
```
# Reading in the top 10000 passwords
common_passwords = pd.read_csv("datasets/10_million_password_list_top_10000.txt",
header=None,
squeeze=True)
# Taking a look at the top 20
common_passwords.head(20)
```
## 4. Passwords should not be common passwords
<p>The list of passwords was ordered, with the most common passwords first, and so we shouldn't be surprised to see passwords like <code>123456</code> and <code>qwerty</code> above. As hackers also have access to this list of common passwords, it's important that none of our users use these passwords!</p>
<p>Let's flag all the passwords in our user database that are among the top 10,000 used passwords.</p>
```
# Flagging the users with passwords that are common passwords
users['common_password'] = users['password'].isin(common_passwords)
# Counting and printing the number of users using common passwords
print(users['common_password'].sum())
# Taking a look at the 12 first rows
users.head(12)
```
## 5. Passwords should not be common words
<p>Ay ay ay! It turns out many of our users use common passwords, and of the first 12 users there are already two. However, as most common passwords also tend to be short, they were already flagged as being too short. What is the next thing we should check?</p>
<blockquote>
<p>Verifiers SHALL compare the prospective secrets against a list that contains [...] dictionary words.</p>
</blockquote>
<p>This follows the same logic as before: It is easy for hackers to check users' passwords against common English words and therefore common English words make bad passwords. Let's check our users' passwords against the top 10,000 English words from <a href="https://github.com/first20hours/google-10000-english">Google's Trillion Word Corpus</a>.</p>
```
# Reading in a list of the 10000 most common words
words = pd.read_csv("datasets/google-10000-english.txt", header=None,
squeeze=True)
# Flagging the users with passwords that are common words
users['common_word'] = users['password'].str.lower().isin(words)
# Counting and printing the number of users using common words as passwords
print(users['common_word'].sum())
# Taking a look at the 12 first rows
users.head(12)
```
## 6. Passwords should not be your name
<p>It turns out many of our passwords were common English words too! Next up on the NIST list:</p>
<blockquote>
<p>Verifiers SHALL compare the prospective secrets against a list that contains [...] context-specific words, such as the name of the service, the username, and derivatives thereof.</p>
</blockquote>
<p>Ok, so there are many things we could check here. One thing to notice is that our users' usernames consist of their first names and last names separated by a dot. For now, let's just flag passwords that are the same as either a user's first or last name.</p>
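Before applying them with pandas' `.str.extract`, it helps to see the two patterns on a single "first.last" username (the value below is illustrative, not a row from the dataset):

```python
# ^\w+ grabs the word characters before the dot (the first name);
# \w+$ grabs the word characters after it (the last name).
import re

user_name = "milford.hubbard"  # illustrative username
first = re.search(r"^\w+", user_name).group()
last = re.search(r"\w+$", user_name).group()
assert (first, last) == ("milford", "hubbard")
print(first, last)
```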
```
# Extracting first and last names into their own columns
users['first_name'] = users['user_name'].str.extract(r'(^\w+)', expand=False)
users['last_name'] = users['user_name'].str.extract(r'(\w+$)', expand=False)
# Flagging the users with passwords that matches their names
users['uses_name'] = (users['password'] == users['first_name']) | (users['password'] == users['last_name'])
# Counting and printing the number of users using names as passwords
print(users['uses_name'].sum())
# Taking a look at the 12 first rows
users.head(12)
```
## 7. Passwords should not be repetitive
<p>Milford Hubbard (user number 12 above), what were you thinking!? Ok, so the last thing we are going to check is a bit tricky:</p>
<blockquote>
<p>verifiers SHALL compare the prospective secrets [so that they don't contain] repetitive or sequential characters (e.g. ‘aaaaaa’, ‘1234abcd’).</p>
</blockquote>
<p>This is tricky to check because what is <em>repetitive</em> is hard to define. Is <code>11111</code> repetitive? Yes! Is <code>12345</code> repetitive? Well, kind of. Is <code>13579</code> repetitive? Maybe not..? To check for <em>repetitiveness</em> can be arbitrarily complex, but here we're only going to do something simple. We're going to flag all passwords that contain 4 or more repeated characters.</p>
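The check below relies on a backreference: `(.)\1\1\1` matches any character followed by three more copies of itself, i.e. four in a row. A quick sanity check of that pattern:

```python
# Four repeats trigger the pattern; three do not.
import re

pattern = re.compile(r"(.)\1\1\1")
assert pattern.search("aaaa-secret") is not None   # four repeats -> flagged
assert pattern.search("aaab-secret") is None       # only three -> passes
assert pattern.search("pass1111word") is not None  # repeats of digits count too
print("pattern behaves as expected")
```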
```
# Flagging the users with passwords with >= 4 repeats
users['too_many_repeats'] = users['password'].str.contains(r'(.)\1\1\1')
# Taking a look at the users with too many repeats
users.head(12)
```
## 8. All together now!
<p>Now we have implemented all the basic tests for bad passwords suggested by NIST Special Publication 800-63B! What's left is just to flag all bad passwords and maybe to send these users an e-mail that strongly suggests they change their password.</p>
```
# Flagging all passwords that are bad
users['bad_password'] = (users['too_short'])|(users['common_password'])|(users['common_word'])|(users['uses_name'])|(users['too_many_repeats'])
# Counting and printing the number of bad passwords
print(sum(users['bad_password']))
# Looking at the first 25 bad passwords
users[users['bad_password']==True]['password'].head(25)
```
## 9. Otherwise, the password should be up to the user
<p>In this notebook, we've implemented the password checks recommended by the NIST Special Publication 800-63B. It's certainly possible to better implement these checks, for example, by using a longer list of common passwords. Also note that the NIST checks in no way guarantee that a chosen password is good, just that it's not obviously bad.</p>
<p>Apart from the checks we've implemented above the NIST is also clear with what password rules should <em>not</em> be imposed:</p>
<blockquote>
<p>Verifiers SHOULD NOT impose other composition rules (e.g., requiring mixtures of different character types or prohibiting consecutively repeated characters) for memorized secrets. Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically).</p>
</blockquote>
<p>So the next time a website or app tells you to "include both a number, symbol and an upper and lower case character in your password" you should send them a copy of <a href="https://pages.nist.gov/800-63-3/sp800-63b.html">NIST Special Publication 800-63B</a>.</p>
```
# Enter a password that passes the NIST requirements
# PLEASE DO NOT USE AN EXISTING PASSWORD HERE
new_password = "test@2019"
```
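As a final self-contained exercise, we can re-check the candidate password against the two rules that need no data files (length and repeated characters); the corpus-based checks above require the CSV/TXT lists. `basic_nist_check` is an illustrative helper, not part of any library:

```python
# Offline subset of the NIST checks implemented in this notebook.
import re

def basic_nist_check(password):
    if len(password) < 8:
        return False                       # too short
    if re.search(r"(.)\1\1\1", password):
        return False                       # >= 4 repeated characters
    return True

assert basic_nist_check("test@2019")       # the candidate from above
assert not basic_nist_check("short")
assert not basic_nist_check("aaaa2019pw")
print("candidate password passes the offline checks")
```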
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/keras/custom_callback"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/keras/custom_callback.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/keras/custom_callback.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/keras/custom_callback.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
# Writing custom Keras callbacks
A custom callback is a powerful tool to customize the behavior of a Keras model during training, evaluation, or inference, including reading or changing the Keras model. Examples include `tf.keras.callbacks.TensorBoard`, where training progress and results can be exported and visualized with TensorBoard, or `tf.keras.callbacks.ModelCheckpoint`, where the model is automatically saved during training. In this guide, you will learn what a Keras callback is, when it is called, what it can do, and how you can build your own. Toward the end of this guide, there are demos of creating a couple of simple callback applications to get you started on your custom callback.
## Setup
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
```
## Introduction to Keras callbacks
In Keras, `Callback` is a Python class meant to be subclassed to provide specific functionality, with a set of methods called at various stages of training (including batch/epoch starts and ends), testing, and predicting. Callbacks are useful to get a view on internal states and statistics of the model during training. You can pass a list of callbacks (as the keyword argument `callbacks`) to any of the `tf.keras.Model.fit()`, `tf.keras.Model.evaluate()`, and `tf.keras.Model.predict()` methods. The methods of the callbacks will then be called at different stages of training/evaluating/inference.
To get started, let's import tensorflow and define a simple Sequential Keras model:
```
# Define the Keras model to add callbacks to
def get_model():
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1, activation = 'linear', input_dim = 784))
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=0.1), loss='mean_squared_error', metrics=['mae'])
return model
```
Then, load the MNIST data for training and testing from Keras datasets API:
```
# Load example MNIST data and pre-process it
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
```
Now, define a simple custom callback to track the start and end of every batch of data. During those calls, it prints the index of the current batch.
```
import datetime
class MyCustomCallback(tf.keras.callbacks.Callback):
def on_train_batch_begin(self, batch, logs=None):
print('Training: batch {} begins at {}'.format(batch, datetime.datetime.now().time()))
def on_train_batch_end(self, batch, logs=None):
print('Training: batch {} ends at {}'.format(batch, datetime.datetime.now().time()))
def on_test_batch_begin(self, batch, logs=None):
print('Evaluating: batch {} begins at {}'.format(batch, datetime.datetime.now().time()))
def on_test_batch_end(self, batch, logs=None):
print('Evaluating: batch {} ends at {}'.format(batch, datetime.datetime.now().time()))
```
Providing a callback to model methods such as `tf.keras.Model.fit()` ensures the callback's methods are called at those stages:
```
model = get_model()
_ = model.fit(x_train, y_train,
batch_size=64,
epochs=1,
steps_per_epoch=5,
verbose=0,
callbacks=[MyCustomCallback()])
```
## Model methods that take callbacks
Users can supply a list of callbacks to the following `tf.keras.Model` methods:
#### [`fit()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#fit), [`fit_generator()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#fit_generator)
Trains the model for a fixed number of epochs (iterations over a dataset, or data yielded batch-by-batch by a Python generator).
#### [`evaluate()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#evaluate), [`evaluate_generator()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#evaluate_generator)
Evaluates the model for given data or data generator. Outputs the loss and metric values from the evaluation.
#### [`predict()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#predict), [`predict_generator()`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#predict_generator)
Generates output predictions for the input data or data generator.
```
_ = model.evaluate(x_test, y_test, batch_size=128, verbose=0, steps=5,
callbacks=[MyCustomCallback()])
```
## An overview of callback methods
### Common methods for training/testing/predicting
For training, testing, and predicting, the following methods are provided to be overridden.
#### `on_(train|test|predict)_begin(self, logs=None)`
Called at the beginning of `fit`/`evaluate`/`predict`.
#### `on_(train|test|predict)_end(self, logs=None)`
Called at the end of `fit`/`evaluate`/`predict`.
#### `on_(train|test|predict)_batch_begin(self, batch, logs=None)`
Called right before processing a batch during training/testing/predicting. Within this method, `logs` is a dict with the keys `batch` and `size`, representing the current batch number and the size of the batch.
#### `on_(train|test|predict)_batch_end(self, batch, logs=None)`
Called at the end of training/testing/predicting a batch. Within this method, `logs` is a dict containing the stateful metrics result.
### Training specific methods
In addition, for training, the following methods are provided.
#### on_epoch_begin(self, epoch, logs=None)
Called at the beginning of an epoch during training.
#### on_epoch_end(self, epoch, logs=None)
Called at the end of an epoch during training.
### Usage of `logs` dict
The `logs` dict contains the loss value and all the metrics at the end of a batch or epoch. Examples include the loss and mean absolute error.
```
class LossAndErrorPrintingCallback(tf.keras.callbacks.Callback):
def on_train_batch_end(self, batch, logs=None):
print('For batch {}, loss is {:7.2f}.'.format(batch, logs['loss']))
def on_test_batch_end(self, batch, logs=None):
print('For batch {}, loss is {:7.2f}.'.format(batch, logs['loss']))
def on_epoch_end(self, epoch, logs=None):
print('The average loss for epoch {} is {:7.2f} and mean absolute error is {:7.2f}.'.format(epoch, logs['loss'], logs['mae']))
model = get_model()
_ = model.fit(x_train, y_train,
batch_size=64,
steps_per_epoch=5,
epochs=3,
verbose=0,
callbacks=[LossAndErrorPrintingCallback()])
```
Similarly, one can provide callbacks in `evaluate()` calls.
```
_ = model.evaluate(x_test, y_test, batch_size=128, verbose=0, steps=20,
callbacks=[LossAndErrorPrintingCallback()])
```
## Examples of Keras callback applications
The following section will guide you through creating simple Callback applications.
### Early stopping at minimum loss
The first example showcases the creation of a `Callback` that stops Keras training when the minimum of the loss has been reached, by mutating the attribute `model.stop_training` (boolean). Optionally, the user can provide an argument `patience` to specify how many epochs the training should wait before it eventually stops.
`tf.keras.callbacks.EarlyStopping` provides a more complete and general implementation.
```
import numpy as np
class EarlyStoppingAtMinLoss(tf.keras.callbacks.Callback):
"""Stop training when the loss is at its min, i.e. the loss stops decreasing.
Arguments:
      patience: Number of epochs to wait after the minimum has been hit. After
          this number of epochs with no improvement, training stops.
"""
def __init__(self, patience=0):
super(EarlyStoppingAtMinLoss, self).__init__()
self.patience = patience
# best_weights to store the weights at which the minimum loss occurs.
self.best_weights = None
def on_train_begin(self, logs=None):
    # The number of epochs waited while the loss is no longer at a minimum.
self.wait = 0
# The epoch the training stops at.
self.stopped_epoch = 0
# Initialize the best as infinity.
self.best = np.Inf
def on_epoch_end(self, epoch, logs=None):
current = logs.get('loss')
if np.less(current, self.best):
self.best = current
self.wait = 0
# Record the best weights if current results is better (less).
self.best_weights = self.model.get_weights()
else:
self.wait += 1
if self.wait >= self.patience:
self.stopped_epoch = epoch
self.model.stop_training = True
print('Restoring model weights from the end of the best epoch.')
self.model.set_weights(self.best_weights)
def on_train_end(self, logs=None):
if self.stopped_epoch > 0:
print('Epoch %05d: early stopping' % (self.stopped_epoch + 1))
model = get_model()
_ = model.fit(x_train, y_train,
batch_size=64,
steps_per_epoch=5,
epochs=30,
verbose=0,
callbacks=[LossAndErrorPrintingCallback(), EarlyStoppingAtMinLoss()])
```
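The patience bookkeeping in the callback above can be exercised on a toy loss sequence without Keras. This is just a sketch; `stop_epoch` is a hypothetical helper, not part of the original notebook.

```python
import numpy as np

def stop_epoch(losses, patience=2):
    """Return the epoch at which training would stop, mimicking the patience logic."""
    best, wait = np.inf, 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best, wait = loss, 0   # new best: reset the wait counter
        else:
            wait += 1              # no improvement this epoch
            if wait >= patience:
                return epoch
    return None  # the loss kept improving often enough; never stopped

print(stop_epoch([1.0, 0.8, 0.9, 0.95, 0.7]))  # 3
print(stop_epoch([3.0, 2.0, 1.0]))             # None
```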
### Learning rate scheduling
One thing that is commonly done in model training is changing the learning rate as more epochs have passed. The Keras backend exposes the `get_value` and `set_value` APIs, which can be used to read and update optimizer variables. In this example, we're showing how a custom Callback can be used to dynamically change the learning rate.
Note: this is just an example implementation; see `callbacks.LearningRateScheduler` and `keras.optimizers.schedules` for more general implementations.
```
class LearningRateScheduler(tf.keras.callbacks.Callback):
"""Learning rate scheduler which sets the learning rate according to schedule.
Arguments:
schedule: a function that takes an epoch index
(integer, indexed from 0) and current learning rate
as inputs and returns a new learning rate as output (float).
"""
def __init__(self, schedule):
super(LearningRateScheduler, self).__init__()
self.schedule = schedule
def on_epoch_begin(self, epoch, logs=None):
if not hasattr(self.model.optimizer, 'lr'):
raise ValueError('Optimizer must have a "lr" attribute.')
# Get the current learning rate from model's optimizer.
lr = float(tf.keras.backend.get_value(self.model.optimizer.lr))
# Call schedule function to get the scheduled learning rate.
scheduled_lr = self.schedule(epoch, lr)
# Set the value back to the optimizer before this epoch starts
tf.keras.backend.set_value(self.model.optimizer.lr, scheduled_lr)
print('\nEpoch %05d: Learning rate is %6.4f.' % (epoch, scheduled_lr))
LR_SCHEDULE = [
# (epoch to start, learning rate) tuples
(3, 0.05), (6, 0.01), (9, 0.005), (12, 0.001)
]
def lr_schedule(epoch, lr):
"""Helper function to retrieve the scheduled learning rate based on epoch."""
if epoch < LR_SCHEDULE[0][0] or epoch > LR_SCHEDULE[-1][0]:
return lr
for i in range(len(LR_SCHEDULE)):
if epoch == LR_SCHEDULE[i][0]:
return LR_SCHEDULE[i][1]
return lr
model = get_model()
_ = model.fit(x_train, y_train,
batch_size=64,
steps_per_epoch=5,
epochs=15,
verbose=0,
callbacks=[LossAndErrorPrintingCallback(), LearningRateScheduler(lr_schedule)])
```
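The schedule lookup can be checked on its own, independent of training. The schedule and function are restated here so the cell is self-contained:

```python
# (epoch to start, learning rate) tuples, as in the cell above
LR_SCHEDULE = [(3, 0.05), (6, 0.01), (9, 0.005), (12, 0.001)]

def lr_schedule(epoch, lr):
    """Return the scheduled learning rate for this epoch, or keep the current one."""
    if epoch < LR_SCHEDULE[0][0] or epoch > LR_SCHEDULE[-1][0]:
        return lr
    for start_epoch, new_lr in LR_SCHEDULE:
        if epoch == start_epoch:
            return new_lr
    return lr

print(lr_schedule(0, 0.1))     # 0.1   - before the first scheduled change
print(lr_schedule(3, 0.1))     # 0.05  - change scheduled at epoch 3
print(lr_schedule(4, 0.05))    # 0.05  - between changes, keeps current lr
print(lr_schedule(13, 0.001))  # 0.001 - past the last entry, keeps current lr
```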
### Standard Keras callbacks
Be sure to check out the existing Keras callbacks by [visiting the API doc](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/callbacks). Applications include logging to CSV, saving the model, visualizing metrics on TensorBoard, and a lot more.
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Get-information-from-GFF-file" data-toc-modified-id="Get-information-from-GFF-file-1"><span class="toc-item-num">1 </span>Get information from GFF file</a></span><ul class="toc-item"><li><span><a href="#Convert-GFF-to-Pandas-DataFrame" data-toc-modified-id="Convert-GFF-to-Pandas-DataFrame-1.1"><span class="toc-item-num">1.1 </span>Convert GFF to Pandas DataFrame</a></span></li></ul></li><li><span><a href="#KEGG-and-COGs" data-toc-modified-id="KEGG-and-COGs-2"><span class="toc-item-num">2 </span>KEGG and COGs</a></span><ul class="toc-item"><li><span><a href="#Generate-nucleotide-fasta-files-for-CDS" data-toc-modified-id="Generate-nucleotide-fasta-files-for-CDS-2.1"><span class="toc-item-num">2.1 </span>Generate nucleotide fasta files for CDS</a></span></li><li><span><a href="#Run-EggNOG-Mapper" data-toc-modified-id="Run-EggNOG-Mapper-2.2"><span class="toc-item-num">2.2 </span>Run EggNOG Mapper</a></span></li><li><span><a href="#Get-KEGG-attributes" data-toc-modified-id="Get-KEGG-attributes-2.3"><span class="toc-item-num">2.3 </span>Get KEGG attributes</a></span></li><li><span><a href="#Save-KEGG-information" data-toc-modified-id="Save-KEGG-information-2.4"><span class="toc-item-num">2.4 </span>Save KEGG information</a></span></li><li><span><a href="#Save-COGs-to-annotation-dataframe" data-toc-modified-id="Save-COGs-to-annotation-dataframe-2.5"><span class="toc-item-num">2.5 </span>Save COGs to annotation dataframe</a></span></li></ul></li><li><span><a href="#Uniprot-ID-mapping" data-toc-modified-id="Uniprot-ID-mapping-3"><span class="toc-item-num">3 </span>Uniprot ID mapping</a></span></li><li><span><a href="#Add-Biocyc-Operon-information" data-toc-modified-id="Add-Biocyc-Operon-information-4"><span class="toc-item-num">4 </span>Add Biocyc Operon information</a></span><ul class="toc-item"><li><span><a href="#Assign-unique-IDs-to-operons" data-toc-modified-id="Assign-unique-IDs-to-operons-4.1"><span 
class="toc-item-num">4.1 </span>Assign unique IDs to operons</a></span></li></ul></li><li><span><a href="#Clean-up-and-save-annotation" data-toc-modified-id="Clean-up-and-save-annotation-5"><span class="toc-item-num">5 </span>Clean up and save annotation</a></span><ul class="toc-item"><li><span><a href="#Final-statistics" data-toc-modified-id="Final-statistics-5.1"><span class="toc-item-num">5.1 </span>Final statistics</a></span></li><li><span><a href="#Fill-missing-values" data-toc-modified-id="Fill-missing-values-5.2"><span class="toc-item-num">5.2 </span>Fill missing values</a></span></li></ul></li><li><span><a href="#GO-Annotations" data-toc-modified-id="GO-Annotations-6"><span class="toc-item-num">6 </span>GO Annotations</a></span></li></ul></div>
```
import sys
sys.path.append('..')
import pymodulon
from pymodulon.gene_util import *
import os
import pandas as pd  # pd is used below (pd.concat, pd.read_csv)
from Bio import SeqIO
pymodulon.__path__
org_dir = '/home/tahani/Documents/elongatus/data/'
kegg_organism_code = 'syf'
seq_dir = '/home/tahani/Documents/elongatus/sequence_files/'
```
# Get information from GFF file
## Convert GFF to Pandas DataFrame
```
annot_list = []
for filename in os.listdir(seq_dir):
if filename.endswith('.gff3'):
gff = os.path.join(seq_dir,filename)
annot_list.append(gff2pandas(gff))
keep_cols = ['accession','start','end','strand','gene_name','locus_tag','old_locus_tag','gene_product','ncbi_protein']
DF_annot = pd.concat(annot_list)[keep_cols]
DF_annot = DF_annot.drop_duplicates('locus_tag')
DF_annot.set_index('locus_tag',drop=True,inplace=True)
tpm_file = os.path.join(org_dir,'0_log_tpm.csv')
DF_log_tpm = pd.read_csv(tpm_file,index_col=0)
```
Check that the genes are the same in the expression dataset as in the annotation dataframe.
```
# Mismatched genes are listed below
test = DF_annot.sort_index().index == DF_log_tpm.sort_index().index
DF_annot[~test]
```
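The element-wise `==` comparison above assumes the two indexes have equal length; `Index.difference` also works when they differ. A toy sketch with hypothetical gene IDs:

```python
import pandas as pd

# Hypothetical gene IDs in the annotation vs. the expression dataset
annot = pd.Index(['g1', 'g2', 'g3'], name='locus_tag')
tpm = pd.Index(['g1', 'g3', 'g4'], name='locus_tag')

# Genes present in one index but not the other
only_in_annot = annot.difference(tpm)
only_in_tpm = tpm.difference(annot)
print(list(only_in_annot))  # ['g2']
print(list(only_in_tpm))    # ['g4']
```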
# KEGG and COGs
## Generate nucleotide fasta files for CDS
```
cds_list = []
for filename in os.listdir(seq_dir):
if filename.endswith('.fasta'):
fasta = os.path.join(seq_dir,filename)
seq = SeqIO.read(fasta,'fasta')
# Get gene information for genes in this fasta file
df_genes = DF_annot[DF_annot.accession == seq.id]
for i,row in df_genes.iterrows():
cds = seq[row.start-1:row.end]
if row.strand == '-':
cds = seq[row.start-1:row.end].reverse_complement()
cds.id = row.name
cds.description = row.gene_name if pd.notnull(row.gene_name) else row.name
cds_list.append(cds)
SeqIO.write(cds_list,os.path.join(seq_dir,'CDS.fna'),'fasta')
```
## Run EggNOG Mapper
1. Go to http://eggnog-mapper.embl.de/.
1. Upload the CDS.fna file from your organism directory (within the sequence_files folder)
1. Make sure to limit the taxonomy to the correct level
1. After the job is submitted, you must follow the link in your email to run the job.
1. Once the job completes (after ~4 hrs), download the annotations file.
1. Save the annotation file to `<org_dir>/data/eggNOG.annotations`
## Get KEGG attributes
```
DF_eggnog = pd.read_csv(os.path.join(org_dir,'Synechococcus_elongatus.annotations'),sep='\t',skiprows=4,header=None)
eggnog_cols = ['query_name','seed eggNOG ortholog','seed ortholog evalue','seed ortholog score',
'Predicted taxonomic group','Predicted protein name','Gene Ontology terms',
'EC number','KEGG_orth','KEGG_pathway','KEGG_module','KEGG_reaction',
'KEGG_rclass','BRITE','KEGG_TC','CAZy','BiGG Reaction','tax_scope',
'eggNOG OGs','bestOG_deprecated','COG','eggNOG free text description']
DF_eggnog.columns = eggnog_cols
# Strip last three rows as they are comments
DF_eggnog = DF_eggnog.iloc[:-3]
# Set locus tag as index
DF_eggnog = DF_eggnog.set_index('query_name')
DF_eggnog.index.name = 'locus_tag'
DF_kegg = DF_eggnog[['KEGG_orth','KEGG_pathway','KEGG_module','KEGG_reaction']]
# Melt dataframe
DF_kegg = DF_kegg.reset_index().melt(id_vars='locus_tag')
# Remove null values
DF_kegg = DF_kegg[DF_kegg.value.notnull()]
# Split comma-separated values into their own rows
list2struct = []
for name,row in DF_kegg.iterrows():
for val in row.value.split(','):
list2struct.append([row.locus_tag,row.variable,val])
DF_kegg = pd.DataFrame(list2struct,columns=['gene_id','database','kegg_id'])
# Remove ko entries, as only map entries are searchable in KEGG pathway
DF_kegg = DF_kegg[~DF_kegg.kegg_id.str.startswith('ko')]
DF_kegg.head()
```
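The melt-then-split pattern above, shown on a toy frame (the gene and pathway IDs are made up for illustration):

```python
import pandas as pd

# Toy frame with one comma-separated KEGG annotation
df = pd.DataFrame({'locus_tag': ['g1'], 'KEGG_pathway': ['map001,map002']})

# Wide -> long, then split comma-separated values into their own rows
melted = df.melt(id_vars='locus_tag')
rows = [[r.locus_tag, r.variable, v]
        for _, r in melted.iterrows()
        for v in r.value.split(',')]
out = pd.DataFrame(rows, columns=['gene_id', 'database', 'kegg_id'])
print(out.kegg_id.tolist())  # ['map001', 'map002']
```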
## Save KEGG information
```
DF_kegg.to_csv(os.path.join(org_dir,'2_kegg_mapping.csv'))
```
## Save COGs to annotation dataframe
```
DF_annot['COG'] = DF_eggnog.COG
# Make sure COG only has one entry per gene
DF_annot['COG'] = [item[0] if isinstance(item,str) else item for item in DF_annot['COG']]
DF_annot
```
# Uniprot ID mapping
```
# Try the uniprot ID mapping tool - Use EMBL for Genbank file and P_REFSEQ_AC for Refseq file
mapping_uniprot = uniprot_id_mapping(DF_annot.ncbi_protein.fillna(''),input_id='P_REFSEQ_AC',output_id='ACC',
                                     input_name='ncbi_protein',output_name='uniprot')
n_genes = len(DF_annot)
# Merge with current annotation
DF_annot = pd.merge(DF_annot.reset_index(),mapping_uniprot,how='left',on='ncbi_protein')
DF_annot.set_index('locus_tag',inplace=True)
# A left merge should preserve the number of genes
assert len(DF_annot) == n_genes
DF_annot.head()
```
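A left merge keeps every row of the left frame; unmatched keys simply get NaN (a toy check with hypothetical protein IDs):

```python
import pandas as pd

left = pd.DataFrame({'ncbi_protein': ['p1', 'p2', 'p3']})
mapping = pd.DataFrame({'ncbi_protein': ['p1', 'p3'],
                        'uniprot': ['U1', 'U3']})

# how='left' preserves the row count of `left`
merged = pd.merge(left, mapping, how='left', on='ncbi_protein')
print(len(merged) == len(left))        # True
print(int(merged['uniprot'].isna().sum()))  # 1 - the unmatched 'p2' row
```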
# Add Biocyc Operon information
To obtain operon information, follow the steps below
1. Go to Biocyc.org (you may need to create an account and/or login)
2. Change the organism database to your organism/strain
3. Select SmartTables -> Special SmartTables
4. Select "All genes of <organism>"
5. Select the "Gene Name" column
6. Under "ADD TRANSFORM COLUMN" select "Genes in same transcription unit"
7. Select the "Genes in same transcription unit" column
8. Under "ADD PROPERTY COLUMN" select "Accession-1"
9. Under OPERATIONS, select "Export" -> "to Spreadsheet File..."
10. Select "common names" and click "Export smarttable"
11. Move the file to "<org_dir>/data/" and name it "biocyc_operons_annotations.txt" (the name the code cell below reads)
12. Run the code cell below this
```
DF_annot.head()
DF_biocyc = pd.read_csv(os.path.join(org_dir,'biocyc_operons_annotations.txt'),sep='\t')
DF_biocyc = DF_biocyc.set_index('Accession-2').sort_values('Left-End-Position')
DF_biocyc
DF_biocyc = DF_biocyc.loc[~DF_biocyc.index.duplicated(keep="first")]
DF_biocyc
# Only keep genes in the final annotation file
DF_biocyc = DF_biocyc.reindex(list(DF_annot.index))
DF_biocyc
```
## Assign unique IDs to operons
```
counter = 0
operon_name_mapper = {}
operon_list = []
for i,row in DF_biocyc.iterrows():
operon = row["Genes in same transcription unit"]
if operon not in operon_name_mapper.keys() or pd.isnull(operon):
counter += 1
operon_name_mapper[operon] = "Op"+str(counter)
operon_list.append(operon_name_mapper[operon])
# Add operons to annotation file
DF_biocyc['operon'] = operon_list
DF_annot['operon'] = DF_biocyc['operon']
DF_biocyc
```
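The loop above gives every gene without operon information its own fresh ID, while genes sharing a transcription unit share one ID. On a toy list (operon names are hypothetical):

```python
import pandas as pd

# Two genes share 'opA'; two genes have no operon annotation (None)
# and must each get their own fresh ID.
operons = ['opA', 'opA', None, 'opB', None]

mapper, names, counter = {}, [], 0
for op in operons:
    if op not in mapper or pd.isnull(op):
        counter += 1
        mapper[op] = 'Op' + str(counter)
    names.append(mapper[op])

print(names)  # ['Op1', 'Op1', 'Op2', 'Op3', 'Op4']
```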
# Clean up and save annotation
```
# Temporarily remove warning
pd.set_option('mode.chained_assignment', None)
# Reorder annotation file
if 'old_locus_tag' in DF_annot.columns:
order = ['gene_name','accession','old_locus_tag','start','end','strand','gene_product','COG','uniprot','operon']
else:
order = ['gene_name','accession','start','end','strand','gene_product','COG','uniprot','operon']
DF_annot = DF_annot[order]
DF_annot.head()
```
## Final statistics
```
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_style('ticks')
fig,ax = plt.subplots()
DF_annot.count().plot(kind='bar',ax=ax)
ax.set_ylabel('# of Values',fontsize=18)
ax.tick_params(labelsize=16)
```
## Fill missing values
```
# Fill in missing gene names with locus tag names
DF_annot['tmp_name'] = DF_annot.copy().index.tolist()
DF_annot.gene_name.fillna(DF_annot.tmp_name,inplace=True)
DF_annot.drop('tmp_name',axis=1,inplace=True)
# Fill missing COGs with X
DF_annot['COG'].fillna('X',inplace=True)
# Change single letter COG annotation to full description
DF_annot['COG'] = DF_annot.COG.apply(cog2str)
counts = DF_annot.COG.value_counts()
plt.pie(counts.values,labels=counts.index);
```
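The temporary-column dance above can be reduced to a single `fillna` against the index (a sketch with hypothetical locus tags):

```python
import numpy as np
import pandas as pd

# Hypothetical annotation frame: one gene name is missing
df = pd.DataFrame({'gene_name': ['thrA', np.nan]},
                  index=pd.Index(['b0001', 'b0002'], name='locus_tag'))

# Fill missing gene names directly from the index; no temporary column needed
df['gene_name'] = df['gene_name'].fillna(df.index.to_series())
print(df['gene_name'].tolist())  # ['thrA', 'b0002']
```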
## Adding gene names from BioCyc
```
DF_annot
from os import path
DF_biocyc = pd.read_csv(os.path.join(org_dir,'1_biocyc_gene_annotation.csv'), sep='\t', error_bad_lines= False)
DF_annot['cycbio_gene_name'] = DF_annot.index
DF_annot['cycbio_gene_product'] = DF_annot.index
#dict(zip(df.A,df.B))
name_dict = dict(zip(DF_biocyc.locus_tag, DF_biocyc.gene_name))
product_dict = dict(zip(DF_biocyc.locus_tag, DF_biocyc.Product))
DF_annot = DF_annot.replace({'cycbio_gene_name':name_dict})
DF_annot = DF_annot.replace({'cycbio_gene_product':product_dict})
```
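`DataFrame.replace` with a nested dict, as used above, maps only the listed values in the named column and leaves everything else untouched (toy sketch with made-up names):

```python
import pandas as pd

df = pd.DataFrame({'name': ['g1', 'g2', 'g3']})
name_dict = {'g1': 'thrA', 'g3': 'thrC'}

# Only values with an entry in name_dict are replaced; 'g2' stays as-is
df = df.replace({'name': name_dict})
print(df['name'].tolist())  # ['thrA', 'g2', 'thrC']
```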
## Adding Marie's Annotation
```
from os import path
DF_marie = pd.read_csv(os.path.join(org_dir,'marie.csv'), error_bad_lines=True)
DF_marie
DF_marie = DF_marie.fillna('no')
DF_marie
DF_annot['marie_gene'] = DF_annot.index
DF_annot['marie_product'] = DF_annot.index
DF_annot['marie_annot'] = DF_annot.index
#dict(zip(df.A,df.B))
name_dict = dict(zip(DF_marie['PCC7942_NCBI-alias'], DF_marie['gene_name']))
product_dict = dict(zip(DF_marie['PCC7942_NCBI-alias'], DF_marie['Annotation_description']))
new_anot = dict(zip(DF_marie['PCC7942_NCBI-alias'],DF_marie['PCC7942_PG_locus']))
name_dict
for key, value in name_dict.items():
if value == 'no':
name_dict[key] = new_anot[key]
name_dict
DF_annot = DF_annot.replace({'marie_gene':name_dict})
DF_annot = DF_annot.replace({'marie_product':product_dict})
DF_annot = DF_annot.replace({'marie_annot':new_anot})
# DF_annot.head()
order = ['old_locus_tag','marie_annot', 'marie_gene', 'marie_product', 'cycbio_gene_name','cycbio_gene_product',
'gene_name', 'gene_product','COG','uniprot','operon','accession','start','end','strand']
DF_annot = DF_annot[order]
DF_annot[30:38]
DF_annot
DF_annot.to_csv(os.path.join(org_dir,'gene_info_operon.csv'))
```
# GO Annotations
To start, download the GO Annotations for your organism from AmiGO 2
1. Go to http://amigo.geneontology.org/amigo/search/annotation
1. Filter for your organism
1. Click `CustomDL`
1. Drag `GO class (direct)` to the end of your Selected Fields
1. Save as `GO_annotations.txt` in the `data` folder of your organism directory
```
DF_GO = pd.read_csv(os.path.join(org_dir,'3_go_annotation.txt'),sep='\t',header=None,usecols=[3,9,18])
DF_GO.columns = ['gene_name','gene_product','gene_ontology']
DF_GO.head()
```
Take a look at the `gene_name` column:
1. Make sure there are no null entries
2. Check if it uses the new or old locus tag (if applicable)
If it looks like it uses the old locus tag, set `old_locus_tag` to `True`
```
old_locus_tag = False
DF_GO[DF_GO.gene_name.isnull()]
# if not old_locus_tag:
# convert_tags = {value:key for key,value in DF_annot.old_locus_tag.items()}
# DF_GO.gene_name = DF_GO.gene_name.apply(lambda x: convert_tags[x])
DF_GO.head()
# DF_GO[['gene_id','gene_ontology']].to_csv(os.path.join(org_dir,'data','GO_annotations.csv'))
DF_GO.to_csv(os.path.join(org_dir,'2_GO_annotations.csv'))
```
```
from torchvision import transforms
from torch.utils.data import Dataset, DataLoader
import torch
from torch import optim
from torch.autograd import Variable
import numpy as np
import os
import math
from torch import nn
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
import itertools
import random
from sklearn import preprocessing
from scipy import io
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```
## DataLoader
```
### handle the dataset
class TorchDataset(Dataset):
def __init__(self, trs_file, label_file, trace_num, trace_offset, trace_length):
self.trs_file = trs_file
self.label_file = label_file
self.trace_num = trace_num
self.trace_offset = trace_offset
self.trace_length = trace_length
self.ToTensor = transforms.ToTensor()
def __getitem__(self, i):
index = i % self.trace_num
trace = self.trs_file[index,:]
label = self.label_file[index]
trace = trace[self.trace_offset:self.trace_offset+self.trace_length]
trace = np.reshape(trace,(1,-1))
trace = self.ToTensor(trace)
trace = np.reshape(trace, (1,-1))
label = torch.tensor(label, dtype=torch.long)
return trace.float(), label
def __len__(self):
return self.trace_num
### data loader for training
def load_training(batch_size, kwargs):
data = TorchDataset(**kwargs)
train_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True, drop_last=True, num_workers=1, pin_memory=True)
return train_loader
### data loader for testing
def load_testing(batch_size, kwargs):
data = TorchDataset(**kwargs)
test_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=False, drop_last=True, num_workers=1, pin_memory=True)
return test_loader
```
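The offset/length windowing done in `__getitem__` above is just an array slice; a Torch-free sketch with toy data (the names `traces` and `get_window` are illustrative):

```python
import numpy as np

# Two hypothetical traces of 10 samples each
traces = np.arange(20).reshape(2, 10)
offset, length = 3, 4

def get_window(i):
    # Wrap the index, then cut the configured window out of the trace,
    # mirroring TorchDataset.__getitem__ above
    trace = traces[i % traces.shape[0], :]
    return trace[offset:offset + length].reshape(1, -1)

print(get_window(0))  # [[3 4 5 6]]
print(get_window(1))  # [[13 14 15 16]]
```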
## Arrays and Functions
```
Sbox = [99, 124, 119, 123, 242, 107, 111, 197, 48, 1, 103, 43, 254, 215, 171, 118, 202, 130, 201, 125, 250, 89, 71,
240, 173, 212, 162, 175, 156, 164, 114, 192, 183, 253, 147, 38, 54, 63, 247, 204, 52, 165, 229, 241, 113, 216,
49, 21, 4, 199, 35, 195, 24, 150, 5, 154, 7, 18, 128, 226, 235, 39, 178, 117, 9, 131, 44, 26, 27, 110, 90, 160,
82, 59, 214, 179, 41, 227, 47, 132, 83, 209, 0, 237, 32, 252, 177, 91, 106, 203, 190, 57, 74, 76, 88, 207, 208,
239, 170, 251, 67, 77, 51, 133, 69, 249, 2, 127, 80, 60, 159, 168, 81, 163, 64, 143, 146, 157, 56, 245, 188,
182, 218, 33, 16, 255, 243, 210, 205, 12, 19, 236, 95, 151, 68, 23, 196, 167, 126, 61, 100, 93, 25, 115, 96,
129, 79, 220, 34, 42, 144, 136, 70, 238, 184, 20, 222, 94, 11, 219, 224, 50, 58, 10, 73, 6, 36, 92, 194, 211,
172, 98, 145, 149, 228, 121, 231, 200, 55, 109, 141, 213, 78, 169, 108, 86, 244, 234, 101, 122, 174, 8, 186,
120, 37, 46, 28, 166, 180, 198, 232, 221, 116, 31, 75, 189, 139, 138, 112, 62, 181, 102, 72, 3, 246, 14, 97,
53, 87, 185, 134, 193, 29, 158, 225, 248, 152, 17, 105, 217, 142, 148, 155, 30, 135, 233, 206, 85, 40, 223, 140,
161, 137, 13, 191, 230, 66, 104, 65, 153, 45, 15, 176, 84, 187, 22]
InvSbox = [82, 9, 106, 213, 48, 54, 165, 56, 191, 64, 163, 158, 129, 243, 215, 251, 124, 227, 57, 130, 155, 47, 255, 135,
52, 142, 67, 68, 196, 222, 233, 203, 84, 123, 148, 50, 166, 194, 35, 61,238, 76, 149, 11, 66, 250, 195, 78, 8,
46, 161, 102, 40, 217, 36, 178, 118, 91, 162, 73, 109, 139, 209, 37, 114, 248, 246, 100, 134, 104, 152, 22, 212,
164, 92, 204, 93, 101, 182, 146, 108, 112, 72, 80, 253, 237, 185, 218, 94, 21, 70, 87, 167, 141, 157, 132, 144,
216, 171, 0, 140, 188, 211, 10, 247, 228, 88, 5, 184, 179, 69, 6, 208, 44, 30, 143, 202, 63, 15, 2, 193, 175, 189,
3, 1, 19, 138, 107, 58, 145, 17, 65, 79, 103, 220, 234, 151, 242, 207, 206, 240, 180, 230, 115, 150, 172, 116, 34,
231, 173, 53, 133, 226, 249, 55, 232, 28, 117, 223, 110, 71, 241, 26, 113, 29, 41, 197, 137, 111, 183, 98, 14, 170,
24,190, 27, 252, 86, 62, 75, 198, 210, 121, 32, 154, 219, 192, 254, 120, 205, 90, 244, 31, 221, 168, 51, 136, 7,
199, 49, 177, 18, 16, 89, 39, 128, 236, 95, 96, 81, 127, 169, 25, 181,74, 13, 45, 229, 122, 159, 147, 201, 156,
239, 160, 224, 59, 77, 174, 42, 245, 176, 200, 235, 187, 60, 131, 83, 153, 97, 23, 43, 4, 126, 186, 119, 214, 38,
225, 105, 20, 99, 85, 33,12, 125]
HW_byte = [0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 1, 2, 2,
3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 1, 2, 2, 3, 2, 3,
3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 2, 3, 3, 4, 3, 4, 4, 5, 3,
4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4,
3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5,
6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4,
4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 4, 5, 5, 6, 5,
6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8]
### To train a network
def train(epoch, model, freeze_BN = False):
"""
- epoch : the current epoch
- model : the current model
- freeze_BN : whether to freeze batch normalization layers
"""
if freeze_BN:
model.eval() # enter eval mode to freeze batch normalization layers
else:
model.train() # enter training mode
# Instantiate the Iterator
iter_source = iter(source_train_loader)
# get the number of batches
num_iter = len(source_train_loader)
clf_criterion = nn.CrossEntropyLoss()
# train on each batch of data
for i in range(1, num_iter+1):
        source_data, source_label = next(iter_source)
if cuda:
source_data, source_label = source_data.cuda(), source_label.cuda()
source_data, source_label = Variable(source_data), Variable(source_label)
optimizer.zero_grad()
source_preds = model(source_data)
preds = source_preds.data.max(1, keepdim=True)[1]
correct_batch = preds.eq(source_label.data.view_as(preds)).sum()
loss = clf_criterion(source_preds, source_label)
# optimzie the cross-entropy loss
loss.backward()
optimizer.step()
if i % log_interval == 0:
print('Train Epoch {}: [{}/{} ({:.0f}%)]\tLoss: {:.6f}\tAcc: {:.6f}%'.format(
epoch, i * len(source_data), len(source_train_loader) * batch_size,
100. * i / len(source_train_loader), loss.data, float(correct_batch) * 100. /batch_size))
### validation
def validation(model):
# enter evaluation mode
model.eval()
valid_loss = 0
# the number of correct prediction
correct_valid = 0
clf_criterion = nn.CrossEntropyLoss()
for data, label in source_valid_loader:
if cuda:
data, label = data.cuda(), label.cuda()
data, label = Variable(data), Variable(label)
valid_preds = model(data)
# sum up batch loss
valid_loss += clf_criterion(valid_preds, label)
# get the index of the max probability
pred = valid_preds.data.max(1)[1]
# get the number of correct prediction
correct_valid += pred.eq(label.data.view_as(pred)).cpu().sum()
valid_loss /= len(source_valid_loader)
valid_acc = 100. * correct_valid / len(source_valid_loader.dataset)
print('Validation: loss: {:.4f}, accuracy: {}/{} ({:.6f}%)'.format(
valid_loss.data, correct_valid, len(source_valid_loader.dataset),
valid_acc))
return valid_loss, valid_acc
### test/attack
def test(model, device_id, disp_GE=True, model_flag='pretrained'):
"""
- model : the current model
- device_id : id of the tested device
- disp_GE : whether to attack/calculate guessing entropy (GE)
- model_flag : a string for naming GE result
"""
# enter evaluation mode
model.eval()
test_loss = 0
# the number of correct prediction
correct = 0
epoch = 0
clf_criterion = nn.CrossEntropyLoss()
if device_id == source_device_id: # attack on the source domain
test_num = source_test_num
test_loader = source_test_loader
real_key = real_key_01
else: # attack on the target domain
test_num = target_test_num
test_loader = target_test_loader
real_key = real_key_02
# Initialize the prediction and label lists(tensors)
predlist=torch.zeros(0,dtype=torch.long, device='cpu')
lbllist=torch.zeros(0,dtype=torch.long, device='cpu')
test_preds_all = torch.zeros((test_num, class_num), dtype=torch.float, device='cpu')
for data, label in test_loader:
if cuda:
data, label = data.cuda(), label.cuda()
data, label = Variable(data), Variable(label)
test_preds = model(data)
# sum up batch loss
test_loss += clf_criterion(test_preds, label)
# get the index of the max probability
pred = test_preds.data.max(1)[1]
# get the softmax results for attack/showing guessing entropy
softmax = nn.Softmax(dim=1)
test_preds_all[epoch*batch_size:(epoch+1)*batch_size, :] =softmax(test_preds)
# get the predictions (predlist) and real labels (lbllist) for showing confusion matrix
predlist=torch.cat([predlist,pred.view(-1).cpu()])
lbllist=torch.cat([lbllist,label.view(-1).cpu()])
# get the number of correct prediction
correct += pred.eq(label.data.view_as(pred)).cpu().sum()
epoch += 1
test_loss /= len(test_loader)
print('Test loss: {:.4f}, Test accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss.data, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
# get the confusion matrix
confusion_mat = confusion_matrix(lbllist.numpy(), predlist.numpy())
# show the confusion matrix
plot_sonfusion_matrix(confusion_mat, classes = range(class_num))
# show the guessing entropy and success rate
if disp_GE:
plot_guessing_entropy(test_preds_all.numpy(), real_key, device_id, model_flag)
### fine-tune a pre-trained model
def CDP_train(epoch, model):
"""
- epoch : the current epoch
- model : the current model
"""
# enter evaluation mode to freeze the BN and dropout layers (if any) when fine-tuning
model.eval()
# Instantiate the Iterator for source profiling traces
iter_source = iter(source_train_loader)
# Instantiate the Iterator for target traces
iter_target = iter(target_finetune_loader)
num_iter_target = len(target_finetune_loader)
finetune_trace_all = torch.zeros((num_iter_target, batch_size, 1, trace_length))
for i in range(num_iter_target):
finetune_trace_all[i,:,:,:], _ = next(iter_target)
# get the number of batches
num_iter = len(source_train_loader)
clf_criterion = nn.CrossEntropyLoss()
# train on each batch of data
for i in range(1, num_iter+1):
# get traces and labels for source domain
source_data, source_label = next(iter_source)
# get traces for target domain
target_data = finetune_trace_all[(i-1)%num_iter_target,:,:,:]
if cuda:
source_data, source_label = source_data.cuda(), source_label.cuda()
target_data = target_data.cuda()
source_data, source_label = Variable(source_data), Variable(source_label)
target_data = Variable(target_data)
optimizer.zero_grad()
# get predictions and MMD loss
source_preds, mmd_loss = model(source_data, target_data)
preds = source_preds.data.max(1, keepdim=True)[1]
# get classification loss on source domain
clf_loss = clf_criterion(source_preds, source_label)
# the total loss function
loss = clf_loss + lambda_*mmd_loss
# optimize the total loss
loss.backward()
optimizer.step()
if i % log_interval == 0:
print('Train Epoch {}: [{}/{} ({:.0f}%)]\ttotal_loss: {:.6f}\tclf_loss: {:.6f}\tmmd_loss: {:.6f}'.format(
epoch, i * len(source_data), len(source_train_loader) * batch_size,
100. * i / len(source_train_loader), loss.data, clf_loss.data, mmd_loss.data))
### Validation for fine-tuning phase
def CDP_validation(model):
# enter evaluation mode
clf_criterion = nn.CrossEntropyLoss()
model.eval()
# Instantiate the Iterator for source validation traces
iter_source = iter(source_valid_loader)
# Instantiate the Iterator for target traces
iter_target = iter(target_finetune_loader)
num_iter_target = len(target_finetune_loader)
finetune_trace_all = torch.zeros((num_iter_target, batch_size, 1, trace_length))
for i in range(num_iter_target):
finetune_trace_all[i,:,:,:], _ = next(iter_target)
# get the number of batches
num_iter = len(source_valid_loader)
# the classification loss
total_clf_loss = 0
# the MMD loss
total_mmd_loss = 0
# the total loss
total_loss = 0
# the number of correct prediction
correct = 0
for i in range(1, num_iter+1):
# get traces and labels for source domain
source_data, source_label = next(iter_source)
# get traces for target domain
target_data = finetune_trace_all[(i-1)%num_iter_target,:,:,:]
if cuda:
source_data, source_label = source_data.cuda(), source_label.cuda()
target_data = target_data.cuda()
source_data, source_label = Variable(source_data), Variable(source_label)
target_data = Variable(target_data)
valid_preds, mmd_loss = model(source_data, target_data)
clf_loss = clf_criterion(valid_preds, source_label)
# sum up batch loss
loss = clf_loss + lambda_*mmd_loss
total_clf_loss += clf_loss
total_mmd_loss += mmd_loss
total_loss += loss
# get the index of the max probability
pred = valid_preds.data.max(1)[1]
correct += pred.eq(source_label.data.view_as(pred)).cpu().sum()
total_loss /= len(source_valid_loader)
total_clf_loss /= len(source_valid_loader)
total_mmd_loss /= len(source_valid_loader)
print('Validation: total_loss: {:.4f}, clf_loss: {:.4f}, mmd_loss: {:.4f}, accuracy: {}/{} ({:.2f}%)'.format(
total_loss.data, total_clf_loss, total_mmd_loss, correct, len(source_valid_loader.dataset),
100. * correct / len(source_valid_loader.dataset)))
return total_loss
### kernel function
def guassian_kernel(source, target, kernel_mul=2.0, kernel_num=5, fix_sigma=None):
"""
- source : source data
- target : target data
- kernel_mul : multiplicative step of bandwidth (sigma)
- kernel_num : the number of Gaussian kernels
- fix_sigma : use a fixed value of bandwidth
"""
n_samples = int(source.size()[0])+int(target.size()[0])
total = torch.cat([source, target], dim=0)
total0 = total.unsqueeze(0).expand(int(total.size(0)), \
int(total.size(0)), \
int(total.size(1)))
total1 = total.unsqueeze(1).expand(int(total.size(0)), \
int(total.size(0)), \
int(total.size(1)))
# squared Euclidean distance |x-y|^2
L2_distance = ((total0-total1)**2).sum(2)
# bandwidth
if fix_sigma:
bandwidth = fix_sigma
else:
bandwidth = torch.sum(L2_distance.data) / (n_samples**2-n_samples)
# take the current bandwidth as the median value, and get a list of bandwidths (for example, when bandwidth is 1, we get [0.25,0.5,1,2,4]).
bandwidth /= kernel_mul ** (kernel_num // 2)
bandwidth_list = [bandwidth * (kernel_mul**i) for i in range(kernel_num)]
# exp(-|x-y|^2 / bandwidth)
kernel_val = [torch.exp(-L2_distance / bandwidth_temp) for \
bandwidth_temp in bandwidth_list]
# return the final kernel matrix
return sum(kernel_val)
### MMD loss function based on Gaussian kernels
def mmd_rbf(source, target, kernel_mul=2.0, kernel_num=5, fix_sigma=None):
"""
- source : source data
- target : target data
- kernel_mul : multiplicative step of bandwidth (sigma)
- kernel_num : the number of Gaussian kernels
- fix_sigma : use a fixed value of bandwidth
"""
loss = 0.0
batch_size = int(source.size()[0])
kernels = guassian_kernel(source, target,kernel_mul=kernel_mul,kernel_num=kernel_num, fix_sigma=fix_sigma)
XX = kernels[:batch_size, :batch_size] # Source<->Source
YY = kernels[batch_size:, batch_size:] # Target<->Target
XY = kernels[:batch_size, batch_size:] # Source<->Target
YX = kernels[batch_size:, :batch_size] # Target<->Source
loss = torch.mean(XX + YY - XY -YX)
return loss
### show the guessing entropy and success rate
def plot_guessing_entropy(preds, real_key, device_id, model_flag):
"""
- preds : the probability for each class (n*256 for a byte, n*9 for Hamming weight)
- real_key : the key of the target device
- device_id : id of the target device
- model_flag : a string for naming GE result
"""
# GE/SR is averaged over 100 attacks
num_averaged = 100
# max trace num for attack
trace_num_max = 5000
# the step by which the trace count increases
step = 1
if trace_num_max > 400 and trace_num_max < 1000:
step = 2
if trace_num_max >= 1000 and trace_num_max < 5000:
step = 4
if trace_num_max >= 5000 and trace_num_max < 10000:
step = 5
guessing_entropy = np.zeros((num_averaged, int(trace_num_max/step)))
success_flag = np.zeros((num_averaged, int(trace_num_max/step)))
if device_id == target_device_id: # attack on the target domain
ciphertext = ciphertexts_target
elif device_id == source_device_id: # attack on the source domain
ciphertext = ciphertexts_source
# try to attack multiples times for average
for time in range(num_averaged):
# select the attack traces randomly
random_index = list(range(ciphertext.shape[0]))
random.shuffle(random_index)
random_index = random_index[0:trace_num_max]
# initialize score matrix
score_mat = np.zeros((trace_num_max, 256))
for key_guess in range(0, 256):
for i in range(0, trace_num_max):
temp = int(ciphertext[random_index[i], 1]) ^ key_guess
initialState = InvSbox[temp]
media_value = initialState ^ int(ciphertext[random_index[i], 5])
if labeling_method == 'identity':
label = media_value
elif labeling_method == 'hw':
label = HW_byte[media_value]
score_mat[i, key_guess] = preds[random_index[i], label]
score_mat = np.log(score_mat+1e-40)
for i in range(0, trace_num_max, step):
log_likelihood = np.sum(score_mat[0:i+1,:], axis=0)
ranked = np.argsort(log_likelihood)[::-1]
guessing_entropy[time,int(i/step)] = list(ranked).index(real_key)
if list(ranked).index(real_key) == 0:
success_flag[time, int(i/step)] = 1
guessing_entropy = np.mean(guessing_entropy,axis=0)
plt.figure(figsize=(12,4))
plt.subplot(1, 2, 1)
x = range(0, trace_num_max, step)
p1, = plt.plot(x, guessing_entropy[0:int(trace_num_max/step)],color='red')
plt.xlabel('Number of traces')
plt.ylabel('Guessing entropy')
#np.save('./results/entropy_'+ labeling_method + '_{}_to_{}_'.format(source_device_id, device_id) + model_flag, guessing_entropy)
plt.subplot(1, 2, 2)
success_flag = np.sum(success_flag, axis=0)
success_rate = success_flag/num_averaged
p2, = plt.plot(x, success_rate[0:int(trace_num_max/step)], color='red')
plt.xlabel('Number of traces')
plt.ylabel('Success rate')
plt.show()
#np.save('./results/success_rate_' + labeling_method + '_{}_to_{}_'.format(source_device_id, device_id) + model_flag, success_rate)
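# --- illustrative aside (hypothetical helper, not used elsewhere in this
# notebook): the core of the guessing-entropy computation above is simply
# ranking the real key among all key guesses by accumulated log-likelihood ---
import math

def key_rank(score_rows, real_key):
    """Return the rank of real_key (0 means the key is recovered).

    score_rows: one list per trace, holding the predicted probability of the
    intermediate value under each key guess (i.e. a row of score_mat above).
    """
    num_keys = len(score_rows[0])
    log_likelihood = [0.0] * num_keys
    for row in score_rows:
        for k in range(num_keys):
            log_likelihood[k] += math.log(row[k] + 1e-40)
    # sort key guesses by descending log-likelihood; rank of the real key
    ranked = sorted(range(num_keys), key=lambda k: log_likelihood[k], reverse=True)
    return ranked.index(real_key)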
### show the confusion matrix
def plot_sonfusion_matrix(cm, classes, normalize=False, title='Confusion matrix',cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
plt.ylim((len(classes)-0.5, -0.5))
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
```
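For intuition, the biased MMD estimate computed by `mmd_rbf` above reduces to mean(k(X,X)) + mean(k(Y,Y)) - 2·mean(k(X,Y)). Here is a dependency-free sketch for 1-D samples with a single Gaussian kernel (illustrative only; the notebook version operates on batched feature tensors and sums several kernels):

```python
import math

def gaussian(x, y, sigma=1.0):
    # single RBF kernel: exp(-(x - y)^2 / sigma)
    return math.exp(-((x - y) ** 2) / sigma)

def mmd_1d(xs, ys, sigma=1.0):
    # biased MMD estimate: mean(k(X,X)) + mean(k(Y,Y)) - 2*mean(k(X,Y))
    kxx = sum(gaussian(a, b, sigma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(gaussian(a, b, sigma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(gaussian(a, b, sigma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

print(mmd_1d([0.0, 0.1, 0.2], [0.0, 0.1, 0.2]))      # identical samples -> 0.0
print(mmd_1d([0.0, 0.1, 0.2], [5.0, 5.1, 5.2]) > 1)  # distant samples -> True
```

Identical distributions yield an MMD of zero, while well-separated samples drive the estimate up; this is exactly the quantity the fine-tuning loss penalizes to pull source and target feature distributions together.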
# Setup
```
source_device_id = 1
target_device_id = 2
# roundkeys of the three devices are : 0x21, 0xCD, 0x8F
real_key_01 = 0x21 # key of the source domain
real_key_02 = 0xCD # key of the target domain
lambda_ = 0.05 # Penalty coefficient
labeling_method = 'identity' # labeling of trace
preprocess = 'horizontal_standardization' # preprocess method
batch_size = 200
total_epoch = 200
finetune_epoch = 15 # epoch number for fine-tuning
lr = 0.001 # learning rate
log_interval = 50 # epoch interval to log training information
train_num = 85000
valid_num = 5000
source_test_num = 9900
target_finetune_num = 200
target_test_num = 9400
trace_offset = 0
trace_length = 1000
source_file_path = './Data/device1/'
target_file_path = './Data/device2/'
no_cuda = False
cuda = not no_cuda and torch.cuda.is_available()
seed = 8
torch.manual_seed(seed)
if cuda:
torch.cuda.manual_seed(seed)
if labeling_method == 'identity':
class_num = 256
elif labeling_method == 'hw':
class_num = 9
# to load traces and labels
X_train_source = np.load(source_file_path + 'X_train.npy')
Y_train_source = np.load(source_file_path + 'Y_train.npy')
X_attack_source = np.load(source_file_path + 'X_attack.npy')
Y_attack_source = np.load(source_file_path + 'Y_attack.npy')
X_attack_target = np.load(target_file_path + 'X_attack.npy')
Y_attack_target = np.load(target_file_path + 'Y_attack.npy')
# to load ciphertexts
ciphertexts_source = np.load(source_file_path + 'ciphertexts_attack.npy')
ciphertexts_target = np.load(target_file_path + 'ciphertexts_attack.npy')
ciphertexts_target = ciphertexts_target[target_finetune_num:target_finetune_num+target_test_num]
# preprocess of traces
if preprocess == 'horizontal_standardization':
mn = np.repeat(np.mean(X_train_source, axis=1, keepdims=True), X_train_source.shape[1], axis=1)
std = np.repeat(np.std(X_train_source, axis=1, keepdims=True), X_train_source.shape[1], axis=1)
X_train_source = (X_train_source - mn)/std
mn = np.repeat(np.mean(X_attack_source, axis=1, keepdims=True), X_attack_source.shape[1], axis=1)
std = np.repeat(np.std(X_attack_source, axis=1, keepdims=True), X_attack_source.shape[1], axis=1)
X_attack_source = (X_attack_source - mn)/std
mn = np.repeat(np.mean(X_attack_target, axis=1, keepdims=True), X_attack_target.shape[1], axis=1)
std = np.repeat(np.std(X_attack_target, axis=1, keepdims=True), X_attack_target.shape[1], axis=1)
X_attack_target = (X_attack_target - mn)/std
elif preprocess == 'horizontal_scaling':
scaler = preprocessing.MinMaxScaler(feature_range=(-1, 1)).fit(X_train_source.T)
X_train_source = scaler.transform(X_train_source.T).T
scaler = preprocessing.MinMaxScaler(feature_range=(-1, 1)).fit(X_attack_source.T)
X_attack_source = scaler.transform(X_attack_source.T).T
scaler = preprocessing.MinMaxScaler(feature_range=(-1, 1)).fit(X_attack_target.T)
X_attack_target = scaler.transform(X_attack_target.T).T
# parameters of data loader
kwargs_source_train = {
'trs_file': X_train_source[0:train_num,:],
'label_file': Y_train_source[0:train_num],
'trace_num':train_num,
'trace_offset':trace_offset,
'trace_length':trace_length,
}
kwargs_source_valid = {
'trs_file': X_train_source[train_num:train_num+valid_num,:],
'label_file': Y_train_source[train_num:train_num+valid_num],
'trace_num':valid_num,
'trace_offset':trace_offset,
'trace_length':trace_length,
}
kwargs_source_test = {
'trs_file': X_attack_source,
'label_file': Y_attack_source,
'trace_num':source_test_num,
'trace_offset':trace_offset,
'trace_length':trace_length,
}
kwargs_target_finetune = {
'trs_file': X_attack_target[0:target_finetune_num,:],
'label_file': Y_attack_target[0:target_finetune_num],
'trace_num':target_finetune_num,
'trace_offset':trace_offset,
'trace_length':trace_length,
}
kwargs_target = {
'trs_file': X_attack_target[target_finetune_num:target_finetune_num+target_test_num, :],
'label_file': Y_attack_target[target_finetune_num:target_finetune_num+target_test_num],
'trace_num':target_test_num,
'trace_offset':trace_offset,
'trace_length':trace_length,
}
source_train_loader = load_training(batch_size, kwargs_source_train)
source_valid_loader = load_training(batch_size, kwargs_source_valid)
source_test_loader = load_testing(batch_size, kwargs_source_test)
target_finetune_loader = load_training(batch_size, kwargs_target_finetune)
target_test_loader = load_testing(batch_size, kwargs_target)
print('Load data complete!')
```
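The `hw` labeling above relies on a `HW_byte` lookup table that is referenced by the attack code but not defined in this excerpt: it maps each byte value to its Hamming weight, giving 9 classes (0–8). A sketch of how such a table is typically built (the name matches the usage above; the construction is an assumption):

```python
# Hamming weight of every byte value 0..255
HW_byte = [bin(v).count('1') for v in range(256)]

print(HW_byte[0x00], HW_byte[0xFF], HW_byte[0x21])  # -> 0 8 2
```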
# Model
```
### the pre-trained model
class Net(nn.Module):
def __init__(self, num_classes=class_num):
super(Net, self).__init__()
# the encoder part
self.features = nn.Sequential(
nn.Conv1d(1, 8, kernel_size=1),
nn.SELU(),
nn.BatchNorm1d(8),
nn.AvgPool1d(kernel_size=2, stride=2),
nn.Conv1d(8, 16, kernel_size=11),
nn.SELU(),
nn.BatchNorm1d(16),
nn.AvgPool1d(kernel_size=11, stride=11),
nn.Conv1d(16, 32, kernel_size=2),
nn.SELU(),
nn.BatchNorm1d(32),
nn.AvgPool1d(kernel_size=3, stride=3),
nn.Flatten()
)
# the fully-connected layer 1
self.classifier_1 = nn.Sequential(
nn.Linear(448, 2),
nn.SELU(),
)
# the output layer
self.final_classifier = nn.Sequential(
nn.Linear(2, num_classes)
)
# how the network runs
def forward(self, input):
x = self.features(input)
x = x.view(x.size(0), -1)
x = self.classifier_1(x)
output = self.final_classifier(x)
return output
### the fine-tuning model
class CDP_Net(nn.Module):
def __init__(self, num_classes=class_num):
super(CDP_Net, self).__init__()
# the encoder part
self.features = nn.Sequential(
nn.Conv1d(1, 8, kernel_size=1),
nn.SELU(),
nn.BatchNorm1d(8),
nn.AvgPool1d(kernel_size=2, stride=2),
nn.Conv1d(8, 16, kernel_size=11),
nn.SELU(),
nn.BatchNorm1d(16),
nn.AvgPool1d(kernel_size=11, stride=11),
nn.Conv1d(16, 32, kernel_size=2),
nn.SELU(),
nn.BatchNorm1d(32),
nn.AvgPool1d(kernel_size=3, stride=3),
nn.Flatten()
)
# the fully-connected layer 1
self.classifier_1 = nn.Sequential(
nn.Linear(448, 2),
nn.SELU(),
)
# the output layer
self.final_classifier = nn.Sequential(
nn.Linear(2, num_classes)
)
# how the network runs
def forward(self, source, target):
mmd_loss = 0
#source data flow
source = self.features(source)
source_0 = source.view(source.size(0), -1)
source_1 = self.classifier_1(source_0)
#target data flow
target = self.features(target)
target = target.view(target.size(0), -1)
mmd_loss += mmd_rbf(source_0, target)
target = self.classifier_1(target)
mmd_loss += mmd_rbf(source_1, target)
result = self.final_classifier(source_1)
return result, mmd_loss
```
## Performance of the pre-trained model
```
# create a network
model = Net(num_classes=class_num)
print('Construct model complete')
if cuda:
model.cuda()
# load the pre-trained network
checkpoint = torch.load('./models/pre-trained_device{}.pth'.format(source_device_id))
pretrained_dict = checkpoint['model_state_dict']
model_dict = pretrained_dict
model.load_state_dict(model_dict)
# evaluate the pre-trained model on source and target domain
with torch.no_grad():
print('Result on source device:')
test(model, source_device_id, model_flag='pretrained_source')
print('Result on target device:')
test(model, target_device_id, model_flag='pretrained_target')
```
## Cross-Device Profiling: fine-tune 15 epochs
```
# create a network
CDP_model = CDP_Net(num_classes=class_num)
print('Construct model complete')
if cuda:
CDP_model.cuda()
# initialize a big enough loss number
min_loss = 1000
# load the pre-trained model
checkpoint = torch.load('./models/pre-trained_device{}.pth'.format(source_device_id))
pretrained_dict = checkpoint['model_state_dict']
CDP_model.load_state_dict(pretrained_dict)
optimizer = optim.Adam([
{'params': CDP_model.features.parameters()},
{'params': CDP_model.classifier_1.parameters()},
{'params': CDP_model.final_classifier.parameters()}
], lr=lr)
# restore the optimizer state
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
for epoch in range(1, finetune_epoch + 1):
print(f'Train Epoch {epoch}:')
CDP_train(epoch, CDP_model)
with torch.no_grad():
valid_loss = CDP_validation(CDP_model)
# save the model that achieves the lowest validation loss
if valid_loss < min_loss:
min_loss = valid_loss
torch.save({
'epoch': epoch,
'model_state_dict': CDP_model.state_dict(),
'optimizer_state_dict': optimizer.state_dict()
}, './models/best_valid_loss_finetuned_device{}_to_{}.pth'.format(source_device_id, target_device_id))
torch.save({
'epoch': epoch,
'model_state_dict': CDP_model.state_dict(),
'optimizer_state_dict': optimizer.state_dict()
}, './models/last_valid_loss_finetuned_device{}_to_{}.pth'.format(source_device_id, target_device_id))
```
## Performance of the fine-tuned model
```
# create a network
model = Net(num_classes=class_num)
print('Construct model complete')
if cuda:
model.cuda()
# load the fine-tuned model
checkpoint = torch.load('./models/best_valid_loss_finetuned_device{}_to_{}.pth'.format(source_device_id, target_device_id))
finetuned_dict = checkpoint['model_state_dict']
model.load_state_dict(finetuned_dict)
print('Results after fine-tuning:')
# evaluate the fine-tuned model on source and target domain
with torch.no_grad():
print('Result on source device:')
test(model, source_device_id, model_flag='finetuned_source')
print('Result on target device:')
test(model, target_device_id, model_flag='finetuned_target')
```
# Facial Keypoint Detection
This project will be all about defining and training a convolutional neural network to perform facial keypoint detection, and using computer vision techniques to transform images of faces. The first step in any challenge like this will be to load and visualize the data you'll be working with.
Let's take a look at some examples of images and corresponding facial keypoints.
<img src='images/key_pts_example.png' width=50% height=50%/>
Facial keypoints (also called facial landmarks) are the small magenta dots shown on each of the faces in the image above. In each training and test image, there is a single face and **68 keypoints, with coordinates (x, y), for that face**. These keypoints mark important areas of the face: the eyes, corners of the mouth, the nose, etc. These keypoints are relevant for a variety of tasks, such as face filters, emotion recognition, pose recognition, and so on. Here they are, numbered, and you can see that specific ranges of points match different portions of the face.
<img src='images/landmarks_numbered.jpg' width=30% height=30%/>
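The index-to-region grouping can be written down explicitly. The ranges below follow the standard 68-point (dlib/iBUG) convention; treat the exact split as an assumption about this dataset:

```python
# approximate index ranges in the 68-point landmark convention (0-based)
landmark_regions = {
    'jaw': range(0, 17),
    'right_eyebrow': range(17, 22),
    'left_eyebrow': range(22, 27),
    'nose': range(27, 36),
    'right_eye': range(36, 42),
    'left_eye': range(42, 48),
    'mouth': range(48, 68),
}

total = sum(len(r) for r in landmark_regions.values())
print(total)  # -> 68
```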
---
## Load and Visualize Data
The first step in working with any dataset is to become familiar with your data; you'll need to load in the images of faces and their keypoints and visualize them! This set of image data has been extracted from the [YouTube Faces Dataset](https://www.cs.tau.ac.il/~wolf/ytfaces/), which contains videos of people collected from YouTube. These videos have been fed through some processing steps and turned into sets of image frames containing one face and the associated keypoints.
#### Training and Testing Data
This facial keypoints dataset consists of 5770 color images. All of these images are separated into either a training or a test set of data.
* 3462 of these images are training images, for you to use as you create a model to predict keypoints.
* 2308 are test images, which will be used to test the accuracy of your model.
The information about the images and keypoints in this dataset is summarized in CSV files, which we can read in using `pandas`. Let's read the training CSV and get the annotations in an (N, 2) array, where N is the number of keypoints and 2 is the dimension of the keypoint coordinates (x, y).
---
```
# import the required libraries
import glob
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import cv2
key_pts_frame = pd.read_csv('data/training_frames_keypoints.csv')
n = 0
image_name = key_pts_frame.iloc[n, 0]
key_pts = key_pts_frame.iloc[n, 1:].values  # .as_matrix() was removed in newer pandas
key_pts = key_pts.astype('float').reshape(-1, 2)
print('Image name: ', image_name)
print('Landmarks shape: ', key_pts.shape)
print('First 4 key pts: {}'.format(key_pts[:4]))
# print out some stats about the data
print('Number of images: ', key_pts_frame.shape[0])
```
## Look at some images
Below is a function `show_keypoints` that takes in an image and keypoints and displays them. As you look at this data, **note that these images are not all of the same size**, and neither are the faces! To eventually train a neural network on these images, we'll need to standardize their shape.
```
def show_keypoints(image, key_pts):
"""Show image with keypoints"""
plt.imshow(image)
plt.scatter(key_pts[:, 0], key_pts[:, 1], s=20, marker='.', c='m')
# Display a few different types of images by changing the index n
# select an image by index in our data frame
n = 0
image_name = key_pts_frame.iloc[n, 0]
key_pts = key_pts_frame.iloc[n, 1:].values  # .as_matrix() was removed in newer pandas
key_pts = key_pts.astype('float').reshape(-1, 2)
plt.figure(figsize=(5, 5))
image_show = mpimg.imread(os.path.join('data/training/', image_name));
show_keypoints(image_show, key_pts)
plt.show()
print("image shape: ", image_show.shape)
```
## Dataset class and Transformations
To prepare our data for training, we'll be using PyTorch's Dataset class. Much of this code is a modified version of what can be found in the [PyTorch data loading tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html).
#### Dataset class
``torch.utils.data.Dataset`` is an abstract class representing a
dataset. This class will allow us to load batches of image/keypoint data, and uniformly apply transformations to our data, such as rescaling and normalizing images for training a neural network.
Your custom dataset should inherit ``Dataset`` and override the following
methods:
- ``__len__`` so that ``len(dataset)`` returns the size of the dataset.
- ``__getitem__`` to support the indexing such that ``dataset[i]`` can
be used to get the i-th sample of image/keypoint data.
Let's create a dataset class for our face keypoints dataset. We will
read the CSV file in ``__init__`` but leave the reading of images to
``__getitem__``. This is memory efficient because all the images are not
stored in the memory at once but read as required.
A sample of our dataset will be a dictionary
``{'image': image, 'keypoints': key_pts}``. Our dataset will take an
optional argument ``transform`` so that any required processing can be
applied on the sample. We will see the usefulness of ``transform`` in the
next section.
```
from torch.utils.data import Dataset, DataLoader
class FacialKeypointsDataset(Dataset):
"""Face Landmarks dataset."""
def __init__(self, csv_file, root_dir, transform=None):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.key_pts_frame = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(self.key_pts_frame)
def __getitem__(self, idx):
image_name = os.path.join(self.root_dir,
self.key_pts_frame.iloc[idx, 0])
image = mpimg.imread(image_name)
# if image has an alpha color channel, get rid of it
if(image.shape[2] == 4):
image = image[:,:,0:3]
key_pts = self.key_pts_frame.iloc[idx, 1:].values  # .as_matrix() was removed in newer pandas
key_pts = key_pts.astype('float').reshape(-1, 2)
sample = {'image': image, 'keypoints': key_pts}
if self.transform:
sample = self.transform(sample)
return sample
```
Now that we've defined this class, let's instantiate the dataset and display some images.
```
# Construct the dataset
face_dataset = FacialKeypointsDataset(csv_file='data/training_frames_keypoints.csv',
root_dir='data/training/')
# print some stats about the dataset
print('Length of dataset: ', len(face_dataset))
# Display a few of the images from the dataset
num_to_display = 3
for i in range(num_to_display):
# define the size of images
fig = plt.figure(figsize=(20,10))
# randomly select a sample
rand_i = np.random.randint(0, len(face_dataset))
sample = face_dataset[rand_i]
# print the shape of the image and keypoints
print(i, sample['image'].shape, sample['keypoints'].shape)
ax = plt.subplot(1, num_to_display, i + 1)
ax.set_title('Sample #{}'.format(i))
# Using the same display function, defined earlier
show_keypoints(sample['image'], sample['keypoints'])
```
## Transforms
Now, the images above are not of the same size, and neural networks often expect standardized images: a fixed size, a normalized range for color values and coordinates, and (for PyTorch) conversion from numpy arrays to Tensors.
Therefore, we will need to write some pre-processing code.
Let's create four transforms:
- ``Normalize``: to convert a color image to grayscale values with a range of [0,1] and normalize the keypoints to be in a range of about [-1, 1]
- ``Rescale``: to rescale an image to a desired size.
- ``RandomCrop``: to crop an image randomly.
- ``ToTensor``: to convert numpy images to torch images.
We will write them as callable classes instead of simple functions so
that the parameters of a transform need not be passed every time it is
called. For this, we just need to implement the ``__call__`` method and,
if the transform requires parameters, the ``__init__`` method.
We can then use a transform like this:
tx = Transform(params)
transformed_sample = tx(sample)
Observe below how these transforms are generally applied to both the image and its keypoints.
```
import torch
from torchvision import transforms, utils
# transforms
class Normalize(object):
"""Convert a color image to grayscale and normalize the color range to [0,1]."""
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
image_copy = np.copy(image)
key_pts_copy = np.copy(key_pts)
# convert image to grayscale
image_copy = cv2.cvtColor(image_copy, cv2.COLOR_RGB2GRAY)
# scale color range from [0, 255] to [0, 1]
image_copy= image_copy/255.0
# scale keypoints to be centered around 0 with a range of [-1, 1]
# assumed mean = 100, std = 50, so pts should be (pts - 100)/50
key_pts_copy = (key_pts_copy - 100)/50.0
return {'image': image_copy, 'keypoints': key_pts_copy}
class Rescale(object):
"""Rescale the image in a sample to a given size.
Args:
output_size (tuple or int): Desired output size. If tuple, output is
matched to output_size. If int, smaller of image edges is matched
to output_size keeping aspect ratio the same.
"""
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
self.output_size = output_size
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
h, w = image.shape[:2]
if isinstance(self.output_size, int):
if h > w:
new_h, new_w = self.output_size * h / w, self.output_size
else:
new_h, new_w = self.output_size, self.output_size * w / h
else:
new_h, new_w = self.output_size
new_h, new_w = int(new_h), int(new_w)
img = cv2.resize(image, (new_w, new_h))
# scale the pts, too
key_pts = key_pts * [new_w / w, new_h / h]
return {'image': img, 'keypoints': key_pts}
class RandomCrop(object):
"""Crop randomly the image in a sample.
Args:
output_size (tuple or int): Desired output size. If int, square crop
is made.
"""
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
if isinstance(output_size, int):
self.output_size = (output_size, output_size)
else:
assert len(output_size) == 2
self.output_size = output_size
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
h, w = image.shape[:2]
new_h, new_w = self.output_size
top = np.random.randint(0, h - new_h)
left = np.random.randint(0, w - new_w)
image = image[top: top + new_h,
left: left + new_w]
key_pts = key_pts - [left, top]
return {'image': image, 'keypoints': key_pts}
class ToTensor(object):
"""Convert ndarrays in sample to Tensors."""
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
# if image has no grayscale color channel, add one
if(len(image.shape) == 2):
# add that third color dim
image = image.reshape(image.shape[0], image.shape[1], 1)
# swap color axis because
# numpy image: H x W x C
# torch image: C X H X W
image = image.transpose((2, 0, 1))
return {'image': torch.from_numpy(image),
'keypoints': torch.from_numpy(key_pts)}
```
## Test out the transforms
Let's test these transforms out to make sure they behave as expected. As you look at each transform, note that, in this case, **order does matter**. For example, you cannot crop an image to a size larger than the original image (and the original images vary in size!), but if you first rescale the original image, you can then crop it to any size smaller than the rescaled size.
```
# test out some of these transforms
rescale = Rescale(100)
crop = RandomCrop(50)
composed = transforms.Compose([Rescale(250),
RandomCrop(224)])
# apply the transforms to a sample image
test_num = 500
sample = face_dataset[test_num]
fig = plt.figure()
for i, tx in enumerate([rescale, crop, composed]):
transformed_sample = tx(sample)
ax = plt.subplot(1, 3, i + 1)
plt.tight_layout()
ax.set_title(type(tx).__name__)
show_keypoints(transformed_sample['image'], transformed_sample['keypoints'])
plt.show()
```
## Create the transformed dataset
Apply the transforms in order to get grayscale images of the same shape. Verify that your transform works by printing out the shape of the resulting data (printing out a few examples should show you a consistent tensor size).
```
# define the data transform
# order matters! i.e. rescaling should come before a smaller crop
data_transform = transforms.Compose([Rescale(250),
RandomCrop(224),
Normalize(),
ToTensor()])
# create the transformed dataset
transformed_dataset = FacialKeypointsDataset(csv_file='data/training_frames_keypoints.csv',
root_dir='data/training/',
transform=data_transform)
# print some stats about the transformed data
print('Number of images: ', len(transformed_dataset))
# make sure the sample tensors are the expected size
for i in range(5):
    sample = transformed_dataset[i]
    print(i, sample['image'].size(), sample['keypoints'].size())
```
## Data Iteration and Batching
Right now, we are iterating over this data using a ``for`` loop, but we are missing out on a lot of PyTorch's dataset capabilities, specifically the abilities to:
- Batch the data
- Shuffle the data
- Load the data in parallel using ``multiprocessing`` workers.
``torch.utils.data.DataLoader`` is an iterator which provides all these
features, and we'll see this in use in the *next* notebook, Notebook 2, when we load data in batches to train a neural network!
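Until then, the batching and shuffling that `DataLoader` automates can be sketched in plain Python (a toy illustration of the behavior, not the real PyTorch API):

```python
import random

def simple_batches(data, batch_size, shuffle=True, seed=0):
    """Minimal stand-in for what torch.utils.data.DataLoader automates:
    shuffle the indices, then yield fixed-size batches of samples."""
    idx = list(range(len(data)))
    if shuffle:
        random.Random(seed).shuffle(idx)
    for i in range(0, len(idx), batch_size):
        yield [data[j] for j in idx[i:i + batch_size]]

samples = list(range(10))
batches = list(simple_batches(samples, batch_size=4))
print([len(b) for b in batches])  # [4, 4, 2]
```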
---
## Ready to Train!
Now that you've seen how to load and transform our data, you're ready to build a neural network to train on this data.
In the next notebook, you'll be tasked with creating a CNN for facial keypoint detection.
# Simple Use Cases
Simulus is a discrete-event simulator in Python. This document demonstrates how to run simulus through a few examples. It is not a tutorial; for that, use [Simulus Tutorial](simulus-tutorial.ipynb). All the examples shown in this guide can be found under the `examples/demos` directory in the simulus source-code distribution.
It's really simple to install simulus. Assuming you have installed pip, you can simply do the following to install simulus:
```
pip install simulus
```
If you don't have administrative privilege to install packages on your machine, you can install it in the per-user managed location using:
```
pip install --user simulus
```
If all are fine at this point, you can simply import the module 'simulus' to start using the simulator.
```
import simulus
```
### Use Case #1: Direct Event Scheduling
One can schedule functions to be executed at designated simulation time. The functions in this case are called event handlers (using the discrete-event simulation terminology).
```
# %load "../examples/demos/case-1.py"
import simulus
# An event handler is a user-defined function; in this case, we take
# one positional argument 'sim', and place all keyworded arguments in
# the dictionary 'params'
def myfunc(sim, **params):
    print(str(sim.now) + ": myfunc() runs with params=" + str(params))

    # schedule the next event 10 seconds from now
    sim.sched(myfunc, sim, **params, offset=10)
# create an anonymous simulator
sim1 = simulus.simulator()
# schedule the first event at 10 seconds
sim1.sched(myfunc, sim1, until=10, msg="hello world", value=100)
# advance simulation until 100 seconds
sim1.run(until=100)
print("simulator.run() ends at " + str(sim1.now))
# we can advance simulation for another 50 seconds
sim1.run(offset=50)
print("simulator.run() ends at " + str(sim1.now))
```
### Use Case #2: Simulation Process
A simulation process is an independent thread of execution. A process can be blocked and therefore advances its simulation time either by sleeping for some duration of time or by being blocked from synchronization primitives (such as semaphores).
```
# %load "../examples/demos/case-2.py"
import simulus
# A process in simulus is a Python function; the first parameter is
# the simulator, and the remaining parameters ('intv' and 'id' here)
# are user-defined arguments for the process
def myproc(sim, intv, id):
    print(str(sim.now) + ": myproc(%d) runs with intv=%r" % (id, intv))
    while True:
        # suspend the process for some time
        sim.sleep(intv)
        print(str(sim.now) + ": myproc(%d) resumes execution" % id)
# create an anonymous simulator
sim2 = simulus.simulator()
# start a process 100 seconds from now
sim2.process(myproc, sim2, 10, 0, offset=100)
# start another process 5 seconds from now
sim2.process(myproc, sim2, 20, 1, offset=5)
# advance simulation until 200 seconds
sim2.run(until=200)
print("simulator.run() ends at " + str(sim2.now))
sim2.run(offset=50)
print("simulator.run() ends at " + str(sim2.now))
```
### Use Case #3: Process Synchronization with Semaphores
We illustrate the use of semaphore in the context of a classic producer-consumer problem. We are simulating a single-server queue (M/M/1) here.
```
# %load "../examples/demos/case-3.py"
import simulus
from random import seed, expovariate
from statistics import mean, median, stdev
# make it repeatable
seed(12345)
# configuration of the single server queue: the mean inter-arrival
# time, and the mean service time
cfg = {"mean_iat":1, "mean_svc":0.8}
# keep the time of job arrivals, starting services, and departures
arrivals = []
starts = []
finishes = []
# the producer process waits for some random time from an
# exponential distribution, and increments the semaphore
# to represent a new item being produced, and then repeats
def producer(sim, mean_iat, sem):
    while True:
        iat = expovariate(1.0/mean_iat)
        sim.sleep(iat)
        #print("%g: job arrives (iat=%g)" % (sim.now, iat))
        arrivals.append(sim.now)
        sem.signal()
# the consumer process waits for the semaphore (it decrements
# the value and blocks if the value is non-positive), waits for
# some random time from another exponential distribution, and
# then repeats
def consumer(sim, mean_svc, sem):
    while True:
        sem.wait()
        #print("%g: job starts service" % sim.now)
        starts.append(sim.now)
        svc = expovariate(1.0/mean_svc)
        sim.sleep(svc)
        #print("%g: job departs (svc=%g)" % (sim.now, svc))
        finishes.append(sim.now)
# create an anonymous simulator
sim3 = simulus.simulator()
# create a semaphore with initial value of zero
sem = sim3.semaphore(0)
# start the producer and consumer processes
sim3.process(producer, sim3, cfg['mean_iat'], sem)
sim3.process(consumer, sim3, cfg['mean_svc'], sem)
# advance simulation until 1000 seconds
sim3.run(until=1000)
print("simulator.run() ends at " + str(sim3.now))
# calculate and output statistics
print(f'Results: jobs=arrivals:{len(arrivals)}, starts:{len(starts)}, finishes:{len(finishes)}')
waits = [start - arrival for arrival, start in zip(arrivals, starts)]
totals = [finish - arrival for arrival, finish in zip(arrivals, finishes)]
print(f'Wait Time: mean={mean(waits):.1f}, stdev={stdev(waits):.1f}, median={median(waits):.1f}, max={max(waits):.1f}')
print(f'Total Time: mean={mean(totals):.1f}, stdev={stdev(totals):.1f}, median={median(totals):.1f}, max={max(totals):.1f}')
my_lambda = 1.0/cfg['mean_iat'] # mean arrival rate
my_mu = 1.0/cfg['mean_svc'] # mean service rate
my_rho = my_lambda/my_mu # server utilization
my_lq = my_rho*my_rho/(1-my_rho) # number in queue
my_wq = my_lq/my_lambda # wait in queue
my_w = my_wq+1/my_mu # wait in system
print(f'Theoretical Results: mean wait time = {my_wq:.1f}, mean total time = {my_w:.1f}')
```
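As a sanity check, the theoretical M/M/1 quantities computed at the end of the script can be evaluated on their own (plain Python, no simulus required):

```python
mean_iat, mean_svc = 1.0, 0.8   # same configuration as the script above
lam = 1.0 / mean_iat            # mean arrival rate (lambda)
mu = 1.0 / mean_svc             # mean service rate (mu)
rho = lam / mu                  # server utilization, 0.8
lq = rho * rho / (1 - rho)      # mean number in queue, about 3.2
wq = lq / lam                   # mean wait in queue, about 3.2
w = wq + 1 / mu                 # mean time in system, about 4.0
print(round(wq, 1), round(w, 1))
```

The simulated wait and total times printed by the script should be close to these values, since the simulation runs long enough for the averages to settle.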
### Use Case #4: Dynamic Processes
We continue with the previous example. This time, rather than using semaphores, we achieve exactly the same results by dynamically creating processes.
```
# %load "../examples/demos/case-4.py"
import simulus
from random import seed, expovariate
from statistics import mean, median, stdev
# make it repeatable
seed(12345)
# configuration of the single server queue: the mean inter-arrival
# time, and the mean service time
cfg = {"mean_iat":1, "mean_svc":0.8}
# keep the time of job arrivals, starting services, and departures
arrivals = []
starts = []
finishes = []
# we keep count of the number of jobs in the system (those that have
# arrived but not yet departed); this indicates whether a consumer
# process is currently running: if the value is more than 1, we don't
# need to create a new consumer process
jobs_in_system = 0
# the producer process waits for some random time from an exponential
# distribution to represent a new item being produced, creates a
# consumer process when necessary to represent the item being
# consumed, and then repeats
def producer(sim, mean_iat, mean_svc):
    global jobs_in_system
    while True:
        iat = expovariate(1.0/mean_iat)
        sim.sleep(iat)
        #print("%g: job arrives (iat=%g)" % (sim.now, iat))
        arrivals.append(sim.now)
        jobs_in_system += 1
        if jobs_in_system <= 1:
            sim.process(consumer, sim, mean_svc)
# the consumer process keeps serving jobs while there are jobs in the
# system: for each job it sleeps for a random service time drawn from
# an exponential distribution, and it exits once the system is empty
def consumer(sim, mean_svc):
    global jobs_in_system
    while jobs_in_system > 0:
        #print("%g: job starts service" % sim.now)
        starts.append(sim.now)
        svc = expovariate(1.0/mean_svc)
        sim.sleep(svc)
        #print("%g: job departs (svc=%g)" % (sim.now, svc))
        finishes.append(sim.now)
        jobs_in_system -= 1
# create an anonymous simulator
sim3 = simulus.simulator()
# start the producer process only
sim3.process(producer, sim3, cfg['mean_iat'], cfg['mean_svc'])
# advance simulation until 1000 seconds
sim3.run(until=1000)
print("simulator.run() ends at " + str(sim3.now))
# calculate and output statistics
print(f'Results: jobs=arrivals:{len(arrivals)}, starts:{len(starts)}, finishes:{len(finishes)}')
waits = [start - arrival for arrival, start in zip(arrivals, starts)]
totals = [finish - arrival for arrival, finish in zip(arrivals, finishes)]
print(f'Wait Time: mean={mean(waits):.1f}, stdev={stdev(waits):.1f}, median={median(waits):.1f}, max={max(waits):.1f}')
print(f'Total Time: mean={mean(totals):.1f}, stdev={stdev(totals):.1f}, median={median(totals):.1f}, max={max(totals):.1f}')
my_lambda = 1.0/cfg['mean_iat'] # mean arrival rate
my_mu = 1.0/cfg['mean_svc'] # mean service rate
my_rho = my_lambda/my_mu # server utilization
my_lq = my_rho*my_rho/(1-my_rho) # number in queue
my_wq = my_lq/my_lambda # wait in queue
my_w = my_wq+1/my_mu # wait in system
print(f'Theoretical Results: mean wait time = {my_wq:.1f}, mean total time = {my_w:.1f}')
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Custom training with tf.distribute.Strategy
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/custom_training"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/distribute/custom_training.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/distribute/custom_training.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/distribute/custom_training.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial demonstrates how to use [`tf.distribute.Strategy`](https://www.tensorflow.org/guide/distributed_training) with custom training loops. We will train a simple CNN model on the fashion MNIST dataset, which contains 60,000 training images and 10,000 test images, each of size 28 x 28.
We are using custom training loops to train our model because they give us flexibility and greater control over training. They also make it easier to debug the model and the training loop.
```
from __future__ import absolute_import, division, print_function, unicode_literals
# Import TensorFlow
try:
  # %tensorflow_version only exists in Colab.
  %tensorflow_version 2.x
except Exception:
  pass
import tensorflow as tf
# Helper libraries
import numpy as np
import os
print(tf.__version__)
```
## Download the fashion MNIST dataset
```
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# Adding a dimension to the array -> new shape == (28, 28, 1)
# We are doing this because the first layer in our model is a convolutional
# layer and it requires a 4D input (batch_size, height, width, channels).
# batch_size dimension will be added later on.
train_images = train_images[..., None]
test_images = test_images[..., None]
# Getting the images in [0, 1] range.
train_images = train_images / np.float32(255)
test_images = test_images / np.float32(255)
```
## Create a strategy to distribute the variables and the graph
How does `tf.distribute.MirroredStrategy` strategy work?
* All the variables and the model graph is replicated on the replicas.
* Input is evenly distributed across the replicas.
* Each replica calculates the loss and gradients for the input it received.
* The gradients are synced across all the replicas by summing them.
* After the sync, the same update is made to the copies of the variables on each replica.
Note: You can put all the code below inside a single scope. We are dividing it into several code cells for illustration purposes.
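The sync-and-update rule in the list above can be sketched with plain numpy. This is a toy illustration of what `MirroredStrategy` does conceptually, not the real `tf.distribute` API:

```python
import numpy as np

# Two replicas hold identical copies of a variable.
replica_vars = [np.array([1.0, 2.0]) for _ in range(2)]

# Each replica computes a gradient on its own shard of the input.
replica_grads = [np.array([0.1, 0.2]), np.array([0.3, 0.4])]

# Gradients are synced across replicas by summing them (all-reduce),
# then the same update is applied to every copy of the variable.
summed = np.sum(replica_grads, axis=0)
lr = 0.5
replica_vars = [v - lr * summed for v in replica_vars]

print(replica_vars[0])  # [0.8 1.7]; all copies stay identical
```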
```
# If the list of devices is not specified in the
# `tf.distribute.MirroredStrategy` constructor, it will be auto-detected.
strategy = tf.distribute.MirroredStrategy()
print ('Number of devices: {}'.format(strategy.num_replicas_in_sync))
```
## Setup input pipeline
Set up the constants for the input pipeline. The global batch size is the per-replica batch size multiplied by the number of replicas in sync.
```
BUFFER_SIZE = len(train_images)
BATCH_SIZE_PER_REPLICA = 64
GLOBAL_BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync
EPOCHS = 10
```
Create the datasets and distribute them:
```
train_dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).shuffle(BUFFER_SIZE).batch(GLOBAL_BATCH_SIZE)
test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(GLOBAL_BATCH_SIZE)
train_dist_dataset = strategy.experimental_distribute_dataset(train_dataset)
test_dist_dataset = strategy.experimental_distribute_dataset(test_dataset)
```
## Create the model
Create a model using `tf.keras.Sequential`. You can also use the Model Subclassing API to do this.
```
def create_model():
  model = tf.keras.Sequential([
      tf.keras.layers.Conv2D(32, 3, activation='relu'),
      tf.keras.layers.MaxPooling2D(),
      tf.keras.layers.Conv2D(64, 3, activation='relu'),
      tf.keras.layers.MaxPooling2D(),
      tf.keras.layers.Flatten(),
      tf.keras.layers.Dense(64, activation='relu'),
      tf.keras.layers.Dense(10)
    ])

  return model
# Create a checkpoint directory to store the checkpoints.
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
```
## Define the loss function
Normally, on a single machine with 1 GPU/CPU, loss is divided by the number of examples in the batch of input.
*So, how should the loss be calculated when using a `tf.distribute.Strategy`?*
* For example, let's say you have 4 GPUs and a batch size of 64. One batch of input is distributed across the replicas (4 GPUs), each replica getting an input of size 16.
* The model on each replica does a forward pass with its respective input and calculates the loss. Now, instead of dividing the loss by the number of examples in its respective input (BATCH_SIZE_PER_REPLICA = 16), the loss should be divided by the GLOBAL_BATCH_SIZE (64).
*Why do this?*
* This needs to be done because after the gradients are calculated on each replica, they are synced across the replicas by **summing** them.
*How to do this in TensorFlow?*
* If you're writing a custom training loop, as in this tutorial, you should sum the per example losses and divide the sum by the GLOBAL_BATCH_SIZE:
`scale_loss = tf.reduce_sum(loss) * (1. / GLOBAL_BATCH_SIZE)`
or you can use `tf.nn.compute_average_loss` which takes the per example loss,
optional sample weights, and GLOBAL_BATCH_SIZE as arguments and returns the scaled loss.
* If you are using regularization losses in your model then you need to scale
the loss value by number of replicas. You can do this by using the `tf.nn.scale_regularization_loss` function.
* Using `tf.reduce_mean` is not recommended. Doing so divides the loss by actual per replica batch size which may vary step to step.
* This reduction and scaling is done automatically in Keras with `model.compile` and `model.fit`.
* If using `tf.keras.losses` classes (as in the example below), the loss reduction needs to be explicitly specified to be one of `NONE` or `SUM`. `AUTO` and `SUM_OVER_BATCH_SIZE` are disallowed when used with `tf.distribute.Strategy`. `AUTO` is disallowed because the user should explicitly think about what reduction they want to make sure it is correct in the distributed case. `SUM_OVER_BATCH_SIZE` is disallowed because currently it would only divide by per replica batch size, and leave the dividing by number of replicas to the user, which might be easy to miss. So instead we ask the user do the reduction themselves explicitly.
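The scaling rule can be checked numerically with plain numpy (a sketch of what `tf.nn.compute_average_loss` does, not the TensorFlow API itself):

```python
import numpy as np

# Per-example losses on each of two replicas; global batch size is 4.
replica_losses = [np.array([2.0, 3.0]), np.array([4.0, 5.0])]
GLOBAL_BATCH_SIZE = 4

# On each replica, sum the per-example losses and divide by the
# GLOBAL batch size (not the per-replica batch size)...
per_replica = [float(l.sum()) / GLOBAL_BATCH_SIZE for l in replica_losses]

# ...so that summing across replicas recovers the true global mean loss.
total = sum(per_replica)

print(per_replica)  # [1.25, 2.25]
print(total)        # 3.5, the mean of all four example losses
```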
```
with strategy.scope():
  # Set reduction to `none` so we can do the reduction afterwards and divide by
  # global batch size.
  loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
      from_logits=True,
      reduction=tf.keras.losses.Reduction.NONE)

  def compute_loss(labels, predictions):
    per_example_loss = loss_object(labels, predictions)
    return tf.nn.compute_average_loss(per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)
```
## Define the metrics to track loss and accuracy
These metrics track the test loss and training and test accuracy. You can use `.result()` to get the accumulated statistics at any time.
```
with strategy.scope():
  test_loss = tf.keras.metrics.Mean(name='test_loss')

  train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
      name='train_accuracy')
  test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
      name='test_accuracy')
```
## Training loop
```
# model and optimizer must be created under `strategy.scope`.
with strategy.scope():
  model = create_model()

  optimizer = tf.keras.optimizers.Adam()

  checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)

with strategy.scope():
  def train_step(inputs):
    images, labels = inputs

    with tf.GradientTape() as tape:
      predictions = model(images, training=True)
      loss = compute_loss(labels, predictions)

    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    train_accuracy.update_state(labels, predictions)
    return loss

  def test_step(inputs):
    images, labels = inputs

    predictions = model(images, training=False)
    t_loss = loss_object(labels, predictions)

    test_loss.update_state(t_loss)
    test_accuracy.update_state(labels, predictions)

with strategy.scope():
  # `experimental_run_v2` replicates the provided computation and runs it
  # with the distributed input.
  @tf.function
  def distributed_train_step(dataset_inputs):
    per_replica_losses = strategy.experimental_run_v2(train_step,
                                                      args=(dataset_inputs,))
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses,
                           axis=None)

  @tf.function
  def distributed_test_step(dataset_inputs):
    return strategy.experimental_run_v2(test_step, args=(dataset_inputs,))

  for epoch in range(EPOCHS):
    # TRAIN LOOP
    total_loss = 0.0
    num_batches = 0
    for x in train_dist_dataset:
      total_loss += distributed_train_step(x)
      num_batches += 1
    train_loss = total_loss / num_batches

    # TEST LOOP
    for x in test_dist_dataset:
      distributed_test_step(x)

    if epoch % 2 == 0:
      checkpoint.save(checkpoint_prefix)

    template = ("Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, "
                "Test Accuracy: {}")
    print (template.format(epoch+1, train_loss,
                           train_accuracy.result()*100, test_loss.result(),
                           test_accuracy.result()*100))

    test_loss.reset_states()
    train_accuracy.reset_states()
    test_accuracy.reset_states()
```
Things to note in the example above:
* We are iterating over the `train_dist_dataset` and `test_dist_dataset` using a `for x in ...` construct.
* The scaled loss is the return value of the `distributed_train_step`. This value is aggregated across replicas using the `tf.distribute.Strategy.reduce` call and then across batches by summing the return value of the `tf.distribute.Strategy.reduce` calls.
* `tf.keras.Metrics` should be updated inside `train_step` and `test_step` that gets executed by `tf.distribute.Strategy.experimental_run_v2`.
* `tf.distribute.Strategy.experimental_run_v2` returns results from each local replica in the strategy, and there are multiple ways to consume this result. You can do `tf.distribute.Strategy.reduce` to get an aggregated value. You can also do `tf.distribute.Strategy.experimental_local_results` to get the list of values contained in the result, one per local replica.
## Restore the latest checkpoint and test
A model checkpointed with a `tf.distribute.Strategy` can be restored with or without a strategy.
```
eval_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
    name='eval_accuracy')

new_model = create_model()
new_optimizer = tf.keras.optimizers.Adam()

test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(GLOBAL_BATCH_SIZE)

@tf.function
def eval_step(images, labels):
  predictions = new_model(images, training=False)
  eval_accuracy(labels, predictions)

checkpoint = tf.train.Checkpoint(optimizer=new_optimizer, model=new_model)
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))

for images, labels in test_dataset:
  eval_step(images, labels)

print ('Accuracy after restoring the saved model without strategy: {}'.format(
    eval_accuracy.result()*100))
```
## Alternate ways of iterating over a dataset
### Using iterators
If you want to iterate over a given number of steps and not through the entire dataset, you can create an iterator using the `iter` call and explicitly call `next` on the iterator. You can choose to iterate over the dataset both inside and outside the tf.function. Here is a small snippet demonstrating iteration of the dataset outside the tf.function using an iterator.
```
with strategy.scope():
  for epoch in range(EPOCHS):
    total_loss = 0.0
    num_batches = 0
    train_iter = iter(train_dist_dataset)

    for _ in range(10):
      total_loss += distributed_train_step(next(train_iter))
      num_batches += 1
    average_train_loss = total_loss / num_batches

    template = ("Epoch {}, Loss: {}, Accuracy: {}")
    print (template.format(epoch+1, average_train_loss, train_accuracy.result()*100))
    train_accuracy.reset_states()
```
### Iterating inside a tf.function
You can also iterate over the entire input `train_dist_dataset` inside a tf.function using the `for x in ...` construct or by creating iterators like we did above. The example below demonstrates wrapping one epoch of training in a tf.function and iterating over `train_dist_dataset` inside the function.
```
with strategy.scope():
  @tf.function
  def distributed_train_epoch(dataset):
    total_loss = 0.0
    num_batches = 0
    for x in dataset:
      per_replica_losses = strategy.experimental_run_v2(train_step,
                                                        args=(x,))
      total_loss += strategy.reduce(
          tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
      num_batches += 1
    return total_loss / tf.cast(num_batches, dtype=tf.float32)

  for epoch in range(EPOCHS):
    train_loss = distributed_train_epoch(train_dist_dataset)

    template = ("Epoch {}, Loss: {}, Accuracy: {}")
    print (template.format(epoch+1, train_loss, train_accuracy.result()*100))
    train_accuracy.reset_states()
```
### Tracking training loss across replicas
Note: As a general rule, you should use `tf.keras.Metrics` to track per-sample values and avoid values that have been aggregated within a replica.
We do *not* recommend using `tf.metrics.Mean` to track the training loss across different replicas, because of the loss scaling computation that is carried out.
For example, if you run a training job with the following characteristics:
* Two replicas
* Two samples are processed on each replica
* Resulting loss values: [2, 3] and [4, 5] on each replica
* Global batch size = 4
With loss scaling, you calculate the per-sample value of loss on each replica by adding the loss values, and then dividing by the global batch size. In this case: `(2 + 3) / 4 = 1.25` and `(4 + 5) / 4 = 2.25`.
If you use `tf.metrics.Mean` to track loss across the two replicas, the result is different. In this example, you end up with a `total` of 3.50 and `count` of 2, which results in `total`/`count` = 1.75 when `result()` is called on the metric. Loss calculated with `tf.keras.Metrics` is scaled by an additional factor that is equal to the number of replicas in sync.
### Guide and examples
Here are some examples for using distribution strategy with custom training loops:
1. [Distributed training guide](../../guide/distributed_training)
2. [DenseNet](https://github.com/tensorflow/examples/blob/master/tensorflow_examples/models/densenet/distributed_train.py) example using `MirroredStrategy`.
1. [BERT](https://github.com/tensorflow/models/blob/master/official/nlp/bert/run_classifier.py) example trained using `MirroredStrategy` and `TPUStrategy`.
This example is particularly helpful for understanding how to load from a checkpoint and generate periodic checkpoints during distributed training etc.
2. [NCF](https://github.com/tensorflow/models/blob/master/official/recommendation/ncf_keras_main.py) example trained using `MirroredStrategy` that can be enabled using the `keras_use_ctl` flag.
3. [NMT](https://github.com/tensorflow/examples/blob/master/tensorflow_examples/models/nmt_with_attention/distributed_train.py) example trained using `MirroredStrategy`.
More examples listed in the [Distribution strategy guide](../../guide/distributed_training.ipynb#examples_and_tutorials)
## Next steps
Try out the new `tf.distribute.Strategy` API on your models.
<a href="https://colab.research.google.com/github/hBar2013/DS-Unit-1-Sprint-4-Statistical-Tests-and-Experiments/blob/master/module2-intermediate-linear-algebra/Kim_Lowry_Intermediate_Linear_Algebra_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Statistics
```
import numpy as np
```
## 1.1 Sales for the past week were the following amounts: [3505, 2400, 3027, 2798, 3700, 3250, 2689]. Without using library functions, what is the mean, variance, and standard deviation of sales from last week? (for extra bonus points, write your own function that can calculate these values for any sized list)
```
sales = np.array([3505, 2400, 3027, 2798, 3700, 3250, 2689])
length = len(sales)
def mean_var_stdev(data):
    # population mean, variance, and standard deviation
    data_mean = sum(data) / len(data)
    data_var = sum((data - data_mean)**2) / len(data)
    data_stdev = data_var ** 0.5
    return data_mean, data_var, data_stdev
mean_var_stdev(sales)
```
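As a cross-check, the hand-rolled formulas can be compared with numpy's built-ins (`np.var` and `np.std` default to the population versions used here, `ddof=0`):

```python
import numpy as np

sales = np.array([3505, 2400, 3027, 2798, 3700, 3250, 2689])

sales_mean = sales.sum() / len(sales)
sales_var = ((sales - sales_mean) ** 2).sum() / len(sales)  # population variance
sales_std = sales_var ** 0.5

# the manual results should match numpy exactly
assert np.isclose(sales_mean, sales.mean())
assert np.isclose(sales_var, sales.var())
assert np.isclose(sales_std, sales.std())
print(round(sales_mean, 2), round(sales_std, 2))
```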
## 1.2 Find the covariance between last week's sales numbers and the number of customers that entered the store last week: [127, 80, 105, 92, 120, 115, 93] (you may use library functions for calculating the covariance since we didn't specifically talk about its formula)
```
customers = np.array([127, 80, 105, 92, 120, 115, 93])
cov_sc = np.cov(sales, customers)
cov_sc
```
## 1.3 Find the standard deviation of customers who entered the store last week. Then, use the standard deviations of both sales and customers to standardize the covariance to find the correlation coefficient that summarizes the relationship between sales and customers. (You may use library functions to check your work.)
```
length = len(customers)
mean_var_stdev(customers)
corr_sc = np.corrcoef(sales, customers)
corr_sc
```
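The "standardize the covariance" step can be made explicit: divide the sample covariance by the product of the two sample standard deviations. A numpy sketch of that calculation:

```python
import numpy as np

sales = np.array([3505, 2400, 3027, 2798, 3700, 3250, 2689])
customers = np.array([127, 80, 105, 92, 120, 115, 93])

# sample covariance (np.cov uses ddof=1), standardized by both
# sample standard deviations, gives the correlation coefficient
cov = np.cov(sales, customers)[0, 1]
corr = cov / (sales.std(ddof=1) * customers.std(ddof=1))

# this matches numpy's correlation matrix entry
assert np.isclose(corr, np.corrcoef(sales, customers)[0, 1])
print(round(corr, 4))
```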
## 1.4 Use pandas to import a cleaned version of the titanic dataset from the following link: [Titanic Dataset](https://raw.githubusercontent.com/Geoyi/Cleaning-Titanic-Data/master/titanic_clean.csv)
## Calculate the variance-covariance matrix and correlation matrix for the titanic dataset's numeric columns. (you can encode some of the categorical variables and include them as a stretch goal if you finish early)
```
import pandas as pd
file_url = 'https://raw.githubusercontent.com/Geoyi/Cleaning-Titanic-Data/master/titanic_clean.csv'
titanic = pd.read_csv(file_url)
titanic.head()
titanic.cov()
titanic.corr()
```
# Orthogonality
## 2.1 Plot two vectors that are orthogonal to each other. What is a synonym for orthogonal?
```
import matplotlib.pyplot as plt
vector_1 = [2, 4]
vector_2 = [-2, 1]

# orthogonal (i.e., perpendicular) vectors have a dot product of zero
dp = np.dot(vector_1, vector_2)
dp
# Plot the Scaled Vectors
plt.arrow(0,0, vector_1[0], vector_1[1],head_width=.05, head_length=0.05, color ='red')
plt.arrow(0,0, vector_2[0], vector_2[1],head_width=.05, head_length=0.05, color ='green')
plt.xlim(-4,5)
plt.ylim(-4,5)
plt.title("Orthogonal Vectors")
plt.show()
```
## 2.2 Are the following vectors orthogonal? Why or why not?
\begin{align}
a = \begin{bmatrix} -5 \\ 3 \\ 7 \end{bmatrix}
\qquad
b = \begin{bmatrix} 6 \\ -8 \\ 2 \end{bmatrix}
\end{align}
```
vector_a = [-5,3,7]
vector_b = [6,-8,2]
ab_dp = np.dot(vector_a, vector_b)
ab_dp
```
Not orthogonal, because the dot product of the vectors is not zero.
## 2.3 Compute the following values: What do these quantities have in common?
## What is $||c||^2$?
## What is $c \cdot c$?
## What is $c^{T}c$?
\begin{align}
c = \begin{bmatrix} 2 & -15 & 6 & 20 \end{bmatrix}
\end{align}
```
from numpy import linalg as LA
c = np.array([2, -15, 6, 20])
dp_c = np.dot(c,c)
dp_c
norm_c = LA.norm(c)
norm_c
norm_c_sq = norm_c**2
norm_c_sq
cTxC = np.matmul(c.T,c)
cTxC
```
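These three quantities are all the same number: the squared length of $c$, since $\|c\|^2 = c \cdot c = c^{T}c$. A quick numerical check:

```python
import numpy as np

c = np.array([2, -15, 6, 20])

norm_sq = np.linalg.norm(c) ** 2   # ||c||^2 (up to floating-point error)
dot = np.dot(c, c)                 # c . c
quad = c.T @ c                     # c^T c

assert np.isclose(norm_sq, dot)
assert dot == quad == 4 + 225 + 36 + 400   # = 665
print(dot)
```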
# Unit Vectors
## 3.1 Using Latex, write the following vectors as a linear combination of scalars and unit vectors:
\begin{align}
d = \begin{bmatrix} 7 \\ 12 \end{bmatrix}
\qquad
e = \begin{bmatrix} 2 \\ 11 \\ -8 \end{bmatrix}
\end{align}
$\|d\| \approx 13.89 \qquad \|e\| \approx 13.75$
\begin{align}
\hat{d} = \begin{bmatrix} 0.50 \\ 0.86 \end{bmatrix}
= 0.50\begin{bmatrix} 1 \\ 0 \end{bmatrix} + 0.86\begin{bmatrix} 0 \\ 1 \end{bmatrix}
\end{align}
\begin{align}
\hat{e} = \begin{bmatrix} 0.15 \\ 0.80 \\ -0.58 \end{bmatrix}
= 0.15\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + 0.80\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} - 0.58\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
\end{align}
## 3.2 Turn vector $f$ into a unit vector:
\begin{align}
f = \begin{bmatrix} 4 & 12 & 11 & 9 & 2 \end{bmatrix}
\end{align}
```
f = np.array([4, 12, 11, 9, 2])
norm_f = LA.norm(f)
inv_norm_f = 1/norm_f
unit_f = np.multiply(inv_norm_f,f)
unit_f
```
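A quick check that the result really has length 1:

```python
import numpy as np

f = np.array([4, 12, 11, 9, 2])
unit_f = f / np.linalg.norm(f)

# a unit vector points in the same direction as f but has norm 1
assert np.isclose(np.linalg.norm(unit_f), 1.0)
print(unit_f.round(3))
```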
# Linear Independence / Dependence
## 4.1 Plot two vectors that are linearly dependent and two vectors that are linearly independent (bonus points if done in $\mathbb{R}^3$).
```
vector_1 = [2, 4]
vector_2 = [-2, 1]
plt.arrow(0,0, vector_1[0], vector_1[1],head_width=.05, head_length=0.05, color ='red')
plt.arrow(0,0, vector_2[0], vector_2[1],head_width=.05, head_length=0.05, color ='green')
plt.xlim(-4,5)
plt.ylim(-4,5)
plt.title("Linearly Independent")
plt.show()
vector_g = [1, 2]
vector_h = [4, 8]
plt.arrow(0,0, vector_g[0], vector_g[1],head_width=.05, head_length=0.05, color ='blue')
plt.arrow(0,0, vector_h[0], vector_h[1],head_width=.05, head_length=0.05, color ='orange')
plt.xlim(-1,10)
plt.ylim(-1,10)
plt.title("Linearly Dependent")
plt.show()
```
(If this plot looks off, check that the `plt.arrow` calls index `vector_g` and `vector_h` for the y-components, not `vector_1` and `vector_2`.)
# Span
## 5.1 What is the span of the following vectors?
\begin{align}
g = \begin{bmatrix} 1 & 2 \end{bmatrix}
\qquad
h = \begin{bmatrix} 4 & 8 \end{bmatrix}
\end{align}
The span is one-dimensional (a line): $h$ is just $g$ scaled by 4, so together they only span the line through $g$ (see the plot above).
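This can be confirmed numerically: $h$ is an exact multiple of $g$, so the matrix they form has rank 1.

```python
import numpy as np

g = np.array([1, 2])
h = np.array([4, 8])

assert np.array_equal(h, 4 * g)   # h is g scaled by 4
rank = np.linalg.matrix_rank(np.stack([g, h]))
print(rank)  # 1, so together they only span a line
```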
## 5.2 What is the span of $\{l, m, n\}$?
\begin{align}
l = \begin{bmatrix} 1 & 2 & 3 \end{bmatrix}
\qquad
m = \begin{bmatrix} -1 & 0 & 7 \end{bmatrix}
\qquad
n = \begin{bmatrix} 4 & 8 & 2\end{bmatrix}
\end{align}
The rank is 3, so the three vectors are linearly independent and span all of $\mathbb{R}^3$; none of them can be dropped without shrinking the solution space.
```
import numpy as np
from numpy import linalg as LA

M = np.array([[1, -1, 4],
              [2, 0, 8],
              [3, 7, 2]])
M_rank = LA.matrix_rank(M)
M_rank
```
# Basis
## 6.1 Graph two vectors that form a basis for $\mathbb{R}^2$
```
import matplotlib.pyplot as plt

vector_1 = [2, 4]
vector_2 = [-2, 1]
plt.arrow(0,0, vector_1[0], vector_1[1],head_width=.05, head_length=0.05, color ='red')
plt.arrow(0,0, vector_2[0], vector_2[1],head_width=.05, head_length=0.05, color ='green')
plt.xlim(-4,5)
plt.ylim(-4,5)
plt.title("Linearly Independent")
plt.show()
```
Two vectors form a basis for $\mathbb{R}^2$ when they are linearly independent: every point in the plane can then be written as a (unique) linear combination of them. In the case above, the two vectors even form an orthogonal basis, since their dot product is zero.
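Orthogonality of the two plotted vectors can be confirmed with a quick dot-product check (assuming NumPy, as elsewhere in this notebook):

```python
import numpy as np

v1 = np.array([2, 4])
v2 = np.array([-2, 1])
print(np.dot(v1, v2))  # 0 -> the two basis vectors are orthogonal
```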
## 6.2 What does it mean to form a basis?
To form a basis for a space means the vectors are linearly independent and span the space, so every vector in the space has a unique representation as a linear combination of the basis vectors (see 6.1 above).
# Rank
## 7.1 What is the Rank of P?
\begin{align}
P = \begin{bmatrix}
1 & 2 & 3 \\
-1 & 0 & 7 \\
4 & 8 & 2
\end{bmatrix}
\end{align}
```
import numpy as np
from numpy import linalg as LA

P = np.array([[1, 2, 3],
              [-1, 0, 7],
              [4, 8, 2]])   # last row matches the matrix P stated above
P_rank = LA.matrix_rank(P)
P_rank
```
## 7.2 What does the rank of a matrix tell us?
The rank of a matrix is the number of linearly independent rows (equivalently, columns). Here the rank is 3: $P$ cannot be row-reduced to fewer nonzero rows, and all three rows are needed to describe the solution space.
# Linear Projections
## 8.1 Line $L$ is formed by all of the vectors that can be created by scaling vector $v$
\begin{align}
v = \begin{bmatrix} 1 & 3 \end{bmatrix}
\end{align}
\begin{align}
w = \begin{bmatrix} -1 & 2 \end{bmatrix}
\end{align}
## find $proj_{L}(w)$
## graph your projected vector to check your work (make sure your axis are square/even)
```
import numpy as np
import matplotlib.pyplot as plt

# proj_L(w) = ((w . v) / (v . v)) v  -- projection of w onto the line spanned by v
v = np.array([1, 3])
w = np.array([-1, 2])
proj_w = (np.dot(w, v) / np.dot(v, v)) * v
print(proj_w)  # [0.5 1.5]

plt.arrow(0, 0, v[0], v[1], head_width=.05, head_length=0.05, color='red')
plt.arrow(0, 0, w[0], w[1], head_width=.05, head_length=0.05, color='green')
plt.arrow(0, 0, proj_w[0], proj_w[1], head_width=.05, head_length=0.05, color='blue')
plt.axis('square')   # keep the axes square so the projection looks right
plt.xlim(-2, 4)
plt.ylim(-2, 4)
plt.show()
```
# Stretch Goal
## For vectors that begin at the origin, the coordinates of where the vector ends can be interpreted as regular data points. (See 3Blue1Brown videos about Spans, Basis, etc.)
## Write a function that can calculate the linear projection of each point (x,y) (vector) onto the line y=x. run the function and plot the original points in blue and the new projected points on the line y=x in red.
## For extra points plot the orthogonal vectors as a dashed line from the original blue points to the projected red points.
```
import pandas as pd
import matplotlib.pyplot as plt
# Creating a dataframe for you to work with -Feel free to not use the dataframe if you don't want to.
x_values = [1, 4, 7, 3, 9, 4, 5 ]
y_values = [4, 2, 5, 0, 8, 2, 8]
data = {"x": x_values, "y": y_values}
df = pd.DataFrame(data)
df.head()
plt.scatter(df.x, df.y)
plt.show()
```
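A possible solution sketch for the stretch goal (the function name is my own): projecting $(x, y)$ onto the line $y=x$ gives $((x+y)/2,\ (x+y)/2)$, since the unit direction of the line is $(1, 1)/\sqrt{2}$.

```python
import numpy as np

def project_onto_y_equals_x(points):
    """Project each (x, y) point onto the line y = x.

    proj_u(p) = (p . u) u with u = (1, 1)/sqrt(2), which
    simplifies to ((x + y)/2, (x + y)/2).
    """
    pts = np.asarray(points, dtype=float)
    s = (pts[:, 0] + pts[:, 1]) / 2.0
    return np.column_stack([s, s])

projected = project_onto_y_equals_x([(1, 4), (4, 2), (7, 5)])
print(projected)
```

Plotting is then a matter of `plt.scatter` of the original points in blue, the projected points in red, and dashed `plt.plot` segments between each original/projected pair.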
---
```
%matplotlib inline
import adaptive
import matplotlib.pyplot as plt
import pycqed as pq
import numpy as np
from pycqed.measurement import measurement_control
import pycqed.measurement.detector_functions as det
from qcodes import station
station = station.Station()
```
## Setting up the mock device
Measurements are controlled through the `MeasurementControl`, usually instantiated as `MC`.
```
from pycqed.instrument_drivers.virtual_instruments.mock_device import Mock_Device
MC = measurement_control.MeasurementControl('MC',live_plot_enabled=True, verbose=True)
MC.station = station
station.add_component(MC)
mock_device = Mock_Device('mock_device')
mock_device.mw_pow(-20)
mock_device.res_freq(7.400023457e9)
mock_device.cw_noise_level(.0005)
mock_device.acq_delay(.05)
```
## Measuring a resonator using the conventional method
Points are chosen on a linspace of 100 points. This is enough to identify the location of the resonator.
```
freqs = np.linspace(7.39e9, 7.41e9, 100)
d = det.Function_Detector(mock_device.S21,value_names=['Magn', 'Phase'],
value_units=['V', 'deg'])
MC.set_sweep_function(mock_device.mw_freq)
MC.set_sweep_points(freqs)
MC.set_detector_function(d)
dat=MC.run('test')
```
## Using 1D adaptive sampler from the MC
This can also be done using an adaptive `Learner1D` object, choosing 100 points optimally in the interval.
```
mock_device.acq_delay(.05)
d = det.Function_Detector(mock_device.S21, value_names=['Magn', 'Phase'], value_units=['V', 'deg'])
MC.set_sweep_function(mock_device.mw_freq)
MC.set_detector_function(d)
MC.set_adaptive_function_parameters({'adaptive_function': adaptive.Learner1D,
'goal':lambda l: l.npoints>100,
'bounds':(7.39e9, 7.41e9)})
dat = MC.run(mode='adaptive')
from pycqed.analysis import measurement_analysis as ma
# ma.Homodyne_Analysis(close_fig=False, label='M')
```
## 2D Learner
The learner can also be used to adaptively sample a 2D/heatmap-type experiment.
However, we currently do not have an easy plotting function for that, so we still need to rely on the adaptive Learner's own plotting methods.
It would be great to have this working with a realtime pyqtgraph-based plotting window so that we can use this without the notebooks.
```
d = det.Function_Detector(mock_device.S21, value_names=['Magn', 'Phase'], value_units=['V', 'deg'])
MC.set_sweep_function(mock_device.mw_freq)
MC.set_sweep_function_2D(mock_device.mw_pow)
MC.set_detector_function(d)
MC.set_adaptive_function_parameters({'adaptive_function': adaptive.Learner2D,
'goal':lambda l: l.npoints>20*20,
'bounds':((7.398e9, 7.402e9),
(-20, -10))})
dat = MC.run(mode='adaptive')
# Required to be able to use the fancy interpolating plot
adaptive.notebook_extension()
MC.learner.plot(tri_alpha=.1)
```
---
```
import numpy as np
import pandas as pd
import time
import psutil
import matplotlib.pyplot as plt
# We create a very simple data set with 5 data items in it.
size= 5
# mu, sigma = 100, 5000 # mean and standard deviation
# error=np.random.normal(mu, sigma, size)
x1 = np.arange(0, size)
# x2 = np.arange(1, size)
x2 = np.arange(5, 5+size)
# y = 2.5*x1 + error
y1=2.5 * x1
y2 =-1 *x2
# y = 2*x1 + 10* x2
x = []
for i in range(size):
    x.append(np.array([x1[i], x2[i]]))
y = y1 + y2
print(x)
print(y)
import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D  # registers the 3D projection
mpl.rcParams['legend.fontsize'] = 10
fig = plt.figure()
ax = fig.gca(projection='3d')
x1 = np.arange(0, size)
# x2 = np.arange(1, size)
x2 = np.arange(5, 5+size)
ax.scatter3D(x1, x2, y, label='parametric curve')
ax.legend()
plt.show()
learningRate = 0.01
num_iteration = 300
# This is our regression coefficients.
beta=np.zeros(2)
n = float(size)
# print("Sample size", n)
# Let's start with main iterative part of gradient descent algorithm
for i in range(num_iteration):
    # Calculate the prediction with current regression coefficients.
    cost = 0
    m_gradient = 0
    for j in range(size):
        y_prediction = np.dot(beta, x[j])
        # We compute costs just for monitoring
        cost += (y[j] - y_prediction)**2
        # calculate gradients. sum the gradients for all rows
        m_gradient += x[j] * (y[j] - y_prediction)
    m_gradient = (-1.0/n) * m_gradient
    print(i, "beta = ", beta, " Cost=", cost)
    # update the weights - Regression Coefficients
    beta = beta - learningRate * m_gradient
x1 = np.arange(0, size)
# x2 = np.arange(1, size)
x2 = np.arange(5, 5+size)
x3 = np.arange(2, 2+size)
# y = 2.5*x1 + error
y1=2.5 * x1
y2 =-1 *x2
y3 = 1*x3
# y = 2*x1 + 10* x2
x = []
for i in range(size):
    x.append(np.array([x1[i], x2[i], x3[i]]))
y = y1+y2+y3
# plt.plot(x1, y, 'o', markersize=2)
# plt.show()
print(x)
# print(x2)
# print(error)
print(y)
learningRate = 0.01
num_iteration = 100
# Now we have 3 variables.
beta = np.zeros(3)
n = float(size)
# print("Sample size", n)
# Let's start with main iterative part of gradient descent algorithm
for i in range(num_iteration):
    # Calculate the prediction with current regression coefficients.
    cost = 0
    m_gradient = 0
    for j in range(size):
        y_prediction = np.dot(beta, x[j])
        # We compute costs just for monitoring
        cost += (y[j] - y_prediction)**2
        # calculate gradients.
        m_gradient += x[j] * (y[j] - y_prediction)
    m_gradient = (-1.0/n) * m_gradient
    print(i, "beta=", beta, " Cost=", cost)
    # update the weights - Regression Coefficients
    beta = beta - learningRate * m_gradient
```
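The per-row loops above can be vectorized with matrix operations; a sketch on the same two-feature data (same $-1/n$ gradient convention as above, with more iterations so the coefficients converge tightly):

```python
import numpy as np

size = 5
X = np.column_stack([np.arange(0, size), np.arange(5, 5 + size)])  # columns x1, x2
y = 2.5 * X[:, 0] - 1.0 * X[:, 1]                                  # same target as above

beta = np.zeros(2)
learningRate = 0.01
for _ in range(5000):
    residual = y - X @ beta                 # all predictions/residuals at once
    m_gradient = -(X.T @ residual) / size   # same gradient as the inner loop above
    beta = beta - learningRate * m_gradient

print(beta)  # approaches [2.5, -1.0]
```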
---
## Purpose: Get the stats for pitching per year (1876-2019).
```
# import dependencies.
import time
import pandas as pd
from splinter import Browser
from bs4 import BeautifulSoup as bs
!which chromedriver
# set up driver.
executable_path = {"executable_path": "/usr/local/bin/chromedriver"}
browser = Browser("chrome", **executable_path, headless=False)
# Grab the data into lists.
pitching_data = []
for year in range(2019, 1875, -1):
    year = str(year)
    url = "http://mlb.mlb.com/stats/sortable.jsp#elem=%5Bobject+Object%5D&tab_level=child&click_text=Sortable+Team+pitching&game_type='R'&season="+year+"&season_type=ANY&league_code='MLB'&sectionType=st&statType=pitching&page=1&ts=1564260727128&playerType=QUALIFIER&sportCode='mlb'&split=&team_id=&active_sw=&position='1'&page_type=SortablePlayer&sortOrder='desc'&sortColumn=avg&results=&perPage=50&timeframe=&last_x_days=&extended=0"
    browser.visit(url)
    html = browser.html
    soup = bs(html, "html.parser")
    a = soup.find("tbody")
    time.sleep(20)
    for tr in a.find_all("tr"):  # iterate over the table rows only (skips stray text nodes)
        team_data = {}
        team_data["year"] = year
        team_data["team"] = tr.find("td", class_="dg-team_full").text
        team_data["W"] = tr.find("td", class_="dg-w").text
        team_data["L"] = tr.find("td", class_="dg-l").text
        team_data["ERA"] = tr.find("td", class_="dg-era").text
        team_data["G1"] = tr.find("td", class_="dg-g").text
        team_data["GS"] = tr.find("td", class_="dg-gs").text
        team_data["SV"] = tr.find("td", class_="dg-sv").text
        team_data["SVO"] = tr.find("td", class_="dg-svo").text
        team_data["IP"] = tr.find("td", class_="dg-ip").text
        team_data["H1"] = tr.find("td", class_="dg-h").text
        team_data["R1"] = tr.find("td", class_="dg-r").text
        team_data["ER"] = tr.find("td", class_="dg-er").text
        team_data["HR1"] = tr.find("td", class_="dg-hr").text
        team_data["BB1"] = tr.find("td", class_="dg-bb").text
        team_data["SO1"] = tr.find("td", class_="dg-so").text
        team_data["WHIP"] = tr.find("td", class_="dg-whip").text
        team_data["CG"] = tr.find("td", class_="dg-cg").text
        team_data["SHO"] = tr.find("td", class_="dg-sho").text
        team_data["HB"] = tr.find("td", class_="dg-hb").text
        team_data["IBB1"] = tr.find("td", class_="dg-ibb").text
        team_data["GF"] = tr.find("td", class_="dg-gf").text
        team_data["HLD"] = tr.find("td", class_="dg-hld").text
        team_data["GIDP"] = tr.find("td", class_="dg-gidp").text
        team_data["GO1"] = tr.find("td", class_="dg-go").text
        team_data["AO1"] = tr.find("td", class_="dg-ao").text
        team_data["WP"] = tr.find("td", class_="dg-wp").text
        team_data["BK"] = tr.find("td", class_="dg-bk").text
        team_data["SB1"] = tr.find("td", class_="dg-sb").text
        team_data["CS1"] = tr.find("td", class_="dg-cs").text
        team_data["PK"] = tr.find("td", class_="dg-pk").text
        team_data["TBF"] = tr.find("td", class_="dg-tbf").text
        team_data["NP"] = tr.find("td", class_="dg-np").text
        team_data["WPCT"] = tr.find("td", class_="dg-wpct").text
        team_data["GO_AO1"] = tr.find("td", class_="dg-go_ao").text
        team_data["OBP1"] = tr.find("td", class_="dg-obp").text
        team_data["SLG1"] = tr.find("td", class_="dg-slg").text
        team_data["OPS"] = tr.find("td", class_="dg-ops").text
        pitching_data.append(team_data)
pitching_data = pd.DataFrame(pitching_data)
pitching_data.head()
pitching_data.to_csv("../Resources/pitching_data.csv")
```
---
<div class="alert alert-block alert-info">
<center><font size="6"><b>Section 2</b></font></center>
<br>
<center><font size="6"><b>Fully-Connected, Feed-Forward Neural Network Examples</b></font></center>
</div>
# Example 1: A feedforward network with one hidden layer using torch.nn and simulated data
In developing (and training) a feedforward neural network, the developer needs to make many decisions, many of which are required when developing more complicated neural networks, such as CNN and RNN:
- the depth of the network (i.e. the number of layers)
- the width of the network (i.e. number of hidden units per layer)
- the type of nonlinear activation function applied in each hidden layer
- the type of activation function applied in the output layer
- the loss function
- the optimization algorithms
- the regularization technique (*which we will consider in Section 3*)
- the number of epochs and the batch size
Our first example uses simulated data, which has the advantage that we define our own data generating mechanism and can observe how a neural network can approximate the mechanism.
----
## Simulate and Visualize Data
Let's first consider an example with one explanatory variable.
<br><br>
The output is related to the input using the following function
$$y_i = 3x_{i,1} + x_{i,1}^2 \exp(x_{i,1}) + \epsilon_i$$
where $\epsilon_i$ is an independently and identically distributed (i.i.d.) random error term and $i = 1,2,\dots,n$ is an index of examples (or observations).
```
# In the following example, n=100
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
n = 100 # number of examples (or observations)
# Generate a set of n random numbers from a standard normal distribution
epsilon = np.random.randn(n)
# Generate a set of n random numbers from a uniform[0,1] distribution
x1 = np.random.uniform(0,1,n)
# Create the data generating mechanism
y = 3*x1 + np.power(x1,2)*np.exp(x1) + epsilon
stats.describe(y)
stats.describe(x1)
fig = plt.figure(figsize=(12,8))
plt.subplot(2, 2, 1)
sns.set()
#ax = sns.distplot(x1)
plt.hist(x1)
plt.subplot(2, 2, 2)
plt.scatter(x1, y)
```
**Note: Before training, `numpy array` needs to be converted to `PyTorch's tensors`**
```
type(x1)
print(x1.shape)
print(y.shape)
# convert numpy array to tensor in shape of input size
import torch
x1 = torch.from_numpy(x1.reshape(-1,1)).float()
y = torch.from_numpy(y.reshape(-1,1)).float()
print(x1.shape)
print(y.shape)
```
## Create a network: First Attempt
* Specify a network
* Define a loss function and choose an optimization algorithm
* Train the network
Our first network is a linear regression model
### Create a linear regression model
```
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
class LinearNet(nn.Module):
    def __init__(self):
        super(LinearNet, self).__init__()
        self.linearlayer1 = torch.nn.Linear(1, 1)

    def forward(self, x):
        y_pred = self.linearlayer1(x)
        return y_pred
linearNet = LinearNet()
print(linearNet)
```
### Define Loss Function and Optimization Algorithm
```
# Define Optimizer and Loss Function
optimizer = torch.optim.SGD(linearNet.parameters(), lr=0.01)
loss_func = torch.nn.MSELoss()
```
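`torch.nn.MSELoss` averages the squared residuals; a plain-Python sketch of the same quantity (for intuition only, not PyTorch's implementation):

```python
def mse(pred, target):
    # mean of squared residuals
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # (0 + 0 + 4) / 3 = 1.333...
```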
### Model training and print losses
```
X = Variable(x1)
y_data = Variable(y)
for epoch in range(500):
    y_pred = linearNet(X)
    loss = torch.sqrt(loss_func(y_pred, y_data))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Plot the prediction and print out the loss
    if epoch in [0,99,299,399,499]:
        print(epoch)
        plt.cla()
        plt.scatter(x1.data.numpy(), y.data.numpy())
        #plt.plot(x.data.numpy(), y_pred.data.numpy(), 'r-', lw=2)
        plt.scatter(x1.data.numpy(), y_pred.data.numpy())
        plt.text(0.7, -1, 'Loss=%.4f' % loss.data.numpy(), fontdict={'size': 14, 'color': 'red'})
        plt.pause(0.1)
plt.show()
```
## Create a Network: 2nd Attempt
### Define a Feed-forward network with 1 hidden layer
**Let's insert a computational graph here**
```
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
class ffNet(nn.Module):
    def __init__(self):
        super(ffNet, self).__init__()
        self.linearCombo1 = torch.nn.Linear(1, 4)  # z1 = W1*x1 + b1
        self.linearCombo2 = torch.nn.Linear(4, 1)  # z2 = W2*h1 + b2
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        h1 = self.relu(self.linearCombo1(x))  # ReLU (non-linear activation) applied to the linear combination of the weights and input (x1)
        y_pred = self.linearCombo2(h1)
        return y_pred
ffnet = ffNet()
print(ffnet)
```
### Define loss function and optimization algorithm
```
# Define Optimizer and Loss Function
optimizer = torch.optim.SGD(ffnet.parameters(), lr=0.01)
loss_func = torch.nn.MSELoss()
```
### Model Training
```
X = Variable(x1)
y_data = Variable(y)
for epoch in range(500):
    y_pred = ffnet(X)
    loss = loss_func(y_pred, y_data)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if epoch in [0,99,299,399,499]:
        print(epoch)
        plt.cla()
        plt.scatter(x1.data.numpy(), y.data.numpy())
        plt.scatter(x1.data.numpy(), y_pred.data.numpy())
        #plt.plot(x.data.numpy(), y_pred.data.numpy(), 'r-', lw=2)
        plt.text(0.5, 0, 'Loss=%.4f' % loss.data.numpy(), fontdict={'size': 10, 'color': 'red'})
        plt.pause(0.1)
plt.show()
```
## Create a Network: 3rd Attempt
### Define a Feed-forward network with 2 hidden layers
```
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
class ffNet(nn.Module):
    def __init__(self):
        super(ffNet, self).__init__()
        self.linearlayer1 = torch.nn.Linear(1, 8)
        self.linearlayer2 = torch.nn.Linear(8, 4)
        self.linearlayer3 = torch.nn.Linear(4, 1)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        out1 = self.relu(self.linearlayer1(x))
        out2 = self.relu(self.linearlayer2(out1))
        y_pred = self.linearlayer3(out2)
        return y_pred
ffnet2 = ffNet()
print(ffnet2)
```
### Define loss function and optimization algorithm
```
# Define Optimizer and Loss Function
optimizer = torch.optim.SGD(ffnet2.parameters(), lr=0.01)
loss_func = torch.nn.MSELoss()
```
### Model Training
```
X = Variable(x1)
y_data = Variable(y)
for epoch in range(500):
    y_pred = ffnet2(X)
    loss = loss_func(y_pred, y_data)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if epoch in [0,99,299,399,499]:  # 999 removed: range(500) never reaches it
        print(epoch)
        plt.cla()
        plt.scatter(x1.data.numpy(), y.data.numpy())
        #plt.plot(x.data.numpy(), y_pred.data.numpy(), 'r', lw=1)
        plt.scatter(x1.data.numpy(), y_pred.data.numpy())
        plt.text(0.5, 0, 'Loss=%.4f' % loss.data.numpy(), fontdict={'size': 10, 'color': 'red'})
        plt.pause(0.1)
plt.show()
```
# Lab 2
**Review modeling attempt 1 - 3 and design a network to improve the existing results.**
---
# Convolution Nets for MNIST
### MNIST: a 10-class classification problem
<img src="imgs/mnist_plot.png"
style="float: left; margin-right: 1px;" width="500" height="400" />
Deep Learning models can take quite a bit of time to run, particularly if GPU isn't used.
In the interest of time, you could sample a subset of observations (e.g. $1000$) that are a particular number of your choice (e.g. $6$) and $1000$ observations that aren't that particular number (i.e. $\neq 6$).
We will build a model using that and see how it performs on the test dataset
```
#Import the required libraries
import numpy as np
np.random.seed(1338)
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D
from keras.utils import np_utils
from keras.optimizers import SGD
```
## Loading Data
```
# Load the training and testing data
(X_train, Y_train), (X_test, Y_test) = mnist.load_data()
# Display purpose:
X_train_orig = X_train
X_test_orig = X_test
```
## Data Preparation
### Very Important:
When dealing with images & convolutions, it is paramount to handle `image_data_format` properly
```
from keras import backend as K
img_rows, img_cols = 28, 28
if K.image_data_format() == 'channels_first':
    shape_ord = (1, img_rows, img_cols)
else:  # channels_last
    shape_ord = (img_rows, img_cols, 1)
```
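The two layouts differ only in where the channel axis sits; a NumPy illustration (no Keras needed):

```python
import numpy as np

batch = np.zeros((3, 28, 28))                     # three raw 28x28 grayscale images
channels_last = batch.reshape(3, 28, 28, 1)       # TensorFlow-style layout
channels_first = batch.reshape(3, 1, 28, 28)      # Theano-style layout
print(channels_last.shape, channels_first.shape)  # (3, 28, 28, 1) (3, 1, 28, 28)
```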
#### Preprocess and Normalise Data
```
X_train = X_train.reshape((X_train.shape[0],) + shape_ord)
X_test = X_test.reshape((X_test.shape[0],) + shape_ord)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
```
### Let's look at some images
```
import matplotlib.pyplot as plt
%matplotlib inline
print(Y_train[0:10])
slice = 10
plt.figure(figsize=(16,8))
for i in range(slice):
    plt.subplot(1, slice, i+1)
    plt.imshow(X_train_orig[i], interpolation='nearest')
    plt.axis('off')
print(Y_test[0:10])
slice = 10
plt.figure(figsize=(16,8))
for i in range(slice):
    plt.subplot(1, slice, i+1)
    plt.imshow(X_test_orig[i], interpolation='nearest')
    plt.axis('off')
```
### One-hot Encoding for label digits 0 ~ 9
```
print(X_train.shape, Y_train.shape, X_test.shape, Y_test.shape)
# Converting the classes to its binary categorical form
nb_classes = 10
Y_train = np_utils.to_categorical(Y_train, nb_classes)
Y_test = np_utils.to_categorical(Y_test, nb_classes)
print(X_train.shape, Y_train.shape, X_test.shape, Y_test.shape)
Y_train[0:10]
Y_test[0:10]
```
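For intuition, `np_utils.to_categorical` essentially does the following (a NumPy sketch, not the Keras implementation):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0  # set one column per row
    return out

print(to_one_hot([3, 0, 1], 4))
```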
# A simple CNN
```
# -- Initializing the values for the convolution neural network
# nb_epoch = 2 # kept very low! Please increase if you have GPU instead of CPU
nb_epoch = 20  # still modest; increase further if you have a GPU instead of a CPU
batch_size = 64
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
nb_pool = 2
# convolution kernel size
nb_conv = 3
# Vanilla SGD
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
```
#### Step 1: Model Definition
```
model = Sequential()
model.add(Conv2D(nb_filters, (nb_conv, nb_conv), padding='valid',
input_shape=shape_ord)) # note: the very first layer **must** always specify the input_shape
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.summary()
```
#### Step 2: Compile
```
model.compile(loss='categorical_crossentropy',
optimizer=sgd,
metrics=['accuracy'])
```
#### Step 3: Fit
```
hist = model.fit(X_train, Y_train, batch_size=batch_size,
epochs=nb_epoch, verbose=1,
validation_data=(X_test, Y_test))
# import matplotlib.pyplot as plt
# %matplotlib inline
plt.figure()
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.legend(['Training', 'Validation'])
plt.figure()
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
plt.legend(['Training', 'Validation'], loc='lower right')
```
### Step 4: Evaluate
```
print('Available Metrics in Model: {}'.format(model.metrics_names))
# Evaluating the model on the test data
loss, accuracy = model.evaluate(X_test, Y_test, verbose=0)
print('Test Loss:', loss)
print('Test Accuracy:', accuracy)
```
### Let's plot our model Predictions!
```
slice = 20
predicted = model.predict(X_test[:slice]).argmax(-1)
plt.figure(figsize=(16,8))
for i in range(slice):
    plt.subplot(1, slice, i+1)
    plt.imshow(X_test_orig[i], interpolation='nearest')
    plt.text(0, 0, predicted[i], color='black',
             bbox=dict(facecolor='white', alpha=1))
    plt.axis('off')
```
# Adding more Dense Layers
```
model = Sequential()
model.add(Conv2D(nb_filters, (nb_conv, nb_conv),
padding='valid', input_shape=shape_ord))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
hist = model.fit(X_train, Y_train, batch_size=batch_size,
epochs=nb_epoch,verbose=1,
validation_data=(X_test, Y_test))
# import matplotlib.pyplot as plt
# %matplotlib inline
plt.figure()
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.legend(['Training', 'Validation'])
plt.figure()
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
plt.legend(['Training', 'Validation'], loc='lower right')
#Evaluating the model on the test data
score, accuracy = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score)
print('Test accuracy:', accuracy)
slice = 20
predicted = model.predict(X_test[:slice]).argmax(-1)
plt.figure(figsize=(16,8))
for i in range(slice):
    plt.subplot(1, slice, i+1)
    plt.imshow(X_test_orig[i], interpolation='nearest')
    plt.text(0, 0, predicted[i], color='black',
             bbox=dict(facecolor='white', alpha=1))
    plt.axis('off')
```
# Adding Dropout
```
model = Sequential()
model.add(Conv2D(nb_filters, (nb_conv, nb_conv),
padding='valid',
input_shape=shape_ord))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
hist = model.fit(X_train, Y_train, batch_size=batch_size,
epochs=nb_epoch,verbose=1,
validation_data=(X_test, Y_test))
# import matplotlib.pyplot as plt
# %matplotlib inline
plt.figure()
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.legend(['Training', 'Validation'])
plt.figure()
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
plt.legend(['Training', 'Validation'], loc='lower right')
#Evaluating the model on the test data
score, accuracy = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score)
print('Test accuracy:', accuracy)
slice = 20
predicted = model.predict(X_test[:slice]).argmax(-1)
plt.figure(figsize=(16,8))
for i in range(slice):
    plt.subplot(1, slice, i+1)
    plt.imshow(X_test_orig[i], interpolation='nearest')
    plt.text(0, 0, predicted[i], color='black',
             bbox=dict(facecolor='white', alpha=1))
    plt.axis('off')
```
# Adding more Convolution Layers
```
model = Sequential()
model.add(Conv2D(nb_filters, (nb_conv, nb_conv),
padding='valid', input_shape=shape_ord))
model.add(Activation('relu'))
model.add(Conv2D(nb_filters, (nb_conv, nb_conv)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
hist = model.fit(X_train, Y_train, batch_size=batch_size,
epochs=nb_epoch,verbose=1,
validation_data=(X_test, Y_test))
# import matplotlib.pyplot as plt
# %matplotlib inline
plt.figure()
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.legend(['Training', 'Validation'])
plt.figure()
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
plt.legend(['Training', 'Validation'], loc='lower right')
#Evaluating the model on the test data
score, accuracy = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score)
print('Test accuracy:', accuracy)
slice = 20
predicted = model.predict(X_test[:slice]).argmax(-1)
plt.figure(figsize=(16,8))
for i in range(slice):
    plt.subplot(1, slice, i+1)
    plt.imshow(X_test_orig[i], interpolation='nearest')
    plt.text(0, 0, predicted[i], color='black',
             bbox=dict(facecolor='white', alpha=1))
    plt.axis('off')
```
# Exercise
The above code has been written as a function.
Change some of the **hyperparameters** and see what happens.
```
nb_epoch = 100
# Function for constructing the convolution neural network
# Feel free to add parameters, if you want
def build_model():
    """Build, train, and evaluate the CNN; returns the training history and the model."""
    model = Sequential()
    model.add(Conv2D(nb_filters, (nb_conv, nb_conv),
                     padding='valid',
                     input_shape=shape_ord))
    model.add(Activation('relu'))
    model.add(Conv2D(nb_filters, (nb_conv, nb_conv)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
    model.add(Dropout(0.25))
    model.add(Flatten())
    model.add(Dense(128))
    model.add(Activation('relu'))
    model.add(Dropout(0.5))
    model.add(Dense(nb_classes))
    model.add(Activation('softmax'))
    model.summary()

    model.compile(loss='categorical_crossentropy',
                  optimizer='sgd',
                  metrics=['accuracy'])

    hist = model.fit(X_train, Y_train, batch_size=batch_size,
                     epochs=nb_epoch, verbose=1,
                     validation_data=(X_test, Y_test))

    # Evaluating the model on the test data
    score, accuracy = model.evaluate(X_test, Y_test, verbose=0)
    print('Test score:', score)
    print('Test accuracy:', accuracy)
    return hist, model
# Train and test model in one shot
hist, model = build_model()
# import matplotlib.pyplot as plt
# %matplotlib inline
plt.figure()
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.legend(['Training', 'Validation'])
plt.figure()
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
plt.legend(['Training', 'Validation'], loc='lower right')
#Evaluating the model on the test data
score, accuracy = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score)
print('Test accuracy:', accuracy)
slice = 20
predicted = model.predict(X_test[:slice]).argmax(-1)
plt.figure(figsize=(16,8))
for i in range(slice):
    plt.subplot(1, slice, i+1)
    plt.imshow(X_test_orig[i], interpolation='nearest')
    plt.text(0, 0, predicted[i], color='black',
             bbox=dict(facecolor='white', alpha=1))
    plt.axis('off')
```
---
## Understanding Convolutional Layers Structure
In this exercise we want to build a (_quite shallow_) network which contains two
[Convolution, Convolution, MaxPooling] stages, and two Dense layers.
To test a different optimizer, we will use [AdaDelta](http://keras.io/optimizers/), which is a bit more complex than the simple Vanilla SGD with momentum.
```
from keras.optimizers import Adadelta
input_shape = shape_ord
nb_classes = 10
```
### Understanding layer shapes
An important feature of Keras layers is that each of them has an `input_shape` attribute, which you can use to visualize the shape of the input tensor, and an `output_shape` attribute, for inspecting the shape of the output tensor.
As we can see, the input shape of the first convolutional layer corresponds to the `input_shape` attribute (which must be specified by the user).
In this case, it is a `28x28` image with a single (grayscale) channel.
A convolutional layer whose `padding` is set to `same` keeps the output width and height equal to the input's, and its number of output channels equals the number of filters learned by the layer.
A convolutional layer with the default `padding` (`valid`), as in the models above, instead reduces width and height by $(k-1)$, where $k$ is the size of the kernel.
`MaxPooling` layers, instead, reduce width and height of the input tensor, but keep the same number of channels.
`Activation` layers, of course, don't change the shape.
```
model.summary()
for i, layer in enumerate(model.layers):
    print("Layer", i, "\t", layer.name, "\t\t", layer.input_shape, "\t", layer.output_shape)
```
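These shape rules can be verified with pure arithmetic (the helper names below are my own, and no Keras is required):

```python
def conv2d_out(h, w, k, padding='valid'):
    # Conv2D with stride 1: 'same' keeps the size, 'valid' shrinks it by (k - 1)
    if padding == 'same':
        return h, w
    return h - (k - 1), w - (k - 1)

def maxpool_out(h, w, p):
    # MaxPooling2D divides width and height by the pool size
    return h // p, w // p

# Trace the Conv(3x3) -> Conv(3x3) -> MaxPool(2x2) stack used above:
h, w = conv2d_out(28, 28, 3)  # -> (26, 26)
h, w = conv2d_out(h, w, 3)    # -> (24, 24)
h, w = maxpool_out(h, w, 2)   # -> (12, 12)
print(h, w)  # 12 12
```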
### Understanding weights shape
In the same way, we can visualize the shape of the weights learned by each layer.
In particular, Keras lets you inspect weights by using the `get_weights` method of a layer object.
This will return a list with two elements, the first one being the **weight tensor** and the second one being the **bias vector**.
In particular:
- **MaxPooling layer** don't have any weight tensor, since they don't have learnable parameters.
- **Convolutional layers**, instead, learn a $(n_o, n_i, k, k)$ weight tensor, where $k$ is the size of the kernel, $n_i$ is the number of channels of the input tensor, and $n_o$ is the number of filters to be learned.
For each of the $n_o$ filters, a bias is also learned.
- **Dense layers** learn a $(n_i, n_o)$ weight tensor, where $n_o$ is the output size and $n_i$ is the input size of the layer. Each of the $n_o$ neurons also has a bias.
```
for i, layer in enumerate(model.layers):
    if len(layer.get_weights()) > 0:
        W, b = layer.get_weights()
        print("Layer", i, "\t", layer.name, "\t\t", W.shape, "\t", b.shape)
```
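The weight shapes also determine parameter counts; a quick arithmetic check (helper names are my own):

```python
def conv2d_params(n_in, n_out, k):
    # (k, k, n_in, n_out) weight tensor plus one bias per filter
    return n_out * n_in * k * k + n_out

def dense_params(n_in, n_out):
    # (n_in, n_out) weight matrix plus one bias per output unit
    return n_in * n_out + n_out

print(conv2d_params(1, 32, 3))  # 320: the first Conv2D above (1 input channel, 32 filters)
print(dense_params(128, 10))    # 1290: the final Dense layer above
```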
# Batch Normalisation
Normalize the activations of the previous layer at each batch, i.e. applies a transformation that maintains the mean activation close to 0 and the activation standard deviation close to 1.
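The transformation just described can be written explicitly. For each feature, computed over a mini-batch $B$:

$$\hat{x} = \frac{x - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad y = \gamma\,\hat{x} + \beta$$

where $\mu_B$ and $\sigma_B^2$ are the batch mean and variance, $\epsilon$ is a small constant for numerical stability, and $\gamma$, $\beta$ are the learnable scale and offset corresponding to the `gamma` and `beta` arguments listed below.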
## How to BatchNorm in Keras
```python
from keras.layers.normalization import BatchNormalization
BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True,
beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros',
moving_variance_initializer='ones', beta_regularizer=None, gamma_regularizer=None,
beta_constraint=None, gamma_constraint=None)
```
#### Arguments
<ul>
<li><strong>axis</strong>: Integer, the axis that should be normalized
(typically the features axis).
For instance, after a <code>Conv2D</code> layer with
<code>data_format="channels_first"</code>,
set <code>axis=1</code> in <code>BatchNormalization</code>.</li>
<li><strong>momentum</strong>: Momentum for the moving average.</li>
<li><strong>epsilon</strong>: Small float added to variance to avoid dividing by zero.</li>
<li><strong>center</strong>: If True, add offset of <code>beta</code> to normalized tensor.
If False, <code>beta</code> is ignored.</li>
<li><strong>scale</strong>: If True, multiply by <code>gamma</code>.
If False, <code>gamma</code> is not used.
When the next layer is linear (also e.g. <code>nn.relu</code>),
this can be disabled since the scaling
will be done by the next layer.</li>
<li><strong>beta_initializer</strong>: Initializer for the beta weight.</li>
<li><strong>gamma_initializer</strong>: Initializer for the gamma weight.</li>
<li><strong>moving_mean_initializer</strong>: Initializer for the moving mean.</li>
<li><strong>moving_variance_initializer</strong>: Initializer for the moving variance.</li>
<li><strong>beta_regularizer</strong>: Optional regularizer for the beta weight.</li>
<li><strong>gamma_regularizer</strong>: Optional regularizer for the gamma weight.</li>
<li><strong>beta_constraint</strong>: Optional constraint for the beta weight.</li>
<li><strong>gamma_constraint</strong>: Optional constraint for the gamma weight.</li>
</ul>
### Exercise
```
# Try to add a new BatchNormalization layer to the Model
# (after the Dropout layer) - before or after the ReLU Activation
```
---
# Exploring data structures for the dataset
A Pandas exploration. Find the best data structure to explore and transform the dataset (both training and test dataframes). Use cases:
- find all numerical features (filtering)
- transform all numerical features (e.g. take square)
- replace NaN values for a numerical feature
- plot distribution for a column in the training dataset
```
import sys
import os
import pandas as pd
import seaborn as sns
sys.path.insert(1, os.path.join(sys.path[0], '..')) # add parent directory to path
import samlib
```
## Using `samlib.DataSet`
Original approach used in `data_exploration_numerical_features`
- a class that contains 3 dataframe attributes (`train`, `test` and `df`, where `df` is the full dataframe)
- whenever `df` is updated, the `train` and `test` frames are updated
This allows us to work with the training dataset, and to update/transform the full dataset when necessary, so that the transformation is also applied to the test dataframe that will be needed for the final prediction.
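The samlib implementation itself is not shown here, but a minimal sketch of such a class could look like this (illustrative only; the attribute and method names mirror the usage in this notebook, everything else is an assumption):

```python
import pandas as pd

class DataSet:
    """Hypothetical sketch of a samlib-style DataSet (NOT the real samlib code):
    keeps one full dataframe `df`, with `train`/`test` views split by row count."""

    def __init__(self, train, test):
        self.n_train = len(train)
        self.df = pd.concat([train, test], ignore_index=True)

    @property
    def dtypes(self):
        return self.df.dtypes

    @property
    def train(self):
        return self.df.iloc[:self.n_train]

    @property
    def test(self):
        return self.df.iloc[self.n_train:]

    def apply(self, func, inplace=False):
        # Transform (a subset of) the full dataframe; with inplace=True the
        # affected columns are written back, so train/test stay in sync.
        result = func(self.df)
        if not inplace:
            return result
        self.df[result.columns] = result

# Toy usage with made-up data:
tr = pd.DataFrame({'a': [1, 2], 'b': ['x', 'y']})
te = pd.DataFrame({'a': [3], 'b': ['z']})
ds = DataSet(tr, te)
ds.apply(lambda df: df.loc[:, ds.dtypes != object] ** 2, inplace=True)
print(ds.train['a'].tolist(), ds.test['a'].tolist())  # [1, 4] [9]
```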
```
raw_train = pd.read_csv('../data/train_prepared_light.csv')
raw_test = pd.read_csv('../data/test_prepared_light.csv')
ds = samlib.DataSet(raw_train, raw_test)
is_num = ds.dtypes != object
dfnum = ds.df.loc[:, is_num]
dfnum.head()
ds.apply(lambda df: df.loc[:, is_num]**2, inplace=True)
ds.df.head()
ds.df.MasVnrArea.isnull().sum()
ds.df.loc[ds.df.MasVnrArea.isnull(), 'MasVnrArea'] = 0
ds.df.MasVnrArea.isnull().sum()
sns.distplot(ds.train.GrLivArea)
```
Works, but not so great: it requires a new dependency (samlib) and a different way of working compared to plain Pandas. You need to learn the behaviour of the `DataSet` class and remember to use the `apply` method, otherwise the `train` and `test` sets are *not* kept in sync (for example when assigning to a slice of `ds.df`).
## Using an extra categorical `dataset` column
```
traindf = raw_train.copy()
testdf = raw_test.copy()
traindf['dataset'] = 'train'
testdf['dataset'] = 'test'
df = pd.concat([traindf, testdf])
```
Then we can filter using the value of the `dataset` column
```
train = df['dataset'] == 'train'
test = ~train
df[train].head()
df[test].head()
is_num = df.dtypes != object
dfnum = df.loc[:, is_num]
dfnum.head()
df.loc[:, is_num] = dfnum **2
df.head()
df.MasVnrArea.isnull().sum()
df.loc[df.MasVnrArea.isnull(), 'MasVnrArea'] = 0
df.MasVnrArea.isnull().sum()
sns.distplot(df.loc[train, 'GrLivArea'])
```
Works quite well, but takes a bit of work to set up and requires keeping two boolean series (`train` and `test`) around to filter the dataset whenever needed. An improvement over `samlib.DataSet`, though.
## Using Panel object
```
panel = pd.Panel({'train':raw_train.copy(), 'test': raw_test.copy()})
panel.train.head()
panel.test.head()
```
The above is very nice, but unfortunately a panel isn't a dataframe, so we can't really get a view of the full data. Also, we seem to have lost all the data types:
```
is_num = panel.train.dtypes != object
any(is_num)
```
So we must keep the raw data if we want to filter the numerical columns :-(
```
is_num = raw_train.dtypes != object
numpanel = panel.loc[:, :, is_num]
numpanel
numpanel.train.head()
```
Finally this raises an error!
```
try:
panel.loc[:, :, is_num] = panel.loc[:, :, is_num]**2
except NotImplementedError as err:
print('raises NotImplementedError: ', err)
```
Looked promising initially but not really workable as we can't assign an indexer with a Panel yet. We really need a dataframe object.
## Using multi-index on rows
```
traindf = raw_train.copy()
testdf = raw_test.copy()
df = pd.concat([traindf, testdf], keys=('train', 'test'))
df.head()
df.tail()
```
The test and train datasets can be accessed by filtering the index. Nice but not quite as compact as `df[train]`, though we don't need the extra `train` (and `test`) masks.
```
df.loc['train'].head()
is_num = df.dtypes != object
dfnum = df.loc[:, is_num]
dfnum.head()
df.loc[:, is_num] = dfnum **2
df.head()
df.MasVnrArea.isnull().sum()
df.loc[df.MasVnrArea.isnull(), 'MasVnrArea'] = 0
df.MasVnrArea.isnull().sum()
sns.distplot(df.GrLivArea.train)
```
Another way of doing it
```
sns.distplot(df.loc['train', 'GrLivArea'])
```
Works very well.
## Using multi-index on columns (swapped levels)
Swap the levels to fix the issue with filtering on features in the column multi-index case.
```
traindf = raw_train.copy()
testdf = raw_test.copy()
df = pd.concat([traindf, testdf], axis=1, keys=('train','test')).swaplevel(axis=1)
df.sort_index(axis=1, inplace=True) # needed otherwise we get in trouble for slicing
df.head()
df.tail()
```
The test and train datasets can be accessed by selecting on the column index. Nice, but not quite as compact as `df[train]`, though we don't need the extra `train` (and `test`) masks.
```
df.xs('train', level=1, axis=1).head() # or use IndexSlice
```
We must also deal with the extra index level when filtering, but it's not too bad.
```
is_num = df.dtypes != object
dfnum = df.loc[:, is_num]
dfnum.head()
df.loc[:, is_num] = dfnum **2
df.head()
```
Getting and setting nulls (without `fillna`) is a little tricky. Boolean indexing is (by definition) meant to work over rows, not rows *and* columns. We can use boolean arrays with `DataFrame.mask`, though. This is definitely something to keep in mind when using multi-indexing over columns.
```
df.MasVnrArea = df.MasVnrArea.mask(df.MasVnrArea.isnull(), 0)
df.MasVnrArea.tail()
```
Visualizing the training dataset is pretty easy.
```
sns.distplot(df.GrLivArea.train)
```
## Using multi-index on columns
Makes it easier to filter on dataset (train or test) and has the advantage of being a dataframe.
```
traindf = raw_train.copy()
testdf = raw_test.copy()
df = pd.concat([traindf, testdf], axis=1, keys=('train','test'))
df.head()
df.tail()
```
The test and train datasets can be accessed by selecting on the column index. Nice, but not quite as compact as `df[train]`, though we don't need the extra `train` (and `test`) masks.
```
df.train.head()
```
We must also deal with the extra index level when filtering, but it's not too bad.
```
is_num = df.dtypes != object
dfnum = df.loc[:, is_num]
dfnum.head()
df.loc[:, is_num] = dfnum **2
df.head()
```
Definitely harder to slice across columns. It's possible (unlike with panels), but awkward (it requires `pd.IndexSlice`).
```
df.loc[:, pd.IndexSlice[:, 'MasVnrArea']].isnull().sum()
```
You can also use a cross-section to get the data more easily, but you can't use this for assignments.
```
df.xs('MasVnrArea', axis=1, level=1).head()
df.loc[:, pd.IndexSlice[:, 'MasVnrArea']] = 0
df.loc[:, pd.IndexSlice[:, 'MasVnrArea']].isnull().sum()
```
Visualizing the training dataset is pretty easy.
```
sns.distplot(df.train.GrLivArea)
```
## Using dataset type as label
**Method 1: add columns, then use `set_index`**
```
traindf = raw_train.copy()
testdf = raw_test.copy()
traindf['Dataset'] = 'train'
testdf['Dataset'] = 'test'
df = pd.concat([traindf, testdf])
df.set_index('Dataset').head()
```
**Method 2: use `concat` and `droplevel`**
```
traindf = raw_train.copy()
testdf = raw_test.copy()
df = pd.concat([traindf, testdf], keys=('train', 'test'))
df.index = df.index.droplevel(1)
df.head()
df.tail()
```
The test and train datasets can be accessed by using `loc` .
```
df.loc['train'].head()
```
Filtering columns is very easy
```
is_num = df.dtypes != object
dfnum = df.loc[:, is_num]
dfnum.head()
df.loc[:, is_num] = dfnum **2
df.head()
df.MasVnrArea.isnull().sum()
df.loc[df.MasVnrArea.isnull(), 'MasVnrArea'] = 0
df.MasVnrArea.isnull().sum()
sns.distplot(df.GrLivArea.train)
```
Another way of doing it
```
sns.distplot(df.loc['train', 'GrLivArea'])
```
## Discussion
### Samlib
- Pros: does most of what we need pretty easily
- Cons: third-party dependency, hackish, introduces a new structure with weird behaviour (assigning to a slice doesn't update the training and test datasets)
- Score: 2/5
### Extra categorical `dataset` column
- Pros: works very well and syntax is compact
- Cons: a bit long to set up, requires maintaining the mask variables `test` and `train` alongside the data
- Score: 4/5
### Panel
Doesn't work: assigning through an indexer raises `NotImplementedError`.
### Multi-index on rows
- Pros: excellent, easy to filter on columns and on dataset
- Cons: none
- Score: 5/5
### Multi-index on columns
- Pros: easy to filter on train/test sets
- Cons: hard to transform features for both datasets, and would be weird if the train and test sets had widely different numbers of rows
- Score: 1/5
### Dataset label
- Pros: index is not a multi index
- Cons: a bit hard to set up, and the index looks a bit weird as all samples of a dataset share the same index label
- Score: 4/5
## Conclusion
Use `pd.concat([traindf, testdf], keys=['train', 'test'])` to merge the datasets into one dataframe while making it easy to visualize/process features on only the training dataset.
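As a self-contained illustration of the recommended pattern (toy data; the column name is just a stand-in for a real feature):

```python
import pandas as pd

# Toy stand-ins for the real train/test dataframes.
traindf = pd.DataFrame({'GrLivArea': [1500, 1700]})
testdf = pd.DataFrame({'GrLivArea': [1600]})

df = pd.concat([traindf, testdf], keys=['train', 'test'])

# Transform a feature across BOTH datasets in one statement...
df['GrLivArea'] = df['GrLivArea'] ** 2

# ...then slice out a single dataset by its key.
print(df.loc['train', 'GrLivArea'].tolist())  # [2250000, 2890000]
print(df.loc['test', 'GrLivArea'].tolist())   # [2560000]
```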
---
```
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
# If you're using Google Colab and not running locally, run this cell.
## Install dependencies
!pip install wget
!apt-get install sox libsndfile1 ffmpeg
!pip install unidecode
# ## Install NeMo
BRANCH = 'v1.0.0b2'
!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]
## Install TorchAudio
!pip install torchaudio>=0.6.0 -f https://download.pytorch.org/whl/torch_stable.html
## Grab the config we'll use in this example
!mkdir configs
```
# minGPT License
*This notebook ports the [minGPT codebase](https://github.com/karpathy/minGPT) into equivalent NeMo code. The license for minGPT has therefore been attached here.*
```
The MIT License (MIT) Copyright (c) 2020 Andrej Karpathy
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```
# torch-rnn License
*This notebook utilizes the `tiny-shakespeare` dataset from the [torch-rnn](https://github.com/jcjohnson/torch-rnn) codebase. The license for torch-rnn has therefore been attached here.*
```
The MIT License (MIT)
Copyright (c) 2016 Justin Johnson
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```
-------
***Note: This notebook will intentionally introduce some errors to show the power of Neural Types or model development concepts, inside the cells marked with `[ERROR CELL]`. The explanation of and resolution of such errors can be found in the subsequent cells.***
-----
# The NeMo Model
NeMo comes with many state of the art pre-trained Conversational AI models for users to quickly be able to start training and fine-tuning on their own datasets.
In the previous [NeMo Primer](https://colab.research.google.com/github/NVIDIA/NeMo/blob/main/tutorials/00_NeMo_Primer.ipynb) notebook, we learned how to download pretrained checkpoints with NeMo and we also discussed the fundamental concepts of the NeMo Model. The previous tutorial showed us how to use, modify, save, and restore NeMo Models.
In this tutorial we will learn how to develop a non-trivial NeMo model from scratch. This helps us to understand the underlying components and how they interact with the overall PyTorch ecosystem.
-------
At the heart of NeMo lies the concept of the "Model". For NeMo developers, a "Model" is the neural network(s) as well as all the infrastructure supporting those network(s), wrapped into a singular, cohesive unit. As such, most NeMo models are constructed to contain the following out of the box (note: some NeMo models support additional functionality specific to the domain/use case!) -
- Neural Network architecture - all of the modules that are required for the model.
- Dataset + Data Loaders - all of the components that prepare the data for consumption during training or evaluation.
- Preprocessing + Postprocessing - any of the components that process the datasets so the modules can easily consume them.
- Optimizer + Schedulers - basic defaults that work out of the box and allow further experimentation with ease.
- Any other supporting infrastructure - tokenizers, language model configuration, data augmentation, etc.
# Constructing a NeMo Model
NeMo "Models" are comprised of a few key components, so let's tackle them one by one. We will attempt to go in the order that's stated above.
To make this slightly challenging, let's port a model from the NLP domain this time. Transformers are all the rage, with BERT and his friends from Sesame Street forming the core infrastructure for many NLP tasks.
An excellent (yet simple) implementation of one such model - GPT - can be found in the `minGPT` repository - https://github.com/karpathy/minGPT. While the script is short, it explains and succinctly explores all of the core components we expect in a NeMo model, so it's a prime candidate for NeMo! Sidenote: NeMo supports GPT in its NLP collection, and as such, this notebook aims to be an in-depth development walkthrough for such models.
In the following notebook, we will attempt to port minGPT to NeMo, and along the way, discuss some core concepts of NeMo itself.
# Constructing the Neural Network Architecture
First, on the list - the neural network that forms the backbone of the NeMo Model.
So how do we create such a model? Using PyTorch! As you'll see below, NeMo components are compatible with all of PyTorch, so you can augment your workflow without ever losing the flexibility of PyTorch itself!
Let's start with a couple of imports -
```
import torch
import nemo
from nemo.core import NeuralModule
from nemo.core import typecheck
```
## Neural Module
Wait, what's `NeuralModule`? Where is the wonderful `torch.nn.Module`?
`NeuralModule` is a subclass of `torch.nn.Module`, and it brings with it a few additional functionalities.
In addition to being a `torch.nn.Module`, thereby being entirely compatible with the PyTorch ecosystem, it has the following capabilities -
1) `Typing` - It adds support for `Neural Type Checking` to the model. `Typing` is optional but quite useful, as we will discuss below!
2) `Serialization` - Remember the `OmegaConf` config dict and YAML config files? Well, all `NeuralModules` inherently support serialization/deserialization from such config dictionaries!
3) `FileIO` - This is another entirely optional file serialization system. Does your `NeuralModule` require some way to preserve data that can't be saved into a PyTorch checkpoint? Write your serialization and deserialization logic in two handy methods! **Note**: When you create the final NeMo Model, this will be implemented for you! Automatic serialization and deserialization support of NeMo models!
```
class MyEmptyModule(NeuralModule):
def forward(self):
print("Neural Module ~ hello world!")
x = MyEmptyModule()
x()
```
## Neural Types
Neural Types? You might be wondering what that term refers to.
Almost all NeMo components inherit the class `Typing`. `Typing` is a simple class that adds two properties to the class that inherits it - `input_types` and `output_types`. A NeuralType, by its shortest definition, is simply a semantic tensor. It contains information regarding the semantic shape the tensor should hold, as well as the semantic information of what that tensor represents. That's it.
So what semantic information does such a typed tensor contain? Let's take an example below.
------
Across the Deep Learning domain, we often encounter cases where tensor shapes may match, but the semantics don't match at all. For example, take a look at the following rank-3 tensors -
```
# Case 1:
embedding = torch.nn.Embedding(num_embeddings=10, embedding_dim=30)
x = torch.randint(high=10, size=(1, 5))
print("x :", x)
print("embedding(x) :", embedding(x).shape)
# Case 2
lstm = torch.nn.LSTM(1, 30, batch_first=True)
x = torch.randn(1, 5, 1)
print("x :", x)
print("lstm(x) :", lstm(x)[0].shape) # Let's take all timestep outputs of the LSTM
```
-------
As you can see, the output of Case 1 is an embedding of shape [1, 5, 30], and the output of Case 2 is an LSTM output (state `h` over all time steps), also of the same shape [1, 5, 30].
Do they have the same shape? **Yes**. <br>If we do a Case 1 .shape == Case 2 .shape, will we get True as an output? **Yes**. <br>
Do they represent the same concept? **No**. <br>
The ability to recognize that the two tensors do not represent the same semantic information is precisely why we utilize Neural Types. It contains the information of both the shape and the semantic concept of what that tensor represents. If we performed a neural type check between the two outputs of those tensors, it would raise an error saying semantically they were different things (more technically, it would say that they are `INCOMPATIBLE` with each other)!
--------
You may have read of concepts such as [Named Tensors](https://pytorch.org/docs/stable/named_tensor.html). While conceptually similar, Neural Types attached by NeMo are not as tightly bound to the PyTorch ecosystem - practically any object of a class can be attached with a neural type!
## Neural Types - Usage
Neural Types sound interesting, so how do we go about adding them? Let's take a few cases below.
Neural Types are one of the core foundations of NeMo - you will find them in a vast majority of Neural Modules, and every NeMo Model will have its Neural Types defined. While they are entirely optional and unintrusive, NeMo takes great care to support it so that there is no semantic incompatibility between components being used by users.
Let's start with a basic example of a type checked module.
```
from nemo.core.neural_types import NeuralType
from nemo.core.neural_types import *
class EmbeddingModule(NeuralModule):
def __init__(self):
super().__init__()
self.embedding = torch.nn.Embedding(num_embeddings=10, embedding_dim=30)
@typecheck()
def forward(self, x):
return self.embedding(x)
@property
def input_types(self):
return {
'x': NeuralType(axes=('B', 'T'), elements_type=Index())
}
@property
def output_types(self):
return {
'y': NeuralType(axes=('B', 'T', 'C'), elements_type=EmbeddedTextType())
}
```
To show the benefit of Neural Types, we are going to replicate the above cases inside NeuralModules.
Let's discuss how we added type checking support to the above class.
1) `forward` has a decorator `@typecheck()` on it.
2) `input_types` and `output_types` properties are defined.
That's it!
-------
Let's expand on each of the above steps.
- `@typecheck()` is a simple decorator that takes any class that inherits `Typing` (NeuralModule does this for us) and adds the two default properties of `input_types` and `output_types`, which by default returns None.
The `@typecheck()` decorator's explicit use ensures that, by default, neural type checking is **disabled**. NeMo does not wish to intrude on the development process of models. So users can "opt-in" to type checking by overriding the two properties. Therefore, the decorator ensures that users are not burdened with type checking before they wish to have it.
So what is `@typecheck()`? Simply put, you can wrap **any** function of a class that inherits `Typing` with this decorator, and it will look up the definition of the types of that class and enforce them. Typically, `torch.nn.Module` subclasses only implement `forward()` so it is most common to wrap that method, but `@typecheck()` is a very flexible decorator. Inside NeMo, we will show some advanced use cases (which are quite crucial to particular domains such as TTS).
------
As we see above, `@typecheck()` enforces the types. How then, do we provide this type of information to NeMo?
By overriding `input_types` and `output_types` properties of the class, we can return a dictionary mapping a string name to a `NeuralType`.
In the above case, we define a `NeuralType` as two components -
- `axes`: This is the semantic information carried by the axes themselves. The most common axes information uses single-character notation.
> `B` = Batch <br>
> `C` / `D` - Channel / Dimension (treated the same) <br>
> `T` - Time <br>
> `H` / `W` - Height / Width <br>
- `elements_type`: This is the semantic information of "what the tensor represents". All such types are derived from the basic `ElementType`, and merely subclassing `ElementType` allows us to build a hierarchy of custom semantic types that can be used by NeMo!
Here, we declare that the input has an element type of `Index` (the index of the character in the vocabulary) and that the output has an element type of `EmbeddedTextType` (the text embedding).
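To make the idea concrete, here is a toy, framework-free sketch of what axis-based checking amounts to (this is *not* NeMo's implementation; the class and method names are invented):

```python
class ToyNeuralType:
    """Invented stand-in for NeuralType: stores axis names and an element type,
    and checks a concrete shape against the declared rank."""

    def __init__(self, axes, elements_type):
        self.axes = axes
        self.elements_type = elements_type

    def check(self, shape):
        # A real system would also compare element types between producer
        # and consumer; here we only verify the rank.
        if len(shape) != len(self.axes):
            raise TypeError(
                f"expected rank {len(self.axes)} {self.axes}, got shape {shape}"
            )
        return True

t = ToyNeuralType(('B', 'T', 'C'), 'EmbeddedTextType')
print(t.check((1, 5, 30)))  # True: three axes for ('B', 'T', 'C')
try:
    t.check((1, 5))  # rank 2 against a declared rank-3 type
except TypeError as err:
    print('TypeError:', err)
```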
```
embedding_module = EmbeddingModule()
```
Now let's construct the equivalent of the Case 2 above, but as a `NeuralModule`.
```
class LSTMModule(NeuralModule):
def __init__(self):
super().__init__()
self.lstm = torch.nn.LSTM(1, 30, batch_first=True)
@typecheck()
def forward(self, x):
return self.lstm(x)
@property
def input_types(self):
return {
'x': NeuralType(axes=('B', 'T', 'C'), elements_type=SpectrogramType())
}
@property
def output_types(self):
return {
'y': NeuralType(axes=('B', 'T', 'C'), elements_type=EncodedRepresentation())
}
```
------
Here, we define the LSTM module from the Case 2 above.
We changed the input to be a rank-three tensor, now representing a "SpectrogramType". We intentionally keep it generic - it can be a `MelSpectrogramType` or an `MFCCSpectrogramType` as its input!
The output of an LSTM is now an `EncodedRepresentation`. Practically, this can be the output of a CNN layer, a Transformer block, or in this case, an LSTM layer. We can, of course, specialize by subclassing EncodedRepresentation and then using that!
```
lstm_module = LSTMModule()
```
------
Now for the test!
```
# Case 1 [ERROR CELL]
x1 = torch.randint(high=10, size=(1, 5))
print("x :", x1)
print("embedding(x) :", embedding_module(x1).shape)
```
-----
You might be wondering why we get a `TypeError` right off the bat. This `TypeError` is raised by design.
Positional arguments can cause significant issues during model development, mostly when the model/module design is not finalized. To reduce the potential for mistakes caused by wrong positional arguments and enforce the name of arguments provided to the function, `Typing` requires you to **call all of your type-checked functions by kwargs only**.
```
# Case 1
print("x :", x1)
print("embedding(x) :", embedding_module(x=x1).shape)
```
Now let's try the same for the `LSTMModule` in Case 2
```
# Case 2 [ERROR CELL]
x2 = torch.randn(1, 5, 1)
print("x :", x2)
print("lstm(x) :", lstm_module(x=x2)[0].shape) # Let's take all timestep outputs of the LSTM
```
-----
Now we get a type error stating that the number of output arguments provided does not match what is expected.
What exactly is going on here? Well, inside our `LSTMModule` class, we declare the output types to be a single NeuralType - an `EncodedRepresentation` of shape [B, T, C].
But the output of an LSTM layer is a tuple of two state values - the hidden state `h` and the cell state `c`!
So the neural type system raises an error saying that the number of output arguments does not match what is expected.
Let's fix the above.
```
class CorrectLSTMModule(LSTMModule): # Let's inherit the wrong class to make it easy to override
@property
def output_types(self):
return {
'h': NeuralType(axes=('B', 'T', 'C'), elements_type=EncodedRepresentation()),
'c': NeuralType(axes=('B', 'T', 'C'), elements_type=EncodedRepresentation()),
}
lstm_module = CorrectLSTMModule()
# Case 2
x2 = torch.randn(1, 5, 1)
print("x :", x2)
print("lstm(x) :", lstm_module(x=x2)[0].shape) # Let's take all timestep outputs of the LSTM `h` gate
```
------
Great! So now, the type checking system is happy.
If you looked closely, the outputs were ordinary Torch Tensors (this is good news; we don't want to be incompatible with torch Tensors after all!). So, where exactly is the type of information stored?
When the `output_types` is overridden, and valid torch tensors are returned as a result, these tensors are attached with the attribute `neural_type`. Let's inspect this -
```
emb_out = embedding_module(x=x1)
lstm_out = lstm_module(x=x2)[0]
assert hasattr(emb_out, 'neural_type')
assert hasattr(lstm_out, 'neural_type')
print("Embedding tensor :", emb_out.neural_type)
print("LSTM tensor :", lstm_out.neural_type)
```
-------
So we see that these tensors now have this attribute called `neural_type` and are the same shape.
This exercise's entire goal was to assert that the two outputs are semantically **not** the same object, even if they are the same shape.
Let's test this!
```
emb_out.neural_type.compare(lstm_out.neural_type)
emb_out.neural_type == lstm_out.neural_type
```
## Neural Types - Limitations
You might have noticed one interesting fact - our inputs were just `torch.Tensor` to both typed function calls, and they had no `neural_type` assigned to them.
So why did the type check system not raise any error?
This is to maintain compatibility - type checking is meant to work on a chain of function calls - and each of these functions should themselves be wrapped with the `@typecheck()` decorator. This is also done because we don't want to overtax the forward call with dozens of checks, and therefore we only type modules that perform some higher-order logical computation.
------
As an example, it is mostly unnecessary (but still possible) to type the input and output of every residual block of a ResNet model. However, it is practically important to type the encoder (no matter how many layers are inside it) and the decoder (the classification head) separately so that when one does fine-tuning, there is no semantic mismatch of the tensors input to the encoder and bound to the decoder.
-------
For this case, since it would be impractical to extend a class to attach a type to the input tensor, we can take a shortcut and directly attach the neural type to the input!
```
embedding_module = EmbeddingModule()
x1 = torch.randint(high=10, size=(1, 5))
# Attach correct neural type
x1.neural_type = NeuralType(('B', 'T'), Index())
print("embedding(x) :", embedding_module(x=x1).shape)
# Attach wrong neural type [ERROR CELL]
x1.neural_type = NeuralType(('B', 'T'), LabelsType())
print("embedding(x) :", embedding_module(x=x1).shape)
```
## Let's create the minGPT components
Now that we have a somewhat firm grasp of neural type checking, let's begin porting the minGPT example code. Once again, most of the code will be a direct port from the [minGPT repository](https://github.com/karpathy/minGPT).
Here, you will notice one thing. By just changing class imports, one `@typecheck()` on forward, and adding `input_types` and `output_types` (which are also entirely optional!), we are almost entirely done with the PyTorch Lightning port!
```
import math
from typing import List, Set, Dict, Tuple, Optional
import torch
import torch.nn as nn
from torch.nn import functional as F
```
## Creating Element Types
So far, we have used the Neural Types provided by the NeMo core. But we need not be restricted to the pre-defined element types!
Users have total flexibility in defining any hierarchy of element types as they please!
```
class AttentionType(EncodedRepresentation):
"""Basic Attention Element Type"""
class SelfAttentionType(AttentionType):
"""Self Attention Element Type"""
class CausalSelfAttentionType(SelfAttentionType):
"""Causal Self Attention Element Type"""
```
## Creating the modules
Neural Modules are generally top-level modules but can be used at any level of the module hierarchy.
For demonstration, we will treat an encoder comprising a block of Causal Self Attention modules as a typed Neural Module. Of course, we can also treat each Causal Self Attention layer itself as a neural module if we require it, but top-level modules are generally preferred.
```
class CausalSelfAttention(nn.Module):
"""
A vanilla multi-head masked self-attention layer with a projection at the end.
It is possible to use torch.nn.MultiheadAttention here but I am including an
explicit implementation here to show that there is nothing too scary here.
"""
def __init__(self, n_embd, block_size, n_head, attn_pdrop, resid_pdrop):
super().__init__()
assert n_embd % n_head == 0
self.n_head = n_head
# key, query, value projections for all heads
self.key = nn.Linear(n_embd, n_embd)
self.query = nn.Linear(n_embd, n_embd)
self.value = nn.Linear(n_embd, n_embd)
# regularization
self.attn_drop = nn.Dropout(attn_pdrop)
self.resid_drop = nn.Dropout(resid_pdrop)
# output projection
self.proj = nn.Linear(n_embd, n_embd)
# causal mask to ensure that attention is only applied to the left in the input sequence
self.register_buffer("mask", torch.tril(torch.ones(block_size, block_size))
.view(1, 1, block_size, block_size))
def forward(self, x, layer_past=None):
B, T, C = x.size()
# calculate query, key, values for all heads in batch and move head forward to be the batch dim
k = self.key(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
q = self.query(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
v = self.value(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
# causal self-attention; Self-attend: (B, nh, T, hs) x (B, nh, hs, T) -> (B, nh, T, T)
att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
att = att.masked_fill(self.mask[:,:,:T,:T] == 0, float('-inf'))
att = F.softmax(att, dim=-1)
att = self.attn_drop(att)
y = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs)
y = y.transpose(1, 2).contiguous().view(B, T, C) # re-assemble all head outputs side by side
# output projection
y = self.resid_drop(self.proj(y))
return y
class Block(nn.Module):
""" an unassuming Transformer block """
def __init__(self, n_embd, block_size, n_head, attn_pdrop, resid_pdrop):
super().__init__()
self.ln1 = nn.LayerNorm(n_embd)
self.ln2 = nn.LayerNorm(n_embd)
self.attn = CausalSelfAttention(n_embd, block_size, n_head, attn_pdrop, resid_pdrop)
self.mlp = nn.Sequential(
nn.Linear(n_embd, 4 * n_embd),
nn.GELU(),
nn.Linear(4 * n_embd, n_embd),
nn.Dropout(resid_pdrop),
)
def forward(self, x):
x = x + self.attn(self.ln1(x))
x = x + self.mlp(self.ln2(x))
return x
```
## Building the NeMo Model
Since a NeMo Model is comprised of various parts, we are going to iterate on the model step by step inside this notebook. As such, we will have multiple intermediate NeMo "Models", which will be partial implementations, and they will inherit each other iteratively.
In a complete implementation of a NeMo Model (as found in the NeMo collections), all of these components will generally be found in a single class.
Let's start by inheriting `ModelPT` - the core class of a PyTorch NeMo Model, which inherits the PyTorch Lightning Module.
-------
**Remember**:
- The NeMo equivalent of `torch.nn.Module` is the `NeuralModule`.
- The NeMo equivalent of the `LightningModule` is `ModelPT`.
```
import pytorch_lightning as ptl
from nemo.core import ModelPT
from omegaconf import OmegaConf
```
------
Next, let's construct the bare minimum implementation of the NeMo Model - just the constructor, the initializer of weights, and the forward method.
Initially, we will follow the steps of the minGPT implementation, and progressively refactor it for NeMo.
```
class PTLGPT(ptl.LightningModule):
def __init__(self,
# model definition args
vocab_size: int, # size of the vocabulary (number of possible tokens)
block_size: int, # length of the model's context window in time
n_layer: int, # depth of the model; number of Transformer blocks in sequence
n_embd: int, # the "width" of the model, number of channels in each Transformer
n_head: int, # number of heads in each multi-head attention inside each Transformer block
# model optimization args
learning_rate: float = 3e-4, # the base learning rate of the model
weight_decay: float = 0.1, # amount of regularizing L2 weight decay on MatMul ops
betas: Tuple[float, float] = (0.9, 0.95), # momentum terms (betas) for the Adam optimizer
embd_pdrop: float = 0.1, # \in [0,1]: amount of dropout on input embeddings
resid_pdrop: float = 0.1, # \in [0,1]: amount of dropout in each residual connection
attn_pdrop: float = 0.1, # \in [0,1]: amount of dropout on the attention matrix
):
super().__init__()
# save these for optimizer init later
self.learning_rate = learning_rate
self.weight_decay = weight_decay
self.betas = betas
# input embedding stem: drop(content + position)
self.tok_emb = nn.Embedding(vocab_size, n_embd)
self.pos_emb = nn.Parameter(torch.zeros(1, block_size, n_embd))
self.drop = nn.Dropout(embd_pdrop)
# deep transformer: just a sequence of transformer blocks
self.blocks = nn.Sequential(*[Block(n_embd, block_size, n_head, attn_pdrop, resid_pdrop) for _ in range(n_layer)])
# decoder: at the end one more layernorm and decode the answers
self.ln_f = nn.LayerNorm(n_embd)
self.head = nn.Linear(n_embd, vocab_size, bias=False) # no need for extra bias due to one in ln_f
self.block_size = block_size
self.apply(self._init_weights)
print("number of parameters: %e" % sum(p.numel() for p in self.parameters()))
def forward(self, idx):
b, t = idx.size()
assert t <= self.block_size, "Cannot forward, model block size is exhausted."
# forward the GPT model
token_embeddings = self.tok_emb(idx) # each index maps to a (learnable) vector
position_embeddings = self.pos_emb[:, :t, :] # each position maps to a (learnable) vector
x = self.drop(token_embeddings + position_embeddings)
x = self.blocks(x)
x = self.ln_f(x)
logits = self.head(x)
return logits
def get_block_size(self):
return self.block_size
def _init_weights(self, module):
"""
Vanilla model initialization:
- all MatMul weights \in N(0, 0.02) and biases to zero
- all LayerNorm post-normalization scaling set to identity, so weight=1, bias=0
"""
if isinstance(module, (nn.Linear, nn.Embedding)):
module.weight.data.normal_(mean=0.0, std=0.02)
if isinstance(module, nn.Linear) and module.bias is not None:
module.bias.data.zero_()
elif isinstance(module, nn.LayerNorm):
module.bias.data.zero_()
module.weight.data.fill_(1.0)
```
------
Let's instantiate the PyTorch Lightning model defined above, just to make sure it works!
```
m = PTLGPT(vocab_size=100, block_size=32, n_layer=1, n_embd=32, n_head=4)
```
------
Now, let's convert the above easily into a NeMo Model.
A NeMo Model constructor generally accepts only two things -
1) `cfg`: An OmegaConf DictConfig object that defines precisely the components required by the model to define its neural network architecture, data loader setup, optimizer setup, and any additional components needed for the model itself.
2) `trainer`: An optional Trainer from PyTorch Lightning if the NeMo model will be used for training. It can be set after construction (if required) using the `set_trainer` method. For this notebook, we will not be constructing the config for the Trainer object.
## Refactoring Neural Modules
As we discussed above, Neural Modules are generally higher-level components of the Model and can potentially be replaced by equivalent Neural Modules.
As we see above, the embedding modules, deep transformer network, and final decoder layer have all been combined inside the PyTorch Lightning implementation constructor.
------
However, the decoder could have been an RNN instead of a simple Linear layer, or it could have been a 1D-CNN instead.
Likewise, the deep encoder could potentially have a different implementation of Self Attention modules.
Such changes can no longer be made easily inside the above implementation. However, if we refactor these components into their respective NeuralModules, then we can easily replace them with equivalent modules we construct in the future!
### Refactoring the Embedding module
Let's first refactor out the embedding module from the above implementation
```
class GPTEmbedding(NeuralModule):
def __init__(self, vocab_size: int, n_embd: int, block_size: int, embd_pdrop: float = 0.0):
super().__init__()
# input embedding stem: drop(content + position)
self.tok_emb = nn.Embedding(vocab_size, n_embd)
self.pos_emb = nn.Parameter(torch.zeros(1, block_size, n_embd))
self.drop = nn.Dropout(embd_pdrop)
@typecheck()
def forward(self, idx):
b, t = idx.size()
# forward the GPT model
token_embeddings = self.tok_emb(idx) # each index maps to a (learnable) vector
position_embeddings = self.pos_emb[:, :t, :] # each position maps to a (learnable) vector
x = self.drop(token_embeddings + position_embeddings)
return x
@property
def input_types(self):
return {
'idx': NeuralType(('B', 'T'), Index())
}
@property
def output_types(self):
return {
'embeddings': NeuralType(('B', 'T', 'C'), EmbeddedTextType())
}
```
### Refactoring the Encoder
Next, let's refactor the Encoder - the multi-layer Transformer encoder.
```
class GPTTransformerEncoder(NeuralModule):
def __init__(self, n_embd: int, block_size: int, n_head: int, n_layer: int, attn_pdrop: float = 0.0, resid_pdrop: float = 0.0):
super().__init__()
self.blocks = nn.Sequential(*[Block(n_embd, block_size, n_head, attn_pdrop, resid_pdrop)
for _ in range(n_layer)])
@typecheck()
def forward(self, embed):
return self.blocks(embed)
@property
def input_types(self):
return {
'embed': NeuralType(('B', 'T', 'C'), EmbeddedTextType())
}
@property
def output_types(self):
return {
'encoding': NeuralType(('B', 'T', 'C'), CausalSelfAttentionType())
}
```
### Refactoring the Decoder
Finally, let's refactor the Decoder - the small one-layer feed-forward network to decode the answer.
-------
Note an interesting detail - the `input_types` of the Decoder accepts the generic `EncodedRepresentation()`, whereas the `output_types` of the `GPTTransformerEncoder` declares the more specific `CausalSelfAttentionType`.
This is semantically *not* a mismatch! As you can see above in the inheritance chart, we declare `EncodedRepresentation` -> `AttentionType` -> `SelfAttentionType` -> `CausalSelfAttentionType`.
Such an inheritance hierarchy for the `element_type` allows future encoders (which also have a neural output type of at least `EncodedRepresentation`) to be swapped in place of the current GPT Causal Self Attention Encoder while keeping the rest of the NeMo model working just fine!
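As a plain-Python analogy of this compatibility rule (using ordinary classes with fresh names, so as not to shadow the actual NeMo element types defined earlier), the check behaves just like `issubclass`:

```python
# Plain-Python analogy of the element type inheritance above - these are
# ordinary classes, not NeMo element types. They illustrate why a
# CausalSelfAttentionType output is acceptable wherever the generic
# EncodedRepresentation input is expected.
class PlainEncoded: ...
class PlainAttention(PlainEncoded): ...
class PlainSelfAttention(PlainAttention): ...
class PlainCausalSelfAttention(PlainSelfAttention): ...

# A more specific type is always acceptable where the generic one is expected:
print(issubclass(PlainCausalSelfAttention, PlainEncoded))  # True
# ...but not the other way around:
print(issubclass(PlainEncoded, PlainCausalSelfAttention))  # False
```

The direction matters: a decoder that demands the generic type accepts any future encoder, while an encoder producing a specialized type still satisfies that contract.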
```
class GPTDecoder(NeuralModule):
def __init__(self, n_embd: int, vocab_size: int):
super().__init__()
self.ln_f = nn.LayerNorm(n_embd)
self.head = nn.Linear(n_embd, vocab_size, bias=False) # no need for extra bias due to one in ln_f
@typecheck()
def forward(self, encoding):
x = self.ln_f(encoding)
logits = self.head(x)
return logits
@property
def input_types(self):
return {
'encoding': NeuralType(('B', 'T', 'C'), EncodedRepresentation())
}
@property
def output_types(self):
return {
'logits': NeuralType(('B', 'T', 'C'), LogitsType())
}
```
### Refactoring the NeMo GPT Model
Now that we have 3 NeuralModules for the embedding, the encoder, and the decoder, let's refactor the NeMo model to take advantage of this refactor!
This time, we inherit from `ModelPT` instead of the general `LightningModule`.
```
class AbstractNeMoGPT(ModelPT):
def __init__(self, cfg: OmegaConf, trainer: ptl.Trainer = None):
super().__init__(cfg=cfg, trainer=trainer)
# input embedding stem: drop(content + position)
self.embedding = self.from_config_dict(self.cfg.embedding)
# deep transformer: just a sequence of transformer blocks
self.encoder = self.from_config_dict(self.cfg.encoder)
# decoder: at the end one more layernorm and decode the answers
self.decoder = self.from_config_dict(self.cfg.decoder)
self.block_size = self.cfg.embedding.block_size
self.apply(self._init_weights)
print("number of parameters: %e" % self.num_weights)
@typecheck()
def forward(self, idx):
b, t = idx.size()
assert t <= self.block_size, "Cannot forward, model block size is exhausted."
# forward the GPT model
# Remember: Only kwargs are allowed !
e = self.embedding(idx=idx)
x = self.encoder(embed=e)
logits = self.decoder(encoding=x)
return logits
def get_block_size(self):
return self.block_size
def _init_weights(self, module):
"""
Vanilla model initialization:
- all MatMul weights \in N(0, 0.02) and biases to zero
- all LayerNorm post-normalization scaling set to identity, so weight=1, bias=0
"""
if isinstance(module, (nn.Linear, nn.Embedding)):
module.weight.data.normal_(mean=0.0, std=0.02)
if isinstance(module, nn.Linear) and module.bias is not None:
module.bias.data.zero_()
elif isinstance(module, nn.LayerNorm):
module.bias.data.zero_()
module.weight.data.fill_(1.0)
@property
def input_types(self):
return {
'idx': NeuralType(('B', 'T'), Index())
}
@property
def output_types(self):
return {
'logits': NeuralType(('B', 'T', 'C'), LogitsType())
}
```
## Creating a config for a Model
At first glance, not much changed compared to the PyTorch Lightning implementation above. Other than the constructor, which now accepts a config, nothing changed at all!
NeMo operates on the concept of a NeMo Model being accompanied by a corresponding config dict (instantiated as an OmegaConf object). This enables us to prototype the model by utilizing Hydra rapidly. This includes various other benefits - such as hyperparameter optimization and serialization/deserialization of NeMo models.
Let's look at how to actually construct such config objects!
```
# model definition args (required)
# ================================
# vocab_size: int # size of the vocabulary (number of possible tokens)
# block_size: int # length of the model's context window in time
# n_layer: int # depth of the model; number of Transformer blocks in sequence
# n_embd: int # the "width" of the model, number of channels in each Transformer
# n_head: int # number of heads in each multi-head attention inside each Transformer block
# model definition args (optional)
# ================================
# embd_pdrop: float = 0.1, # \in [0,1]: amount of dropout on input embeddings
# resid_pdrop: float = 0.1, # \in [0,1]: amount of dropout in each residual connection
# attn_pdrop: float = 0.1, # \in [0,1]: amount of dropout on the attention matrix
```
------
As we look at the required parameters above, we need a way to tell OmegaConf that these values are currently not set, but the user should set them before we use them.
OmegaConf supports such behavior using the `MISSING` value. A similar effect can be achieved in YAML configs by using `???` as a placeholder.
```
from omegaconf import MISSING
# Let's create a utility for building the class path
def get_class_path(cls):
return f'{cls.__module__}.{cls.__name__}'
```
### Structure of a Model config
Let's first create a config for the common components of the model level config -
```
common_config = OmegaConf.create({
'vocab_size': MISSING,
'block_size': MISSING,
'n_layer': MISSING,
'n_embd': MISSING,
'n_head': MISSING,
})
```
-----
The model config right now is still being built - it needs to contain a lot more details!
A complete Model Config should have the sub-configs of all of its top-level modules as well. This means the configs of the `embedding`, `encoder`, and the `decoder`.
### Structure of sub-module config
For top-level modules, we generally don't change the actual module class very often; instead, we primarily change the hyperparameters of that module.
So we will make use of `Hydra`'s class instantiation method - which can easily be accessed via the class method `ModelPT.from_config_dict()`.
Let's take a few examples below -
```
embedding_config = OmegaConf.create({
'_target_': get_class_path(GPTEmbedding),
'vocab_size': '${model.vocab_size}',
'n_embd': '${model.n_embd}',
'block_size': '${model.block_size}',
'embd_pdrop': 0.1
})
encoder_config = OmegaConf.create({
'_target_': get_class_path(GPTTransformerEncoder),
'n_embd': '${model.n_embd}',
'block_size': '${model.block_size}',
'n_head': '${model.n_head}',
'n_layer': '${model.n_layer}',
'attn_pdrop': 0.1,
'resid_pdrop': 0.1
})
decoder_config = OmegaConf.create({
'_target_': get_class_path(GPTDecoder),
# n_embd: int, vocab_size: int
'n_embd': '${model.n_embd}',
'vocab_size': '${model.vocab_size}'
})
```
##### What is `_target_`?
--------
In the above config, we see a `_target_` key. `_target_` is usually a full classpath to the actual class in the Python package or the user's local directory. It is required for Hydra to correctly locate and instantiate the class from its path.
So why do we want to set a classpath?
In general, when developing models, we don't often change the encoder or the decoder, but we do change the hyperparameters of the encoder and decoder.
This notation helps us keep the Model level declaration of the forward step neat and precise. It also logically helps us demarcate which parts of the model can be easily replaced - in the future, we can easily replace the encoder with some other type of self-attention block, or the decoder with an RNN or 1D-CNN neural module (as long as they have the same Neural Type definition as the current blocks).
##### What is the `${}` syntax?
-------
OmegaConf, and by extension, Hydra, supports Variable Interpolation. As you can see in the `__init__` of embedding, encoder, and decoder neural modules, they often share many parameters between each other.
It would become tedious and error-prone to set each of these constructors' values separately in each of the embedding, encoder, and decoder configs.
So instead, we define standard keys inside of the `model` level config and then interpolate these values inside of the respective configs!
### Attaching the model and module-level configs
So now, we have a Model level and per-module level configs for the core components. Sub-module configs generally fall under the "model" namespace, but you have the flexibility to define the structure as you require.
Let's attach them!
```
model_config = OmegaConf.create({
'model': common_config
})
# Then let's attach the sub-module configs
model_config.model.embedding = embedding_config
model_config.model.encoder = encoder_config
model_config.model.decoder = decoder_config
```
-----
Let's print this config!
```
print(OmegaConf.to_yaml(model_config))
```
-----
Wait, why did OmegaConf not fill in the value of the variable interpolation for the configs yet?
This is because OmegaConf takes a deferred approach to variable interpolation. To force it ahead of time, we can use the following snippet -
```
temp_config = OmegaConf.create(OmegaConf.to_container(model_config, resolve=True))
print(OmegaConf.to_yaml(temp_config))
```
-----
Now that we have a config, let's try to create an object of the NeMo Model !
```
import copy
# Let's work on a copy of the model config and update it before we send it into the Model.
cfg = copy.deepcopy(model_config)
# Let's set the values of the config (for some plausible small model)
cfg.model.vocab_size = 100
cfg.model.block_size = 128
cfg.model.n_layer = 1
cfg.model.n_embd = 32
cfg.model.n_head = 4
print(OmegaConf.to_yaml(cfg))
# Try to create a model with this config [ERROR CELL]
m = AbstractNeMoGPT(cfg.model)
```
-----
You will note that we added the `Abstract` prefix to this NeMo Model for a reason: when we try to instantiate it, it raises an error telling us that we need to implement specific methods.
1) `setup_training_data` & `setup_validation_data` - All NeMo models should implement two data loaders - the training data loader and the validation data loader. Optionally, they can go one step further and also implement the `setup_test_data` method to add support for evaluating the Model on its own.
Why do we enforce this? NeMo Models are meant to be a unified, cohesive object containing the details about the neural network underlying that Model and the data loaders to train, validate, and optionally test those models.
In doing so, once the Model is created/deserialized, it would take just a few more steps to train the Model from scratch / fine-tune/evaluate the Model on any data that the user provides, as long as this user-provided dataset is in a format supported by the Dataset / DataLoader that is used by this Model!
2) `list_available_models` - This is a utility method to provide a list of pre-trained NeMo models to the user from the cloud.
Typically, NeMo models can be easily packaged into a tar file (which we call a .nemo file in the earlier primer notebook). These tar files contain the model config + the pre-trained checkpoint weights of the Model, and can easily be downloaded from some cloud service.
For this notebook, we will not be implementing this method.
--------
Finally, let's create a concrete implementation of the above NeMo Model!
```
from nemo.core.classes.common import PretrainedModelInfo
class BasicNeMoGPT(AbstractNeMoGPT):
@classmethod
def list_available_models(cls) -> PretrainedModelInfo:
return None
def setup_training_data(self, train_data_config: OmegaConf):
self._train_dl = None
def setup_validation_data(self, val_data_config: OmegaConf):
self._validation_dl = None
def setup_test_data(self, test_data_config: OmegaConf):
self._test_dl = None
```
------
Now let's try to create an object of the `BasicNeMoGPT` model
```
m = BasicNeMoGPT(cfg.model)
```
## Setting up train-val-test steps
The above `BasicNeMoGPT` Model is a basic PyTorch Lightning Module, with some added functionality -
1) Neural Type checks support - as defined in the Model as well as the internal modules.
2) Save and restore of the Model (in the trivial case) to a tarfile.
But as the Model is right now, it crucially does not support PyTorch Lightning's `Trainer`. As such, while this Model can be called manually, it cannot be easily trained or evaluated by using the PyTorch Lightning framework.
------
Let's begin adding support for this then -
```
class BasicNeMoGPTWithSteps(BasicNeMoGPT):
def step_(self, split, batch, batch_idx=None):
idx, targets = batch
logits = self(idx=idx)
loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
key = 'loss' if split == 'train' else f"{split}_loss"
return {key: loss}
def training_step(self, *args, **kwargs):
return self.step_('train', *args, **kwargs)
def validation_step(self, *args, **kwargs):
return self.step_('val', *args, **kwargs)
def test_step(self, *args, **kwargs):
return self.step_('test', *args, **kwargs)
# This is useful for multiple validation data loader setup
def multi_validation_epoch_end(self, outputs, dataloader_idx: int = 0):
val_loss_mean = torch.stack([x['val_loss'] for x in outputs]).mean()
return {'val_loss': val_loss_mean}
# This is useful for multiple test data loader setup
def multi_test_epoch_end(self, outputs, dataloader_idx: int = 0):
test_loss_mean = torch.stack([x['test_loss'] for x in outputs]).mean()
return {'test_loss': test_loss_mean}
m = BasicNeMoGPTWithSteps(cfg=cfg.model)
```
### Setup for Multi Validation and Multi Test data loaders
As discussed in the NeMo Primer, NeMo has in-built support for multiple data loaders for validation and test steps. Therefore, as an example of how easy it is to add such support, we include the `multi_validation_epoch_end` and `multi_test_epoch_end` overrides.
It is also practically essential to collate results from multiple distributed GPUs and then aggregate them properly at the end of the epoch. NeMo strictly enforces the correct collation of results, even if you work on only one device - future-proofing is baked into the model design for this case!
Therefore NeMo provides the above two generic methods to support aggregation and simultaneously support multiple datasets!
**Please note: you can simply rename your existing `validation_epoch_end` and `test_epoch_end` implementations to `multi_validation_epoch_end` and `multi_test_epoch_end`, and that alone is sufficient to enable multi-dataset and multi-GPU support!**
------
**Note: To disable multi-dataset support, simply override `validation_epoch_end` and `test_epoch_end` instead of `multi_validation_epoch_end` and `multi_test_epoch_end`!**
## Setting up the optimizer / scheduler
We are relatively close to reaching feature parity with the minGPT model! But we are missing a crucial piece - the optimizer.
All NeMo Models come with a default implementation of `setup_optimization()`, which will parse the provided model config to obtain the `optim` and `sched` sub-configs, and automatically configure the optimizer and scheduler.
If training GPT was as simple as plugging in an Adam optimizer over all the parameters with a cosine weight decay schedule, we could do that from the config alone.
-------
But GPT is not such a trivial model - more specifically, it requires weight decay to be applied to the weight matrices but not to the biases, the embedding matrix, or the LayerNorm layers.
We can forgo the automatic support that NeMo provides for such special cases and instead utilize the PyTorch Lightning method `configure_optimizers` to perform the same task.
-------
Note: for NeMo Models, `configure_optimizers` is implemented as a trivial call to `setup_optimization()` followed by returning the generated optimizer and scheduler! So we can override the `configure_optimizers` method and manage the optimizer creation manually!
NeMo's goal is to provide usable defaults for the general case and simply back off to either PyTorch Lightning or PyTorch `nn.Module` itself in cases where the additional flexibility becomes necessary!
```
class BasicNeMoGPTWithOptim(BasicNeMoGPTWithSteps):
def configure_optimizers(self):
"""
This long function is unfortunately doing something very simple and is being very defensive:
We are separating out all parameters of the model into two buckets: those that will experience
weight decay for regularization and those that won't (biases, and layernorm/embedding weights).
We are then returning the PyTorch optimizer object.
"""
# separate out all parameters to those that will and won't experience weight decay
decay = set()
no_decay = set()
whitelist_weight_modules = (torch.nn.Linear, )
blacklist_weight_modules = (torch.nn.LayerNorm, torch.nn.Embedding)
for mn, m in self.named_modules():
for pn, p in m.named_parameters():
fpn = '%s.%s' % (mn, pn) if mn else pn # full param name
if pn.endswith('bias'):
# all biases will not be decayed
no_decay.add(fpn)
elif pn.endswith('weight') and isinstance(m, whitelist_weight_modules):
# weights of whitelist modules will be weight decayed
decay.add(fpn)
elif pn.endswith('weight') and isinstance(m, blacklist_weight_modules):
# weights of blacklist modules will NOT be weight decayed
no_decay.add(fpn)
# special case the position embedding parameter in the root GPT module as not decayed
no_decay.add('embedding.pos_emb')
# validate that we considered every parameter
param_dict = {pn: p for pn, p in self.named_parameters()}
inter_params = decay & no_decay
union_params = decay | no_decay
assert len(inter_params) == 0, "parameters %s made it into both decay/no_decay sets!" % (str(inter_params), )
assert len(param_dict.keys() - union_params) == 0, "parameters %s were not separated into either decay/no_decay set!" \
% (str(param_dict.keys() - union_params), )
# create the pytorch optimizer object
optim_groups = [
{"params": [param_dict[pn] for pn in sorted(list(decay))], "weight_decay": self.cfg.optim.weight_decay},
{"params": [param_dict[pn] for pn in sorted(list(no_decay))], "weight_decay": 0.0},
]
optimizer = torch.optim.AdamW(optim_groups, lr=self.cfg.optim.lr, betas=self.cfg.optim.betas)
return optimizer
m = BasicNeMoGPTWithOptim(cfg=cfg.model)
```
-----
Now let's set up the config for the optimizer!
```
OmegaConf.set_struct(cfg.model, False)
optim_config = OmegaConf.create({
'lr': 3e-4,
'weight_decay': 0.1,
'betas': [0.9, 0.95]
})
cfg.model.optim = optim_config
OmegaConf.set_struct(cfg.model, True)
```
## Setting up the dataset / data loaders
So far, we have been able to almost entirely replicate the minGPT implementation.
Remember, NeMo models should contain all of the logic to load the Dataset and DataLoader for at least the train and validation step.
We temporarily provided empty implementations to get around this requirement, but let's fill them in now!
-------
**Note for datasets**: Below, we will show an example using a very small dataset called `tiny_shakespeare`, found at the original [char-rnn repository](https://github.com/karpathy/char-rnn), but practically you could use any text corpus. The one suggested in minGPT is available at http://mattmahoney.net/dc/textdata.html
### Creating the Dataset
NeMo has Neural Type checking support, even for Datasets! It's just a minor change of the import in most cases and one difference in how we handle `collate_fn`.
We could paste the dataset code from minGPT, and you'd only need to make two changes!
-----
In this example, we will be writing a small character-level dataset for `tiny_shakespeare`, modeled on the one in minGPT, as a thin subclass of NeMo's `Dataset`!
```
from nemo.core import Dataset
from torch.utils import data
from torch.utils.data.dataloader import DataLoader
class TinyShakespeareDataset(Dataset):
def __init__(self, data_path, block_size, crop=None, override_vocab=None):
# load the data and crop it appropriately
with open(data_path, 'r') as f:
if crop is None:
data = f.read()
else:
f.seek(crop[0])
data = f.read(crop[1])
# build a vocabulary from data or inherit it
vocab = sorted(list(set(data))) if override_vocab is None else override_vocab
# Add UNK
special_tokens = ['<PAD>', '<UNK>'] # We use just <UNK> and <PAD> in the call, but can add others.
if not override_vocab:
vocab = [*special_tokens, *vocab] # Update train vocab with special tokens
data_size, vocab_size = len(data), len(vocab)
print('data of crop %s has %d characters, vocab of size %d.' % (str(crop), data_size, vocab_size))
print('Num samples in dataset : %d' % (data_size // block_size))
self.stoi = { ch:i for i,ch in enumerate(vocab) }
self.itos = { i:ch for i,ch in enumerate(vocab) }
self.block_size = block_size
self.vocab_size = vocab_size
self.data = data
self.vocab = vocab
self.special_tokens = special_tokens
def __len__(self):
return len(self.data) // self.block_size
def __getitem__(self, idx):
# attempt to fetch a chunk of (block_size + 1) items, but (block_size) will work too
chunk = self.data[idx*self.block_size : min(len(self.data), (idx+1)*self.block_size + 1)]
# map the string into a sequence of integers
ixes = [self.stoi[s] if s in self.stoi else self.stoi['<UNK>'] for s in chunk ]
# if stars align (last idx and len(self.data) % self.block_size == 0), pad with <PAD>
if len(ixes) < self.block_size + 1:
assert len(ixes) == self.block_size # i believe this is the only way this could happen, make sure
ixes.append(self.stoi['<PAD>'])
dix = torch.tensor(ixes, dtype=torch.long)
return dix[:-1], dix[1:]
@property
def output_types(self):
return {
'input': NeuralType(('B', 'T'), Index()),
'target': NeuralType(('B', 'T'), LabelsType())
}
```
------
We didn't have to change anything up to this point. How then is type-checking done?
NeMo does type-checking inside of the collate function implementation itself! In this case, it is not necessary to override the `collate_fn` inside the Dataset, but if we did need to override it, **NeMo requires that the private method `_collate_fn` be overridden instead**.
We can then use data loaders with minor modifications!
**Also, there is no need to implement the `input_types` for Dataset, as they are the ones generating the input for the model!**
-----
Let's prepare the dataset that we are going to use - Tiny Shakespeare from the following codebase [char-rnn](https://github.com/karpathy/char-rnn).
```
import os
if not os.path.exists('tiny-shakespeare.txt'):
!wget https://raw.githubusercontent.com/jcjohnson/torch-rnn/master/data/tiny-shakespeare.txt
!head -n 5 tiny-shakespeare.txt
train_dataset = TinyShakespeareDataset('tiny-shakespeare.txt', cfg.model.block_size, crop=(0, int(1e6)))
val_dataset = TinyShakespeareDataset('tiny-shakespeare.txt', cfg.model.block_size, crop=(int(1e6), int(50e3)), override_vocab=train_dataset.vocab)
test_dataset = TinyShakespeareDataset('tiny-shakespeare.txt', cfg.model.block_size, crop=(int(1.05e6), int(100e3)), override_vocab=train_dataset.vocab)
```
### Setting up dataset/data loader support in the Model
So we now know our data loader works. Let's integrate it as part of the Model itself!
To do this, we use the three special attributes of the NeMo Model - `self._train_dl`, `self._validation_dl` and `self._test_dl`. Once you construct your DataLoader, assign it to the corresponding attribute.
For multi-data loader support, the same applies! NeMo will automatically handle the management of multiple data loaders for you!
```
class NeMoGPT(BasicNeMoGPTWithOptim):
def _setup_data_loader(self, cfg):
if self.vocab is None:
override_vocab = None
else:
override_vocab = self.vocab
dataset = TinyShakespeareDataset(
data_path=cfg.data_path,
block_size=cfg.block_size,
crop=tuple(cfg.crop) if 'crop' in cfg else None,
override_vocab=override_vocab
)
if self.vocab is None:
self.vocab = dataset.vocab
return DataLoader(
dataset=dataset,
batch_size=cfg.batch_size,
shuffle=cfg.shuffle,
collate_fn=dataset.collate_fn, # <-- this is necessary for type checking
pin_memory=cfg.pin_memory if 'pin_memory' in cfg else False,
num_workers=cfg.num_workers if 'num_workers' in cfg else 0
)
def setup_training_data(self, train_data_config: OmegaConf):
self.vocab = None
self._train_dl = self._setup_data_loader(train_data_config)
def setup_validation_data(self, val_data_config: OmegaConf):
self._validation_dl = self._setup_data_loader(val_data_config)
def setup_test_data(self, test_data_config: OmegaConf):
self._test_dl = self._setup_data_loader(test_data_config)
```
### Creating the dataset / dataloader config
The final step to set up this model is to add the `train_ds`, `validation_ds` and `test_ds` configs inside the model config!
```
OmegaConf.set_struct(cfg.model, False)
# Set the data path and update vocabulary size
cfg.model.data_path = 'tiny-shakespeare.txt'
cfg.model.vocab_size = train_dataset.vocab_size
OmegaConf.set_struct(cfg.model, True)
train_ds = OmegaConf.create({
'data_path': '${model.data_path}',
'block_size': '${model.block_size}',
'crop': [0, int(1e6)],
'batch_size': 64,
'shuffle': True,
})
validation_ds = OmegaConf.create({
'data_path': '${model.data_path}',
'block_size': '${model.block_size}',
'crop': [int(1e6), int(50e3)],
'batch_size': 4,
'shuffle': False,
})
test_ds = OmegaConf.create({
'data_path': '${model.data_path}',
'block_size': '${model.block_size}',
'crop': [int(1.05e6), int(100e3)],
'batch_size': 4,
'shuffle': False,
})
# Attach to the model config
OmegaConf.set_struct(cfg.model, False)
cfg.model.train_ds = train_ds
cfg.model.validation_ds = validation_ds
cfg.model.test_ds = test_ds
OmegaConf.set_struct(cfg.model, True)
# Let's see the config now !
print(OmegaConf.to_yaml(cfg))
# Let's try creating a model now !
model = NeMoGPT(cfg=cfg.model)
```
-----
All the data loaders load properly! Yay!
# Evaluate the model - end to end!
Now that the data loaders have been set up, all that's left is to train and test the model! We have most of the components required by this model - the train, val and test data loaders, the optimizer, and the type-checked forward step to perform the train-validation-test steps!
But training a GPT model from scratch is not the goal of this primer, so instead, let's do a sanity check by merely testing the model for a few steps using random initial weights.
The above will ensure that:
1) Our data loaders work as intended.
2) The type-checking system assures us that our Neural Modules are performing their forward step correctly.
3) The loss is calculated, and therefore the model runs end to end, ultimately supporting PyTorch Lightning.
```
if torch.cuda.is_available():
cuda = 1
else:
cuda = 0
trainer = ptl.Trainer(gpus=cuda, test_percent_check=1.0)
trainer.test(model)
```
# Saving and restoring models
NeMo internally keeps track of the model configuration, as well as the model checkpoints and parameters.
As long as your NeMo model follows the above general guidelines, you can call the `save_to` and `restore_from` methods to save and restore your models!
```
model.save_to('gpt_model.nemo')
!ls -d -- *.nemo
temp_model = NeMoGPT.restore_from('gpt_model.nemo')
# [ERROR CELL]
temp_model.setup_test_data(temp_model.cfg.test_ds)
```
-----
Hmm, it seems it wasn't so easy in this case. Non-trivial models have non-trivial issues!
Remember, our NeMoGPT model sets `self.vocab` inside the `setup_training_data` step. But that depends on the vocabulary generated by the train set... which is **not** restored during model restoration (unless you call `setup_training_data` explicitly!).
We can quickly resolve this issue by constructing an external data file to enable save and restore support, and NeMo supports that too! We will use the `register_artifact` API in NeMo to support external files being attached to the .nemo checkpoint.
```
class NeMoGPTv2(NeMoGPT):
def setup_training_data(self, train_data_config: OmegaConf):
self.vocab = None
self._train_dl = self._setup_data_loader(train_data_config)
# Save the vocab into a text file for now
with open('vocab.txt', 'w') as f:
for token in self.vocab:
f.write(f"{token}<SEP>")
# This is going to register the file into .nemo!
# When you later use .save_to(), it will copy this file into the tar file.
self.register_artifact(None, 'vocab.txt')
def setup_validation_data(self, val_data_config: OmegaConf):
# This is going to try to find the same file, and if it fails,
# it will use the copy in .nemo
vocab_file = self.register_artifact(None, 'vocab.txt')
with open(vocab_file, 'r') as f:
vocab = []
vocab = f.read().split('<SEP>')[:-1] # the -1 here is for the dangling <SEP> token in the file
self.vocab = vocab
self._validation_dl = self._setup_data_loader(val_data_config)
def setup_test_data(self, test_data_config: OmegaConf):
# This is going to try to find the same file, and if it fails,
# it will use the copy in .nemo
vocab_file = self.register_artifact(None, 'vocab.txt')
with open(vocab_file, 'r') as f:
vocab = []
vocab = f.read().split('<SEP>')[:-1] # the -1 here is for the dangling <SEP> token in the file
self.vocab = vocab
self._test_dl = self._setup_data_loader(test_data_config)
# Let's try creating a model now !
model = NeMoGPTv2(cfg=cfg.model)
# Now let's try to save and restore !
model.save_to('gpt_model.nemo')
temp_model = NeMoGPTv2.restore_from('gpt_model.nemo')
temp_model.setup_multiple_test_data(temp_model.cfg.test_ds)
if torch.cuda.is_available():
cuda = 1
else:
cuda = 0
trainer = ptl.Trainer(gpus=cuda, test_percent_check=1.0)
trainer.test(model)
```
------
There we go! Now our model can be serialized and deserialized without any issue, even with an external vocab file!
**Jupyter** allows you to write and run Python code through an interactive web browser interface.
Each Jupyter **notebook** is a series of **cells** that can have Python code or text.
The cell below contains Python code to carry out some simple arithmetic. You can run the code by selecting the cell and holding _shift_ while hitting _enter_. When you do this, the result of the arithmetic is displayed.
Here is the Python code for some simple arithmetic
```
3 * 4 * 5
```
Try running the code in the cell below:
```
3 * 4 * 6
```
Now, go back and change one of the numbers. Re-run the cell by hitting _shift_+_enter_.
_Exercise._ Compute the sum of the first five positive numbers, 1 through 5 inclusive.
```
1+2+ 3 + 4 +5
```
### Variables
Python can do much more than arithmetic. We can create and use named **variables**.
To start, we'll create a variable named `x` and give it the value 7.
```
x = 7
```
There's no output when we do this, but Jupyter will keep track of this variable and we can use it later.
Below, we'll calculate the square of `x`
```
x**2
```
We can give the variable `x` a new value, which will replace the old one.
Going forward, any time we use `x`, it will have this new value.
Below, we'll give `x` the value 17 instead of 7
```
x = 17
```
Now, we can use this new value for `x` to compute the value of `x + 1`.
```
x + 1
```
We can even use the current value of `x` to compute a new value for `x`.
Here, we'll set x to twice its current value and print this new value
```
x = 2 * x
x
```
We can have many named variables at once and use them in complicated ways
```
y = 5
z = 3
(y + z)/(y - z)
```
If we try to use a variable that we haven't given a value, Python will report an error to us.
Try to use the value of `w`, which we haven't set yet.
```
w
```
The value for a variable is calculated at the time it is assigned. If we use the variable `a` to compute the value that we give to variable `b`, and then we later change the value of `b`, this doesn't affect `a`.
In the example below, we'll set `a` to 5, use it to compute a value for `b`, and then change `a` to 7.
```
a = 5
b = 2 * a
a = 7
```
Check the value of `b`, which was computed when `a` was set to 5.
```
b
```
Now, check the value of `a`, which was updated to 7 after it was used to compute the value for `b`.
```
a
```
In the examples above, we used one-letter variable names similar to the ones we use in mathematics.
It's often better to use longer and more descriptive names for variables. Clearer variable names will make it easier for others to understand your Python code when reading it -- and for you to understand it yourself when you come back to it weeks or months later.
For instance, here is Python code to define two variables representing the molecular masses of methionine and its oxidized derivative, methionine sulfoxide
```
methionine_mass = 131.0405
meth_sulfox_mass = 147.0354
```
Paste this into the cells below and use these variables to calculate the change in molecular mass that occurs when methionine is oxidized one step.
```
methionine_mass = 131.0405
meth_sulfox_mass = 147.0354
meth_sulfox_mass - methionine_mass
```
Of course, sometimes we want to work with very large or very small numbers. Python can both produce and understand scientific notation.
Python defaults to scientific notation for very small numbers. For example, try printing the value of
```
1 / (1000 * 1000)
```
```
1 / (1000 * 1000)
```
And, we can use scientific notation to write even ordinary numbers. For instance, to write 4,300, we can express it in scientific notation as 4.3 × 10³, which in Python is
```
4.3e3
```
Use this way of writing the number in the cell below.
```
4.3e3
```
_Exercise._
A standard plasmid miniprep could produce 10 micrograms of plasmid DNA. The cell below has a variable representing the yield of DNA from the miniprep, in grams.
A typical plasmid containing a GFP reporter construct might be about 5 kilobase pairs long, and a single base pair has a molecular mass of 650 grams / mole. Add variables with descriptive names to store
- the size of the plasmid
- the molecular mass of one base pair
```
# mass in grams
plasmid_mass_yield = 10e-6
# size in base pairs
plasmid_size = 5000
# mol mass in (grams / mole)
base_pair_molmass = 650
```
_(continued)_ Use the variables you defined above to compute the value for a new variable with a descriptive name for
- the molecular mass of the whole plasmid
and then print the result of this computation
```
# mol mass in (grams / mole)
plasmid_molmass = plasmid_size * base_pair_molmass
'%e' % plasmid_molmass
```
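The `'%e' % plasmid_molmass` expression above uses Python's `%`-style string formatting to render a number in scientific notation; a standalone illustration:

```python
print('%e' % 3250000)    # 3.250000e+06
print('%.2e' % 3250000)  # 3.25e+06, keeping two digits after the decimal point
```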
_(continued)_ Now compute the yield of DNA from the miniprep, in moles
```
# mole = grams / (grams / mole)
plasmid_mole_yield = plasmid_mass_yield / plasmid_molmass
plasmid_mole_yield
```
_(continued)_ Avogadro's constant is the number of molecules per mole. Use this to compute the number of copies of plasmid DNA in the miniprep.
```
avogadro = 6.02e23
```
```
avogadro = 6.02e23
plasmid_mole_yield
'%e' % (plasmid_mole_yield * avogadro)
```
_(continued)_ The miniprep produces 30 µl of DNA. Define a variable for this volume and use it to compute the concentration of DNA in the plasmid miniprep.
```
# volume in liters
volume = 30e-6
# moles / liters = molar
plasmid_mole_yield / volume
```
### Data types
Python keeps track of different **data types** for each variable and each output it computes. We've already seen two different data types, in fact -- one for integers (whole numbers) and one for numbers with a fractional part.
We can use `type()` to ask Python, what is the type of this value?
Here is an example, asking Python the type of the number `6`. The result of `int` is short for **int**eger.
```
type(6)
```
Below we ask Python the type of the number 2.5. The result of `float` is short for **float**ing-point number, which is a slightly confusing reference to the decimal point in a number with a fractional part.
```
type(2.5)
```
All of the values that Python computes have a data type.
Below, we ask Python the type of the value computed when we do multiplication `2*3`
```
type(2*3)
```
When we multiply two integers together, the result is also an integer.
Division can create non-integers from integers, however. Even though `5` and `2` are integers, `5/2` is not an integer.
```
type(5/2)
```
Because division can create non-integers, the output of division is _always_ a `float`, even when the result happens to be a whole number and the fractional part is 0.
```
type(6/2)
6/2
```
In fact, we can write a whole number as a `float` by adding the decimal point and zero, like `6.0`
```
6.0
```
Because Python keeps track of data types, the integer `6` and the number `6.0` are not exactly the same.
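A quick check makes the distinction concrete: `6` and `6.0` compare equal as numbers, but Python tracks them as different types.

```python
print(type(6))               # <class 'int'>
print(type(6.0))             # <class 'float'>
print(6 == 6.0)              # True: equal numeric value
print(type(6) == type(6.0))  # False: different data types
```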
### Strings
Python can also keep track of text in variables. We'll often use text to store DNA or protein sequences using one-letter codes. The type of this text data is `str`, because the text is a **str**ing of characters.
To write a text string in Python, enclose it in single quotes. Using quotes allows Python to distinguish a text string from the name of a variable: `'x'` is a one-letter text string, and `x` refers to a variable with a one-letter name.
Here we look at the type of the string `'MCB200'`
```
type('MCB200')
```
We can join two strings together using `+`. Joining strings like this is called **concatenation**.
```
'MCB' + '200'
```
Notice that the string `'200'` is different from the number `200`. The string is a sequence of three characters that happen to be digits, and adding two strings that happen to look like numbers will not perform arithmetic.
Here we add the string `'200'` to the string `'100'`
```
'200' + '100'
```
What happens when we try to add a string with an integer?
```
'200' + 100
```
The string `'200'` and the integer `100` are incompatible types, and when we try to add them, it produces a "type error". But, what if we have an integer and we want to turn it into a string?
We can use `str(...)` to convert a number into a string, like this:
```
'MCB' + str(100+100)
```
_Exercise._ Define variables containing your first and last names. Use these variables to compute a string representing your full name.
(_Hint:_ You might need to add some additional characters as well as the two name variables.)
```
first_name = 'Nick'
last_name = 'Ingolia'
first_name + ' ' + last_name
```
`str()` is one example of a **function** in Python. We actually saw another example as well, `type()`.
Both of these functions take an **argument** as input and **return** a value computed using the argument. We say that we **call** a function when we run it.
The absolute value function `abs()` is also available in Python. Below we will call `abs()` on the value `-100` and see the return value.
```
abs(-100)
```
We can carry out further computations using the value returned by a function.
Below we double the result of taking the absolute value of -100. In mathematical terms, we're calculating _2 * |-100|_
```
2 * abs(-100)
```
We can also use complicated expressions as the argument to a function.
As shown below, we can compute the integer `200` using `abs()`, convert it into a string, and combine it with `'MCB'`
```
'MCB' + str(2 * abs(-100))
```
Some functions take more than one argument. For instance, `max()` finds the maximum value among all of its arguments.
Here we find the largest number among `3`, `5`, and `4`:
```
max(3, 5, 4)
```
Python has a small collection of built-in functions, like `str()` and `abs()`, that are always available.
Another built-in function, `len()`, takes a string as an argument and returns the length of the string -- i.e., the number of characters in the string.
```
len('MCB200')
```
The built-in function `print()` is useful for displaying the results of a computation in the middle of a cell. There were several places above where we split a single calculation across multiple cells in order to see an intermediate value. Other times, we assigned a value to a variable and then immediately used the variable just to see the result. In each of those cases, we could instead use `print()` to display the result and keep going.
Below, we use print to display some intermediate values when multiplying together two integers and then dividing them again, which produces a floating-point number.
```
x = 2
print(x)
x = x * 2
print(x)
x = x / 2
print(x)
```
The `print()` function has a special behavior with strings -- it doesn't display the quote marks and instead prints just the contents of the string.
Below we show the contrast between using `print()` on a string and displaying the string as the result of a computation.
```
x = 'MCB200'
print(x)
x
```
### Modules
In addition to these built-in functions, Python **modules** provide many other functions. For instance, many mathematical functions can be found in the `math` module. To use these functions, we must first **import** the `math` module. Once we do that, we can use mathematical functions such as `math.sqrt()`, which computes the square root of its argument.
We do this with
```
import math
math.sqrt(49)
```
```
import math
math.sqrt(49)
```
The `math` module also provides mathematical constants like π, named `math.pi`
```
math.pi
```
_Exercise._ The built-in `int()` function converts other data types to integers. Use `int()` to convert π to an integer and see the result.
```
int(math.pi)
```
_(continued)_ `int()` can also be used to convert a string to an integer. Try out this use of `int()` on a string that represents an integer.
```
2 * int('1342')
```
_(continued)_ Now, test the use of `int()` on a string that does _not_ represent an integer, something with letters or other non-digits in it.
```
int('13A2')
```
_(continued)_ Use `int()` to convert _e_ (the natural logarithm base, Euler's number), given by `math.e`, to an integer.
```
int(math.e)
```
_(continued)_ Is this different from the result you would expect? Python has a built-in function called `round()` that is specialized for converting numbers to integers. Use `round()` on _e_ instead.
```
round(math.e)
```
### Methods
A **method** is a special kind of Python function that is "attached" to a Python type. For example, the `str` data type has many methods to carry out operations that make sense for a string.
For example, the `upper()` method returns a version of a string with all the letters converted to upper-case.
Below we demonstrate the use of `upper()` on the string `'mcb200'`. Keep in mind that `upper()` doesn't take any arguments. We still need parentheses to indicate to Python that we're calling a function, and we can just use empty parentheses.
```
'mcb200'.upper()
```
For comparison, we can see what happens when we leave off the parentheses:
```
'mcb200'.upper
```
The string method `replace()` creates a new string where a specified sub-string is replaced with something else. This method has two arguments: the substring to be changed, and the replacement.
Below, we show how `.replace()` can be used to change every occurrence of "ight" to "ite" in a short sentence, "turn right at the light tonight".
```
'turn right at the light tonight'.replace('ight', 'ite')
```
The string methods like `upper()` and `replace()` don't change the string itself, but instead make a new string based on the old one. We can see this by storing a string in a variable, using `upper()`, and then checking the original value of the variable.
```
original = 'mcb200'
new = original.upper()
print(original)
print(new)
```
In fact, we can never change the contents of an existing string; in Python, strings are **immutable**. We can assign a new string to an existing variable and replace the existing string, just as we could assign a new number to an existing variable to replace its current value.
Later on, we'll see other data types that are **mutable**.
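As a small illustration of this contrast (the list used here is one of the mutable types covered later), item assignment fails on a string but succeeds on a list:

```python
s = 'mcb200'
try:
    s[0] = 'M'  # strings are immutable, so item assignment raises an error
except TypeError as err:
    print('TypeError:', err)

letters = ['m', 'c', 'b']  # a list, by contrast, is mutable
letters[0] = 'M'
print(letters)  # ['M', 'c', 'b']
```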
_(Exercise)_ Here is a short DNA sequence written in lower-case letters.
```
dna = 'atggctacacat'
```
Use the string methods we just learned to generate a corresponding RNA sequence using upper-case letters. Recall that RNA sequences have uracil in place of thymine.
```
dna = 'atggctacacat'
dna.replace('t','u').upper()
```
_(continued)_ Switch the order of the two string methods to reach the same result. What else needs to change?
```
dna.upper().replace('T','U')
```
## 10.4 Reinforcement Learning Using Deep-Learning-Based Q-Learning
- Importing the required packages
```
# Basic packages
import numpy as np
import random
from collections import deque
import matplotlib.pyplot as plt
# 강화학습 환경 패키지
import gym
# 인공지능 패키지: 텐서플로, 케라스
# 호환성을 위해 텐스플로에 포함된 케라스를 불러옴
import tensorflow as tf # v2.4.1 at 7/25/2021
from tensorflow import keras # v2.4.0 at 7/25/2021
from tensorflow.keras import Model, Input
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
```
- Building a neural network for the Q function
```
def create_q_model(num_states, num_actions):
inputs = Input(shape=(num_states,))
layer = Dense(32, activation="relu")(inputs)
layer = Dense(16, activation="relu")(layer)
action = Dense(num_actions, activation="linear")(layer)
return Model(inputs=inputs, outputs=action)
model = create_q_model(4,2)
model.summary()
```
- Writing the code needed to train the Q-function network
```
def get_env_model(id='MountainCar-v0'):
env = gym.make(id)
num_states = env.observation_space.shape[0]
num_actions = env.action_space.n
model = create_q_model(num_states, num_actions)
return env, model
def train(model, env):
state_size = env.observation_space.shape[0]
action_size = env.action_space.n
states = np.zeros((10,state_size), dtype=np.float32)
with tf.GradientTape() as tape:
predicts = model(states)
env, model = get_env_model()
train(model, env)
print('Simple processing used in training is completed!')
env_cartpole = gym.make('CartPole-v1')
print('CartPole-v1: ', env_cartpole.observation_space.shape, env_cartpole.action_space.n)
env_mountaincar = gym.make('MountainCar-v0')
print('MountainCar-v0: ', env_mountaincar.observation_space.shape, env_mountaincar.action_space.n)
class World_00:
def __init__(self):
self.get_env_model()
def get_env_model(self):
self.env = gym.make('MountainCar-v0')
        self.num_states = self.env.observation_space.shape[0]
        self.num_actions = self.env.action_space.n
self.model = create_q_model(self.num_states, self.num_actions)
# print(self.model.summary())
def train(self):
states = np.zeros((10,self.num_states), dtype=np.float32)
with tf.GradientTape() as tape:
predicts = self.model(states)
new_world = World_00()
new_world.train()
print('Simple processing used in training is completed!')
def env_test_model_memory(memory, env, model, n_episodes=1000,
flag_render=False):
for e in range(n_episodes):
done = False
score = 0
s = env.reset()
while not done:
s_array = np.array(s).reshape((1,-1))
Qsa = model.predict(s_array)[0]
a = np.argmax(Qsa)
next_s, r, done, _ = env.step(a)
if flag_render:
env.render()
score += r
memory.append([s,a,r,next_s,done])
print(f'Episode: {e:5d} --> Score: {score:3.1f}')
print('Notice that the max score is set to 500.0 in CartPole-v1')
def list_rotate(l):
return list(zip(*l))
class World_01(World_00):
def __init__(self):
World_00.__init__(self)
self.memory = deque(maxlen=2000)
self.N_batch = 64
self.t_model = create_q_model(self.num_states, self.num_actions)
self.discount_factor = 0.99
self.learning_rate = 0.001
        self.optimizer = Adam(learning_rate=self.learning_rate)
def trial(self, flag_render=False):
env_test_model_memory(self.memory, self.env,
self.model, n_episodes=10, flag_render=flag_render)
print(len(self.memory))
def train_memory(self):
if len(self.memory) >= self.N_batch:
memory_batch = random.sample(self.memory, self.N_batch)
s_l,a_l,r_l,next_s_l,done_l = [np.array(x) for x in list_rotate(memory_batch)]
model_w = self.model.trainable_variables
with tf.GradientTape() as tape:
Qsa_pred_l = self.model(s_l.astype(np.float32))
a_l_onehot = tf.one_hot(a_l, self.num_actions)
Qs_a_pred_l = tf.reduce_sum(a_l_onehot * Qsa_pred_l,
axis=1)
Qsa_tpred_l = self.t_model(next_s_l.astype(np.float32))
Qsa_tpred_l = tf.stop_gradient(Qsa_tpred_l)
max_Q_next_s_a_l = np.amax(Qsa_tpred_l, axis=-1)
Qs_a_l = r_l + (1 - done_l) * self.discount_factor * max_Q_next_s_a_l
loss = tf.reduce_mean(tf.square(Qs_a_l - Qs_a_pred_l))
grads = tape.gradient(loss, model_w)
self.optimizer.apply_gradients(zip(grads, model_w))
new_world = World_01()
new_world.trial()
new_world.train_memory()
new_world.env.close()
print('Completed!')
class World_02(World_01):
def __init__(self):
World_01.__init__(self)
self.epsilon = 0.2
def update_t_model(self):
self.t_model.set_weights(self.model.get_weights())
def best_action(self, s):
if random.random() <= self.epsilon:
return random.randrange(self.num_actions)
else:
s_array = np.array(s).reshape((1,-1))
Qsa = self.model.predict(s_array)[0]
return np.argmax(Qsa)
def trials(self, n_episodes=100, flag_render=False):
memory = self.memory
env = self.env
model = self.model
score_l = []
for e in range(n_episodes):
done = False
score = 0
s = env.reset()
while not done:
a = self.best_action(s)
next_s, r, done, _ = env.step(a)
if flag_render:
env.render()
score += r
memory.append([s,a,r,next_s,done])
# self.train_memory()
s = next_s
self.train_memory()
self.update_t_model()
print(f'Episode: {e:5d} --> Score: {score:3.1f}')
score_l.append(score)
return score_l
new_world = World_02()
score_l = new_world.trials(n_episodes=50)
new_world.env.close()
np.save('score_l.npy', score_l)
```
---
### Full Code (Split Version)
```
l = [[1,2],[3,4],[5,6]]
list(zip(*l))
# Basic packages
import numpy as np
import random
from collections import deque
import matplotlib.pyplot as plt
# Reinforcement learning environment package
import gym
# AI packages: TensorFlow and Keras
# For compatibility, import the Keras bundled with TensorFlow
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Model, Input
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
def create_q_model(num_states, num_actions):
inputs = Input(shape=(num_states,))
layer = Dense(32, activation="relu")(inputs)
layer = Dense(16, activation="relu")(layer)
action = Dense(num_actions, activation="linear")(layer)
return Model(inputs=inputs, outputs=action)
def list_rotate(l):
return list(zip(*l))
class WorldFull():
def __init__(self):
self.get_env_model() #?
self.memory = deque(maxlen=2000)
self.N_batch = 64
self.t_model = create_q_model(self.num_states, self.num_actions)
self.discount_factor = 0.99
self.learning_rate = 0.001
        self.optimizer = Adam(learning_rate=self.learning_rate)
self.epsilon = 0.2
def get_env_model(self):
self.env = gym.make('CartPole-v1')
self.num_states = self.env.observation_space.shape[0]
self.num_actions = self.env.action_space.n
self.model = create_q_model(self.num_states, self.num_actions)
def update_t_model(self):
self.t_model.set_weights(self.model.get_weights())
def best_action(self, s):
if random.random() <= self.epsilon:
return random.randrange(self.num_actions)
else:
s_array = np.array(s).reshape((1,-1))
Qsa = self.model.predict(s_array)[0]
return np.argmax(Qsa)
def train_memory(self):
if len(self.memory) >= self.N_batch:
memory_batch = random.sample(self.memory, self.N_batch)
s_l,a_l,r_l,next_s_l,done_l = [np.array(x) for x in list_rotate(memory_batch)]
model_w = self.model.trainable_variables
with tf.GradientTape() as tape:
Qsa_pred_l = self.model(s_l.astype(np.float32))
a_l_onehot = tf.one_hot(a_l, self.num_actions)
Qs_a_pred_l = tf.reduce_sum(a_l_onehot * Qsa_pred_l,
axis=1)
Qsa_tpred_l = self.t_model(next_s_l.astype(np.float32))
Qsa_tpred_l = tf.stop_gradient(Qsa_tpred_l)
max_Q_next_s_a_l = np.amax(Qsa_tpred_l, axis=-1)
Qs_a_l = r_l + (1 - done_l) * self.discount_factor * max_Q_next_s_a_l
loss = tf.reduce_mean(tf.square(Qs_a_l - Qs_a_pred_l))
grads = tape.gradient(loss, model_w)
self.optimizer.apply_gradients(zip(grads, model_w))
def trials(self, n_episodes=100, flag_render=False):
memory = self.memory
env = self.env
model = self.model
score_l = []
for e in range(n_episodes):
done = False
score = 0
s = env.reset()
while not done:
a = self.best_action(s)
next_s, r, done, _ = env.step(a)
if flag_render:
env.render()
score += r
memory.append([s,a,r,next_s,done])
# self.train_memory()
s = next_s
self.train_memory()
self.update_t_model()
print(f'Episode: {e:5d} --> Score: {score:3.1f}')
score_l.append(score)
return score_l
new_world = WorldFull()
score_l = new_world.trials(n_episodes=100)
new_world.env.close()
np.save('score_l.npy', score_l)
print('Job completed!')
plt.plot(score_l)
plt.title("Deep Q-Learning for Cartpole")
plt.xlabel("Episode")
plt.ylabel("Score")
```
---
### Full Code
```
"""
ENV: MountainCar
- 2nd hidden layer: 16 --> 32
"""
# Basic packages
import numpy as np
import random
from collections import deque
import matplotlib.pyplot as plt
# Reinforcement learning environment package
import gym
# AI packages: TensorFlow and Keras
# For compatibility, import the Keras bundled with TensorFlow
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Model, Input
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
def create_q_model(num_states, num_actions):
inputs = Input(shape=(num_states,))
layer = Dense(32, activation="relu")(inputs)
layer = Dense(32, activation="relu")(layer)
action = Dense(num_actions, activation="linear")(layer)
return Model(inputs=inputs, outputs=action)
def list_rotate(l):
return list(zip(*l))
class WorldFull():
def __init__(self):
self.get_env_model() #?
self.memory = deque(maxlen=2000)
self.N_batch = 64
self.t_model = create_q_model(self.num_states, self.num_actions)
self.discount_factor = 0.99
self.learning_rate = 0.001
        self.optimizer = Adam(learning_rate=self.learning_rate)
self.epsilon = 0.05
def get_env_model(self):
self.env = gym.make('MountainCar-v0')
self.num_states = self.env.observation_space.shape[0]
self.num_actions = self.env.action_space.n
self.model = create_q_model(self.num_states, self.num_actions)
def update_t_model(self):
self.t_model.set_weights(self.model.get_weights())
def best_action(self, s):
if random.random() <= self.epsilon:
return random.randrange(self.num_actions)
else:
s_array = np.array(s).reshape((1,-1))
Qsa = self.model.predict(s_array)[0]
return np.argmax(Qsa)
def train_memory(self):
if len(self.memory) >= self.N_batch:
memory_batch = random.sample(self.memory, self.N_batch)
s_l,a_l,r_l,next_s_l,done_l = [np.array(x) for x in list_rotate(memory_batch)]
model_w = self.model.trainable_variables
with tf.GradientTape() as tape:
Qsa_pred_l = self.model(s_l.astype(np.float32))
a_l_onehot = tf.one_hot(a_l, self.num_actions)
Qs_a_pred_l = tf.reduce_sum(a_l_onehot * Qsa_pred_l,
axis=1)
Qsa_tpred_l = self.t_model(next_s_l.astype(np.float32))
Qsa_tpred_l = tf.stop_gradient(Qsa_tpred_l)
max_Q_next_s_a_l = np.amax(Qsa_tpred_l, axis=-1)
Qs_a_l = r_l + (1 - done_l) * self.discount_factor * max_Q_next_s_a_l
loss = tf.reduce_mean(tf.square(Qs_a_l - Qs_a_pred_l))
grads = tape.gradient(loss, model_w)
self.optimizer.apply_gradients(zip(grads, model_w))
def trials(self, n_episodes=100, flag_render=False):
memory = self.memory
env = self.env
model = self.model
score_l = []
for e in range(n_episodes):
done = False
score = 0
s = env.reset()
while not done:
a = self.best_action(s)
next_s, r, done, _ = env.step(a)
if flag_render:
env.render()
score += r
memory.append([s,a,r,next_s,done])
# self.train_memory()
s = next_s
self.train_memory()
self.update_t_model()
print(f'Episode: {e:5d} --> Score: {score:3.1f}')
score_l.append(score)
return score_l
new_world = WorldFull()
score_l = new_world.trials(n_episodes=100)
new_world.env.close()
np.save('score_l.npy', score_l)
print('Job completed!')
plt.plot(score_l)
plt.title("Deep Q-Learning for MountainCar")
plt.xlabel("Episode")
plt.ylabel("Score")
```
#1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
```
!pip install git+https://github.com/google/starthinker
```
#2. Get Cloud Project ID
Running this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md). This only needs to be done once; then click play.
```
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
```
#3. Get Client Credentials
Reading and writing to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md). This only needs to be done once; then click play.
```
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
```
#4. Enter SmartSheet Report To BigQuery Parameters
Move report data into a BigQuery table.
1. Specify <a href='https://smartsheet-platform.github.io/api-docs/' target='_blank'>SmartSheet Report</a> token.
1. Locate the ID of a report by viewing its properties.
1. Provide a BigQuery dataset (must exist) and table to write the data into.
1. StarThinker will automatically map the correct schema.
Modify the values below for your use case; this can be done multiple times. Then click play.
```
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'auth_write': 'service', # Credentials used for writing data.
'token': '', # Retrieve from SmartSheet account settings.
'report': '', # Retrieve from report properties.
'dataset': '', # Existing BigQuery dataset.
'table': '', # Table to create from this report.
'schema': '', # Schema provided in JSON list format or leave empty to auto detect.
}
print("Parameters Set To: %s" % FIELDS)
```
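The `schema` parameter, when not left empty for auto-detection, expects a BigQuery schema as a JSON list. A hypothetical example of that shape (the column names here are invented; yours come from your SmartSheet report):

```python
import json

# Hypothetical two-column schema in BigQuery's JSON list format.
example_schema = [
    {"name": "row_id", "type": "INTEGER", "mode": "NULLABLE"},
    {"name": "status", "type": "STRING", "mode": "NULLABLE"},
]

# FIELDS['schema'] would then be set to the serialized list:
print(json.dumps(example_schema))
```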
#5. Execute SmartSheet Report To BigQuery
This does NOT need to be modified unless you are changing the recipe; just click play.
```
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields, json_expand_includes
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'smartsheet': {
'auth': 'user',
'token': {'field': {'name': 'token','kind': 'string','order': 2,'default': '','description': 'Retrieve from SmartSheet account settings.'}},
'report': {'field': {'name': 'report','kind': 'string','order': 3,'description': 'Retrieve from report properties.'}},
'out': {
'bigquery': {
'auth': 'user',
'dataset': {'field': {'name': 'dataset','kind': 'string','order': 4,'default': '','description': 'Existing BigQuery dataset.'}},
'table': {'field': {'name': 'table','kind': 'string','order': 5,'default': '','description': 'Table to create from this report.'}},
'schema': {'field': {'name': 'schema','kind': 'json','order': 6,'description': 'Schema provided in JSON list format or leave empty to auto detect.'}}
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
json_expand_includes(TASKS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
```
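Conceptually, `json_set_fields` walks the TASKS tree and replaces each `{'field': {...}}` placeholder with the matching entry from FIELDS, falling back to the field's `default`. A rough, self-contained sketch of that substitution, based only on the structure shown above (not StarThinker's actual implementation):

```python
def set_fields(node, fields):
    """Recursively replace {'field': {...}} placeholders with values from fields."""
    if isinstance(node, dict):
        for key, value in node.items():
            if isinstance(value, dict) and 'field' in value:
                spec = value['field']
                node[key] = fields.get(spec['name'], spec.get('default'))
            else:
                set_fields(value, fields)
    elif isinstance(node, list):
        for item in node:
            set_fields(item, fields)

# Tiny stand-in for the TASKS structure above.
tasks = [{'smartsheet': {
    'token': {'field': {'name': 'token', 'kind': 'string', 'default': ''}},
    'report': {'field': {'name': 'report', 'kind': 'string'}},
}}]
set_fields(tasks, {'token': 'abc123', 'report': '42'})
print(tasks)  # placeholders replaced in place
```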
```
import time
import pandas as pd
import numpy as np
from keras.layers.core import Dense, Activation, Dropout
from keras.layers.recurrent import LSTM
from keras.models import Sequential
from sklearn.metrics import mean_squared_error
from sklearn.utils import shuffle
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
epochs = [30, 50, 100, 200]
epoch = 200
# Load data
train = pd.read_csv('trips_train_3.csv', header=None)
test = pd.read_csv('trips_test_2.csv', header=None )
scaler = MinMaxScaler(feature_range=(-1, 1))
window_size = 78 # 78 steps in one day
# normalize features
scaled = scaler.fit_transform(train.values)
train = pd.DataFrame(scaled)
series_s = train.copy()
for i in range(window_size):
train = pd.concat([train, series_s.shift(-(i+1))], axis=1)
train.dropna(axis=0, inplace=True)
# Do the same for the test data
test = test.iloc[:24624, :] # The rest are all 0s
scaled = scaler.fit_transform(test.values)
test = pd.DataFrame(scaled)
series_s = test.copy()
for i in range(window_size):
test = pd.concat([test, series_s.shift(-(i+1))], axis = 1)
test.dropna(axis=0, inplace=True)
train = shuffle(train)
train_X = train.iloc[:,:-1]
train_y = train.iloc[:,-1]
test_X = test.iloc[:,:-1]
test_y = test.iloc[:,-1]
train_X = train_X.values
train_y = train_y.values
test_X = test_X.values
test_y = test_y.values
train_X = train_X.reshape(train_X.shape[0],train_X.shape[1],1)
test_X = test_X.reshape(test_X.shape[0],test_X.shape[1],1)
# Define the LSTM model
model = Sequential()
model.add(LSTM(input_shape=(window_size,1), output_dim=window_size, return_sequences=True))
model.add(Dropout(0.5))
model.add(LSTM(256))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation("linear"))
model.compile(loss="mse", optimizer="adam")
model.summary()
# Train
start = time.time()
model.fit(train_X, train_y, batch_size=100, epochs=epoch, validation_split=0.1)
print("> Compilation Time : ", time.time() - start)
def moving_test_window_preds(n_future_preds):
''' n_future_preds - Represents the number of future predictions we want to make
This coincides with the number of windows that we will move forward
on the test data
'''
preds_moving = [] # Use this to store the prediction made on each test window
moving_test_window = [test_X[0,:].tolist()] # Creating the first test window
moving_test_window = np.array(moving_test_window) # Making it a numpy array
for i in range(n_future_preds):
preds_one_step = model.predict(moving_test_window) # Note that this is already a scaled prediction so no need to rescale this
preds_moving.append(preds_one_step[0,0]) # get the value from the numpy 2D array and append to predictions
preds_one_step = preds_one_step.reshape(1,1,1) # Reshaping the prediction to 3D array for concatenation with moving test window
moving_test_window = np.concatenate((moving_test_window[:,1:,:], preds_one_step), axis=1) # This is the new moving test window, where the first element from the window has been removed and the prediction has been appended to the end
preds_moving = scaler.inverse_transform(np.array(preds_moving).reshape(-1, 1))
return preds_moving
preds_moving = moving_test_window_preds(500)
actuals = scaler.inverse_transform(test_y.reshape(-1, 1))
mse = mean_squared_error(actuals[74:150], preds_moving[74:150])
mae = mean_absolute_error(actuals[74:150], preds_moving[74:150])
# Save data
with open('f_%s_%s_%s.txt' % (epoch, mse, mae), 'w') as f:
for i in preds_moving:
f.write("%s\n" % i)
from matplotlib import pyplot
pyplot.figure(figsize=(20,6))
pyplot.plot(actuals[:600])
pyplot.plot(preds_moving[:600])
pyplot.title("200 epochs")
pyplot.show()
with open("f_30_191360.34899787782_369.8666947640871.txt") as f:
pyplot.figure(figsize=(20,6))
pyplot.plot(actuals[:600])
pyplot.plot(np.array([float(l[1:-2]) for l in f.readlines()]))
pyplot.title("30 epochs")
pyplot.show()
with open("f_50_23261.834132086205_116.21821095441517.txt") as f:
pyplot.figure(figsize=(20,6))
pyplot.plot(actuals[:600])
pyplot.plot(np.array([float(l[1:-2]) for l in f.readlines()]))
pyplot.title("50 epochs")
pyplot.show()
with open("f_100_8694.5463661338_66.40398304085983.txt") as f:
pyplot.figure(figsize=(20,6))
pyplot.plot(actuals[:600])
pyplot.plot(np.array([float(l[1:-2]) for l in f.readlines()]))
pyplot.title("100 epochs")
pyplot.show()
res = model.predict(np.array([scaler.fit_transform(actuals[:78])]))
pyplot.figure(figsize=(20,6))
pyplot.plot(actuals[:156])
pyplot.plot(np.concatenate(([[0] for i in range(78)], scaler.inverse_transform(res))))
pyplot.show()
def predict_next(info, n):
res = []
for i in range(n):
base = np.concatenate((scaler.fit_transform(info[i:78]), res)) if res else scaler.fit_transform(info[:78])
pred = model.predict(np.array([base]))
res.append(pred[0])
return res
data_temblor = np.array([[int(l.strip())] for l in open("trips_19_sept.csv").readlines() if l.strip()])
pyplot.figure(figsize=(20,6))
pyplot.plot(data_temblor[:156])
pyplot.plot(np.concatenate(([[0] for i in range(78)], scaler.inverse_transform(predict_next(data_temblor, 50)))))
pyplot.show()
n = 40
pyplot.figure(figsize=(20,6))
pyplot.plot(data_temblor[:560])
pyplot.plot(np.concatenate(([[0] for i in range(78 + n)], scaler.inverse_transform(predict_next(data_temblor[n:], 20)))))
pyplot.show()
pyplot.figure(figsize=(20,6))
pyplot.plot(actuals[0:76])
pyplot.plot(actuals[76:152])
pyplot.plot(actuals[152:228])
pyplot.plot(actuals[228:304])
# for d in range(5):
# pyplot.plot(actuals[78*d:78*(d+1)])
pyplot.show()
```
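The shift-and-concat windowing used above (concatenating `window_size` shifted copies of the series, then dropping the NaN tail) is easiest to see on a toy series; this sketch uses made-up numbers, not the notebook's trip data:

```python
import pandas as pd

# Toy series with window_size=3: same trick as in the notebook.
series = pd.DataFrame([10, 20, 30, 40, 50])
window_size = 3
windows = series.copy()
shifted = series.copy()
for i in range(window_size):
    # shift(-(i+1)) pulls future values up, filling the tail with NaN
    windows = pd.concat([windows, shifted.shift(-(i + 1))], axis=1)
windows.dropna(axis=0, inplace=True)

# Each surviving row is [x_t, x_{t+1}, x_{t+2}, x_{t+3}];
# the last column becomes the prediction target.
print(windows.values)
```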
# **Swin Transformer: Hierarchical Vision Transformer using Shifted Windows**
**Swin Transformer (ICCV 2021 best paper award (Marr Prize))**
**Authors {v-zeliu1,v-yutlin,yuecao,hanhu,v-yixwe,zhez,stevelin,bainguo}@microsoft.com**
**Official Github**: https://github.com/microsoft/Swin-Transformer
---
**Edited By Su Hyung Choi - [Computer Vision Paper Reviews]**
**[Github: @JonyChoi]** https://github.com/jonychoi/Computer-Vision-Paper-Reviews
Edited Jan 4 2022
---
## **About Swin Transformer**
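The heart of Swin is restricting self-attention to non-overlapping local windows, and partitioning a `(B, H, W, C)` feature map into windows is just a reshape/transpose. A small numpy sketch of the same index manipulation that `window_partition` performs in the code below (tiny invented sizes for illustration):

```python
import numpy as np

B, H, W, C, ws = 1, 4, 4, 1, 2  # tiny 4x4 feature map, window size 2
x = np.arange(B * H * W * C).reshape(B, H, W, C)

# (B, H, W, C) -> (B, H/ws, ws, W/ws, ws, C) -> (num_windows*B, ws, ws, C)
windows = (x.reshape(B, H // ws, ws, W // ws, ws, C)
            .transpose(0, 1, 3, 2, 4, 5)
            .reshape(-1, ws, ws, C))
print(windows.shape)        # four 2x2 windows
print(windows[0, :, :, 0])  # top-left window of the feature map
```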
```
# --------------------------------------------------------
# Swin Transformer
# Copyright (c) 2021 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ze Liu
# --------------------------------------------------------
import torch
import torch.nn as nn
import torch.utils.checkpoint as checkpoint
from timm.models.layers import DropPath, to_2tuple, trunc_normal_
class Mlp(nn.Module):
def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
super().__init__()
out_features = out_features or in_features
hidden_features = hidden_features or in_features
self.fc1 = nn.Linear(in_features, hidden_features)
self.act = act_layer()
self.fc2 = nn.Linear(hidden_features, out_features)
self.drop = nn.Dropout(drop)
def forward(self, x):
x = self.fc1(x)
x = self.act(x)
x = self.drop(x)
x = self.fc2(x)
x = self.drop(x)
return x
def window_partition(x, window_size):
"""
Args:
x: (B, H, W, C)
window_size (int): window size
Returns:
windows: (num_windows*B, window_size, window_size, C)
"""
B, H, W, C = x.shape
x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
return windows
def window_reverse(windows, window_size, H, W):
"""
Args:
windows: (num_windows*B, window_size, window_size, C)
window_size (int): Window size
H (int): Height of image
W (int): Width of image
Returns:
x: (B, H, W, C)
"""
B = int(windows.shape[0] / (H * W / window_size / window_size))
x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
return x
class WindowAttention(nn.Module):
r""" Window based multi-head self attention (W-MSA) module with relative position bias.
It supports both shifted and non-shifted windows.
Args:
dim (int): Number of input channels.
window_size (tuple[int]): The height and width of the window.
num_heads (int): Number of attention heads.
qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
proj_drop (float, optional): Dropout ratio of output. Default: 0.0
"""
def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
super().__init__()
self.dim = dim
self.window_size = window_size # Wh, Ww
self.num_heads = num_heads
head_dim = dim // num_heads
self.scale = qk_scale or head_dim ** -0.5
# define a parameter table of relative position bias
self.relative_position_bias_table = nn.Parameter(
torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH
# get pair-wise relative position index for each token inside the window
coords_h = torch.arange(self.window_size[0])
coords_w = torch.arange(self.window_size[1])
coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
relative_coords[:, :, 1] += self.window_size[1] - 1
relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
self.register_buffer("relative_position_index", relative_position_index)
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
trunc_normal_(self.relative_position_bias_table, std=.02)
self.softmax = nn.Softmax(dim=-1)
def forward(self, x, mask=None):
"""
Args:
x: input features with shape of (num_windows*B, N, C)
mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
"""
B_, N, C = x.shape
qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
q = q * self.scale
attn = (q @ k.transpose(-2, -1))
relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
attn = attn + relative_position_bias.unsqueeze(0)
if mask is not None:
nW = mask.shape[0]
attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
attn = attn.view(-1, self.num_heads, N, N)
attn = self.softmax(attn)
else:
attn = self.softmax(attn)
attn = self.attn_drop(attn)
x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
x = self.proj(x)
x = self.proj_drop(x)
return x
def extra_repr(self) -> str:
return f'dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}'
def flops(self, N):
# calculate flops for 1 window with token length of N
flops = 0
# qkv = self.qkv(x)
flops += N * self.dim * 3 * self.dim
# attn = (q @ k.transpose(-2, -1))
flops += self.num_heads * N * (self.dim // self.num_heads) * N
# x = (attn @ v)
flops += self.num_heads * N * N * (self.dim // self.num_heads)
# x = self.proj(x)
flops += N * self.dim * self.dim
return flops
class SwinTransformerBlock(nn.Module):
r""" Swin Transformer Block.
Args:
dim (int): Number of input channels.
input_resolution (tuple[int]): Input resolution.
num_heads (int): Number of attention heads.
window_size (int): Window size.
shift_size (int): Shift size for SW-MSA.
mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
drop (float, optional): Dropout rate. Default: 0.0
attn_drop (float, optional): Attention dropout rate. Default: 0.0
drop_path (float, optional): Stochastic depth rate. Default: 0.0
act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
"""
def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0,
mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
act_layer=nn.GELU, norm_layer=nn.LayerNorm):
super().__init__()
self.dim = dim
self.input_resolution = input_resolution
self.num_heads = num_heads
self.window_size = window_size
self.shift_size = shift_size
self.mlp_ratio = mlp_ratio
if min(self.input_resolution) <= self.window_size:
# if window size is larger than input resolution, we don't partition windows
self.shift_size = 0
self.window_size = min(self.input_resolution)
assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
self.norm1 = norm_layer(dim)
self.attn = WindowAttention(
dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.norm2 = norm_layer(dim)
mlp_hidden_dim = int(dim * mlp_ratio)
self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
if self.shift_size > 0:
# calculate attention mask for SW-MSA
H, W = self.input_resolution
img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1
h_slices = (slice(0, -self.window_size),
slice(-self.window_size, -self.shift_size),
slice(-self.shift_size, None))
w_slices = (slice(0, -self.window_size),
slice(-self.window_size, -self.shift_size),
slice(-self.shift_size, None))
cnt = 0
for h in h_slices:
for w in w_slices:
img_mask[:, h, w, :] = cnt
cnt += 1
mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
else:
attn_mask = None
self.register_buffer("attn_mask", attn_mask)
def forward(self, x):
H, W = self.input_resolution
B, L, C = x.shape
assert L == H * W, "input feature has wrong size"
shortcut = x
x = self.norm1(x)
x = x.view(B, H, W, C)
# cyclic shift
if self.shift_size > 0:
shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
else:
shifted_x = x
# partition windows
x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
# W-MSA/SW-MSA
attn_windows = self.attn(x_windows, mask=self.attn_mask) # nW*B, window_size*window_size, C
# merge windows
attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C
# reverse cyclic shift
if self.shift_size > 0:
x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
else:
x = shifted_x
x = x.view(B, H * W, C)
# FFN
x = shortcut + self.drop_path(x)
x = x + self.drop_path(self.mlp(self.norm2(x)))
return x
def extra_repr(self) -> str:
return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \
f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}"
def flops(self):
flops = 0
H, W = self.input_resolution
# norm1
flops += self.dim * H * W
# W-MSA/SW-MSA
nW = H * W / self.window_size / self.window_size
flops += nW * self.attn.flops(self.window_size * self.window_size)
# mlp
flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio
# norm2
flops += self.dim * H * W
return flops
class PatchMerging(nn.Module):
r""" Patch Merging Layer.
Args:
input_resolution (tuple[int]): Resolution of input feature.
dim (int): Number of input channels.
norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
"""
def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm):
super().__init__()
self.input_resolution = input_resolution
self.dim = dim
self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
self.norm = norm_layer(4 * dim)
def forward(self, x):
"""
x: B, H*W, C
"""
H, W = self.input_resolution
B, L, C = x.shape
assert L == H * W, "input feature has wrong size"
assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) is not even."
x = x.view(B, H, W, C)
x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
x = self.norm(x)
x = self.reduction(x)
return x
def extra_repr(self) -> str:
return f"input_resolution={self.input_resolution}, dim={self.dim}"
def flops(self):
H, W = self.input_resolution
flops = H * W * self.dim
flops += (H // 2) * (W // 2) * 4 * self.dim * 2 * self.dim
return flops
class BasicLayer(nn.Module):
""" A basic Swin Transformer layer for one stage.
Args:
dim (int): Number of input channels.
input_resolution (tuple[int]): Input resolution.
depth (int): Number of blocks.
num_heads (int): Number of attention heads.
window_size (int): Local window size.
mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
drop (float, optional): Dropout rate. Default: 0.0
attn_drop (float, optional): Attention dropout rate. Default: 0.0
drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
"""
def __init__(self, dim, input_resolution, depth, num_heads, window_size,
mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False):
super().__init__()
self.dim = dim
self.input_resolution = input_resolution
self.depth = depth
self.use_checkpoint = use_checkpoint
# build blocks
self.blocks = nn.ModuleList([
SwinTransformerBlock(dim=dim, input_resolution=input_resolution,
num_heads=num_heads, window_size=window_size,
shift_size=0 if (i % 2 == 0) else window_size // 2,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias, qk_scale=qk_scale,
drop=drop, attn_drop=attn_drop,
drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
norm_layer=norm_layer)
for i in range(depth)])
# patch merging layer
if downsample is not None:
self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer)
else:
self.downsample = None
def forward(self, x):
for blk in self.blocks:
if self.use_checkpoint:
x = checkpoint.checkpoint(blk, x)
else:
x = blk(x)
if self.downsample is not None:
x = self.downsample(x)
return x
def extra_repr(self) -> str:
return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}"
def flops(self):
flops = 0
for blk in self.blocks:
flops += blk.flops()
if self.downsample is not None:
flops += self.downsample.flops()
return flops
class PatchEmbed(nn.Module):
r""" Image to Patch Embedding
Args:
img_size (int): Image size. Default: 224.
patch_size (int): Patch token size. Default: 4.
in_chans (int): Number of input image channels. Default: 3.
embed_dim (int): Number of linear projection output channels. Default: 96.
norm_layer (nn.Module, optional): Normalization layer. Default: None
"""
def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
super().__init__()
img_size = to_2tuple(img_size)
patch_size = to_2tuple(patch_size)
patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
self.img_size = img_size
self.patch_size = patch_size
self.patches_resolution = patches_resolution
self.num_patches = patches_resolution[0] * patches_resolution[1]
self.in_chans = in_chans
self.embed_dim = embed_dim
self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
if norm_layer is not None:
self.norm = norm_layer(embed_dim)
else:
self.norm = None
def forward(self, x):
B, C, H, W = x.shape
# FIXME look at relaxing size constraints
assert H == self.img_size[0] and W == self.img_size[1], \
f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
x = self.proj(x).flatten(2).transpose(1, 2) # B Ph*Pw C
if self.norm is not None:
x = self.norm(x)
return x
def flops(self):
Ho, Wo = self.patches_resolution
flops = Ho * Wo * self.embed_dim * self.in_chans * (self.patch_size[0] * self.patch_size[1])
if self.norm is not None:
flops += Ho * Wo * self.embed_dim
return flops
class SwinTransformer(nn.Module):
r""" Swin Transformer
A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` -
https://arxiv.org/pdf/2103.14030
Args:
img_size (int | tuple(int)): Input image size. Default 224
patch_size (int | tuple(int)): Patch size. Default: 4
in_chans (int): Number of input image channels. Default: 3
num_classes (int): Number of classes for classification head. Default: 1000
embed_dim (int): Patch embedding dimension. Default: 96
depths (tuple(int)): Depth of each Swin Transformer layer.
num_heads (tuple(int)): Number of attention heads in different layers.
window_size (int): Window size. Default: 7
mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None
drop_rate (float): Dropout rate. Default: 0
attn_drop_rate (float): Attention dropout rate. Default: 0
drop_path_rate (float): Stochastic depth rate. Default: 0.1
norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
patch_norm (bool): If True, add normalization after patch embedding. Default: True
use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False
"""
def __init__(self, img_size=224, patch_size=4, in_chans=3, num_classes=1000,
embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24],
window_size=7, mlp_ratio=4., qkv_bias=True, qk_scale=None,
drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1,
norm_layer=nn.LayerNorm, ape=False, patch_norm=True,
use_checkpoint=False, **kwargs):
super().__init__()
self.num_classes = num_classes
self.num_layers = len(depths)
self.embed_dim = embed_dim
self.ape = ape
self.patch_norm = patch_norm
self.num_features = int(embed_dim * 2 ** (self.num_layers - 1))
self.mlp_ratio = mlp_ratio
# split image into non-overlapping patches
self.patch_embed = PatchEmbed(
img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim,
norm_layer=norm_layer if self.patch_norm else None)
num_patches = self.patch_embed.num_patches
patches_resolution = self.patch_embed.patches_resolution
self.patches_resolution = patches_resolution
# absolute position embedding
if self.ape:
self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
trunc_normal_(self.absolute_pos_embed, std=.02)
self.pos_drop = nn.Dropout(p=drop_rate)
# stochastic depth
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
# build layers
self.layers = nn.ModuleList()
for i_layer in range(self.num_layers):
layer = BasicLayer(dim=int(embed_dim * 2 ** i_layer),
input_resolution=(patches_resolution[0] // (2 ** i_layer),
patches_resolution[1] // (2 ** i_layer)),
depth=depths[i_layer],
num_heads=num_heads[i_layer],
window_size=window_size,
mlp_ratio=self.mlp_ratio,
qkv_bias=qkv_bias, qk_scale=qk_scale,
drop=drop_rate, attn_drop=attn_drop_rate,
drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],
norm_layer=norm_layer,
downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
use_checkpoint=use_checkpoint)
self.layers.append(layer)
self.norm = norm_layer(self.num_features)
self.avgpool = nn.AdaptiveAvgPool1d(1)
self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity()
self.apply(self._init_weights)
def _init_weights(self, m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=.02)
if isinstance(m, nn.Linear) and m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
@torch.jit.ignore
def no_weight_decay(self):
return {'absolute_pos_embed'}
@torch.jit.ignore
def no_weight_decay_keywords(self):
return {'relative_position_bias_table'}
def forward_features(self, x):
x = self.patch_embed(x)
if self.ape:
x = x + self.absolute_pos_embed
x = self.pos_drop(x)
for layer in self.layers:
x = layer(x)
x = self.norm(x) # B L C
x = self.avgpool(x.transpose(1, 2)) # B C 1
x = torch.flatten(x, 1)
return x
def forward(self, x):
x = self.forward_features(x)
x = self.head(x)
return x
def flops(self):
flops = 0
flops += self.patch_embed.flops()
for i, layer in enumerate(self.layers):
flops += layer.flops()
flops += self.num_features * self.patches_resolution[0] * self.patches_resolution[1] // (2 ** self.num_layers)
flops += self.num_features * self.num_classes
return flops
```
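The relative-position-bias indexing in `WindowAttention` can be checked with a small numpy sketch: for a 2x2 window there are (2*2-1)^2 = 9 distinct relative offsets, and every token's offset to itself maps to the same bias-table entry. This mirrors the torch code above (numpy needs `indexing='ij'` to match `torch.meshgrid`):

```python
import numpy as np

ws = (2, 2)  # window height, width
coords = np.stack(np.meshgrid(np.arange(ws[0]), np.arange(ws[1]), indexing='ij'))
coords_flat = coords.reshape(2, -1)                      # 2, Wh*Ww
rel = coords_flat[:, :, None] - coords_flat[:, None, :]  # 2, Wh*Ww, Wh*Ww
rel = rel.transpose(1, 2, 0)                             # Wh*Ww, Wh*Ww, 2
rel[:, :, 0] += ws[0] - 1                                # shift to start from 0
rel[:, :, 1] += ws[1] - 1
rel[:, :, 0] *= 2 * ws[1] - 1
index = rel.sum(-1)                                      # Wh*Ww, Wh*Ww
print(index)
# The diagonal is constant: offset (0, 0) always hits the same table entry.
```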
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '2'
import pickle
import numpy as np
import pandas as pd
import skimage.io as io
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
import keras
from keras.applications import ResNet50
from keras.applications.resnet50 import preprocess_input
from keras.models import Model
from keras.layers import GlobalAveragePooling2D, Dense, Dropout, Activation, Input, Lambda, BatchNormalization
from keras.optimizers import Adam
from keras.utils import to_categorical
from imgaug import augmenters as iaa
from datetime import datetime
# %load keras_utils.py
import keras
import numpy as np
import skimage.io as io
class DataGenerator(keras.utils.Sequence):
'Generates data for Keras'
def __init__(self, list_IDs, labels, center_IDs=None, batch_size=32, dim=(256,256,3), shuffle=True, img_preprocess=None, img_aug = None):
'Initialization'
self.dim = dim
self.batch_size = batch_size
self.labels = labels
self.list_IDs = list_IDs
self.center_IDs = center_IDs
self.n_classes = labels.shape[1]
self.shuffle = shuffle
self.on_epoch_end()
self.indexes = list(range(0, len(self.list_IDs)))
self.img_aug = img_aug
self.img_preprocess = img_preprocess
def __len__(self):
'Denotes the number of batches per epoch'
return int(np.floor(len(self.list_IDs) / self.batch_size))
def __getitem__(self, index):
'Generate one batch of data'
# Generate indexes of the batch
indexes = self.indexes[index*self.batch_size:min((index+1)*self.batch_size, len(self.list_IDs))]
# Generates data containing batch_size samples; X : (n_samples, *dim, n_channels)
# Initialization
X = np.empty((self.batch_size, *self.dim))
Y = np.empty((self.batch_size, self.n_classes), dtype=int)
M = np.empty((self.batch_size), dtype=int)
# Generate data
for i, ID in enumerate(indexes):
# Store sample
X[i,] = io.imread(self.list_IDs[ID]).astype(float)
# Store class
Y[i,] = self.labels[ID]
if self.img_aug is not None:
X = self.img_aug.augment_images(X.astype(np.uint8))
X = self.__data_preprocess(X.astype(float))
if self.center_IDs is None:
return X, Y
else:
for i, ID in enumerate(indexes):
M[i] = self.center_IDs[ID]
return [X,M], [Y,M]
def on_epoch_end(self):
'Updates indexes after each epoch'
self.indexes = np.arange(len(self.list_IDs))
if self.shuffle == True:
np.random.shuffle(self.indexes)
def __data_preprocess(self, img):
if self.img_preprocess is None:
processed_img = img/255.0
else:
processed_img = self.img_preprocess(img)
return processed_img
FLAG_savedir = '/home/put_data/moth/metadata/5_fold/'
FLAG_sfold = 5
idx_fold = 4
FLAG_hidden = 1024
FLAG_dropout = 0.0
FLAG_base_model = 'ResNet50'
FLAG_batch_size = 32
X = pd.read_csv('/home/put_data/moth/metadata/1121_updated_metadata_flickr_summary_used_final.csv',index_col=0)
X.head()
with open(os.path.join('/home/put_data/moth/metadata/1121_Y_mean_dict.pickle'), 'rb') as handle:
Y_dict = pickle.load(handle)
FLAG_model_save = '/home/put_data/moth/code/cmchang/regression/fullcrop_dp{0}_newaug-rmhue+old_species_keras_resnet_fold_{1}_{2}'.format(int(FLAG_dropout*100), datetime.now().strftime('%Y%m%d'),
idx_fold)
if not os.path.exists(FLAG_model_save):
os.makedirs(FLAG_model_save)
print('directory: {}'.format(FLAG_model_save))
X['img_rmbg_path'] = X.Number.apply(lambda x: '/home/put_data/moth/data/whole_crop/'+str(x)+'.png')
plt.imshow(io.imread(X.img_rmbg_path[0]))
sel = list()
for i in range(X.shape[0]):
if os.path.exists(X['img_rmbg_path'][i]):
sel.append(True)
else:
sel.append(False)
X = X[sel]
Xtrain = X[(X.Species.duplicated() == False)]
Xsplit = X[(X.Species.duplicated() == True)]
print("Unique: {0}; Duplicate: {1}".format(Xtrain.shape, Xsplit.shape))
from sklearn.model_selection import train_test_split
Xmerge, Xtest = train_test_split(Xsplit, test_size = 0.2, random_state=0)
Xtrain = pd.concat([Xtrain, Xmerge])
Ytrain = np.vstack(Xtrain['Species'].apply(lambda x: Y_dict[x]))
Ytest = np.vstack(Xtest['Species'].apply(lambda x: Y_dict[x]))
print('Xtrain.shape: {0}, Ytrain.shape: {1}'.format(Xtrain.shape, Ytrain.shape))
print('Xtest.shape: {0}, Ytest.shape: {1}'.format(Xtest.shape, Ytest.shape))
Xtrain.to_csv(os.path.join(FLAG_model_save,'train.csv'), index=False)
Xtest.to_csv(os.path.join(FLAG_model_save,'test.csv'), index=False)
sometimes = lambda aug: iaa.Sometimes(0.5, aug)
augseq = iaa.Sequential([
iaa.Fliplr(0.5)
,sometimes(iaa.Affine(
        scale={"x": (0.9, 1.1), "y": (0.9, 1.1)}, # scale images to 90-110% of their size, individually per axis
        translate_percent={"x": (-0.1, 0.1), "y": (-0.1, 0.1)}, # translate by -10 to +10 percent (per axis)
        rotate=(-30, 30), # rotate by -30 to +30 degrees
cval=255 # if mode is constant, use a cval between 0 and 255
))
])
# Parameters
input_shape = (256, 256, 3)
n_classes = Ytest.shape[1]
batch_size = FLAG_batch_size
isCenterloss = False
from keras.regularizers import l2
img_input = Input(shape=input_shape)
extractor = ResNet50(input_tensor=img_input, include_top=False, weights='imagenet', pooling='avg')
x1 = Dense(FLAG_hidden)(extractor.output)
x1 = BatchNormalization()(x1)
x1 = Activation(activation='relu')(x1)
output = Dense(n_classes, activation='linear', name='output_layer')(x1)
train_params = {'dim': input_shape,
'batch_size': FLAG_batch_size,
'shuffle': True,
'img_aug': augseq,
'img_preprocess': tf.contrib.keras.applications.resnet50.preprocess_input}
valid_params = {'dim': input_shape,
'batch_size': FLAG_batch_size,
'shuffle': False,
'img_aug': None,
'img_preprocess': tf.contrib.keras.applications.resnet50.preprocess_input}
model = Model(inputs=img_input, outputs=output)
model.compile(optimizer=Adam(lr=5e-5, beta_1=0.5),
loss="mean_squared_error")
# Generators
training_generator = DataGenerator(list_IDs = list(Xtrain['img_rmbg_path']), labels = Ytrain, center_IDs = None, **train_params)
validation_generator = DataGenerator(list_IDs = list(Xtest['img_rmbg_path']), labels = Ytest, center_IDs = None, **valid_params)
model.summary()
csv_logger = keras.callbacks.CSVLogger(os.path.join(FLAG_model_save, 'training.log'))
checkpoint = keras.callbacks.ModelCheckpoint(os.path.join(FLAG_model_save, 'model.h5'),
monitor='val_loss',
verbose=1,
save_best_only=True,
save_weights_only=False,
mode='min',
period=1)
earlystop = keras.callbacks.EarlyStopping(monitor='val_loss', mode='min',patience=20,min_delta=0.01)
# Train model on dataset
model.fit_generator(generator=training_generator,
validation_data=validation_generator,
                    steps_per_epoch=int(np.ceil(Xtrain.shape[0]/FLAG_batch_size)),
                    validation_steps=int(np.ceil(Xtest.shape[0]/FLAG_batch_size)),
epochs=200,
callbacks=[csv_logger, checkpoint, earlystop])
loss = pd.read_table(csv_logger.filename, delimiter=',')
plt.plot(loss.epoch, loss.loss, label='loss')
plt.plot(loss.epoch, loss.val_loss, label='val_loss')
plt.legend()
plt.xlabel('epoch')
plt.ylabel('MSE')
plt.savefig(os.path.join(FLAG_model_save, 'loss.png'))
best_epoch = np.argmin(loss.val_loss)
header = 'model_save,base_model,batch_size,hidden,dropout,epoch,loss,val_loss\n'
row = '{0},{1},{2},{3},{4},{5},{6:.4f},{7:.4f}\n'.format(FLAG_model_save,
FLAG_base_model,
FLAG_batch_size,
FLAG_hidden,
FLAG_dropout,
best_epoch,
loss.iloc[best_epoch]['loss'],
loss.iloc[best_epoch]['val_loss'])
if os.path.exists('result_summary.csv'):
with open('result_summary.csv','a') as myfile:
myfile.write(row)
else:
with open('result_summary.csv','w') as myfile:
myfile.write(header)
myfile.write(row)
```
### evaluation over validation dataset
```
from keras.models import load_model
model = load_model(os.path.join(FLAG_model_save,'model.h5'))
TestImg = list()
for i in range(Xtest.shape[0]):
img = io.imread(list(Xtest['img_rmbg_path'])[i])
TestImg.append(img)
TestImg = np.stack(TestImg)
TestInput = preprocess_input(TestImg.astype(float))
Pred = model.predict(TestInput)
from sklearn.metrics import mean_squared_error, mean_absolute_error
from scipy.stats import pearsonr
plt.scatter(Ytest, Pred, s=0.7)
plt.xlabel('true')
plt.ylabel('pred')
plt.title('rmse={0:.4f}, cor={1:.4f}'.format(np.sqrt(mean_squared_error(y_true=Ytest, y_pred=Pred)),
pearsonr(Ytest, Pred)[0][0]))
plt.savefig(os.path.join(FLAG_model_save, 'scatter_per_img.png'))
result = pd.DataFrame({'Species':Xtest.Species,
'pred':Pred.reshape(-1),
'true':Ytest.reshape(-1)})
result.to_csv(os.path.join(FLAG_model_save, 'predictions.csv'), index=False)
```
### by species
```
Xtest = Xtest.reset_index()
Xtest.drop(columns='index', inplace=True)
Ytest = np.vstack(Xtest['Species'].apply(lambda x: Y_dict[x]))
df_species_group = Xtest.groupby('Species').apply(
lambda g: pd.Series({
'indices': g.index.tolist(),
# 'Alt_class': g['Alt_class'].unique().tolist()[0]
}))
df_species_group = df_species_group.sample(frac=1).reset_index()
display(df_species_group.head())
species_ypred = list()
species_ytest = list()
for i in range(len(df_species_group)):
tidx = df_species_group.iloc[i][1]
species_ypred.append(np.mean(Pred[tidx], axis=0))
species_ytest.append(np.mean(Ytest[tidx], axis=0))
species_ypred = np.stack(species_ypred)
species_ytest = np.stack(species_ytest)
plt.scatter(species_ytest, species_ypred, s=0.7)
plt.xlabel('true')
plt.ylabel('pred')
plt.title('rmse={0:.4f}, cor={1:.4f}'.format(mean_squared_error(y_true=species_ytest, y_pred=species_ypred)**0.5,
pearsonr(species_ytest, species_ypred)[0][0]))
plt.savefig(os.path.join(FLAG_model_save, 'scatter_per_species.png'))
result = pd.DataFrame({'Species':df_species_group.Species,
'pred':species_ypred.reshape(-1),
'true':species_ytest.reshape(-1)})
result.to_csv(os.path.join(FLAG_model_save, 'predictions_species.csv'), index=False)
```
```
import os
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/home/husein/t5/prepare/mesolitica-tpu.json'
os.environ['CUDA_VISIBLE_DEVICES'] = ''
from bigbird import modeling
from bigbird import utils
import tensorflow as tf
import numpy as np
import sentencepiece as spm
vocab = '/home/husein/b2b/sp10m.cased.t5.model'
sp = spm.SentencePieceProcessor()
sp.Load(vocab)
class Encoder:
def __init__(self, sp):
self.sp = sp
self.vocab_size = sp.GetPieceSize() + 100
def encode(self, s):
return self.sp.EncodeAsIds(s)
def decode(self, ids, strip_extraneous=False):
return self.sp.DecodeIds(list(ids))
encoder = Encoder(sp)
top_p = tf.placeholder(tf.float32, None, name = 'top_p')
temperature = tf.placeholder(tf.float32, None, name = 'temperature')
bert_config = {
# transformer basic configs
'attention_probs_dropout_prob': 0.1,
'hidden_act': 'relu',
'hidden_dropout_prob': 0.1,
'hidden_size': 512,
'initializer_range': 0.02,
'intermediate_size': 3072,
'max_position_embeddings': 4096,
'max_encoder_length': 2048,
'max_decoder_length': 512,
'num_attention_heads': 8,
'num_hidden_layers': 6,
'type_vocab_size': 2,
'scope': 'pegasus',
'use_bias': False,
'rescale_embedding': True,
'vocab_model_file': None,
# sparse mask configs
'attention_type': 'block_sparse',
'norm_type': 'prenorm',
'block_size': 64,
'num_rand_blocks': 3,
'vocab_size': 32128,
'beam_size': 1,
'alpha': 1.0,
'couple_encoder_decoder': False,
'num_warmup_steps': 10000,
'learning_rate': 0.1,
'label_smoothing': 0.0,
'optimizer': 'Adafactor',
'use_tpu': True,
'top_p': top_p,
'temperature': temperature
}
model = modeling.TransformerModel(bert_config)
X = tf.placeholder(tf.int32, [None, None])
r = model(X, training = False)
r
logits = tf.identity(r[0][2], name = 'logits')
logits
files = tf.gfile.Glob('gs://mesolitica-tpu-general/t2t-summarization-v2/data/seq2*')
batch_size = 4
data_fields = {
'inputs': tf.VarLenFeature(tf.int64),
'targets': tf.VarLenFeature(tf.int64),
}
data_len = {
'inputs': 2048,
'targets': 1024,
}
def parse(serialized_example):
features = tf.parse_single_example(
serialized_example, features = data_fields
)
for k in features.keys():
features[k] = features[k].values
features[k] = tf.pad(
features[k], [[0, data_len[k] - tf.shape(features[k])[0]]]
)
features[k].set_shape((data_len[k]))
return features
def _decode_record(example, name_to_features):
"""Decodes a record to a TensorFlow example."""
# tf.Example only supports tf.int64, but the TPU only supports tf.int32.
# So cast all int64 to int32.
for name in list(example.keys()):
t = example[name]
if t.dtype == tf.int64:
t = tf.to_int32(t)
example[name] = t
return example
d = tf.data.TFRecordDataset(files)
d = d.map(parse, num_parallel_calls = 32)
d = d.apply(
tf.contrib.data.map_and_batch(
lambda record: _decode_record(record, data_fields),
batch_size = batch_size,
num_parallel_batches = 4,
drop_remainder = True,
)
)
d = d.make_one_shot_iterator().get_next()
d
import tensorflow as tf
ckpt_path = tf.train.latest_checkpoint('gs://mesolitica-tpu-general/bigbird-summarization-small')
ckpt_path
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
r_ = sess.run(d)
encoder.decode(r_['inputs'][0].tolist())
encoder.decode(r_['targets'][0].tolist())
# import re
# import collections
# def get_assignment_map_from_checkpoint(tvars, init_checkpoint):
# """Compute the union of the current variables and checkpoint variables."""
# assignment_map = {}
# initialized_variable_names = {}
# name_to_variable = collections.OrderedDict()
# for var in tvars:
# name = var.name
# m = re.match('^(.*):\\d+$', name)
# if m is not None:
# name = m.group(1)
# name_to_variable[name] = var
# init_vars = tf.train.list_variables(init_checkpoint)
# assignment_map = collections.OrderedDict()
# for x in init_vars:
# (name, var) = (x[0], x[1])
# l = 'pegasus/' + name
# l = l.replace('embeddings/weights', 'embeddings/word_embeddings')
# l = l.replace('self/output', 'output')
# l = l.replace('ffn/dense_1', 'output/dense')
# l = l.replace('ffn', 'intermediate')
# l = l.replace('memory_attention/output', 'attention/encdec_output')
# l = l.replace('memory_attention', 'attention/encdec')
# if l not in name_to_variable:
# continue
# assignment_map[name] = name_to_variable[l]
# initialized_variable_names[l + ':0'] = 1
# return (assignment_map, initialized_variable_names)
# t = tf.trainable_variables()
# assignment_map, initialized_variable_names = get_assignment_map_from_checkpoint(t, ckpt_path)
saver = tf.train.Saver()
saver.restore(sess, ckpt_path)
# var_lists = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
# saver = tf.train.Saver(var_list = var_lists)
# saver.restore(sess, 'gs://mesolitica-tpu-general/bigbird-summarization-small/model.ckpt-0')
import re
from unidecode import unidecode
def cleaning(string):
return re.sub(r'[ ]+', ' ', unidecode(string.replace('\n', ' '))).strip()
string = """
KUALA LUMPUR: Hakim Mahkamah Tinggi, Mohd Nazlan Mohd Ghazali menyifatkan kes penyelewengan dana RM42 juta milik SRC International Sdn Bhd dihadapi Datuk Seri Najib Razak adalah kesalahan salah guna kedudukan, pecah amanah jenayah dan pengubahan wang haram yang paling teruk.
Mohd Nazlan yang mensabitkan Najib terhadap kesemua tujuh tuduhan dan memerintahkan bekas Perdana Menteri itu dipenjara 12 tahun, dan didenda RM210 juta, berkata ia bukan sahaja disebabkan oleh alasan bagaimana jenayah itu dilakukan, malah kes berprofil tinggi berkenaan turut membabitkan sejumlah wang yang sangat besar.
Melalui alasan penghakiman penuh setebal 801 muka surat itu, Mohd Nazlan, berkata kes terbabit mempunyai elemen yang memberikan kesan ke atas kepentingan awam kerana dana RM42 juta itu adalah milik Kementerian Kewangan (Diperbadankan) (MKD) yang berkemungkinan berasal daripada dana pencen Kumpulan Wang Persaraan (Diperbadankan) (KWAP) berjumlah RM4 bilion.
"Dan yang paling penting ia membabitkan individu yang pada ketika itu berada dalam pada tertinggi dalam kerajaan," katanya.
Pada 28 Julai lalu, Mohd Nazlan memerintahkan Najib dipenjarakan 10 tahun masing-masing bagi tiga tuduhan pecah amanah wang RM42 juta milik SRC.
Hakim turut memerintahkan Najib dipenjara 12 tahun dan denda RM210 juta (jika gagal bayar, lima tahun penjara) bagi tuduhan menyalahgunakan kedudukan.
Bagi tuduhan pengubahan wang haram pula, Mohd Nazlan memerintahkan Najib dipenjara 10 tahun bagi setiap tuduhan.
Sementara itu, Mohd Nazlan berkata, Najib selaku tertuduh tidak menunjukkan penyesalan, malah mempertahankan pembelaan beliau tidak mengetahui mengenai wang RM42 juta milik SRC itu dalam rayuannya bagi diringankan hukuman.
"Tetapi saya tidak boleh menafikan beliau adalah Perdana Menteri negara ini dan tidak boleh mempersoalkan sumbangannya untuk kebaikan dan kesejahteraan masyarakat dalam pelbagai cara kerana beliau adalah Perdana Menteri selama sembilan tahun.
"Sejarah politik akan terus diperdebatkan sama ada dari segi keseimbangan, beliau melakukan lebih banyak kebaikan daripada keburukan.
"Walau apa pun, ia adalah tidak selari dengan idea sesebuah pentadbiran negara yang bersih daripada rasuah yang tidak boleh bertolak ansur dengan sebarang penyalahgunaan kuasa," katanya.
Mahkamah Rayuan menetapkan pada 15 Oktober ini bagi pengurusan kes rayuan Najib terhadap sabitan dan hukuman terhadapnya.
"""
string2 = """
Gabungan parti Warisan, Pakatan Harapan, dan Upko hari ini mendedahkan calon-calon masing-masing untuk pilihan raya negeri Sabah, tetapi ketika pengumuman itu berlangsung, perwakilan PKR di dewan itu dilihat ‘gelisah’ seperti ‘tidak senang duduk’.
Sekumpulan anggota PKR kemudian dilihat meninggalkan dewan di Pusat Konvensyen Antarabangsa Sabah di Kota Kinabalu selepas berbincang dengan ketua PKR Sabah Christina Liew.
Semakan senarai-senarai calon berkenaan mendapati PKR hanya memperolehi separuh daripada jumlah kerusi yang diharapkan.
Semalam, PKR Sabah mengumumkan akan bertanding di 14 kerusi tetapi ketika Presiden Warisan Shafie Apdal mengumumkan calon gabungan tersebut hari ini, PKR hanya diberikan tujuh kerusi untuk bertanding.
Kerusi yang diberikan adalah Api-Api, Inanam, Tempasuk, Tamparuli, Matunggong, Klias, dan Sook.
Klias dan Sook adalah dua kerusi yang diberikan kepada PKR, sementara lima kerusi selebihnya pernah ditandingi oleh PKR pada pilihan raya umum 2018.
Dalam pengumuman PKR Sabah semalam, parti itu menjangkakan Warisan akan turut menyerahkan kerusi Kemabong, Membakut, dan Petagas kepada mereka.
Walau bagaimanapun, Warisan menyerahkan kerusi Kemabong kepada Upko dan mengekalkan bertanding untuk kerusi Membakut dan Petagas.
PKR juga menuntut empat daripada 13 kerusi baru yang diperkenalkan iaitu Segama, Limbahau, Sungai Manila, dan Pintasan tetapi Warisan membolot semua kerusi itu.
Sebagai pertukaran untuk kerusi yang diintainya, PKR bersedia untuk menyerahkan kerusi Kadaimaian, Kuala Penyu, dan Karanaan. Namun, ini dijangka tidak akan berlaku memandangkan parti tersebut tidak berpuas hati dengan agihan kerusi seperti yang diharapkan itu.
Selepas perwakilan dari PKR dan Liew keluar dari dewan tersebut, wartawan kemudian menyusuri Liew untuk mendapatkan penjelasannya.
Walau bagaimanapun, Liew enggan memberikan sebarang komen dan berkata bahawa dia ingin ke tandas.
Liew dan perwakilan PKR kemudian tidak kembali ke dalam dewan tersebut.
Apabila calon pilihan raya yang diumumkan diminta naik ke atas pentas untuk sesi bergambar, Liew tidak kelihatan.
Bilangan kerusi yang ditandingi oleh PKR kali ini hanya kurang satu kerusi daripada yang ditandingi parti itu pada PRU 2018.
Dalam perkembangan berkaitan, DAP dan Amanah dikatakan tidak mempunyai sebarang masalah dengan kerusi yang diberikan untuk PRN Sabah.
Sementara itu, Presiden Upko Madius Tangau enggan mengulas adakah dia berpuas hati dengan agihan kerusi tersebut. Madius kekal di majlis tersebut sehingga ia berakhir.
Partinya diberikan 12 kerusi, iaitu lebih tujuh kerusi berbanding PRU lalu.
DAP dan Amanah akan bertanding di bawah logo Warisan sementara PKR dan Upko akan menggunakan logo masing-masing.
DAP akan bertanding di tujuh kerusi, jumlah yang sama seperti yang mereka tandingi pada PRU lalu, sementara Amanah diberi satu kerusi.
Warisan akan bertanding sebanyak 54 kerusi.
Perkembangan terbaru ini mungkin mencetuskan pergeseran di antara PKR dan Warisan. PKR boleh memilih untuk bertanding di lebih banyak kerusi daripada 14 yang dituntutnya manakala Warisan juga boleh bertanding di kerusi sekutunya.
Barisan pemimpin tertinggi PKR dan Warisan hanya mempunyai dua hari sebelum hari penamaan calon pada Sabtu untuk mengurangkan pergeseran.
"""
string3 = """
Penubuhan universiti sukan seperti diutarakan Ketua Unit Sukan Kementerian Pengajian Tinggi, Dr Pekan Ramli dan disokong Pakar Pembangunan Sukan dan Reakreasi Luar, Universiti Pendidikan Sultan Idris (UPSI), Prof Dr Md Amin Md Taaf seperti disiarkan akhbar ini, memberikan sinar harapan kepada kewujudan institusi sedemikian.
Ia menjadi impian atlet negara untuk mengejar kejayaan dalam bidang sukan dan kecemerlangan dalam akademik untuk menjamin masa depan lebih baik apabila bersara daripada arena sukan kelak.
Pelbagai pandangan, idea, kaedah, bukti dan cadangan dilontarkan pakar berikutan pentingnya universiti sukan yang akan memberi impak besar sama ada pada peringkat kebangsaan mahupun antarabangsa.
Negara lain sudah lama meraih laba dengan kewujudan universiti sukan seperti China, Korea, Japan, Taiwan, India dan Vietnam. Mereka menghasilkan atlet universiti yang mempamerkan keputusan cemerlang pada peringkat tinggi seperti Sukan Olimpik, Kejohanan Dunia dan Sukan Asia.
Justeru, kejayaan mereka perlu dijadikan rujukan demi memajukan sukan tanah air. Jika kita merujuk pendekatan Asia, kewujudan universiti sukan penting dan memberi kesan positif dalam melonjakkan prestasi sukan lebih optimum.
Namun, jika kita melihat pendekatan Eropah, universiti sukan bukan antara organisasi atau institusi penting yang diberi perhatian dalam menyumbang kepada pemenang pingat.
Antara isu dalam universiti sukan ialah kos tinggi, lokasi, prasarana sukan, pertindihan kursus dengan universiti sedia ada dan impak terhadap dunia sukan negara hingga mengundang persoalan kewajaran dan kerelevanan penubuhannya.
Namun sebagai bekas atlet memanah negara dan Olympian (OLY) di Sukan Olimpik 2004 di Athens, Greece serta bekas pelajar Sekolah Sukan Bukit Jalil hingga berjaya dalam dunia akademik, saya mendapati terdapat beberapa faktor sering menjadi halangan dalam rutin harian mereka.
Antaranya, faktor masa yang terpaksa bergegas menghadiri kuliah selepas tamat sesi latihan yang mengambil masa 15 hingga 20 minit dengan menunggang motosikal; kereta (20-30 minit) atau pengangkutan disediakan Majlis Sukan Negara (MSN) ke Universiti Putra Malaysia (UPM).
Jika mereka menuntut di Universiti Teknologi MARA (UiTM) atau Universiti Malaya (UM), ia mungkin lebih lama.
Walaupun di universiti tersedia dengan kemudahan kolej dan kemudahan sukan, mereka memilih pulang ke MSN untuk menjalani latihan bersama pasukan dan jurulatih di padang atau gelanggang latihan rasmi.
Ini berlanjutan selagi bergelar atlet negara yang perlu memastikan prestasi sentiasa meningkat dari semasa ke semasa tanpa mengabaikan tugas sebagai pelajar.
Alangkah baiknya jika sebahagian Sekolah Sukan Bukit Jalil itu sendiri dijadikan Kolej atau Universiti Sukan Malaysia kerana lengkap dari segi kemudahan prasarana sukannya dan proses pengajaran dan pembelajaran (PdP) dalam bidang Sains Sukan, Kejurulatihan, Pendidikan Jasmani dan setaraf dengannya.
Pengambilan setiap semester pula hanya terhad kepada atlet berstatus kebangsaan dan antarabangsa sahaja supaya hasrat melahirkan lebih ramai atlet bertaraf Olimpik mudah direalisasikan.
Contohnya, bekas atlet lompat bergalah negara, Roslinda Samsu yang juga pemenang pingat perak Sukan Asia Doha 2006 dan Penerima Anugerah Khas Majlis Anugerah Sukan KPT 2012, terpaksa mengambil masa lebih kurang sembilan tahun untuk menamatkan ijazah Sarjana Muda Pendidikan Jasmani di UPM sepanjang 14 tahun terbabit dalam sukan olahraga.
Sepanjang tempoh bergelar atlet kebangsaan dan mahasiswa, beliau juga memenangi pingat Emas Sukan SEA empat siri berturut-turut pada 2005, 2007, 2009 dan 2011.
Begitu juga atlet kebangsaan seperti Leong Mun Yee (UPM); Pandalela Renong (UM); Bryan Nickson Lomas (UM); Cheng Chu Sian (UPM); Marbawi Sulaiman (UiTM) dan Norasheela Khalid (UPM).
Jika disenaraikan, mungkin lebih ramai lagi. Namun, pernah terlintas di fikiran mengapa hanya atlet dari sukan terjun yang dapat memenangi pingat di Sukan Olimpik? Bagaimana dengan atlet lain yang juga layak secara merit? Apakah kekangan atau masalah dihadapi sebagai atlet dan mahasiswa?
Adakah kewujudan universiti sukan akan memberi impak besar kepada kemajuan sukan negara? Jika dirancang dan diatur dengan cekap dan sistematik, ia perkara tidak mustahil dicapai.
"""
cleaning(string2)
pad_sequences = tf.keras.preprocessing.sequence.pad_sequences
encoded = encoder.encode(f'ringkasan: {cleaning(string2)}') + [1]
s = pad_sequences([encoded], padding='post', maxlen = 2048)
%%time
l = sess.run(r[0][2], feed_dict = {X: s, top_p: 0.0, temperature: 0.0})
encoder.decode(l[0].tolist())
saver = tf.train.Saver(tf.trainable_variables())
saver.save(sess, 'output/model.ckpt')
strings = ','.join(
[
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('Variable' in n.op
or 'Placeholder' in n.name
or 'top_p' in n.name
or 'temperature' in n.name
or 'logits' in n.name
or 'alphas' in n.name
or 'self/Softmax' in n.name)
and 'adam' not in n.name
and 'beta' not in n.name
and 'global_step' not in n.name
and 'gradients' not in n.name
]
)
strings.split(',')
def freeze_graph(model_dir, output_node_names):
if not tf.gfile.Exists(model_dir):
raise AssertionError(
"Export directory doesn't exists. Please specify an export "
'directory: %s' % model_dir
)
checkpoint = tf.train.get_checkpoint_state(model_dir)
input_checkpoint = checkpoint.model_checkpoint_path
absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
output_graph = absolute_model_dir + '/frozen_model.pb'
clear_devices = True
with tf.Session(graph = tf.Graph()) as sess:
saver = tf.train.import_meta_graph(
input_checkpoint + '.meta', clear_devices = clear_devices
)
saver.restore(sess, input_checkpoint)
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
tf.get_default_graph().as_graph_def(),
output_node_names.split(','),
)
with tf.gfile.GFile(output_graph, 'wb') as f:
f.write(output_graph_def.SerializeToString())
print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('output', strings)
from tensorflow.tools.graph_transforms import TransformGraph
transforms = ['add_default_attributes',
'remove_nodes(op=Identity, op=CheckNumerics, op=Dropout)',
'fold_batch_norms',
'fold_old_batch_norms',
'quantize_weights(fallback_min=-10, fallback_max=10)',
'strip_unused_nodes',
'sort_by_execution_order']
pb = 'output/frozen_model.pb'
input_graph_def = tf.GraphDef()
with tf.gfile.FastGFile(pb, 'rb') as f:
input_graph_def.ParseFromString(f.read())
inputs = ['Placeholder', 'top_p', 'temperature']
transformed_graph_def = TransformGraph(input_graph_def,
inputs,
['logits'], transforms)
with tf.gfile.GFile(f'{pb}.quantized', 'wb') as f:
f.write(transformed_graph_def.SerializeToString())
def load_graph(frozen_graph_filename, **kwargs):
with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
# https://github.com/onnx/tensorflow-onnx/issues/77#issuecomment-445066091
# to fix import T5
for node in graph_def.node:
if node.op == 'RefSwitch':
node.op = 'Switch'
            for index in range(len(node.input)):
if 'moving_' in node.input[index]:
node.input[index] = node.input[index] + '/read'
elif node.op == 'AssignSub':
node.op = 'Sub'
if 'use_locking' in node.attr:
del node.attr['use_locking']
elif node.op == 'AssignAdd':
node.op = 'Add'
if 'use_locking' in node.attr:
del node.attr['use_locking']
elif node.op == 'Assign':
node.op = 'Identity'
if 'use_locking' in node.attr:
del node.attr['use_locking']
if 'validate_shape' in node.attr:
del node.attr['validate_shape']
if len(node.input) == 2:
node.input[0] = node.input[1]
del node.input[1]
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
return graph
g = load_graph('output/frozen_model.pb')
x = g.get_tensor_by_name('import/Placeholder:0')
top_p = g.get_tensor_by_name('import/top_p:0')
temperature = g.get_tensor_by_name('import/temperature:0')
logits = g.get_tensor_by_name('import/logits:0')
test_sess = tf.InteractiveSession(graph = g)
%%time
l = test_sess.run(logits, feed_dict = {x: s, top_p: 0.0, temperature: 0.0})
encoder.decode([i for i in l[0].tolist() if i > 0])
g = load_graph('output/frozen_model.pb.quantized')
x = g.get_tensor_by_name('import/Placeholder:0')
top_p = g.get_tensor_by_name('import/top_p:0')
temperature = g.get_tensor_by_name('import/temperature:0')
logits = g.get_tensor_by_name('import/logits:0')
test_sess = tf.InteractiveSession(graph = g)
%%time
l = test_sess.run(logits, feed_dict = {x: s, top_p: 0.0, temperature: 0.0})
encoder.decode([i for i in l[0].tolist() if i > 0])
```
# Objective
Import the FAF freight matrices provided with FAF into AequilibraE's matrix format
## Input data
* FAF: https://faf.ornl.gov/fafweb/
* Matrices: https://faf.ornl.gov/fafweb/Data/FAF4.4_HiLoForecasts.zip
* Zones System: http://www.census.gov/econ/cfs/AboutGeographyFiles/CFS_AREA_shapefile_010215.zip
* FAF User Guide: https://faf.ornl.gov/fafweb/data/FAF4%20User%20Guide.pdf
* The blog post (with data): http://www.xl-optim.com/matrix-api-and-multi-class-assignment
# The code
We import all the libraries we will need, including AequilibraE, after adding it to our Python path
```
import sys
# On Linux
# sys.path.append('/home/pedro/.qgis2/python/plugins/AequilibraE')
# On Windows
sys.path.append('C:/Users/Pedro/.qgis2/python/plugins/AequilibraE')
import pandas as pd
import numpy as np
import os
from aequilibrae.matrix import AequilibraeMatrix
from scipy.sparse import coo_matrix
```
Now we set all the paths for files and parameters we need
```
data_folder = 'Y:/ALL DATA/DATA/Pedro/Professional/Data/USA/FAF/4.4'
data_file = 'FAF4.4_HiLoForecasts.csv'
sctg_names_file = 'sctg_codes.csv' # Simplified to 50 characters, which is AequilibraE's limit
output_folder = data_folder
```
We import the matrices
```
matrices = pd.read_csv(os.path.join(data_folder, data_file), low_memory=False)
print matrices.head(10)
```
We import the sctg codes
```
sctg_names = pd.read_csv(os.path.join(data_folder, sctg_names_file), low_memory=False)
sctg_names.set_index('Code', inplace=True)
sctg_descr = list(sctg_names['Commodity Description'])
print sctg_names.head(5)
```
We now process the matrices to collect all the data we need, such as:
* the list of zones
* SCTG codes
* Matrices/scenarios we are importing
```
# lists the zones
all_zones = np.array(sorted(list(set( list(matrices.dms_orig.unique()) + list(matrices.dms_dest.unique())))))
# Count them and create a 0-based index
num_zones = all_zones.shape[0]
idx = np.arange(num_zones)
# Creates the indexing dataframes
origs = pd.DataFrame({"from_index": all_zones, "from":idx})
dests = pd.DataFrame({"to_index": all_zones, "to":idx})
# adds the new index columns to the pandas dataframe
matrices = matrices.merge(origs, left_on='dms_orig', right_on='from_index', how='left')
matrices = matrices.merge(dests, left_on='dms_dest', right_on='to_index', how='left')
# Lists sctg codes and all the years/scenarios we have matrices for
mat_years = [x for x in matrices.columns if 'tons' in x]
sctg_codes = matrices.sctg2.unique()
```
We now import one matrix for each year, saving all the SCTG codes as different matrix cores in our zoning system
```
# aggregate the matrix according to the relevant criteria
agg_matrix = matrices.groupby(['from', 'to', 'sctg2'])[mat_years].sum()
# returns the indices
agg_matrix.reset_index(inplace=True)
for y in mat_years:
mat = AequilibraeMatrix()
kwargs = {'file_name': os.path.join(output_folder, y + '.aem'),
'zones': num_zones,
'matrix_names': sctg_descr}
mat.create_empty(**kwargs)
mat.index[:] = all_zones[:]
# for all sctg codes
for i in sctg_names.index:
prod_name = sctg_names['Commodity Description'][i]
mat_filtered_sctg = agg_matrix[agg_matrix.sctg2 == i]
m = coo_matrix((mat_filtered_sctg[y], (mat_filtered_sctg['from'], mat_filtered_sctg['to'])),
shape=(num_zones, num_zones)).toarray().astype(np.float64)
mat.matrix[prod_name][:,:] = m[:,:]
```
## House Prices: Advanced Regression Techniques : Kaggle Competition
### Import Libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```
### Import Data
```
test_df=pd.read_csv('test.csv')
test_df.head()
test_df.shape
```
### Step 1: Check for missing values
```
fig, ax = plt.subplots(figsize=(20,5)) # To change fig shape for better representation
sns.heatmap(test_df.isnull(),yticklabels=False,cbar=False, ax=ax)
def missing_zero_values_table(dataframe):
    zero_val = (dataframe == 0.00).astype(int).sum(axis=0)
    mis_val = dataframe.isnull().sum()
    mis_val_percent = 100 * dataframe.isnull().sum() / len(dataframe)
    mz_table = pd.concat([zero_val, mis_val, mis_val_percent], axis=1)
    mz_table = mz_table.rename(
        columns={0: 'Zero Values', 1: 'Missing Values', 2: '% of Total Values'})
    mz_table['Data Type'] = dataframe.dtypes
    mz_table = mz_table[
        mz_table.iloc[:, 1] != 0].sort_values(
        '% of Total Values', ascending=False).round(1)
    print("Your selected dataframe has " + str(dataframe.shape[1]) + " columns and " + str(dataframe.shape[0]) + " Rows.\n"
          "There are " + str(mz_table.shape[0]) +
          " columns that have missing values.")
    return mz_table
missing_zero_values_table(test_df)
```
### Step 2: Filling missing values and dropping columns with more than 70% missing
```
# dropping columns with more than 70% missing values
# this remains the same as for the train set
test_df.drop(['PoolQC','MiscFeature','Alley','Fence'],axis=1,inplace=True)
missing_zero_values_table(test_df)
```
#### Handling missing data: categorical data with the MODE & numerical data with the MEAN
```
test_df['FireplaceQu'].value_counts()
test_df['FireplaceQu'].fillna(value='Gd', inplace=True)
test_df['LotFrontage'].mean()
test_df['LotFrontage'].fillna(value=70.05, inplace=True)
test_df['GarageType'].value_counts()
test_df['GarageType'].fillna(value='Attchd', inplace=True)
test_df['GarageYrBlt'].value_counts()
test_df['GarageYrBlt'].fillna(value=2005, inplace=True)
test_df['GarageFinish'].value_counts()
test_df['GarageFinish'].fillna(value='Unf', inplace=True)
test_df['GarageQual'].value_counts()
test_df['GarageQual'].fillna(value='TA', inplace=True)
test_df['GarageCond'].value_counts()
test_df['GarageCond'].fillna(value='TA', inplace=True)
test_df['BsmtExposure'].value_counts()
test_df['BsmtExposure'].fillna(value='No', inplace=True)
test_df['BsmtFinType1'].value_counts()
test_df['BsmtFinType1'].fillna(value='Unf', inplace=True)
test_df['BsmtFinType2'].value_counts()
test_df['BsmtFinType2'].fillna(value='Unf', inplace=True)
test_df['BsmtCond'].value_counts()
test_df['BsmtCond'].fillna(value='TA', inplace=True)
test_df['BsmtQual'].value_counts()
test_df['BsmtQual'].fillna(value='TA', inplace=True)
test_df['MasVnrArea'].mean()
test_df['MasVnrArea'].fillna(value=103.6, inplace=True)
test_df['MasVnrType'].value_counts()
test_df['MasVnrType'].fillna(value='None', inplace=True)
test_df['Electrical'].value_counts()
test_df['Electrical'].fillna(value='SBrkr', inplace=True)
test_df.shape
missing_zero_values_table(test_df)
#df.drop(['Id'],axis=1,inplace=True)
missing_zero_values_table(test_df)
test_df['Utilities']=test_df['Utilities'].fillna(test_df['Utilities'].mode()[0])
test_df['Exterior1st']=test_df['Exterior1st'].fillna(test_df['Exterior1st'].mode()[0])
test_df['Exterior2nd']=test_df['Exterior2nd'].fillna(test_df['Exterior2nd'].mode()[0])
test_df['BsmtFinType1']=test_df['BsmtFinType1'].fillna(test_df['BsmtFinType1'].mode()[0])
test_df['BsmtFinSF1']=test_df['BsmtFinSF1'].fillna(test_df['BsmtFinSF1'].mean())
test_df['BsmtFinSF2']=test_df['BsmtFinSF2'].fillna(test_df['BsmtFinSF2'].mean())
test_df['BsmtUnfSF']=test_df['BsmtUnfSF'].fillna(test_df['BsmtUnfSF'].mean())
test_df['TotalBsmtSF']=test_df['TotalBsmtSF'].fillna(test_df['TotalBsmtSF'].mean())
test_df['BsmtFullBath']=test_df['BsmtFullBath'].fillna(test_df['BsmtFullBath'].mode()[0])
test_df['BsmtHalfBath']=test_df['BsmtHalfBath'].fillna(test_df['BsmtHalfBath'].mode()[0])
test_df['KitchenQual']=test_df['KitchenQual'].fillna(test_df['KitchenQual'].mode()[0])
test_df['Functional']=test_df['Functional'].fillna(test_df['Functional'].mode()[0])
test_df['GarageCars']=test_df['GarageCars'].fillna(test_df['GarageCars'].mean())
test_df['GarageArea']=test_df['GarageArea'].fillna(test_df['GarageArea'].mean())
test_df['SaleType']=test_df['SaleType'].fillna(test_df['SaleType'].mode()[0])
fig, ax = plt.subplots(figsize=(20,5))
sns.heatmap(test_df.isnull(),yticklabels=False,cbar=False,cmap='YlGnBu',ax=ax)
missing_zero_values_table(test_df)
test_df.shape
test_df.to_csv('cleaned_test.csv',index=False)
```
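The column-by-column calls above can be folded into a single generic helper. This is only a sketch of the same strategy (the demo frame, its column names, and the 70% threshold are illustrative, not taken from the dataset):

```python
# Sketch of the strategy above as one reusable helper:
# drop columns that are mostly missing, then impute the rest
# (mode for categorical columns, mean for numerical ones).
import pandas as pd

def drop_and_impute(df, missing_threshold=0.7):
    """Drop columns whose NaN share exceeds the threshold, then fill the rest."""
    df = df.loc[:, df.isnull().mean() <= missing_threshold].copy()
    for col in df.columns:
        if df[col].isnull().any():
            if df[col].dtype == object:
                df[col] = df[col].fillna(df[col].mode()[0])  # categorical -> mode
            else:
                df[col] = df[col].fillna(df[col].mean())     # numerical -> mean
    return df

# illustrative demo frame, not the House Prices data
demo = pd.DataFrame({
    'GarageType': ['Attchd', None, 'Attchd', 'Detchd'],
    'LotFrontage': [60.0, None, 80.0, 70.0],
    'PoolQC': [None, None, None, 'Gd'],  # 75% missing -> dropped
})
cleaned = drop_and_impute(demo)
print(cleaned)
```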
## Introduction to Tasks with States
A task might be run for a single set of input values, or we can generate multiple sets of inputs, called "states". If we want to run our `Task` multiple times, we have to provide input that is iterable and specify how the input values map to the specific states. To do this, we set a so-called `splitter`.
Let's start with a simple `FunctionTask` that takes a list as an input:
```
import pydra
@pydra.mark.task
def add_two(x):
return x + 2
task1 = add_two(x=[1, 2, 3])
```
Before we set any splitter, the task's `state` should be `None`
```
task1.state is None
```
Now we can set the `splitter` by using the `split` method. Since our task has only one input, there is only one option for creating sets of inputs, i.e. `splitter="x"`:
```
task1.split(splitter="x")
```
Now, we can check that our task has a `state`:
```
task1.state
```
And we can print information about our state:
```
print(task1.state)
```
Within the `state`, information about the splitter has been stored:
```
task1.state.splitter
```
Note that *pydra* adds the name of the function to the name of the input.
Now, we can run the task and check results:
```
task1()
task1.result()
```
This time, we got a list that contains three `Result` objects, one for each value of `x`.
For tasks with a state, *pydra* prepares all sets of inputs and runs the task for each set. We can represent this with the following figure:

### Multiple inputs
We can also use `State` for functions with multiple inputs:
```
@pydra.mark.task
def add_var(a, b):
return a + b
```
Now we have more options to define the `splitter`; the choice depends on the types of the inputs and on our application. For example, `a` could be a list and `b` a single value:
```
task2 = add_var(a=[1, 2, 3], b=10).split(splitter="a")
task2()
task2.result()
```
Now we have three results, one for each element of the `a` list, and the value of `b` is always the same.

But we can also have lists for both inputs; let's assume that `a` and `b` are both two-element lists.
```
task3 = add_var(a=[1, 2], b=[10, 100])
```
Now we have two options to map the input values: we might want to run the task for two sets of values, (`a`=1, `b`=10) and (`a`=2, `b`=100), or we might want to run it for four sets, (`a`=1, `b`=10), (`a`=1, `b`=100), (`a`=2, `b`=10) and (`a`=2, `b`=100).
The first situation is expressed by the so-called "scalar" splitter, the latter by the so-called "outer" splitter.
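Before looking at the pydra syntax, the two mappings can be sketched in plain Python: a scalar splitter pairs the inputs element-wise (like `zip`), while an outer splitter forms all combinations (like `itertools.product`). This sketch only illustrates the semantics; it is not pydra code:

```python
# Plain-Python sketch of the two splitter semantics (not pydra code).
from itertools import product

a = [1, 2]
b = [10, 100]

scalar_sets = list(zip(a, b))     # ("a", "b") -> element-wise pairs
outer_sets = list(product(a, b))  # ["a", "b"] -> all combinations

print(scalar_sets)  # [(1, 10), (2, 100)]
print(outer_sets)   # [(1, 10), (1, 100), (2, 10), (2, 100)]
```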
#### Scalar splitter
Let's start with the scalar splitter, which uses parentheses in its syntax:
```
task3.split(splitter=("a", "b"))
task3()
task3.result()
```
As we expected, we have two outputs: `1+10=11` and `2+100=102`.
We can represent the execution by the graph:

#### Outer splitter
For the outer splitter we will use brackets:
```
task4 = add_var(a=[1, 2], b=[10, 100])
task4.split(splitter=["a", "b"])
task4()
task4.result()
```
Now, we have results for all of the combinations of values from `a` and `b`.

For more inputs we can create more complex splitters, using scalar and outer splitters together. Note that the scalar splitter only works for lists of the same length, while the outer splitter has no such limitation.
Let's run one more example that takes four inputs, the `x` and `y` components of two vectors, and calculates all possible sums of the vectors. The `x` components should be kept together with the corresponding `y` components (i.e. scalar splitters `("x1", "y1")` and `("x2", "y2")`), but we should use an outer splitter between the two vectors to get all combinations.
```
@pydra.mark.task
def add_vector(x1, y1, x2, y2):
return (x1 + x2, y1 + y2)
task5 = add_vector(name="add_vect", output_names=["x", "y"],
x1=[10, 20], y1=[1, 2], x2=[10, 20, 30], y2=[10, 20, 30])
task5.split(splitter=[("x1", "y1"), ("x2", "y2")])
task5()
task5.result()
```
We should get six outputs: two elements for vector1 times three elements for vector2.
### Combining the output
When we use a `splitter`, we can also define a `combiner` if we want to group the results back together.
If we take the `task4` as an example and combine all results for each element of the input `b`, we can modify the task as follows:
```
task5 = add_var(a=[1, 2], b=[10, 100])
task5.split(splitter=["a", "b"])
# adding combiner
task5.combine(combiner="b")
task5()
task5.result()
```
Now our result contains two elements, each one is a list. The first one contains results for `a=1` and both values of `b`, and the second contains results for `a=2` and both values of `b`.

But we could also group all elements from the input `a` and have a different combined output:
```
task6 = add_var(a=[1, 2], b=[10, 100])
task6.split(splitter=["a", "b"])
# changing the combiner
task6.combine(combiner="a")
task6()
task6.result()
```
We still have two elements in our results, but this time the first element contains results for `b=10` and both values of `a`, and the second contains results for `b=100` and both values of `a`.

We can also combine all elements by providing a list of all inputs to the `combiner`:
```
task7 = add_var(a=[1, 2], b=[10, 100])
task7.split(splitter=["a", "b"])
# combining all inputs
task7.combine(combiner=["a", "b"])
task7()
task7.result()
```
This time the output contains one element that is a list of all outputs:

### Exercise 1
Let's say we want to calculate squares and cubes of integers from 2 to 5, and combine separately all squares and all cubes:
First we will define a function that returns powers:
```
@pydra.mark.task
def power(x, n):
return x**n
```
Now we can create a task that takes two lists as its input, outer splitter for `x` and `n`, and combine all `x`:
```
task_ex1 = power(x=[2, 3, 4, 5], n=[2, 3]).split(splitter=["x", "n"]).combine("x")
task_ex1(plugin="cf")
task_ex1.result()
```
The result should contain two lists: the first one for the squares, the second for the cubes.
```
squares_list = [el.output.out for el in task_ex1.result()[0]]
cubes_list = [el.output.out for el in task_ex1.result()[1]]
print(f"squares: {squares_list}")
print(f"cubes: {cubes_list}")
```
### Parallel execution
We have run tasks multiple times for multiple sets of inputs, but we haven't talked about the execution time yet. Let's create a function that sleeps for a second and run it for four values:
```
import time
@pydra.mark.task
def add_two_sleep(x):
time.sleep(1)
return x + 2
task8 = add_two_sleep(x=[1, 2, 3, 4]).split(splitter="x")
t0 = time.time()
task8()
print(f'total time: {time.time() - t0}')
task8.result()
```
The total time will depend on the machine you are using, but it could be below `1.1s`, so clearly the tasks are running in parallel!
If we run a `Task` that has a `State`, *pydra* will automatically create a `Submitter` with the default `Worker`, `cf`, i.e. `ConcurrentFutures`.
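The `cf` worker builds on the standard library's `concurrent.futures` module; here is a simplified sketch of the idea (with a pool of workers and a shorter sleep, not pydra's actual implementation):

```python
# Simplified sketch of running the split values in parallel with
# concurrent.futures (pydra's "cf" worker builds on the same module).
import time
from concurrent.futures import ThreadPoolExecutor

def add_two_sleep(x):
    time.sleep(0.2)
    return x + 2

t0 = time.time()
with ThreadPoolExecutor() as pool:
    results = list(pool.map(add_two_sleep, [1, 2, 3, 4]))
elapsed = time.time() - t0

print(results)  # [3, 4, 5, 6]
print(elapsed)  # well under 4 * 0.2s, since the calls run concurrently
```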
We could also create a `Submitter` first, and then use it to run the task:
```
task9 = add_two_sleep(x=[1, 2, 3, 4]).split(splitter="x")
t0 = time.time()
with pydra.Submitter(plugin="cf") as sub:
task9(submitter=sub)
print(f'total time: {time.time() - t0}')
print(f"results: {task9.result()}")
```
or we can provide the name of the plugin:
```
task10 = add_two_sleep(x=[1, 2, 3, 4]).split(splitter="x")
t0 = time.time()
task10(plugin="cf")
print(f'total time: {time.time() - t0}')
print(f"results: {task10.result()}")
```
The last option for running the task is to create a `Submitter` first and run the submitter (`Submitter` is also a callable object) with the task as a `runnable`:
```
task11 = add_two_sleep(x=[1, 2, 3, 4]).split(splitter="x")
t0 = time.time()
with pydra.Submitter(plugin="cf") as sub:
sub(runnable=task11)
print(f'total time: {time.time() - t0}')
print(f"results: {task11.result()}")
```
All of the execution times should be similar, since all tasks are run by *pydra* in the same way: *pydra* creates a submitter with the `ConcurrentFutures` worker, and if a number of processors is not provided, `ConcurrentFutures` takes all available processors as `max_workers`. However, if we want to set a specific number of processors, we can do so using `n_procs` when creating a `Submitter`. Let's see how the execution time changes when we use `n_procs=2`.
```
task12 = add_two_sleep(x=[1, 2, 3, 4]).split(splitter="x")
t0 = time.time()
with pydra.Submitter(plugin="cf", n_procs=2) as sub:
sub(runnable=task12)
print(f'total time: {time.time() - t0}')
print(f"results: {task12.result()}")
```
Now the total time could be significantly different. For example, if your machine has at least 4 processors, the previous `task8` - `task11` took around 1s to run, but `task12` took around 2s.
If you have 2 processors or fewer, you should not see any difference in the execution time.
```
import jax.numpy as jnp
from jax import jit, grad, jvp, random
from jax.scipy.stats import multivariate_normal as mvn
from jax.scipy.stats import norm
from scipy.optimize import minimize, NonlinearConstraint
from itertools import product
from jax.config import config
config.update('jax_enable_x64', True)
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_theme("poster", "darkgrid")
%matplotlib inline
plt.rcParams["figure.figsize"] = (16,12)
from optispd.manifold import SPD, Product, Euclidean
from optispd.minimizer import minimizer
n = 1000
p = 2
tol = 1e-6
seed = 0
rng = random.PRNGKey(seed)
def compute_delta(sl, cor):
bilin = jnp.einsum('i,ij,j', sl, cor, sl)
return jnp.matmul(cor, sl) / jnp.sqrt(1 + bilin)
def compute_omega(cor, sl):
delta = compute_delta(sl, cor)
omega = jnp.append(
jnp.append(cor, jnp.expand_dims(delta, 1), axis=1),
jnp.expand_dims(jnp.append(delta, 1.), 0),
axis=0
)
return omega
def sample_skew(key, loc, cov, sl, shape=(1,)):
p = loc.shape[-1]
scale = jnp.sqrt(jnp.diag(cov))
cor = jnp.einsum('i,ij,j->ij', 1./scale, cov, 1./scale)
omega = compute_omega(cor, sl)
X = random.multivariate_normal(
key=key, shape=shape,
mean=jnp.zeros(shape=(p + 1),),
cov=omega,
)
X0 = jnp.expand_dims(X[:, -1], 1)
X = X[:, :-1]
Z = jnp.where(X0 > 0, X, - X)
return loc + jnp.einsum('i,ji->ji', scale, Z)
def capital_phi(alpha, y):
return jnp.sum(norm.logcdf(jnp.matmul(alpha, y)))
def small_phi(cov, y):
mean = jnp.zeros(shape=(cov.shape[-1],))
return jnp.sum(mvn.logpdf(y, mean=mean, cov=cov))
def loglik_skew(cov, slant, data):
    scale = jnp.sqrt(jnp.diag(cov))
    al = jnp.einsum('i,i->i', 1./scale, slant)
    Phi = capital_phi(al, data.T)
    phi = small_phi(cov, data)
    # normalising constant of the skew-normal density: log(2) per observation
    return data.shape[0] * jnp.log(2.) + phi + Phi
def pdf_skew(y, cov, slant):
    scale = jnp.sqrt(jnp.diag(cov))
    al = jnp.einsum('i,i->i', 1./scale, slant)
    Phi = norm.logcdf(jnp.matmul(al, y.T))
    phi = mvn.logpdf(y, mean=jnp.zeros_like(slant), cov=cov)
    # skew-normal density 2 * phi(y) * Phi(alpha'y), assembled on the log scale
    return jnp.exp(jnp.log(2.) + phi + Phi)
rng, *key = random.split(rng, 3)
tmean = jnp.zeros(shape=(p,))
tcov = random.normal(key[0], shape=(p, p))
tcov = jnp.matmul(tcov, tcov.T)
tslant = random.uniform(key[1], shape=(p,), maxval=5)
print("True values:")
print("\tMean: {}".format(tmean))
print("\tSigma: {} (Eigs: {})".format(tcov.ravel(), jnp.linalg.eigvalsh(tcov)))
print("\tSkew: {}".format(tslant))
rng, key = random.split(rng)
data = sample_skew(key, tmean, tcov, tslant, shape=(n,))
nloglik = jit(lambda x, y: - loglik_skew(x, y, data))
true_loglik = nloglik(tcov, tslant)
print("\tLoglik: {:.2f}".format(true_loglik))
plot_data = pd.DataFrame(
data,
columns=['x', 'y']
)
l = 100
x = jnp.linspace(jnp.min(data[:, 0]), jnp.max(data[:, 0]), l)
y = jnp.linspace(jnp.min(data[:, 1]), jnp.max(data[:, 1]), l)
xy = jnp.array(list(product(x, y)))
Z_skew = pdf_skew(xy, tcov, tslant).reshape(l, l).T
Z_norm = mvn.pdf(xy, jnp.zeros(p,), cov=tcov).reshape(l, l).T
g = sns.jointplot(data=plot_data, x='x', y='y', alpha=0.4, height=16)
g.ax_joint.contour(x, y, Z_norm, colors='k', alpha=0.7, levels=4, linestyles='dashed')
g.ax_joint.contour(x, y, Z_skew, colors='k', levels=5)
plt.show();
k = 0
maxit = 100
man = SPD(p=p)
optimizer = minimizer(
man, method='rsd',
maxiter=1, mingradnorm=tol,
verbosity=0, logverbosity=False
)
rng, *key = random.split(rng, 5)
sig = random.normal(key[0], shape=(p,p))
sig = jnp.matmul(sig, sig.T)
th = random.uniform(key[1], shape=(p,), maxval=10)
logl = [nloglik(sig, th)]
print(logl)
while True:
print("Iteration {} starts from:".format(k))
print("\tSigma : {}".format(sig.ravel()))
print("\t(Eigs: {})".format(jnp.linalg.eigvalsh(sig)))
print("\tTheta: {} (norm: {})".format(th, jnp.linalg.norm(th)))
print("\tLoglik : {:.2f}".format(logl[-1]))
loglik_sig = jit(lambda x: nloglik(x, th))
gradient_sig = jit(grad(loglik_sig))
res = optimizer.solve(loglik_sig, gradient_sig, x=sig)
sig = res.x
print('\t...')
loglik_th = jit(lambda x: nloglik(sig, x))
gradient_psi = jit(grad(loglik_th))
res = minimize(loglik_th, th,
method="cg",
jac=gradient_psi,
tol=tol,
options={'maxiter':10}
)
th = res.x
logl.append(nloglik(sig, th))
k += 1
print("And ends at:")
print("\tSigma : {}".format(sig.ravel()))
print("\t(Eigs: {})".format(jnp.linalg.eigvalsh(sig)))
print("\tTheta: {} (norm: {})".format(th, jnp.linalg.norm(th)))
print("\tLoglik : {:.2f}".format(logl[-1]))
if jnp.isclose(logl[-2], logl[-1], rtol=tol) or k == maxit:
break
if jnp.isnan(logl[-1]).any():
print("PANIC! NAN APPEARS")
break
print("\n---\n")
def nloglik(X):
y = jnp.concatenate([data.T, jnp.ones(shape=(1, n))], axis=0)
datapart = jnp.trace(jnp.linalg.solve(X, jnp.matmul(y, y.T)))
return 0.5 * (n * jnp.linalg.slogdet(X)[1] + datapart)
fun_rep = jit(nloglik)
gra_rep = jit(grad(fun_rep))
man_norm = SPD(p+1)
opt = minimizer(man_norm, method='rlbfgs', verbosity=1)
res = opt.solve(fun_rep, gra_rep, x=jnp.identity(p+1))
muhat = res.x[-1, :-1]
covhat = res.x[:-1, :-1] - jnp.outer(muhat, muhat)
plt.plot(jnp.array(logl), label="Estimated loglikelihood")
plt.hlines(y=true_loglik, xmin=0, xmax=k, colors='k', linestyles='--', label="Loglikelihood of true values")
plt.yscale('log')
plt.legend(loc='best')
plt.show()
print("True values:")
print("\tSigma: {} (Eigs: {})".format(tcov.ravel(), jnp.linalg.eigvalsh(tcov)))
print("\tTheta: {} (norm: {})".format(tslant, jnp.linalg.norm(tslant)))
print("\tLoglik: {:.2f}".format(true_loglik))
print("Estimated values:")
print("\tSigma: {} (Eigs: {})".format(sig.ravel(), jnp.linalg.eigvalsh(sig)))
print("\tTheta: {} (norm: {})".format(th, jnp.linalg.norm(th)))
print("\tLoglik: {:.2f}".format(logl[-1]))
print("Estimated normal values:")
print("\tCovariance: {}".format(covhat.ravel()))
print("\tMean: {}".format(muhat))
l = 100
x = jnp.linspace(jnp.min(data[:, 0]), jnp.max(data[:, 0]), l)
y = jnp.linspace(jnp.min(data[:, 1]), jnp.max(data[:, 1]), l)
xy = jnp.array(list(product(x, y)))
Z_est = pdf_skew(xy, sig, th).reshape(l, l).T
Z_tru = pdf_skew(xy, tcov, tslant).reshape(l, l).T
Z_norm = mvn.pdf(xy, muhat.ravel(), covhat).reshape(l, l).T
g = sns.jointplot(data=plot_data, x='x', y='y', alpha=0.3, height=16)
g.ax_joint.contour(x, y, Z_tru, colors='r', alpha=0.5, levels=5, linestyles='dashed')
g.ax_joint.contour(x, y, Z_est, colors='r', levels=5)
g.ax_joint.contour(x, y, Z_norm, colors='g', levels=5)
plt.show();
sns.kdeplot(data=plot_data, x='x', y='y', alpha=0.4, fill=True)
#sns.scatterplot(data=plot_data, x='x', y='y', alpha=0.3)
plt.contour(x, y, Z_est, colors='r', levels=4)
plt.contour(x, y, Z_norm, colors='k', levels=4, linestyles='dashed')
plt.show()
```
<a href="https://colab.research.google.com/github/Bluelord/ML_Mastery_Python/blob/main/06_Feature_Selection.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Feature Selection
---
```
from google.colab import drive
drive.mount('/content/drive')
```
The features in a dataset have a big influence on model performance; irrelevant features can decrease the accuracy of many ML models, especially linear ones. Feature selection is a crucial step before training your ML model; the advantages of selecting relevant features are that it will **Reduce Overfitting, Improve Accuracy, and Reduce Training Time**.
## Univariate Selection
Statistical tests can be used to select the features that have the strongest relationship with the output variable. The **SelectKBest** class can be used with a range of such tests to select a fixed number of features; in this example the **Chi-Squared** test, which requires non-negative features, is used.
```
# Feature extraction with a univariate statistical test (Chi-Squared for classification)
from pandas import read_csv
from numpy import set_printoptions
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
# Load Data
filename = "/content/drive/MyDrive/Colab Notebooks/ML Mastery python/Dataset/pima-indians-diabetes.csv"
names = ['preg', 'plas','pres','skin','test','mass','pedi','age','class']
dataframe = read_csv(filename, names=names)
array = dataframe.values
# separate array into input & output
X = array[:,0:8]
Y = array[:,8]
# Feature extraction
##########################################
test = SelectKBest(score_func=chi2, k=4)  # score_func can be changed; k is the number of features to keep
fit = test.fit(X, Y)
###########################################
#Summarize scores
set_printoptions(precision=3)
print(fit.scores_)
features = fit.transform(X)
# Summarizing the selected features
print(features[:,:])
```
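The raw score array is easier to read when paired with the column names. Below is a self-contained sketch on synthetic data (the feature names and the sorting step are additions for illustration, not part of the original example):

```python
# Sketch: pair chi-squared scores with feature names and rank them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X = np.abs(X)  # chi2 requires non-negative features
names = ['f0', 'f1', 'f2', 'f3', 'f4']

selector = SelectKBest(score_func=chi2, k=2).fit(X, y)
ranked = sorted(zip(names, selector.scores_), key=lambda p: p[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```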
## Recursive Feature Elimination
RFE uses model accuracy to identify which features contribute most, by recursively removing features and rebuilding the model on the remaining ones. The **RFE** class from scikit-learn is used for this.
```
# Feature extraction wtih RFE
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
# Load Data
# Feature extraction
##########################################
model = LogisticRegression(solver='liblinear')  # RFE repeatedly fits this model while eliminating features
rfe = RFE(model, n_features_to_select=3)
fit = rfe.fit(X, Y)
###########################################
# Summarizing the selected features
print("Num Features: %d" % fit.n_features_)
print("Selected Features: %s" % fit.support_)
print("Feature Ranking: %s" % fit.ranking_)
```
## Principal Component Analysis
PCA is a data compression (dimensionality reduction) technique: it transforms the dataset into a chosen number of principal components, resulting in a new, smaller dataset.
```
# Feature Extraction with PCA
from sklearn.decomposition import PCA
# Load Data
# Feature extraction
##########################################
pca = PCA(n_components= 3)
fit = pca.fit(X)
# Summarize components
print("Explained Variance: %s" % fit.explained_variance_ratio_)
print(fit.components_)
```
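Instead of fixing `n_components` up front, a common approach is to keep just enough components to explain a target fraction of the variance. Below is a self-contained sketch on synthetic data (the 95% target is an illustrative choice):

```python
# Sketch: choose the number of PCA components from cumulative explained variance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 8))

pca = PCA().fit(X)  # keep all components first
cumulative = np.cumsum(pca.explained_variance_ratio_)
# smallest number of components whose cumulative ratio reaches 95%
n_components = int(np.searchsorted(cumulative, 0.95) + 1)
print(n_components, cumulative[n_components - 1])
```

scikit-learn also supports this directly: passing a float such as `PCA(n_components=0.95)` selects the number of components needed to explain that fraction of the variance.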
## Feature Importance
Bagged decision tree ensembles such as Random Forest and Extra Trees can be used to estimate the importance of features. The scores given by the fitted model highlight the importance of each feature in the dataset.
```
# Feature Extraction with Extra Trees Classifier
from sklearn.ensemble import ExtraTreesClassifier
# Load Data
# Feature extraction
model = ExtraTreesClassifier(n_estimators=100)
model.fit(X,Y)
print(model.feature_importances_)
```
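As with the chi-squared scores, the importances are easier to interpret when ranked together with the feature names. Below is a self-contained sketch on synthetic data (the feature names are made up for illustration):

```python
# Sketch: rank Extra Trees feature importances by name.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier

X, y = make_classification(n_samples=300, n_features=6, n_informative=3,
                           random_state=0)
names = [f"feat_{i}" for i in range(6)]

model = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)
ranked = sorted(zip(names, model.feature_importances_),
                key=lambda p: p[1], reverse=True)
for name, importance in ranked:
    print(f"{name}: {importance:.3f}")
```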
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from torch.utils.data import TensorDataset, DataLoader
import os
import timm
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt
#define variables specific to this model
subject = 'sub01'
roi = 'FFA'
ensemble_model_name = 'model_weights_{}_{}_ensemble.pt'.format(subject, roi)
RAW_model_name = 'model_weights_{}_{}_RAW.pt'.format(subject, roi)
RAFT_model_name = 'model_weights_{}_{}_RAFT.pt'.format(subject, roi)
BDCN_model_name = 'model_weights_{}_{}_BDCN.pt'.format(subject, roi)
MFCC_model_name = 'model_weights_{}_{}_MFCC.pt'.format(subject, roi)
# define global variables / hyperparameters
batch_size = 16
num_epochs = 20
learning_rate = 0.001
# detect if GPU/CPU device
use_cuda = torch.cuda.is_available()
print('CUDA available:', use_cuda)
# set RNG seed for reproducibility
seed = 1
torch.manual_seed(seed)
np.random.seed(seed)
# setup gpu things
dtype = 'float32' if use_cuda else 'float64' # GPU does better with float32 numbers
torchtype = {'float32': torch.float32, 'float64': torch.float64}[dtype]
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# flush out the cache
torch.cuda.empty_cache()
# Function that takes arrays x1, y1, x2, y2
# each y has dimensions (num_samples, num_classes)
# finds the similar labels in y1 and y2
# uses the indices of similar labels to find the corresponding x1 and x2
# returns the x1, x2, y1
def find_similar_labels(x1, y1, x2, y2):
# for every label in y1, see if there is a similar label in y2
# if there is, add the corresponding x1 and x2 to the list
# return the list of x1, x2, y1, y2
x1_list = []
x2_list = []
y1_list = []
for i, label in enumerate(y1):
# labels are both floats, so check if they are close enough
# if they are close enough, add the corresponding x1 and x2 to the list
# if they are not close enough, do nothing
# if there is no similar label, do nothing
similar_label_idx = -1
similar_label = False
for j, label2 in enumerate(y2):
if np.allclose(label, label2, atol=0.1):
similar_label = True
similar_label_idx = j
break
if similar_label:
x1_list.append(x1[i])
            x2_list.append(x2[similar_label_idx])
y1_list.append(label)
return np.array(x1_list), np.array(x2_list), np.array(y1_list)
# Training a RNN with PyTorch for roi V1 with RAW data
# Load the entire training data into main memory
# This is a huge dataset, so we need to do this in chunks
#isolate subject of interests' data
num_subjects = 10
soi = subject
num_classes = -1
# read in every npy file in the directory Gunners_training_data/V1/RAW and store them in a list
RAW_training_data = []
RAW_training_labels = []
RAFT_training_data = []
RAFT_training_labels = []
BDCN_training_data = []
BDCN_training_labels = []
MFCC_training_data = []
MFCC_training_labels = []
# load in every Nth file in the directory
culling_scale = 1
preprocessing = 'MFCC'
input_data_dims = (39,261)
input_channels = 1
# for each file in the MFCC directory
for i, file in enumerate(os.listdir('../Gunners_training_data/{}/{}'.format(roi, preprocessing))):
# if the file name contains the soi string
if not soi in file:
continue
# if the file is a .npy file
if file.endswith('.npy'):
# read in the file
data = np.load('../Gunners_training_data/{}/{}/'.format(roi,preprocessing) + file, allow_pickle=True)
# print out first voxel label
# if i == 0:
# print(data[0][1][0])
# for each sample, make sure its dimensions are correct, if not then skip it
if data[0][0].shape != input_data_dims:
# if the shape is larger than the input_data_dims, then crop it
if data[0][0].shape[1] > input_data_dims[1]:
data[0][0] = data[0][0][:,:input_data_dims[1]]
if data[0][0].shape != input_data_dims:
continue
# for each sample, add the data to the training_data list
data[0][0] = np.expand_dims(data[0][0],axis=0)
MFCC_training_data.append(data[0][0])
# for each sample, add the label to the training_labels list
MFCC_training_labels.append(data[0][1])
num_classes = len(MFCC_training_labels[0])
preprocessing = 'RAW'
input_data_dims = (32,32,225)
input_channels = 225
# for each file in the RAW directory
for i, file in enumerate(os.listdir('../Gunners_training_data/{}/{}'.format(roi, preprocessing))):
# if the file name contains the soi string
if not soi in file:
continue
# if the file is a .npy file
if file.endswith('.npy'):
# read in the file
data = np.load('../Gunners_training_data/{}/{}/'.format(roi,preprocessing) + file, allow_pickle=True)
# print out first voxel label
# if i == 0:
# print(data[0][1][0])
# for each sample, make sure its dimensions are 32x32x225, if not then skip it
if data[0][0].shape != input_data_dims:
continue
# for each sample, add the data to the training_data list
RAW_training_data.append(data[0][0])
# for each label, add the data to the training_data list
RAW_training_labels.append(data[0][1])
# cull MFCCS that dont have RAW data and vice versa
MFCC_training_data = np.array(MFCC_training_data)
MFCC_training_labels = np.array(MFCC_training_labels)
RAW_training_data = np.array(RAW_training_data)
RAW_training_labels = np.array(RAW_training_labels)
print("Finding similar labels between MFCC and RAW")
MFCC_training_data, RAW_training_data, MFCC_training_labels = find_similar_labels(MFCC_training_data, MFCC_training_labels, RAW_training_data, RAW_training_labels)
preprocessing = 'RAFT'
input_data_dims = (32,32,222)
input_channels = 222
# for each file in the RAFT directory
for i, file in enumerate(os.listdir('../Gunners_training_data/{}/{}'.format(roi, preprocessing))):
# if the file name contains the soi string
if not soi in file:
continue
# if the file is a .npy file
if file.endswith('.npy'):
# read in the file
data = np.load('../Gunners_training_data/{}/{}/'.format(roi,preprocessing) + file, allow_pickle=True)
# print out first voxel label
# if i == 0:
# print(data[0][1][0])
    # for each sample, make sure its dimensions are 32x32x222; if not, skip it
if data[0][0].shape != input_data_dims:
continue
# for each sample, add the data to the training_data list
RAFT_training_data.append(data[0][0])
# for each label, add the data to the training_data list
RAFT_training_labels.append(data[0][1])
# cull MFCCs/RAW that dont have RAFT data and vice versa
RAFT_training_data = np.array(RAFT_training_data)
RAFT_training_labels = np.array(RAFT_training_labels)
print("Finding similar labels between MFCC and RAFT")
RAFT_training_data, MFCC_training_data, MFCC_training_labels = find_similar_labels(RAFT_training_data, RAFT_training_labels, MFCC_training_data, MFCC_training_labels)
preprocessing = 'BDCN'
input_data_dims = (32,32,75)
input_channels = 75
# for each file in the BDCN directory
for i, file in enumerate(os.listdir('../Gunners_training_data/{}/{}'.format(roi, preprocessing))):
# if the file name contains the soi string
if not soi in file:
continue
# if the file is a .npy file
if file.endswith('.npy'):
# read in the file
data = np.load('../Gunners_training_data/{}/{}/'.format(roi,preprocessing) + file, allow_pickle=True)
# print out first voxel label
# if i == 0:
# print(data[0][1][0])
    # for each sample, make sure its dimensions are 32x32x75; if not, skip it
if data[0][0].shape != input_data_dims:
continue
# for each sample, add the data to the training_data list
BDCN_training_data.append(data[0][0])
# for each label, add the data to the training_data list
BDCN_training_labels.append(data[0][1])
# cull MFCCs/RAW that dont have RAFT data and vice versa
BDCN_training_data = np.array(BDCN_training_data)
BDCN_training_labels = np.array(BDCN_training_labels)
print("Finding similar labels between MFCC and BDCN")
BDCN_training_data, MFCC_training_data, MFCC_training_labels = find_similar_labels(BDCN_training_data, BDCN_training_labels, MFCC_training_data, MFCC_training_labels)
print('Number of RAW training samples: ', len(RAW_training_data))
print('Number of MFCC training samples: ', len(MFCC_training_data))
print('Number of RAFT training samples: ', len(RAFT_training_data))
print('Number of BDCN training samples: ', len(BDCN_training_data))
print('Number of voxel activations (classes): ', num_classes)
# Only keep MFCC_training_labels; release memory of other label arrays
RAW_training_labels = None
RAFT_training_labels = None
BDCN_training_labels = None
#normalize all labels to be between -1 and 1
MFCC_training_labels = np.array(MFCC_training_labels)
MFCC_training_labels = (MFCC_training_labels - np.min(MFCC_training_labels)) / (np.max(MFCC_training_labels) - np.min(MFCC_training_labels))
#print the value range of the labels
print('Value range of labels: ', np.min(MFCC_training_labels), np.max(MFCC_training_labels))
num_classes = MFCC_training_labels[0].shape[0]
print('Number of voxel activations (classes): ', num_classes)
# verify the data is loaded correctly and is in numpy arrays
RAW_training_data = np.array(RAW_training_data)
RAFT_training_data = np.array(RAFT_training_data)
BDCN_training_data = np.array(BDCN_training_data)
MFCC_training_data = np.array(MFCC_training_data)
# combine all training data into one array for a pytorch Dataset object
RAW_training_data = torch.tensor(RAW_training_data).type(torchtype)
RAFT_training_data = torch.tensor(RAFT_training_data).type(torchtype)
BDCN_training_data = torch.tensor(BDCN_training_data).type(torchtype)
MFCC_training_data = torch.tensor(MFCC_training_data).type(torchtype)
training_labels = torch.tensor(MFCC_training_labels).type(torchtype)
# permute the data so that the first dimension is the number of samples, the second is the number of channels
# not viable for MFCC 2d data
RAW_training_data = RAW_training_data.permute(0,3,1,2)
RAFT_training_data = RAFT_training_data.permute(0,3,1,2)
BDCN_training_data = BDCN_training_data.permute(0,3,1,2)
#print the dims of training_data tensor
print('RAW_training_data tensor dims:', RAW_training_data.shape)
print('RAFT_training_data tensor dims:', RAFT_training_data.shape)
print('BDCN_training_data tensor dims:', BDCN_training_data.shape)
print('MFCC_training_data tensor dims:', MFCC_training_data.shape)
print('training_labels tensor dims:', training_labels.shape)
# create a dataset from the tensors
RAW_dataset = TensorDataset(RAW_training_data,training_labels)
RAFT_dataset = TensorDataset(RAFT_training_data,training_labels)
BDCN_dataset = TensorDataset(BDCN_training_data,training_labels)
MFCC_dataset = TensorDataset(MFCC_training_data,training_labels)
# split the data into training and validation sets
train_size = int(0.8 * len(RAW_training_data))
valid_size = len(RAW_training_data) - train_size
# create training and validation sets
RAW_dataset, RAW_validation_data = torch.utils.data.random_split(RAW_dataset, [train_size, valid_size])
RAFT_dataset, RAFT_validation_data = torch.utils.data.random_split(RAFT_dataset, [train_size, valid_size])
BDCN_dataset, BDCN_validation_data = torch.utils.data.random_split(BDCN_dataset, [train_size, valid_size])
MFCC_dataset, MFCC_validation_data = torch.utils.data.random_split(MFCC_dataset, [train_size, valid_size])
# # combine all the training datasets into a big one
# train_data = [RAW_dataset, RAFT_dataset, BDCN_dataset, MFCC_dataset]
# # concatenate all the validation datasets
# valid_data = torch.utils.data.ConcatDataset([RAW_validation_data, RAFT_validation_data, BDCN_validation_data, MFCC_validation_data])
# print out the range of values for labels
print('Value range of labels: ', np.min(MFCC_training_labels), np.max(MFCC_training_labels))
# create training and validation dataloaders
RAW_train_loader = DataLoader(RAW_dataset, batch_size = batch_size, shuffle=True)
RAFT_train_loader = DataLoader(RAFT_dataset, batch_size = batch_size, shuffle=True)
BDCN_train_loader = DataLoader(BDCN_dataset, batch_size = batch_size, shuffle=True)
MFCC_train_loader = DataLoader(MFCC_dataset, batch_size = batch_size, shuffle=True)
RAW_val_loader = DataLoader(RAW_validation_data, batch_size = batch_size, shuffle=False)
RAFT_val_loader = DataLoader(RAFT_validation_data, batch_size = batch_size, shuffle=False)
BDCN_val_loader = DataLoader(BDCN_validation_data, batch_size = batch_size, shuffle=False)
MFCC_val_loader = DataLoader(MFCC_validation_data, batch_size = batch_size, shuffle=False)
class MyEnsemble(nn.Module):
def __init__(self, modelRAW, modelRAFT, modelBDCN, modelMFCC, nb_classes=num_classes):
super(MyEnsemble, self).__init__()
self.modelRAW = modelRAW
self.modelRAFT = modelRAFT
self.modelBDCN = modelBDCN
self.modelMFCC = modelMFCC
# Remove last linear layer
# self.modelRAW.fc = nn.Identity()
# self.modelBDCN.fc = nn.Identity()
# self.modelRAFT.fc = nn.Identity()
# self.modelMFCC.fc = nn.Identity()
# Create new classifier
self.classifier1 = nn.Linear(num_classes*4, 2048)
self.classifier2 = nn.Linear(2048, nb_classes)
def forward(self, dataRAW, dataRAFT, dataBDCN, dataMFCC):
x1 = self.modelRAW(dataRAW)
x1 = x1.view(x1.size(0), -1)
x2 = self.modelBDCN(dataBDCN)
x2 = x2.view(x2.size(0), -1)
x3 = self.modelRAFT(dataRAFT)
x3 = x3.view(x3.size(0), -1)
x4 = self.modelMFCC(dataMFCC)
x4 = x4.view(x4.size(0), -1)
x = torch.cat((x1, x2, x3, x4), dim=1).to(device)
# print(x.shape)
x = torch.sigmoid(self.classifier2(F.relu(self.classifier1(F.relu(x)))))
return x
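# ---------------------------------------------------------------------------
# Toy sketch of the late-fusion idea in MyEnsemble (numpy stand-ins, not the
# real timm branches): each branch emits a class-sized vector, the vectors are
# concatenated along the feature axis (like torch.cat(..., dim=1)), and a small
# two-layer head maps the concatenation back to class-sized outputs.
import numpy as np
toy_rng = np.random.default_rng(0)
toy_classes, toy_batch = 4, 3
toy_dims = (5, 6, 7, 8)  # hypothetical per-branch input sizes

def _toy_branch(in_dim):
    W = toy_rng.normal(size=(in_dim, toy_classes))
    return lambda x: x @ W  # stand-in for a full CNN branch

toy_branches = [_toy_branch(d) for d in toy_dims]
toy_inputs = [toy_rng.normal(size=(toy_batch, d)) for d in toy_dims]
# late fusion: run each branch, concatenate the per-branch outputs
toy_feats = np.concatenate([f(x) for f, x in zip(toy_branches, toy_inputs)], axis=1)
# fusion head: linear -> ReLU -> linear, mirroring classifier1/classifier2
toy_W1 = toy_rng.normal(size=(toy_classes * 4, 16))
toy_W2 = toy_rng.normal(size=(16, toy_classes))
toy_out = np.maximum(toy_feats @ toy_W1, 0) @ toy_W2
assert toy_feats.shape == (toy_batch, toy_classes * 4)
assert toy_out.shape == (toy_batch, toy_classes)
# ---------------------------------------------------------------------------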
# define the RAW model
# DenseNet-121 backbone with 225 input channels
input_channels = 225
modelRAW = timm.create_model('densenet121', num_classes=num_classes, in_chans=input_channels, pretrained=False).to(device)
modelRAW = nn.Sequential(modelRAW, nn.Sigmoid())
modelRAW.eval()
# make the model use floats
modelRAW.float()
# load pretrained weights from the file
modelRAW.load_state_dict(torch.load(RAW_model_name))
# define optimizer
optimizerRAW = torch.optim.Adam(modelRAW.parameters(), lr=learning_rate)
# scheduler for Learning Rate
schedulerRAW = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizerRAW, 'min', patience=2, verbose=True)
# define the RAFT model
# CSPResNeXt-50 backbone with 222 input channels
input_channels = 222
modelRAFT = timm.create_model('cspresnext50', num_classes=num_classes, in_chans=input_channels, pretrained=False).to(device)
modelRAFT = nn.Sequential(modelRAFT, nn.Sigmoid())
modelRAFT.eval()
# make the model use floats
modelRAFT.float()
# load pretrained weights from the file
modelRAFT.load_state_dict(torch.load(RAFT_model_name))
# define optimizer
optimizerRAFT = torch.optim.Adam(modelRAFT.parameters(), lr=learning_rate)
# scheduler for Learning Rate
schedulerRAFT = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizerRAFT, 'min', patience=2, verbose=True)
# define the BDCN model
# DenseNet-121 backbone with 75 input channels
input_channels = 75
modelBDCN = timm.create_model('densenet121', num_classes=num_classes, in_chans=input_channels, pretrained=False).to(device)
modelBDCN = nn.Sequential(modelBDCN, nn.Sigmoid())
modelBDCN.eval()
# make the model use floats
modelBDCN.float()
# load pretrained weights from the file
modelBDCN.load_state_dict(torch.load(BDCN_model_name))
# define optimizer
optimizerBDCN = torch.optim.Adam(modelBDCN.parameters(), lr=learning_rate)
# scheduler for Learning Rate
schedulerBDCN = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizerBDCN, 'min', patience=2, verbose=True)
# define the MFCC model
# DenseNet-121 backbone with a single input channel
input_channels = 1
modelMFCC = timm.create_model('densenet121', num_classes=num_classes, in_chans=input_channels, pretrained=False).to(device)
modelMFCC = nn.Sequential(modelMFCC, nn.Sigmoid())
modelMFCC.eval()
# make the model use floats
modelMFCC.float()
# load pretrained weights from the file
modelMFCC.load_state_dict(torch.load(MFCC_model_name))
# define optimizer
optimizerMFCC = torch.optim.Adam(modelMFCC.parameters(), lr=learning_rate)
# scheduler for Learning Rate
schedulerMFCC = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizerMFCC, 'min', patience=2, verbose=True)
# create a loss function that is 1 - the correlation coefficient
def corrcoef_loss_function(output, target):
x = output
y = target
vx = x - torch.mean(x)
vy = y - torch.mean(y)
cost = torch.sum(vx * vy) / (torch.sqrt(torch.sum(vx ** 2)) * torch.sqrt(torch.sum(vy ** 2)))
# mse_loss = torch.mean((output - target) ** 2)
return (1 - cost)**3
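# ---------------------------------------------------------------------------
# Quick sanity check of the loss above with a torch-free numpy stand-in (same
# math): identical vectors are perfectly correlated (r = 1, loss 0) and
# sign-flipped vectors are perfectly anti-correlated (r = -1, loss (1-(-1))**3 = 8).
import numpy as np

def _np_corrcoef_loss(output, target):
    vx = output - output.mean()
    vy = target - target.mean()
    r = (vx * vy).sum() / (np.sqrt((vx ** 2).sum()) * np.sqrt((vy ** 2).sum()))
    return (1 - r) ** 3

_toy = np.array([1.0, 2.0, 3.0, 4.0])
assert abs(_np_corrcoef_loss(_toy, _toy)) < 1e-9
assert abs(_np_corrcoef_loss(_toy, -_toy) - 8.0) < 1e-9
# ---------------------------------------------------------------------------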
# define the ensemble model
model = MyEnsemble(modelRAW, modelRAFT, modelBDCN, modelMFCC)
# define optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# scheduler for Learning Rate
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min', patience=2, verbose=True)
# define loss function for multi-variable regression
loss_fn = corrcoef_loss_function
if use_cuda:
#put model on gpu
model = model.to(device)
# keep track of training/validation loss
train_losses = []
valid_losses = []
# train the model
#progress bar for training
pbar = tqdm(range(num_epochs))
for epoch in pbar:
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
# keep track of training and validation accuracy
train_accuracy = 0.0
valid_accuracy = 0.0
# set the model to training mode
model.train()
# train the model for one epoch
for RAW_data, RAFT_data, BDCN_data, MFCC_data in zip(RAW_train_loader, RAFT_train_loader, BDCN_train_loader, MFCC_train_loader):
image_batch1, labels = RAW_data
image_batch2, _ = RAFT_data
image_batch3, _ = BDCN_data
image_batch4, _ = MFCC_data
# move tensors to GPU if CUDA is available
if use_cuda:
image_batch1, labels = image_batch1.to(device), labels.to(device)
image_batch2, image_batch3, image_batch4 = image_batch2.to(device), image_batch3.to(device), image_batch4.to(device)
# zero out the gradients
optimizer.zero_grad()
# forward pass
output = model(dataRAW = image_batch1, dataRAFT = image_batch2, dataBDCN = image_batch3, dataMFCC = image_batch4)
# calculate loss
loss = loss_fn(output, labels)
# backpropagate
loss.backward()
# update the weights
optimizer.step()
# calculate the training loss
train_loss += loss.item()
# set the model to evaluation mode
model.eval()
# evaluate the model on the validation set
for RAW_data, RAFT_data, BDCN_data, MFCC_data in zip(RAW_val_loader, RAFT_val_loader, BDCN_val_loader, MFCC_val_loader):
image_batch1, labels = RAW_data
image_batch2, _ = RAFT_data
image_batch3, _ = BDCN_data
image_batch4, _ = MFCC_data
# move tensors to GPU if CUDA is available
if use_cuda:
image_batch1, labels = image_batch1.to(device), labels.to(device)
image_batch2, image_batch3, image_batch4 = image_batch2.to(device), image_batch3.to(device), image_batch4.to(device)
#put model on gpu
model = model.to(device)
# validation forward pass
output = model(image_batch1, image_batch2, image_batch3, image_batch4)
# calculate the validation loss
valid_loss += loss_fn(output, labels).item()
# calculate the average training loss and accuracy
train_loss = train_loss/len(RAW_train_loader)
# calculate the average validation loss and accuracy
valid_loss = valid_loss/len(RAW_val_loader)
# ping the learning rate scheduler
scheduler.step(valid_loss)
# append the training and validation loss and accuracy to the lists
train_losses.append(train_loss)
valid_losses.append(valid_loss)
# if current validation loss was best so far, save the model weights in memory
best_valid_loss = min(valid_losses)
if valid_loss == best_valid_loss:
my_best_weights = model.state_dict()
# display the epoch training loss
pbar.set_postfix({
'Epoch':'{}/{}'.format(epoch+1, num_epochs),
'Training Loss': '{:.4f}'.format(train_loss) ,
'Validation loss' : '{:.4f}'.format(valid_loss)})
# assign the best weights to the model
model.load_state_dict(my_best_weights)
#print the epoch of the best validation loss
print('Best validation loss: ', min(valid_losses))
print('Epoch of best validation loss: ', valid_losses.index(min(valid_losses))+1)
# print the model summary
# print(model)
# save the best model weights to a file
torch.save(model.state_dict(), ensemble_model_name)
# plot the training and validation loss and accuracy and a vertical line on the x-axis at the epoch of the best validation loss
best_epoch = valid_losses.index(min(valid_losses))+1
plt.figure(figsize=(12,8))
plt.plot(range(1,num_epochs+1), train_losses, label='Training Loss')
plt.plot(range(1,num_epochs+1), valid_losses, label='Validation Loss')
plt.axvline(best_epoch, color='r', linestyle='--', label='Best Validation Loss Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
# load the ensemble model from the file
model.load_state_dict(torch.load(ensemble_model_name))
model.eval()
# display a side by side comparison of the original label and the predicted label
def display_side_by_side(original, prediction):
#add title to the figure
fig = plt.figure(figsize=(15,5))
ax = fig.add_subplot(1, 3, 1)
ax.imshow(original)
ax.set_title('Original')
ax = fig.add_subplot(1, 3, 2)
ax.imshow(prediction)
ax.set_title('Prediction')
# calculate the mean squared error
mse = (original - prediction)**2
# display mse next to the other comparisons
ax = fig.add_subplot(1, 3, 3)
ax.set_title('MSE: {:.4f}'.format (mse.mean()))
ax.imshow(mse)
plt.show()
# display a figure of the mean squared error between the original label and the predicted label on matplotlib
def display_mse(original, prediction):
mse = np.mean((original - prediction)**2)
print('Mean Squared Error: ', mse)
plt.imshow((original - prediction)**2)
plt.show()
# print(training_labels[0].unsqueeze(0).numpy().shape)
# resized_original = training_labels[0].unsqueeze(0).numpy().reshape(8,29)
# resized_prediction = model(training_data[0].unsqueeze(0).to(device)).detach().cpu().numpy().reshape(8,29)
#draw a correlation coefficient graph between the original label and the predicted label
def draw_correlation_coefficient(original, prediction):
# calculate the correlation coefficient
corr_coeff = np.corrcoef(original, prediction)[0,1]
# display the correlation coefficient
print('Correlation Coefficient: ', corr_coeff)
# plot the correlation coefficient graph
plt.plot(original, prediction, 'o')
plt.xlabel('Original')
plt.ylabel('Prediction')
plt.title('Correlation Coefficient: {:.2f}'.format(corr_coeff))
plt.show()
#print out value ranges of prediction
# print('Prediction Range: ', np.min(resized_prediction), np.max(resized_prediction))
# display a side by side comparison of the original label and the predicted label
# display_side_by_side(resized_original,resized_prediction)
# display_mse(resized_original,resized_prediction)
# draw_correlation_coefficient(training_labels[0].unsqueeze(0).numpy(),model(training_data[0].unsqueeze(0).to(device)).detach().cpu().numpy())
#find out the correlation coefficient between the original label and the predicted label for the entire dataset
def find_correlation_coeff(model):
# calculate the correlation coefficient
corr_coeff_list = []
# separate the pytorch dataset into the data and labels
# set the model to evaluation mode
# evaluate the model on the validation set
for RAW_data, RAFT_data, BDCN_data, MFCC_data in zip(RAW_val_loader, RAFT_val_loader, BDCN_val_loader, MFCC_val_loader):
image_batch1, labels = RAW_data
image_batch2, _ = RAFT_data
image_batch3, _ = BDCN_data
image_batch4, _ = MFCC_data
print(labels.cpu().numpy()[0][0])
# print the range of labels
# move tensors to GPU if CUDA is available
if use_cuda:
image_batch1, labels = image_batch1.to(device), labels.to(device)
image_batch2, image_batch3, image_batch4 = image_batch2.to(device), image_batch3.to(device), image_batch4.to(device)
#put model on gpu
model = model.to(device)
model.eval()
# validation forward pass
output = model(image_batch1, image_batch2, image_batch3, image_batch4)
# calculate the correlation coefficient for every image in the batch
# for every image in the batch
for i in range(len(output)):
# calculate the correlation coefficient
corr_coeff = np.corrcoef(labels.cpu().numpy()[i].T, output[i].detach().cpu().numpy().T)[0,1]
# append the correlation coefficient to the list
corr_coeff_list.append(corr_coeff)
# calculate the mean correlation coefficient
mean_corr_coeff = np.mean(corr_coeff_list)
#print the highest correlation coefficient and lowest correlation coefficient
print('Highest Correlation Coefficient: ', max(corr_coeff_list))
print('Lowest Correlation Coefficient: ', min(corr_coeff_list))
# plot a histogram of the correlation coefficients
plt.hist(corr_coeff_list, bins=20)
plt.xlabel('Correlation Coefficient')
plt.ylabel('Frequency')
plt.title('Histogram of Correlation Coefficients')
plt.show()
# display the mean correlation coefficient
print('Mean Correlation Coefficient: ', mean_corr_coeff)
return mean_corr_coeff
model.eval()
print ( 'Mean Correlation Coefficient: ', find_correlation_coeff(model))
```
| github_jupyter |
```
%matplotlib widget
import os
import sys
sys.path.insert(0, os.getenv('HOME')+'/pycode/MscThesis/')
import pandas as pd
from amftrack.util import get_dates_datetime, get_dirname, get_plate_number, get_postion_number,get_begin_index
import ast
from amftrack.plotutil import plot_t_tp1
from scipy import sparse
from datetime import datetime
from amftrack.pipeline.functions.node_id import orient
import pickle
import scipy.io as sio
from pymatreader import read_mat
from matplotlib import colors
import cv2
import imageio
import matplotlib.pyplot as plt
import numpy as np
from skimage.filters import frangi
from skimage import filters
from random import choice
import scipy.sparse
import os
from amftrack.pipeline.functions.extract_graph import from_sparse_to_graph, generate_nx_graph, sparse_to_doc
from skimage.feature import hessian_matrix_det
from amftrack.pipeline.functions.experiment_class_surf import Experiment, Edge, Node, Hyphae, plot_raw_plus
from amftrack.pipeline.paths.directory import run_parallel, find_state, directory_scratch, directory_project
from amftrack.notebooks.analysis.util import *
from scipy import stats
from scipy.ndimage import uniform_filter1d
from statsmodels.stats import weightstats as stests
from amftrack.pipeline.functions.hyphae_id_surf import get_pixel_growth_and_new_children
from collections import Counter
from IPython.display import clear_output
from amftrack.notebooks.analysis.data_info import *
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
plt.rcParams.update({
"font.family": "verdana",
'font.weight' : 'normal',
'font.size': 20})
from amftrack.plotutil import plot_node_skel
lapse = 60
exp = get_exp((38,131,131+lapse),directory_project)
exp2 = Experiment(38,directory_project)
exp2.copy(exp)
exp = exp2
def transform_skeleton_final_for_show(skeleton_doc,Rot,trans):
skeleton_transformed={}
transformed_keys = np.round(np.transpose(np.dot(Rot,np.transpose(np.array(list(skeleton_doc.keys())))))+trans).astype(int)
i=0
for pixel in list(transformed_keys):
i+=1
skeleton_transformed[(pixel[0],pixel[1])]=1
skeleton_transformed_sparse=sparse.lil_matrix((27000, 60000))
for pixel in list(skeleton_transformed.keys()):
i+=1
skeleton_transformed_sparse[(pixel[0],pixel[1])]=1
return(skeleton_transformed_sparse)
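# ---------------------------------------------------------------------------
# Minimal sketch of the rigid transform above: each (row, col) pixel is mapped
# to round(R @ p + t).  The identity/zero call used below leaves pixels in
# place; here a 90-degree rotation plus a shift, for illustration only.
import numpy as np
demo_pixels = np.array([[1, 0], [0, 2], [3, 3]])
demo_R = np.array([[0, -1], [1, 0]])   # rotate 90 degrees
demo_t = np.array([10, 10])
demo_out = np.round(demo_pixels @ demo_R.T + demo_t).astype(int)
assert demo_out.tolist() == [[10, 11], [8, 10], [7, 13]]
# ---------------------------------------------------------------------------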
def get_skeleton_non_aligned(exp,boundaries,t,directory):
i = t
plate = exp.plate
listdir=os.listdir(directory)
dates = exp.dates
date =dates [i]
directory_name = get_dirname(date, plate)
path_snap=directory+directory_name
skel = read_mat(path_snap+'/Analysis/skeleton.mat')
skelet = skel['skeleton']
skelet = sparse_to_doc(skelet)
# Rot= skel['R']
# trans = skel['t']
skel_aligned = transform_skeleton_final_for_show(skelet,np.array([[1,0],[0,1]]),np.array([0,0]))
output = skel_aligned[boundaries[2]:boundaries[3],boundaries[0]:boundaries[1]].todense()
kernel = np.ones((5,5),np.uint8)
output = cv2.dilate(output.astype(np.uint8),kernel,iterations = 1)
return(output)
from amftrack.util import get_skeleton
def plot_raw_plus_random(exp,compress=5,ranges = 1000):
t0 = choice(range(exp.ts))
node_ch = choice([node for node in exp.nodes if node.is_in(t0) and node.degree(t0)==1])
# node_ch = choice(exp.nodes)
# t0 = choice(node_ch.ts())
node_ch.show_source_image(t0,t0+1)
for index,t in enumerate([t0,t0+1]):
date = exp.dates[t]
anchor_time = t0
center = node_ch.pos(anchor_time)[1],node_ch.pos(anchor_time)[0]
window = (center[0]-ranges,center[0]+ranges,center[1]-ranges,center[1]+ranges)
skelet= get_skeleton_non_aligned(exp,window,t,exp.directory)
tips = [node.label for node in exp.nodes if t in node.ts() and node.degree(t) ==1 and node.pos(t)[1]>=window[0]-ranges and node.pos(t)[1]<=window[1]+ranges and node.pos(t)[0]>=window[2]-ranges and node.pos(t)[0]<=window[3]+ranges]
junction = [node.label for node in exp.nodes if t in node.ts() and node.degree(t) >=2 and node.pos(t)[1]>=window[0]-ranges and node.pos(t)[1]<=window[1]+ranges and node.pos(t)[0]>=window[2]-ranges and node.pos(t)[0]<=window[3]+ranges]
directory_name = get_dirname(date,exp.plate)
path_snap = exp.directory + directory_name
skel = read_mat(path_snap + "/Analysis/skeleton_pruned_realigned.mat")
Rot = skel["R"]
trans = skel["t"]
im = read_mat(path_snap+'/Analysis/raw_image.mat')['raw']
size = 8
fig = plt.figure(figsize = (9,9))
ax = fig.add_subplot(111)
ax.imshow(im[(window[2]//compress):(window[3]//compress),(window[0]//compress):(window[1]//compress)])
ax.imshow(cv2.resize(skelet,(2*ranges//compress,2*ranges//compress)),alpha = 0.2)
shift=(window[2],window[0])
greys = [1,0.5]
for i,node_list in enumerate([tips,junction]):
grey = greys[i]
bbox = dict(boxstyle="circle", fc=colors.rgb2hex((grey, grey, grey)))
# ax.text(right, top, time,
# horizontalalignment='right',
# verticalalignment='bottom',
# transform=ax.transAxes,color='white')
for node in node_list:
# print(self.positions[ts[i]])
if node in exp.positions[t].keys():
xs,ys = exp.positions[t][node]
rottrans = np.dot(np.linalg.inv(Rot), np.array([xs, ys] - trans))
ys, xs = round(rottrans[0]), round(rottrans[1])
tex = ax.text(
(xs - shift[1]) // compress,
(ys - shift[0]) // compress,
str(node),
ha="center",
va="center",
size=size,
bbox=bbox,
)
plt.show()
plt.close('all')
plot_raw_plus_random(exp,compress=5,ranges = 700)
```
| github_jupyter |
```
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
```
# Question1
```
bk = pd.read_csv('./data_banknote_authentication.csv')
bk = bk.rename({'class':'Class'}, axis='columns')
color = []
for c in bk.Class:
if c == 0:
color.append('Green')
else:
color.append('red')
bk['Color'] = color
bk0 = bk[bk.Class == 0]
bk1 = bk[bk.Class == 1]
print(bk0.describe())
print()
print(bk1.describe())
print()
print(bk.describe())
```
# Question2
```
# Split data and pairplot
## bk0
x0 = bk0[['variance', 'skewness', 'curtosis', 'entropy']]
y0 = bk0[['Class']]
x0Train, x0Test, y0Train, y0Test = train_test_split(x0, y0, test_size=0.5, random_state=0)
f0 = sns.pairplot(x0Train)
f0.fig.suptitle("class 0")
## bk1
x1 = bk1[['variance', 'skewness', 'curtosis', 'entropy']]
y1 = bk1[['Class']]
x1Train, x1Test, y1Train, y1Test = train_test_split(x1, y1, test_size=0.5, random_state=0)
f1 = sns.pairplot(x1Train)
f1.fig.suptitle("class 1")
# easy model
f = plt.figure()
f.set_size_inches(12,24)
## variance
va = f.add_subplot(4,2,1)
a0 = x0Train.variance
a1 = x1Train.variance
va.plot(a0, np.zeros_like(a0) + 0, '.', color = 'green')
va.plot(a1, np.zeros_like(a1) + 0.1, '.', color = 'red')
va.set_title('variance')
vah = f.add_subplot(4,2,2)
vah.hist(a0, color='green')
vah.hist(a1, color = 'red', alpha=0.3)
vah.set_title('variance')
## skewness
sk = f.add_subplot(4,2,3)
a0 = x0Train.skewness
a1 = x1Train.skewness
sk.plot(a0, np.zeros_like(a0) + 0, '.', color = 'green')
sk.plot(a1, np.zeros_like(a1) + 0.1, '.', color = 'red')
sk.set_title('skewness')
skh = f.add_subplot(4,2,4)
skh.hist(a0, color='green')
skh.hist(a1, color = 'red', alpha=0.3)
skh.set_title('skewness')
## curtosis
cu = f.add_subplot(4,2,5)
a0 = x0Train.curtosis
a1 = x1Train.curtosis
cu.plot(a0, np.zeros_like(a0) + 0, '.', color = 'green')
cu.plot(a1, np.zeros_like(a1) + 0.1, '.', color = 'red')
cu.set_title('curtosis')
cuh = f.add_subplot(4,2,6)
cuh.hist(a0, color='green')
cuh.hist(a1, color = 'red', alpha=0.3)
cuh.set_title('curtosis')
## entropy
en = f.add_subplot(4,2,7)
a0 = x0Train.entropy
a1 = x1Train.entropy
en.plot(a0, np.zeros_like(a0) + 0, '.', color = 'green')
en.plot(a1, np.zeros_like(a1) + 0.1, '.', color = 'red')
en.set_title('entropy')
enh = f.add_subplot(4,2,8)
enh.hist(a0, color='green')
enh.hist(a1, color = 'red', alpha=0.3)
enh.set_title('entropy')
# Predict label
x = bk[['variance', 'skewness', 'curtosis', 'entropy']]
y = bk[['Class']]
xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size=0.5, random_state=0)
yPredict = []
for v in xTest.variance:
if v >= 0 :
yPredict.append(0)
else:
yPredict.append(1)
# True False
tp = 0
tn = 0
fp = 0
fn = 0
acc = 0
for (p, t) in zip(yPredict, yTest.Class):
if p == 0 and t == 0:
tp += 1
elif p == 1 and t == 1:
tn += 1
elif p == 0 and t == 1:
fp += 1
elif p == 1 and t == 0:
fn += 1
if p == t:
acc = acc + 1
print("TP:{} FP:{} TN:{} FN:{} TPR:{} TNR:{} Accuracy:{}".format(tp, fp, tn, fn, tp/(tp + fn), tn/(tn + fp), acc / len(yPredict)))
## Confusion matrix via sklearn
# convention chosen: class 0 is genuine ('good'), class 1 is forged ('bad')
temp = confusion_matrix(yTest, yPredict)
print(temp)
tn = temp[0][0]
fn = temp[1][0]
tp = temp[1][1]
fp = temp[0][1]
tpr = tp / (tp + fn)
tnr = tn / (tn + fp)
print('TPR = {}, TNR = {}, tp fp tn fn = {} {} {} {}'.format(tpr, tnr, tp, fp, tn, fn))
```
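As a self-contained check of the TPR/TNR arithmetic above (toy labels rather than the banknote data, and treating class 1 as the positive class):

```python
# Toy labels (hypothetical) to check the TP/TN/FP/FN counting and the rates
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]

tp = sum(p == 1 and t == 1 for p, t in zip(y_pred, y_true))
tn = sum(p == 0 and t == 0 for p, t in zip(y_pred, y_true))
fp = sum(p == 1 and t == 0 for p, t in zip(y_pred, y_true))
fn = sum(p == 0 and t == 1 for p, t in zip(y_pred, y_true))

tpr = tp / (tp + fn)  # true positive rate (sensitivity): 3/4
tnr = tn / (tn + fp)  # true negative rate (specificity): 3/4
print(tp, fp, tn, fn, tpr, tnr)  # 3 1 3 1 0.75 0.75
```

The manual loop and `confusion_matrix` route above should agree with these formulas once the positive-class convention is fixed.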
# Question3
```
# KNN
kList = [3,5,7,9,11]
accuracy = []
for k in kList:
knn = KNeighborsClassifier(n_neighbors=k)
knn.fit(xTrain, yTrain)
yPredict = knn.predict(xTest)
accuracy.append(accuracy_score(yTest, yPredict))
plt.plot(kList, accuracy)
print(accuracy)
# k = 7 is optimal
knn = KNeighborsClassifier(n_neighbors=7)
knn.fit(xTrain, yTrain)
yPredict = knn.predict(xTest)
# True False
tp = 0
tn = 0
fp = 0
fn = 0
acc = 0
for (p, t) in zip(yPredict, yTest.Class):
if p == 0 and t == 0:
tp += 1
elif p == 1 and t == 1:
tn += 1
elif p == 0 and t == 1:
fp += 1
elif p == 1 and t == 0:
fn += 1
if p == t:
acc = acc + 1
print("TP:{} FP:{} TN:{} FN:{} TPR:{} TNR:{} Accuracy:{}".format(tp, fp, tn, fn, tp/(tp + fn), tn/(tn + fp), acc / len(yPredict)))
# BU ID 64501194
# Take 1 1 9 4
x = {'variance':[1], 'skewness':[1], 'curtosis':[9], 'entropy':[4]}
x = pd.DataFrame.from_dict(x)
## my simple classifier
yPredict = 1
print("my simple classifier: {}".format(yPredict))
## for best knn
knn = KNeighborsClassifier(n_neighbors=7)
knn.fit(xTrain, yTrain)
yPredict = knn.predict(x)
print("knn(n=7): {}".format(yPredict))
```
| github_jupyter |
# Analysing the Stroop Effect
## Introduction
The aim of this project was to investigate a classic phenomenon from experimental psychology called the Stroop Effect. The Stroop Effect is a demonstration of interference in the reaction time of a task. The Stroop task investigated for this project was a list of congruent and incongruent coloured words. The congruent words were colour words whose names matched the colours in which they were printed (e.g. the word 'blue' printed in blue ink). The incongruent words were colour words whose names did not match the colours in which they were printed (e.g. the word 'blue' printed in red ink).
For the purpose of this project, a data set was provided containing the reaction times of 24 participants naming the congruent and incongruent words out loud. The aim was to identify whether, on average, individuals have a longer reaction time to name the colours of incongruent words compared to congruent words.
## Methods
The aim was to investigate if, on average, individuals have a longer reaction time to name the colours of the incongruent words compared to congruent words. However, we are dealing with a small sample of just 24 data points for each test, and any difference observed between the reaction times in this dataset could be due to random chance. Therefore, we would like to determine the likelihood of observing a difference in the mean reaction times for the population at large. To do this, we can perform a hypothesis test to assess whether the sample means (x_bar) differ because the two population means (µ) differ, or just by chance. The hypotheses for this task are as follows:
The null hypothesis: There is no difference between the average reaction time of naming the colours of the congruent words and the incongruent words.
The alternative hypothesis: There is a difference between the average reaction time of naming the colours of the congruent words and the incongruent words.
In summary: Ho: µ_diff = 0 and HA: µ_diff ≠ 0 (where µ_diff is the difference in the mean reaction times of naming the colours for the congruent and incongruent words).
In order to test the null hypothesis, a paired t-test can be implemented. In this case a t-test is preferred over a z-test because the sample size is small (24 individuals) and a t-test addresses the uncertainty of the standard error estimate for small sample sizes. The t-test is considered quite robust to violations of the normal distribution. This means that, without knowing the population parameters, we can assume a normal population distribution without serious error being introduced into the test. Importantly, the reaction time results between the two tests are not independent of each other. The reaction speed of one individual in the congruent test is likely not independent of their reaction speed in the incongruent test, e.g. if a person is good at such tests, they are likely to have relatively quick reaction times for both the congruent and incongruent words. Therefore, these data sets are said to be paired. Thus, a paired t-test will be performed in order to assess the hypothesis.
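As a minimal sketch of the paired t-test arithmetic (toy difference values, not the Stroop data), the test reduces to a one-sample t-test on the per-subject differences:

```python
import numpy as np

# Toy per-subject differences (hypothetical values, not the Stroop data)
d = np.array([1.0, 2.0, 3.0])
n = len(d)

# t statistic: mean difference divided by its standard error
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(n))
print(round(t_stat, 4))  # 3.4641
```

With the real data, `scipy.stats.ttest_rel` performs the same computation and also returns the p-value.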
Bootstrapping is another method which can be implemented here to test if the null hypothesis is true. Bootstrapping is a simulation-based method which takes samples from the original sample, with replacement. Sample statistics can be derived from the bootstrap samples, and the process can be repeated many times to build up a sampling distribution which gives us an idea of what the population sampling distribution would look like if we had access to it. In this case, we are dealing with a very small sample size of 24, which makes it difficult to assume normality. Therefore, bootstrap simulations allow us to build up a sampling distribution, rather than just assuming normality, from which we can measure confidence intervals and determine p-values to test the validity of the null hypothesis.
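The bootstrap idea above can be sketched in a few lines (toy data and an arbitrary seed, not the Stroop sample):

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.normal(loc=10.0, scale=2.0, size=24)  # pretend this is our n=24 sample

# resample with replacement, collect the statistic of interest (the mean here)
boot_means = np.array([rng.choice(sample, size=len(sample)).mean()
                       for _ in range(10000)])

# a 95% confidence interval is read straight off the percentiles
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(lo, hi)
```

The Statistical Tests section applies exactly this resampling scheme to the congruent and incongruent reaction times.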
## Descriptive Statistics
```
import pandas as pd
data = pd.read_csv('stroopdata.csv')
data.describe()
```
From the descriptive statistics table, the measures of centre of the data are provided by the mean and median (the 50th percentile) values. These values give an indication of how the reaction times between the two data sets differ. The mean of the congruent data set is lower than the mean of the incongruent data set (14.05 s vs. 22.02 s, respectively), which suggests that, on average, the incongruent test takes longer to complete than the congruent test. The standard deviation (std) provides an insight into the variability of the two data sets. The standard deviation is higher for the incongruent reaction times (4.80) than the congruent reaction times (3.56), suggesting that the individual results for the incongruent data are more varied. However, further exploratory data analysis and statistical analysis are required to gain better insights from these descriptive statistics.
## Data Visualisation
```
import matplotlib.pyplot as plt
import pandas as pd
data = pd.read_csv('stroopdata.csv')
_ = data.boxplot()
_ = plt.ylabel('reaction time (s)')
plt.show()
_ = plt.hist(data['Congruent'], bins = 24)
_ = plt.xlabel('reaction times')
_ = plt.ylabel('counts')
_ = plt.title('Congruent Test')
_ = plt.xlim(5, 40)
plt.show()
_ = plt.hist(data['Incongruent'], bins=24)
_ = plt.xlabel('reaction times')
_ = plt.ylabel('counts')
_ = plt.title('Incongruent Test')
_ = plt.xlim(5, 40)
plt.show()
```
The boxplot clearly shows the difference in the interquartile range (IQR) of the congruent test reaction times compared to the incongruent test reaction times, i.e. the middle 50% of the congruent test reaction times are lower than the middle 50% of the incongruent test reaction times. In the case of the boxplot for the incongruent data, two data points are visible at approximately 35 seconds, which is more than 2 IQR away from the median (2 * 5.33 + 21.02 = 31.68), and therefore these points can be denoted as outliers. These outliers will increase the mean of the incongruent data and thus result in a right-skewed distribution of the reaction times, as illustrated in the histogram. Although outliers are not present in the congruent data boxplot, the distribution of reaction times is also right-skewed, as shown in the histogram of that data set.
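The outlier rule applied above (flagging points more than 2 × IQR from the median) can be verified on toy values:

```python
import numpy as np

# Toy reaction times (hypothetical, not the Stroop data) with two high outliers
x = np.array([18.0, 19.0, 20.0, 21.0, 22.0, 23.0, 35.0, 36.0])
q1, med, q3 = np.percentile(x, [25, 50, 75])
iqr = q3 - q1
upper = med + 2 * iqr            # same rule as 2 * 5.33 + 21.02 above
outliers = x[x > upper]
print(outliers)  # [35. 36.]
```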
## Statistical Tests
```
import scipy.stats as stats
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#set random number generator for reproducibility
np.random.seed(123456789)
#load data and get arrays of both data sets
data = pd.read_csv('stroopdata.csv')
con = np.array(data['Congruent'])
incon = np.array(data['Incongruent'])
#Define bootstrap replicate function
def bootstrap_replicate(data, func):
'''Generate bootstrap replicate of 1D data'''
bs_sample = np.random.choice(data, len(data))
return func(bs_sample)
#Define multiple bootstrap replicates function
def draw_bs_reps(data, func, size=1):
'''draw bootstrap replicates'''
bs_replicates = np.empty(size)
for i in range(size):
bs_replicates[i] = bootstrap_replicate(data, func)
return bs_replicates
# GENERATE BOOTSTRAP REPLICATES FOR CONFIDENCE INTERVALS
# 1) Prepare data - flatten con and incon to 1D arrays
diff_obs = np.mean(con) - np.mean(incon)
con_1D = con.flatten()
incon_1D = incon.flatten()
# 2) compute bs_replicates for confidence interval
bs_replicates_con = draw_bs_reps(con_1D, np.mean, size=10000)
bs_replicates_incon = draw_bs_reps(incon_1D, np.mean, size=10000)
bs_replicates_diff_mean = bs_replicates_con - bs_replicates_incon
conf_int = np.percentile(bs_replicates_diff_mean, [2.5, 97.5])
# 3) plot the difference in mean from bs_replicates
_=plt.hist(bs_replicates_diff_mean, bins=30, density=True, edgecolor = 'black')
_=plt.title('Bootstrap replicates for the difference in mean between Congruent and Incongruent data')
_=plt.xlabel('Difference in Mean (s)')
_=plt.ylabel('PDF')
plt.show()
# 4) print diff mean & confidence intervals
print('Difference in observed mean =', diff_obs, 's')
print('95% Confidence Interval from bs_reps =', conf_int, 's')
# GENERATE BOOTSTRAP REPLICATES FOR p-VALUES
# 1) Concatenate data for bootstrap samples
data_concat = np.concatenate((con, incon))
concat_mean = np.mean(data_concat)
# 2) shift arrays & flatten shifted arrays to 1D
con_shift = con - np.mean(con) + concat_mean
incon_shift = incon - np.mean(incon) + concat_mean
con_shift_1D = con_shift.flatten()
incon_shift_1D = incon_shift.flatten()
# 3) compute multiple bs_replicates of shifted arrays
bs_replicates_con_shift = draw_bs_reps(con_shift_1D, np.mean, size=10000)
bs_replicates_incon_shift = draw_bs_reps(incon_shift_1D, np.mean, size=10000)
# 4) get difference of means of replicates
bs_replicates_diff_mean_shift = bs_replicates_con_shift - bs_replicates_incon_shift
# 5) plot the difference of means of bs_replicates for null hypothesis
_ = plt.hist(bs_replicates_diff_mean_shift, bins=30, density=True, edgecolor='black')
_=plt.title('Bootstrap replicates for the difference in mean between shifted Congruent and Incongruent data')
_=plt.xlabel('Difference in Mean (s)')
_=plt.ylabel('PDF under null')
plt.show()
# 6) compute p-value: p_bs
p_bs = np.sum(bs_replicates_diff_mean_shift <= diff_obs) / len(bs_replicates_diff_mean_shift)
# 7) print p-value and average bootstrap diff in mean
print('p-value bs_rep = ', p_bs)
print('Difference in bs_replicates mean =', np.sum(bs_replicates_diff_mean_shift) / len(bs_replicates_diff_mean_shift))
# METHOD 2: Use t-test on data set to find p-values
t_test = stats.ttest_rel(con, incon)
print(t_test)
```
## Results & Discussion
The difference between the average time taken to complete the congruent test and the incongruent test was -7.96 seconds. This result was calculated from a sample of just 24 individuals and indicates that, on average, individuals take almost 8 seconds longer to complete the incongruent test. Due to the small sample size, the results were resampled 10,000 times using bootstrap simulations. The 95% confidence interval computed from the bootstrap samples shows that we are 95% confident that it takes between 5.64 and 10.36 seconds longer to complete the incongruent test than the congruent test.
In order to determine that this result was not obtained by random chance, a hypothesis test was carried out. The null hypothesis states that the mean times taken to complete the congruent test and the incongruent test are equal. To test this hypothesis, both datasets were shifted to have the same mean, and then 10,000 bootstrap simulations were carried out on the shifted data. The difference in means was calculated from the bootstrap replicates and the results were plotted. As shown in the plot, the distribution of the difference in means is centred at around 0, and the average difference in means across the bootstrap replicates was -0.01 seconds.
The p-value was calculated as approximately 0, which indicates that, if the null hypothesis were true (i.e. the mean difference between the times taken to complete the two tests is 0), the probability of observing the experimental difference in means (-7.96 s) is almost 0. Given that the p-value is approximately 0, we can reject the null hypothesis and conclude that there is a statistically significant difference between the times taken to complete the congruent and incongruent tests, and that, on average, individuals take longer to complete the incongruent test.
As an alternative approach, a t-test was carried out on the paired data using a t-test function from the SciPy package. This function computed a test statistic of -8.02 and a p-value of approximately 0, so these results are in line with those obtained from the bootstrap simulation method.
## Conclusion
Overall, these results matched the expected results. In the congruent test the colours of the words match their spelling. In the incongruent test, however, the colours of the words do not match their spelling, which creates a conflict in the information your brain receives. As a result, the incongruent test is expected to take longer to complete than the congruent test (ref: https://faculty.washington.edu/chudler/words.html#seffect).
| github_jupyter |
```
import tensorflow as tf
from bayes_tec.bayes_opt.maximum_likelihood_tec import *
import numpy as np
float_type = tf.float64
def test_solve():
import numpy as np
from seaborn import jointplot
import pylab as plt
plt.style.use('ggplot')
freqs = np.linspace(120e6,160e6,20)
tec_conversion = -8.448e9/freqs
true_tec = np.random.uniform(-0.2,0.2,size=int(1e3))#np.array([0.004]*1000)
noise_rads = np.random.uniform(0.05,0.8,size=int(1e3))#np.array([0.3]*1000)# a lot of noise on almost flat TEC is hard
true_phase = true_tec[...,None] * tec_conversion
phase = true_phase + noise_rads[...,None]*np.random.normal(size=true_phase.shape)
tec_min, phase_sigma = solve_ml_tec(phase,freqs,batch_size=int(1e3),verbose=True)
plt.scatter(true_tec,tec_min)
plt.xlabel("True tec")
plt.ylabel("Pred tec")
plt.show()
jointplot(true_tec,tec_min,kind='hex')
plt.show()
jointplot(true_tec,tec_min,kind='kde',alpha=0.6,marker='+',color='k')
plt.show()
plt.scatter(noise_rads, phase_sigma)
plt.xlabel("Pred phase noise")
plt.ylabel("True phase noise")
plt.show()
jointplot(noise_rads, phase_sigma,kind='hex')
plt.show()
jointplot(noise_rads, phase_sigma,kind='kde',alpha=0.6,marker='+',color='k')
plt.show()
def diagnostics():
import numpy as np
import pylab as plt
plt.style.use('ggplot')
freqs = np.linspace(120e6,160e6,20)
tec_conversion = -8.448e9/freqs
true_tec = np.random.uniform(-0.3,0.3,size=1000)#np.array([0.004]*1000)
noise_rads = np.array([0.3]*1000)# a lot of noise on almost flat TEC is hard
true_phase = true_tec[...,None] * tec_conversion
phase = true_phase + noise_rads[...,None]*np.random.normal(size=true_phase.shape)
_tec = true_tec[0]
with tf.Session(graph=tf.Graph()) as sess:
t_pl = tf.placeholder(float_type)
phase_pl = tf.placeholder(float_type)
tec_conversion_pl = tf.placeholder(float_type)
X_init, Y_init = init_population(phase_pl,tec_conversion_pl,N=5)
Xcur, Ycur = X_init, Y_init
X_,Y_,aq_,fmean_,fvar_ = [],[],[],[],[]
for i in range(21):
res = bayes_opt_iter(phase_pl, tec_conversion_pl, Xcur, Ycur, max_tec=0.4, t = t_pl)
X_.append(res.X)
Y_.append(res.Y)
aq_.append(res.aq)
fmean_.append(res.fmean)
fvar_.append(res.fvar)
Xcur = res.X
Ycur = res.Y
X, Y, aq, fmean, fvar = sess.run([X_, Y_, aq_, fmean_, fvar_], feed_dict={t_pl:1.,
phase_pl:phase,
tec_conversion_pl:tec_conversion})
indices = (np.arange(Y[-1].shape[0],dtype=np.int64), np.argmin(Y[-1][:,:,0],axis=1), np.zeros(Y[-1].shape[0], dtype=np.int64))
tec_min = X[-1][indices]
plt.scatter(tec_min, true_tec)
plt.xlabel("pred. tec")
plt.ylabel("true tec")
plt.title("Scatter of solutions")
plt.show()
plt.hist(indices[1],bins=20)
plt.title("Where was fmin attained")
plt.xlabel("iteration including random init pop")
plt.show()
scatter = []
for j in range(Y[-1].shape[1]):
indices = (np.arange(Y[-1].shape[0],dtype=np.int64), np.argmin(Y[-1][:,:j+1,0],axis=1), np.zeros(Y[-1].shape[0], dtype=np.int64))
tec_j = X[-1][indices]
scatter.append(np.percentile(np.abs(tec_j - true_tec),95))
plt.plot(scatter)
plt.title("95% conf interval of |true_tec - pred_tec|")
plt.xlabel("iteration")
plt.ylabel("mean delta tec")
plt.show()
tec_array = np.linspace(-0.4, 0.4, 100)
for i, (x, y, a, f, v) in enumerate(zip(X, Y, aq, fmean, fvar)):
y = y - y.mean(1,keepdims=True)
y = y / (np.std(y,axis=1,keepdims=True) + 1e-6)
plt.plot(tec_array, f[0,:], label=r'$\mathbb{E}[f]$')
plt.fill_between(tec_array, f[0,:] - 2*np.sqrt(v[0,:]), f[0,:] + 2*np.sqrt(v[0,:]),alpha=0.5, label=r'$\pm 2\sigma_f$')
a = a - np.min(a,axis=1,keepdims=True)
a = 3*a/np.max(a,axis=1,keepdims=True)
plt.plot(tec_array,a[0,:],label='norm. acquisition func.')
plt.scatter(x[0, :-1, 0], y[0,:-1, 0],c='k',label='sampled points')
plt.scatter(x[0, -1, 0], y[0,-1, 0],c='red',label='New sample point')
plt.vlines(_tec,-2,2,label='global. min',linestyles='--')
plt.xlabel("tec")
plt.ylabel("normalized neg-log-likelihood")
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
plt.title("Iteration {}".format(i))
plt.show()
def test_speed(N=1e6):
import numpy as np
from timeit import default_timer
freqs = np.linspace(120e6,160e6,20)
tec_conversion = -8.448e9/freqs
true_tec = np.random.uniform(-0.2,0.2,size=int(N))#np.array([0.004]*1000)
noise_rads = np.random.uniform(0.05,0.8,size=int(N))#np.array([0.3]*1000)# a lot of noise on almost flat TEC is hard
true_phase = true_tec[...,None] * tec_conversion
phase = true_phase + noise_rads[...,None]*np.random.normal(size=true_phase.shape)
t0 = default_timer()
tec_min, phase_sigma = solve_ml_tec(phase,freqs,batch_size=int(N),verbose=True)
t1 = default_timer()
t = t1 - t0
print("Time {} [time] {} [samples/s] {} [ms/sample]".format(t,N/t, t/N*1000))
# test_speed(N=5e6)
from bayes_tec.datapack import DataPack
from timeit import default_timer
import h5py
with DataPack('../../scripts/data/killms_datapack.hdf5') as datapack:
phase,axes = datapack.phase
_, freqs = datapack.get_freqs(axes['freq'])
Npol, Nd, Na, Nf, Nt = phase.shape
phase = phase.transpose((0,1,2,4,3))
phase = phase.reshape((-1, Nf))
t0 = default_timer()
tec_ml, sigma_ml = solve_ml_tec(phase, freqs, batch_size=int(1e6),max_tec=0.3, n_iter=21, t=1.,num_proposal=75, verbose=True)
t1 = default_timer()
print(t1-t0)
tec_ml = tec_ml.reshape((Npol, Nd, Na, Nt))
sigma_ml = sigma_ml.reshape((Npol, Nd, Na, Nt))
with h5py.File('ml_results.hdf5', 'w') as f:
f['tec'] = tec_ml
f['sigma'] = sigma_ml
```
<a href="https://colab.research.google.com/github/strangelycutlemon/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling/blob/master/module4-sequence-your-narrative/LS_DS_124_Sequence_your_narrative_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
_Lambda School Data Science_
# Sequence Your Narrative - Assignment
Today we will create a sequence of visualizations inspired by [Hans Rosling's 200 Countries, 200 Years, 4 Minutes](https://www.youtube.com/watch?v=jbkSRLYSojo).
Using this [data from Gapminder](https://github.com/open-numbers/ddf--gapminder--systema_globalis/):
- [Income Per Person (GDP Per Capital, Inflation Adjusted) by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--income_per_person_gdppercapita_ppp_inflation_adjusted--by--geo--time.csv)
- [Life Expectancy (in Years) by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--life_expectancy_years--by--geo--time.csv)
- [Population Totals, by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv)
- [Entities](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv)
- [Concepts](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--concepts.csv)
Objectives
- sequence multiple visualizations
- combine qualitative anecdotes with quantitative aggregates
Links
- [Hans Rosling’s TED talks](https://www.ted.com/speakers/hans_rosling)
- [Spiralling global temperatures from 1850-2016](https://twitter.com/ed_hawkins/status/729753441459945474)
- "[The Pudding](https://pudding.cool/) explains ideas debated in culture with visual essays."
- [A Data Point Walks Into a Bar](https://lisacharlotterost.github.io/2016/12/27/datapoint-in-bar/): a thoughtful blog post about emotion and empathy in data storytelling
# ASSIGNMENT
1. Replicate the Lesson Code
2. Take it further by using the same gapminder dataset to create a sequence of visualizations that combined tell a story of your choosing.
Get creative! Use text annotations to call out specific countries. Maybe: change how the points are colored, change the opacity of the points, change their size, pick a specific time window. Maybe only work with a subset of countries, change fonts, change background colors, etc. Make it your own!
```
# TODO
!pip freeze
!pip install --upgrade seaborn
import seaborn as sns
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
income = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--income_per_person_gdppercapita_ppp_inflation_adjusted--by--geo--time.csv')
lifespan = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--life_expectancy_years--by--geo--time.csv')
population = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv')
entities = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv')
concepts = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--concepts.csv')
income.shape, lifespan.shape, population.shape, entities.shape, concepts.shape
lifespan.head()
population.head()
pd.options.display.max_columns = 500
entities.head()
concepts[concepts['name_short'].str.contains('trade', na=False)]
```
https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf
```
print(income.shape)
print(lifespan.shape)
```
## Explore data
```
# Inner merge is default, which is just what we want
# However, a historian might like to do a left merge and look at missing data
df = pd.merge(income, lifespan)
print(df.shape)
df.head()
# An outer merge with an indicator column would keep unmatched rows for inspection:
# df1 = pd.merge(income, lifespan, how='outer', indicator=True)
# df1.query('_merge != "both"')
# df1.isna().sum()
df.isna().sum()
df = pd.merge(df, population)
df.head()
entities['world_6region'].value_counts()
entities['world_4region'].value_counts()
entities.head()
entity_columns_to_keep = ['country', 'name', 'world_6region']
entities = entities[entity_columns_to_keep]
entities.head()
merged = pd.merge(df, entities, left_on="geo", right_on="country")
merged.head()
merged = merged.drop('geo', axis='columns')
merged.head()
merged = merged.rename(columns = {
'country': 'country_code',
'time': 'year',
'income_per_person_gdppercapita_ppp_inflation_adjusted': 'income',
'life_expectancy_years': 'lifespan',
'population_total': 'population',
'name': 'country',
'world_6region': '6region',
'world_4region': '4region'
})
merged.country.unique()
usa = merged[merged.country == 'United States']
china = merged[merged.country == 'China']
usa[usa.year.isin([1818, 1918, 2018])]
import seaborn as sns
now = merged[merged.year == 2018]
now.head()
# sns.relplot(x=now['income'], y=now['lifespan'])
sns.relplot(x='income', y='lifespan', hue='world_6region', size='population', sizes=(10, 400), data=now);
plt.xscale('log')
plt.ylim(0,85)
now.sort_values('income', ascending=False)
now_qatar = now[now.country=='Qatar']
now_qatar.head()
sns.relplot(x='income', y='lifespan', hue='6region', size='population', sizes=(10, 400), data=now);
plt.xscale('log')
plt.ylim(0,85)
plt.title("The World in 2018")
plt.text(x=now_qatar.income-5000, y=now_qatar.lifespan+1, s='Qatar')
plt.show()
```
## Plot visualization
```
```
## Analyze outliers
```
```
## Plot multiple years
```
years = [1818, 1918, 2018]
centuries = merged[merged.year.isin(years)]
fig=sns.relplot(x='income', y='lifespan', hue='6region', size='population', sizes=(10, 400), col='year', data=centuries);
# fig.set(facecolor='grey')
# plt.set_facecolor("blue")
plt.xscale('log')
# plt.ylim(0,85)
# plt.title("The World in three centuries")
plt.text(x=now_qatar.income-5000, y=now_qatar.lifespan+1, s='Qatar')
plt.show()
fig=sns.relplot(x='income',y='lifespan', hue='6region', size='population', sizes=(30,400), data=now);
fig.set(facecolor='b')
plt.xscale('log')
plt.ylim(20,85)
plt.title("The World in 2018")
plt.show()
```
## Point out a story
```
years = [1918, 1938, 1978, 1998, 2018]
decades = merged[merged.year.isin(years)]
sns.relplot(x='income', y='lifespan', hue='6region', size='population', col='year', data=decades)
plt.show()
for year in years:
fig=sns.relplot(x='income',
y='lifespan',
hue='6region',
size='population',
sizes=(10, 400),
data=merged[merged.year==year]);
plt.xscale('log')
plt.xlim((150,150000))
plt.ylim((0,90))
plt.title(year)
plt.text(x=now_qatar.income-5000, y=now_qatar.lifespan+1, s='Qatar')
plt.show()
```
# **Exploration**
```
concepts.head()
# search for keyword in concepts
concepts[concepts['name_short'].str.contains('health', na=False)]
from google.colab import files
uploaded = files.upload()
health = pd.read_csv('/content/indicator_government share of total health spending.csv')
health
remove_me = 'General government expenditure on health as percentage of total expenditure on health'
health.columns = [column.replace(remove_me, 'Country') for column in health]
health
health = health.melt(id_vars='Country').dropna()
lifespan.head()
health.head()
# FINALLY. Renaming columns is the most annoying part of these exercises,
# because I can never remember what I did during the previous week
# to make it work.
entity_columns_to_keep = ['country', 'name', 'world_6region']
entities = entities[entity_columns_to_keep]
entities.head()
entities.columns = ['geo', 'Country', '6region']
df = pd.merge(lifespan, entities)
df.head()
health.columns = ['Country', 'time', 'spending']
health.head()
print(health.dtypes, df.dtypes)
s = health.time
health.time = pd.to_numeric(s, errors='raise')
health.dtypes
df = pd.merge(df, health)
df = pd.merge(df, population)
df.head()
```
# **Ready to plot**
```
sns.relplot(x='spending', y='life_expectancy_years', hue='6region', size='population_total', data=df);
plt.ylim(0,100)
plt.title("Lifespan and Gov share of health spending")
plt.show()
```
```
df_now = df[df.time == 2010]
df_now.head()
sns.relplot(x='spending', y='life_expectancy_years', hue='6region', size='population_total', data=df_now);
plt.ylim(0,100)
plt.title("Lifespan and Gov share of health spending")
plt.show()
```
# Plotting multiple **years**
```
years = [1995, 2000, 2005, 2010]
for year in years:
fig=sns.relplot(x='spending',
y='life_expectancy_years',
hue='6region',
size='population_total',
sizes=(10, 100),
data=df[df.time==year]);
plt.xlim(0,100)
plt.ylim(0,90)
plt.title((year, "Lifespan and Gov share of health spending"))
plt.show()
```
# STRETCH OPTIONS
## 1. Animate!
- [How to Create Animated Graphs in Python](https://towardsdatascience.com/how-to-create-animated-graphs-in-python-bb619cc2dec1)
- Try using [Plotly](https://plot.ly/python/animations/)!
- [The Ultimate Day of Chicago Bikeshare](https://chrisluedtke.github.io/divvy-data.html) (Lambda School Data Science student)
- [Using Phoebe for animations in Google Colab](https://colab.research.google.com/github/phoebe-project/phoebe2-docs/blob/2.1/tutorials/animations.ipynb)
## 2. Study for the Sprint Challenge
- Concatenate DataFrames
- Merge DataFrames
- Reshape data with `pivot_table()` and `.melt()`
- Be able to reproduce a FiveThirtyEight graph using Matplotlib or Seaborn.
## 3. Work on anything related to your portfolio site / Data Storytelling Project
```
# TODO
```
# Data Labelling Analysis (DLA) Dataset C
```
#import libraries
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import os
print('Libraries imported!!')
#define directory of functions and actual directory
HOME_PATH = '' #home path of the project
FUNCTIONS_DIR = 'EVALUATION FUNCTIONS/RESEMBLANCE'
ACTUAL_DIR = os.getcwd()
#change directory to functions directory
os.chdir(HOME_PATH + FUNCTIONS_DIR)
#import functions for data labelling analysis
from data_labelling import mix_data
from data_labelling import split_data
from data_labelling import DataPreProcessor
from data_labelling import ClassificationModels
#change directory to actual directory
os.chdir(ACTUAL_DIR)
print('Functions imported!!')
```
## 1. Read real and synthetic datasets
In this part real and synthetic datasets are read.
```
#Define global variables
DATA_TYPES = ['Real','GM','SDV','CTGAN','WGANGP']
SYNTHESIZERS = ['GM','SDV','CTGAN','WGANGP']
FILEPATHS = {'Real' : HOME_PATH + 'REAL DATASETS/TRAIN DATASETS/C_Obesity_Data_Real_Train.csv',
'GM' : HOME_PATH + 'SYNTHETIC DATASETS/GM/C_Obesity_Data_Synthetic_GM.csv',
'SDV' : HOME_PATH + 'SYNTHETIC DATASETS/SDV/C_Obesity_Data_Synthetic_SDV.csv',
'CTGAN' : HOME_PATH + 'SYNTHETIC DATASETS/CTGAN/C_Obesity_Data_Synthetic_CTGAN.csv',
'WGANGP' : HOME_PATH + 'SYNTHETIC DATASETS/WGANGP/C_Obesity_Data_Synthetic_WGANGP.csv'}
categorical_columns = ['Gender','family_history_with_overweight','FAVC','CAEC','SMOKE','SCC','CALC','MTRANS','Obesity_level']
data = dict()
#iterate over all datasets filepaths and read each dataset
for name, path in FILEPATHS.items() :
data[name] = pd.read_csv(path)
for col in categorical_columns :
data[name][col] = data[name][col].astype('category')
data
```
## 2. Mix real data with synthetic data
```
mixed_data = dict()
for name in SYNTHESIZERS :
mixed_data[name] = mix_data(data['Real'], data[name])
mixed_data
```
- 0 for real data
- 1 for synthetic data
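The imported `mix_data` helper is not shown in this notebook. A minimal sketch of its assumed behaviour (label real rows 0 and synthetic rows 1, concatenate, then shuffle) might look like the following; the function name, label column, and seed are assumptions for illustration, not the actual implementation:

```python
import pandas as pd

def mix_data_sketch(real_df, synth_df, label_col='Label', seed=0):
    """Label real rows 0 and synthetic rows 1, then concatenate and shuffle."""
    real = real_df.copy()
    real[label_col] = 0
    synth = synth_df.copy()
    synth[label_col] = 1
    mixed = pd.concat([real, synth], ignore_index=True)
    return mixed.sample(frac=1, random_state=seed).reset_index(drop=True)
```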
## 3. Split train and test data
```
train_len = 0.8
train_data = dict()
test_data = dict()
for name in SYNTHESIZERS :
print(name)
train_data[name], test_data[name] = split_data(mixed_data[name], train_len)
print(train_data[name].shape, test_data[name].shape)
print('Train data', train_data[name].groupby('Label').size())
print('Test data', test_data[name].groupby('Label').size())
print('##############################################')
```
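Like `mix_data`, the `split_data` helper used above is imported rather than defined here. A sketch of the assumed behaviour (shuffle, then cut at the given train fraction) could be:

```python
import pandas as pd

def split_data_sketch(df, train_len=0.8, seed=0):
    """Shuffle rows, then split into train/test by the given fraction (assumed behaviour)."""
    shuffled = df.sample(frac=1, random_state=seed).reset_index(drop=True)
    cut = int(len(shuffled) * train_len)
    return shuffled.iloc[:cut], shuffled.iloc[cut:]
```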
## 4. Train Classifiers
```
categorical_columns = ['Gender','family_history_with_overweight','FAVC','CAEC','SMOKE','SCC','CALC','MTRANS','Obesity_level']
numerical_columns = ['Age','Height','Weight','FCVC','NCP','CH2O','FAF','TUE']
categories = [np.array([0, 1]), np.array([0, 1]), np.array([0, 1]), np.array([0, 1, 2, 3]), np.array([0, 1]),
np.array([0, 1]), np.array([0, 1, 2, 3]), np.array([0, 1, 2, 3, 4]), np.array([0, 1, 2, 3, 4, 5, 6])]
#initialize classifiers
classifiers_all = dict()
data_preprocessors = dict()
target = 'Label'
for name in SYNTHESIZERS :
print(name)
classifiers_all[name] = ClassificationModels()
data_preprocessors[name] = DataPreProcessor(categorical_columns, numerical_columns, categories)
x_train = data_preprocessors[name].preprocess_train_data(train_data[name].iloc[:, train_data[name].columns != target])
y_train = train_data[name].loc[:, target]
classifiers_all[name].train_classifiers(x_train, y_train)
print('####################################################')
```
## 5. Evaluate Classifiers
```
results_all = dict()
for name in SYNTHESIZERS :
print(name)
x_test = data_preprocessors[name].preprocess_test_data(test_data[name].loc[:, test_data[name].columns != target])
print(x_test.shape)
y_test = test_data[name].loc[:, target]
classifiers_all[name].evaluate_classifiers(x_test, y_test)
print('####################################################')
```
## 6. Analyse models results
```
fig, axs = plt.subplots(nrows=1, ncols=4, figsize=(8, 2.5))
axs_idxs = [0, 1, 2, 3]
idx = dict(zip(SYNTHESIZERS,axs_idxs))
for name in SYNTHESIZERS :
ax_plot = axs[idx[name]]
classifiers_all[name].plot_classification_metrics(ax_plot)
ax_plot.set_title(name, fontsize=10)
plt.tight_layout()
fig.savefig('DATA LABELLING RESULTS/CLASSIFICATION_METRICS.svg', bbox_inches='tight')
```
### Problem Statement
Given a linked list with integer data, arrange the elements in such a manner that all nodes with even numbers are placed after odd numbers. **Do not create any new nodes and avoid using any other data structure. The relative order of even and odd elements must not change.**
**Example:**
* `linked list = 1 2 3 4 5 6`
* `output = 1 3 5 2 4 6`
```
class Node:
def __init__(self, data):
self.data = data
self.next = None
```
### Exercise - Write the function definition here
```
def even_after_odd(head):
"""
:param - head - head of linked list
return - updated list with all even elements placed after the odd elements
"""
if head is None:
return head
odd_head = None
odd_tail = None
even_head = None
even_tail = None
current = head
while current:
if current.data % 2 == 1:
if odd_head is None:
odd_head = current
odd_tail = odd_head
else:
odd_tail.next = current
odd_tail = odd_tail.next
else:
if even_head is None:
even_head = current
even_tail = even_head
else:
even_tail.next = current
even_tail = even_tail.next
current = current.next
if odd_head is None:
return even_head
odd_tail.next = even_head
return odd_head
```
<span class="graffiti-highlight graffiti-id_xpuflcm-id_9q4n7o8"><i></i><button>Show Solution</button></span>
### Test - Let's test your function
```
# helper functions for testing purpose
def create_linked_list(arr):
if len(arr)==0:
return None
head = Node(arr[0])
tail = head
for data in arr[1:]:
tail.next = Node(data)
tail = tail.next
return head
def print_linked_list(head):
while head:
print(head.data, end=' ')
head = head.next
print()
def test_function(test_case):
head = test_case[0]
solution = test_case[1]
node_tracker = dict({})
node_tracker['nodes'] = list()
temp = head
while temp:
node_tracker['nodes'].append(temp)
temp = temp.next
head = even_after_odd(head)
temp = head
index = 0
try:
while temp:
if temp.data != solution[index] or temp not in node_tracker['nodes']:
print("Fail")
return
temp = temp.next
index += 1
print("Pass")
except Exception as e:
print("Fail")
arr = [1, 2, 3, 4, 5, 6]
solution = [1, 3, 5, 2, 4, 6]
head = create_linked_list(arr)
test_case = [head, solution]
test_function(test_case)
arr = [1, 3, 5, 7]
solution = [1, 3, 5, 7]
head = create_linked_list(arr)
test_case = [head, solution]
test_function(test_case)
arr = [2, 4, 6, 8]
solution = [2, 4, 6, 8]
head = create_linked_list(arr)
test_case = [head, solution]
test_function(test_case)
```
# Validation
This notebook contains examples of some of the simulations that have been used to validate Disimpy's functionality by comparing the simulated signals to analytical solutions and signals generated by other simulators. Here, we simulate free diffusion and restricted diffusion inside cylinders and spheres.
```
# Import the required packages and modules
import os
import pickle
import numpy as np
import matplotlib.pyplot as plt
from disimpy import gradients, simulations, substrates, utils
from disimpy.gradients import GAMMA
# Define the simulation parameters
n_walkers = int(1e6) # Number of random walkers
n_t = int(1e3) # Number of time points
diffusivity = 2e-9 # In SI units (m^2/s)
```
## Free diffusion
In the case of free diffusion, the analytical expression for the signal is $S = S_0 \exp(-bD)$, where $S_0$ is the signal without diffusion-weighting, $b$ is the b-value, and $D$ is the diffusivity.
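As a quick numeric sanity check of this expression (not part of the original notebook): at b = 10⁹ s/m² (i.e. 1 ms/μm²) with D = 2·10⁻⁹ m²/s, the attenuation is e⁻² ≈ 0.135:

```python
import numpy as np

b = 1e9      # s/m^2 (= 1 ms/μm^2)
D = 2e-9     # m^2/s
attenuation = np.exp(-b * D)
print(attenuation)  # ≈ 0.1353
```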
```
# Create a Stejskal-Tanner gradient array with ∆ = 40 ms and δ = 30 ms
T = 70e-3
gradient = np.zeros((1, 700, 3))
gradient[0, 1:300, 0] = 1
gradient[0, -300:-1, 0] = -1
bs = np.linspace(1, 3e9, 100)
gradient = np.concatenate([gradient for _ in bs], axis=0)
dt = T / (gradient.shape[1] - 1)
gradient, dt = gradients.interpolate_gradient(gradient, dt, n_t)
gradient = gradients.set_b(gradient, dt, bs)
# Show the waveform of the measurement with the highest b-value
fig, ax = plt.subplots(1, figsize=(7, 4))
for i in range(3):
ax.plot(np.linspace(0, T, n_t), gradient[-1, :, i])
ax.legend(['G$_x$', 'G$_y$', 'G$_z$'])
ax.set_xlabel('Time (s)')
ax.set_ylabel('Gradient magnitude (T/m)')
plt.show()
# Run the simulation
substrate = substrates.free()
signals = simulations.simulation(
n_walkers, diffusivity, gradient, dt, substrate)
# Plot the results
fig, ax = plt.subplots(1, figsize=(7, 4))
ax.plot(bs, np.exp(-bs * diffusivity), color='tab:orange')
ax.scatter(bs, signals / n_walkers, s=10, marker='o')
ax.legend(['Analytical signal', 'Simulated signal'])
ax.set_xlabel('b (ms/μm$^2$)')
ax.set_ylabel('S/S$_0$')
ax.set_yscale('log')
plt.show()
```
## Restricted diffusion and comparison to MISST
Here, diffusion inside cylinders and spheres is simulated and the signals are compared to those calculated with [MISST](http://mig.cs.ucl.ac.uk/index.php?n=Tutorial.MISST) that uses matrix operators to calculate the time evolution of the diffusion signal inside simple geometries. The cylinder is simulated using a triangular mesh and the sphere as an analytically defined surface.
```
# Load and show the cylinder mesh used in the simulations
mesh_path = os.path.join(
os.path.dirname(simulations.__file__), 'tests', 'cylinder_mesh_closed.pkl')
with open(mesh_path, 'rb') as f:
example_mesh = pickle.load(f)
faces = example_mesh['faces']
vertices = example_mesh['vertices']
cylinder_substrate = substrates.mesh(
vertices, faces, periodic=True, init_pos='intra')
utils.show_mesh(cylinder_substrate)
# Run the simulation
signals = simulations.simulation(
n_walkers, diffusivity, gradient, dt, cylinder_substrate)
# Load MISST signals
tests_dir = os.path.join(os.path.dirname(gradients.__file__), 'tests')
misst_signals = np.loadtxt(os.path.join(tests_dir,
'misst_cylinder_signal_smalldelta_30ms_bigdelta_40ms_radius_5um.txt'))
# Plot the results
fig, ax = plt.subplots(1, figsize=(7, 4))
ax.scatter(bs, signals / n_walkers, s=10, marker='o')
ax.scatter(bs, misst_signals, s=10, marker='.')
ax.set_xlabel('b (ms/μm$^2$)')
ax.set_ylabel('S/S$_0$')
ax.legend(['Disimpy', 'MISST'])
ax.set_title('Diffusion in a cylinder')
ax.set_yscale('log')
plt.show()
# Run the simulation
sphere_substrate = substrates.sphere(5e-6)
signals = simulations.simulation(
n_walkers, diffusivity, gradient, dt, sphere_substrate)
# Load MISST signals
tests_dir = os.path.join(os.path.dirname(gradients.__file__), 'tests')
misst_signals = np.loadtxt(os.path.join(tests_dir,
'misst_sphere_signal_smalldelta_30ms_bigdelta_40ms_radius_5um.txt'))
# Plot the results
fig, ax = plt.subplots(1, figsize=(7, 4))
ax.scatter(bs, signals / n_walkers, s=10, marker='o')
ax.scatter(bs, misst_signals, s=10, marker='.')
ax.set_xlabel('b (ms/μm$^2$)')
ax.set_ylabel('S/S$_0$')
ax.legend(['Disimpy', 'MISST'])
ax.set_title('Diffusion in a sphere')
ax.set_yscale('log')
plt.show()
```
## Signal diffraction pattern
In the case of restricted diffusion in a cylinder perpendicular to the direction of the diffusion encoding gradient with short pulses and long diffusion time, the signal minimum occurs at $0.61 · 2 · \pi/r$, where $r$ is the cylinder radius. Details are provided by [Avram et al](https://doi.org/10.1002/nbm.1277), for example.
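For the r = 10 μm cylinder simulated below, the expected position of the first signal minimum can be computed directly from this expression (a quick check added here, not part of the original notebook):

```python
import numpy as np

radius = 10e-6                     # cylinder radius in m
q_min = 0.61 * 2 * np.pi / radius  # first diffraction minimum, in 1/m
print(q_min * 1e-6)                # ≈ 0.383 μm^-1
```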
```
# Create a Stejskal-Tanner gradient array with ∆ = 0.5 s and δ = 0.1 ms
T = 501e-3
gradient = np.zeros((1, n_t, 3))
gradient[0, 1:2, 0] = 1
gradient[0, -2:-1, 0] = -1
dt = T / (gradient.shape[1] - 1)
bs = np.linspace(1, 1e11, 250)
gradient = np.concatenate([gradient for _ in bs], axis=0)
gradient = gradients.set_b(gradient, dt, bs)
q = gradients.calc_q(gradient, dt)
qs = np.max(np.linalg.norm(q, axis=2), axis=1)
# Show the waveform of the measurement with the highest b-value
fig, ax = plt.subplots(1, figsize=(7, 4))
for i in range(3):
ax.plot(np.linspace(0, T, n_t), gradient[-1, :, i])
ax.legend(['G$_x$', 'G$_y$', 'G$_z$'])
ax.set_xlabel('Time (s)')
ax.set_ylabel('Gradient magnitude (T/m)')
plt.show()
# Run the simulation
radius = 10e-6
substrate = substrates.cylinder(
radius=radius, orientation=np.array([0., 0., 1.]))
signals = simulations.simulation(
n_walkers, diffusivity, gradient, dt, substrate)
# Plot the results
fig, ax = plt.subplots(1, figsize=(7, 4))
ax.scatter(1e-6 * qs, signals / n_walkers, s=10, marker='o')
minimum = 1e-6 * .61 * 2 * np.pi / radius
ax.plot([minimum, minimum], [0, 1], ls='--', lw=2, color='tab:orange')
ax.legend(['Analytical minimum', 'Simulated signal'])
ax.set_xlabel('q (μm$^{-1}$)')
ax.set_ylabel('S/S$_0$')
ax.set_yscale('log')
ax.set_ylim([1e-4, 1])
ax.set_xlim([0, max(1e-6 * qs)])
plt.show()
```
```
!wget -q https://raw.githubusercontent.com/mannefedov/compling_nlp_hse_course/master/data/zhivago.txt
!ls -lh
import re
import string
from collections import Counter
import razdel
import nltk
import rusenttokenize
from pymystem3 import Mystem
from pymorphy2 import MorphAnalyzer
from nltk.stem.snowball import SnowballStemmer
from nltk.corpus import stopwords
from tqdm.auto import tqdm
with open("./zhivago.txt", 'r', encoding='utf-8') as f:
text = f.read()
```
## Task 1 - Cleaning
```
## remove XML-like tags
_text = re.sub(r'(\<(/?[^>]+)>)', ' ', text)
## lines left over from the downloader logs
_text = re.sub(r'\d{2}.\d{2}.\d{4}', '', _text)
_text = re.sub(r'[^\S]*\.(ru)\S*', '', _text)
_text = re.sub(r'\d{1}.\d{1}', '', _text)
## digits, Latin characters (downloader logs), and guillemets (no reason to keep them)
_text = re.sub(r'[0-9a-zA-Z«»]', '', _text)
## strange leftovers at the end
_text = re.sub(r"[/+]", '', _text)
## dashes
_text = re.sub(r'\s–\s', '', _text)
## extra whitespace
text = re.sub(r"\s+", ' ', _text)
```
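As a quick illustration, here is how substitutions of the same kind behave on a hypothetical toy string (not the novel itself):

```python
import re

toy = "<p>Пример 12.03.2004 текста</p>  с   пробелами"
toy = re.sub(r'(\<(/?[^>]+)>)', ' ', toy)    # XML-like tags -> space
toy = re.sub(r'\d{2}.\d{2}.\d{4}', '', toy)  # dates such as 12.03.2004
toy = re.sub(r'\s+', ' ', toy).strip()       # collapse extra whitespace
print(toy)  # Пример текста с пробелами
```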
## Task 2 - Tokenization/Segmentation
```
punctuation = "".join((set(string.punctuation) - set(".")))
text = text.translate(str.maketrans('', '', punctuation)).strip()
sentences = rusenttokenize.ru_sent_tokenize(text)
## Lowercase after tokenization, since the lack of case information could affect tokenization accuracy
tokenized_sentences = [
tuple(token.text.lower() for token in razdel.tokenize(sentence)) for sentence in tqdm(sentences)
]
```
### 2.1 - Repeated sentences
```
counter = Counter(tokenized_sentences)
repeating_sentences = list(map(
    lambda x: (" ".join(x[0]), x[1]), # detokenize for display
filter(
        lambda x: x[1] >= 2 and x[0][0] != '–', # occurs two or more times and is not direct speech (which starts with –)
counter.most_common()
)
))
```
There are repeated sentences; their total count (excluding direct speech) is:
```
print(len(repeating_sentences))
```
Examples of such sentences:
```
repeating_sentences[:10]
```
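The `Counter` pattern used above can be seen on a hypothetical toy list; tuples are hashable, which is why the sentences were tokenized into tuples:

```python
from collections import Counter

toy_sentences = [("привет", "мир"), ("пока",), ("привет", "мир")]
toy_counter = Counter(toy_sentences)
repeats = [s for s, n in toy_counter.most_common() if n >= 2]
print(repeats)  # [('привет', 'мир')]
```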
### 2.2 - The most frequent token
```
frequencies = Counter()
for sentence in tokenized_sentences:
frequencies.update(sentence)
most_frequent = list(filter(lambda x: len(x[0]) > 6, frequencies.most_common()))[0]
print(f"The most frequent token longer than 6 characters is '{most_frequent[0]}'; it occurs {most_frequent[1]} times")
```
## 3 - Stemming
```
stemmer = SnowballStemmer('russian')
all_words = [word for sentence in tokenized_sentences for word in sentence]
stemmed_words = list(map(stemmer.stem, all_words))
```
### 3.2 Words unchanged after stemming
If we interpret this kind of error as described in the assignment, there are very many such errors
```
## words the stemmer left unchanged
non_stemmed_idx = list(filter(lambda x: len(x[1]) > 4 and x[1] == stemmed_words[x[0]], enumerate(tqdm(all_words))))
len(non_stemmed_idx)
```
However, I think most of these are not real errors; the word is simply its own stem
```
for i, w in non_stemmed_idx[:30]:
print(f"{i:^5}|{all_words[i]:^20} == {stemmed_words[i]:^20}")
```
But there are real errors too; see words #260 and #269: two forms of the same lexeme ('будет') that did not change after stemming, although they should have
### 3.1 One stem for different words
```
stem2words = {}
for i, word in enumerate(all_words):
stemm = stemmed_words[i]
if stemm not in stem2words:
stem2words[stemm] = set()
stem2words[stemm].add(word)
```
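The same mapping can be built a bit more idiomatically with `collections.defaultdict`; this sketch uses a few hypothetical word/stem pairs:

```python
from collections import defaultdict

words = ["пузырями", "пузырившегося", "выси"]
stems = ["пузыр", "пузыр", "выси"]

# defaultdict(set) removes the need for the "if stem not in dict" check
stem2words_alt = defaultdict(set)
for word, stem in zip(words, stems):
    stem2words_alt[stem].add(word)

print(dict(stem2words_alt))
```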
We will look at words whose surface-form length differs greatly from the length of the stem
```
import numpy as np
error_pairs = {}
for key, forms in stem2words.items():
if len(key) <= 6 and np.mean([abs(len(key) - len(form)) for form in forms]) >= 5 and len(forms) > 2:
error_pairs[key] = forms
list(error_pairs.items())
```
Among these words one can find several examples of errors that satisfy the condition:
- 'пузыр': пузырившегося (participle?) and пузырями (noun)
- 'выси': выси (noun) and высившаяся (participle?)
## 4 - The nltk stopword list
```
stop_words = stopwords.words('russian')
```
Let's look at the most frequent words in our text that are not in the stopword list, and see which of them could be added
```
freq_words = {k: v for k, v in frequencies.most_common(200)}
diff_set = set(freq_words.keys()) - set(stop_words)
print(diff_set)
```
The first four words that could be added to the stop words are variants of the word 'это':
- это
- эта
- этим
- этих
Why:
1. The stopword list already contains several variants of 'это' (этот, этого, этом, эти, эту, этой), so, following the same logic, we can add the missing variants that occur in our text
```
list(filter(lambda x: x.startswith('э'), stop_words))
```
2. They occur very frequently in the text; for example, 'это' appears 1001 times
```
freq_words['это']
```
The fifth word is 'оно'.
Why it should be added to the stopword list: the list already contains the analogous pronouns for the masculine, feminine, and plural forms, so the missing neuter form looks like an oversight.
```
list(filter(lambda x: x.startswith('он'), stop_words))
```
## Task 5 - Lemmatization
```
mystem = Mystem()
pymorphy = MorphAnalyzer()
vocab = list(set(all_words))
mystem_lemms = list(map(lambda x: mystem.lemmatize(x)[0], tqdm(vocab)))
pymorphy_lemms = list(map(lambda x: pymorphy.normal_forms(x)[0], tqdm(vocab)))
mismatch = [
(frequencies[vocab[i]], vocab[i], mystem_lemms[i], pymorphy_lemms[i])
for i in range(len(vocab))
if (mystem_lemms[i] != pymorphy_lemms[i])
]
mismatch = sorted(mismatch, key=lambda x: x[0], reverse=True)
print("|# occurs| word | mystem3 | pymorphy2 |")
print("-------------------------------------------------------")
for count, word, lemma_mystem, lemma_pymorphy in mismatch[:50]:
    print(f"|{count:^8}|{word:^13}|{lemma_mystem:^14}|{lemma_pymorphy:^15}|")
```
The mismatch analysis for the most frequent words shows that each library has its own problems:
1. Pymorphy2 exhibits gender bias: feminine first names, patronymics, and surnames are normalized to their masculine counterparts.
Mystem does not make this mistake, most likely because its dictionary is smaller and it treats such words as errors.
2. Mystem lemmatizes some variant forms of prepositions incorrectly, e.g. 'ко', 'об', 'со', 'во'.
3. Pymorphy2 corrects spelling during lemmatization for words that should be written with 'ё' but were written with 'е', e.g. вперед, еще, все (probably).
4. Sometimes this correction is wrong, e.g. дальше. Corrections of other error types can also be wrong, e.g. ктото, изза, гдето, какойто, чтото. Mystem, in contrast, leaves misspelled words untouched.
Overall, mystem seems better suited for analyzing texts of this kind.
## Dependencies
```
import json, warnings, shutil
import numpy as np
import pandas as pd
import tensorflow as tf
from jigsaw_utility_scripts import *
from transformers import TFXLMRobertaModel, XLMRobertaConfig
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from transformers import TFAutoModel, AutoTokenizer
warnings.filterwarnings("ignore")
```
## TPU configuration
```
strategy, tpu = set_up_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
```
# Load data
```
# database_base_path = '/kaggle/input/jigsaw-dataset-split-pb-roberta-large-192/'
# k_fold = pd.read_csv(database_base_path + '5-fold.csv')
# valid_df = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/validation.csv", usecols=['comment_text', 'toxic', 'lang'])
# print('Train set samples: %d' % len(k_fold))
# print('Validation set samples: %d' % len(valid_df))
# display(k_fold.head())
# # Unzip files
# !tar -xvf /kaggle/input/jigsaw-dataset-split-pb-roberta-large-192/fold_1.tar.gz
# # !tar -xvf /kaggle/input/jigsaw-dataset-split-pb-roberta-large-192/fold_2.tar.gz
# # !tar -xvf /kaggle/input/jigsaw-dataset-split-pb-roberta-large-192/fold_3.tar.gz
# # !tar -xvf /kaggle/input/jigsaw-dataset-split-pb-roberta-large-192/fold_4.tar.gz
# # !tar -xvf /kaggle/input/jigsaw-dataset-split-pb-roberta-large-192/fold_5.tar.gz
```
# Model parameters
```
base_path = '/kaggle/input/jigsaw-transformers/XLM-RoBERTa/'
config = {
"MAX_LEN": 192,
"BATCH_SIZE": 16 * strategy.num_replicas_in_sync,
"EPOCHS": 2,
"LEARNING_RATE": 1e-5,
"ES_PATIENCE": 1,
"N_FOLDS": 1,
"base_model_path": base_path + 'tf-xlm-roberta-large-tf_model.h5',
"config_path": base_path + 'xlm-roberta-large-config.json'
}
with open('config.json', 'w') as json_file:
    json.dump(config, json_file)
```
# Model
```
# module_config = XLMRobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
# def model_fn(MAX_LEN):
# input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
# base_model = TFXLMRobertaModel.from_pretrained(config['base_model_path'], config=module_config)
# sequence_output = base_model({'input_ids': input_ids})
# last_state = sequence_output[0]
# cls_token = last_state[:, 0, :]
# output = layers.Dense(1, activation='sigmoid', name='output')(cls_token)
# model = Model(inputs=input_ids, outputs=output)
# model.compile(optimizers.Adam(lr=config['LEARNING_RATE']),
# loss=losses.BinaryCrossentropy(),
# metrics=[metrics.BinaryAccuracy(), metrics.AUC()])
# return model
```
# Train
```
# history_list = []
# for n_fold in range(config['N_FOLDS']):
# tf.tpu.experimental.initialize_tpu_system(tpu)
# print('\nFOLD: %d' % (n_fold+1))
# # Load data
# base_data_path = 'fold_%d/' % (n_fold+1)
# x_train = np.load(base_data_path + 'x_train.npy')
# y_train = np.load(base_data_path + 'y_train.npy')
# x_valid = np.load(base_data_path + 'x_valid.npy')
# x_valid_ml = np.load(database_base_path + 'x_valid.npy')
# y_valid_ml = np.load(database_base_path + 'y_valid.npy')
# step_size = x_train.shape[0] // config['BATCH_SIZE']
# ### Delete data dir
# shutil.rmtree(base_data_path)
# # Train model
# model_path = 'model_fold_%d.h5' % (n_fold+1)
# es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
# restore_best_weights=True, verbose=1)
# checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min',
# save_best_only=True, save_weights_only=True, verbose=1)
# with strategy.scope():
# model = model_fn(config['MAX_LEN'])
# history = model.fit(get_training_dataset(x_train, y_train, config['BATCH_SIZE'], AUTO),
# validation_data=(get_validation_dataset(x_valid_ml, y_valid_ml, config['BATCH_SIZE'], AUTO)),
# callbacks=[checkpoint, es],
# epochs=config['EPOCHS'],
# steps_per_epoch=step_size,
# verbose=1).history
# history_list.append(history)
# # Make predictions
# train_preds = model.predict(get_test_dataset(x_train, config['BATCH_SIZE'], AUTO))
# valid_preds = model.predict(get_test_dataset(x_valid, config['BATCH_SIZE'], AUTO))
# valid_ml_preds = model.predict(get_test_dataset(x_valid_ml, config['BATCH_SIZE'], AUTO))
# k_fold.loc[k_fold['fold_%d' % (n_fold+1)] == 'train', 'pred_%d' % (n_fold+1)] = np.round(train_preds)
# k_fold.loc[k_fold['fold_%d' % (n_fold+1)] == 'validation', 'pred_%d' % (n_fold+1)] = np.round(valid_preds)
# valid_df['pred_%d' % (n_fold+1)] = np.round(valid_ml_preds)
```
## Model loss graph
```
# sns.set(style="whitegrid")
# for n_fold in range(config['N_FOLDS']):
# print('Fold: %d' % (n_fold+1))
# plot_metrics(history_list[n_fold])
```
# Model evaluation
```
# display(evaluate_model(k_fold, config['N_FOLDS']).style.applymap(color_map))
```
# Confusion matrix
```
# for n_fold in range(config['N_FOLDS']):
# print('Fold: %d' % (n_fold+1))
# train_set = k_fold[k_fold['fold_%d' % (n_fold+1)] == 'train']
# validation_set = k_fold[k_fold['fold_%d' % (n_fold+1)] == 'validation']
# plot_confusion_matrix(train_set['toxic'], train_set['pred_%d' % (n_fold+1)],
# validation_set['toxic'], validation_set['pred_%d' % (n_fold+1)])
```
# Model evaluation by language
```
# display(evaluate_model_lang(valid_df, config['N_FOLDS']).style.applymap(color_map))
```
# Visualize predictions
```
# pd.set_option('max_colwidth', 120)
# display(k_fold[['comment_text', 'toxic'] + [c for c in k_fold.columns if c.startswith('pred')]].head(15))
def regular_encode(texts, tokenizer, maxlen=512):
enc_di = tokenizer.batch_encode_plus(
texts,
return_attention_masks=False,
return_token_type_ids=False,
pad_to_max_length=True,
max_length=maxlen
)
return np.array(enc_di['input_ids'])
def build_model(transformer, max_len=512):
input_word_ids = layers.Input(shape=(max_len,), dtype=tf.int32, name="input_word_ids")
sequence_output = transformer(input_word_ids)[0]
cls_token = sequence_output[:, 0, :]
out = layers.Dense(1, activation='sigmoid')(cls_token)
model = Model(inputs=input_word_ids, outputs=out)
model.compile(optimizers.Adam(lr=1e-5), loss='binary_crossentropy', metrics=['accuracy'])
return model
AUTO = tf.data.experimental.AUTOTUNE
# Configuration
EPOCHS = 2
BATCH_SIZE = 16 * strategy.num_replicas_in_sync
MAX_LEN = 192
MODEL = 'jplu/tf-xlm-roberta-large'
tokenizer = AutoTokenizer.from_pretrained(MODEL)
train1 = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/jigsaw-toxic-comment-train.csv")
train2 = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/jigsaw-unintended-bias-train.csv")
train2.toxic = train2.toxic.round().astype(int)
valid = pd.read_csv('/kaggle/input/jigsaw-multilingual-toxic-comment-classification/validation.csv')
test = pd.read_csv('/kaggle/input/jigsaw-multilingual-toxic-comment-classification/test.csv')
sub = pd.read_csv('/kaggle/input/jigsaw-multilingual-toxic-comment-classification/sample_submission.csv')
# Combine train1 with a subset of train2
train = pd.concat([
train1[['comment_text', 'toxic']],
train2[['comment_text', 'toxic']].query('toxic==1'),
train2[['comment_text', 'toxic']].query('toxic==0').sample(n=100000, random_state=0)
])
x_train = regular_encode(train.comment_text.values, tokenizer, maxlen=MAX_LEN)
x_valid = regular_encode(valid.comment_text.values, tokenizer, maxlen=MAX_LEN)
x_test = regular_encode(test.content.values, tokenizer, maxlen=MAX_LEN)
y_train = train.toxic.values
y_valid = valid.toxic.values
train_dataset = (
tf.data.Dataset
.from_tensor_slices((x_train, y_train))
.repeat()
.shuffle(2048)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
valid_dataset = (
tf.data.Dataset
.from_tensor_slices((x_valid, y_valid))
.batch(BATCH_SIZE)
.cache()
.prefetch(AUTO)
)
test_dataset = (
tf.data.Dataset
.from_tensor_slices(x_test)
.batch(BATCH_SIZE)
)
module_config = XLMRobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
with strategy.scope():
# transformer_layer = TFAutoModel.from_pretrained(config['base_model_path'])
transformer_layer = TFXLMRobertaModel.from_pretrained(config['base_model_path'], config=module_config)
model = build_model(transformer_layer, max_len=MAX_LEN)
model.summary()
n_steps = x_train.shape[0] // BATCH_SIZE
train_history = model.fit(
train_dataset,
steps_per_epoch=n_steps,
validation_data=valid_dataset,
epochs=EPOCHS
)
n_steps = x_valid.shape[0] // BATCH_SIZE
train_history_2 = model.fit(
valid_dataset.repeat(),
steps_per_epoch=n_steps,
epochs=EPOCHS
)
sub['toxic'] = model.predict(test_dataset, verbose=1)
sub.to_csv('submission.csv', index=False)
```
# MovieLens
## Due: April 21, 2016
[MovieLens](http://www.movielens.org/) is a website where users can submit ratings for movies that they watch and receive recommendations for other movies they might enjoy. The data is collected and made publicly available for research. We will be working with a data set of 1 million user ratings of movies. You can find this data set and even larger ones at http://grouplens.org/datasets/movielens/.
## Reading in the Data
Note that the data consists of three data frames: one with information about the users, another containing the ratings, and yet another with information about the movies. See the readme file (/data/movielens/README) for more information.
```
import pandas as pd
unames = ['user_id', 'gender', 'age', 'occupation', 'zip']
users = pd.read_table('/data/movielens/users.dat', sep='::', header=None,
names=unames, engine="python")
rnames = ['user_id', 'movie_id', 'rating', 'timestamp']
ratings = pd.read_table('/data/movielens/ratings.dat', sep='::', header=None,
names=rnames, engine="python")
mnames = ['movie_id', 'title', 'genres']
movies = pd.read_table('/data/movielens/movies.dat', sep='::', header=None,
names=mnames, engine="python")
ratings
```
## Question 1
Develop a way to rank movies. Why is it not a good idea to simply sort the movies by average rating? (You may want to try calculating this first.) Then, explain your methodology and use it to produce a list of the Top 10 movies of all time.
```
#ratings
# YOUR CODE HERE
#raise NotImplementedError()
# ratings_movie_id_avg = ratings.groupby(["movie_id" ]).mean()#.get(["rating"])
# #This would calculate the mean of each column, including userid and timestamp for ea movieid
# ratings_movie_id_avg
#Another way to rank movies:
# A 'top' movie is a movie with not only THOUSANDS of ratings, but on average, people gave it a 5 star rating.
num_ratings_per_movie = ratings.groupby(["movie_id"])[['rating']].count()
num_ratings_per_movie = num_ratings_per_movie.loc[num_ratings_per_movie['rating'] >= 2000]  # keep movies with >= 2000 ratings
num_ratings_per_movie = num_ratings_per_movie.reset_index() # call this after groupby
num_ratings_per_movie = num_ratings_per_movie.merge( ratings, on= 'movie_id')
top_movies = pd.merge(num_ratings_per_movie,users, on = "user_id")
top_movies = top_movies.loc[top_movies['rating_y'] == 5].groupby('movie_id')[['rating_y']].mean()
top_movies.columns = ['average_rating']
top_movies.head(n=10)
# avg_ratings = ratings.ix[ratings['rating'] >= 4, :].groupby('movie_id')[ ['rating'] ].mean()
# # don't use get('rating') as that will use all cols when calling .mean() on it
# avg_ratings
```
It doesn't make sense to sort the movies by average rating alone because some movies were rated by only a few people. An average based on so few reviewers is unreliable: a handful of raters is not representative of the entire audience.
The table output shows the top 10 movies of all time.
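One common remedy, not required by the assignment but worth sketching, is a weighted rating in the style of the IMDB Top 250 formula, which pulls low-count averages toward the global mean (the numbers here are hypothetical):

```python
def weighted_rating(avg, count, global_mean, m=2000):
    """Bayesian-style weighted rating: movies with few votes are pulled
    toward the global mean; m acts as a vote-count prior."""
    return (count / (count + m)) * avg + (m / (count + m)) * global_mean

# A movie rated 5.0 by only 3 users ranks below a 4.4 movie with 3000 votes
few = weighted_rating(5.0, 3, global_mean=3.5)
many = weighted_rating(4.4, 3000, global_mean=3.5)
print(few, many)
```

With this scheme a hard ratings-count cutoff becomes unnecessary, because sparsely rated movies are discounted automatically.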
## Question 2
Restrict to movies with at least 200 ratings. For each movie, calculate the difference between the average male rating and the average female rating. Based on these differences between average male ratings and average female ratings, what movies were the most male-friendly? What movies were the most female-friendly?
```
# YOUR CODE HERE
#raise NotImplementedError()
num_ratings_per_movie = ratings.groupby(["movie_id"])[['rating']].count()
num_ratings_per_movie = num_ratings_per_movie.loc[num_ratings_per_movie['rating'] >= 200]
num_ratings_per_movie = num_ratings_per_movie.reset_index()
num_ratings_per_movie = num_ratings_per_movie.merge( ratings, on= 'movie_id')
#num_ratings_per_movie
#next step: merge num_ratings_per_movie_200 with original ratings table and then filter out.
data_merged_rate = pd.merge(num_ratings_per_movie,users, on = "user_id")
# # inner join:
data_merged_rate['Avg Female Rating'] = data_merged_rate.loc[data_merged_rate['gender'] == 'F'].groupby(['movie_id']).mean()['rating_y']
data_merged_rate['Avg Female Rating'] = data_merged_rate['Avg Female Rating'].fillna(0)
data_merged_rate['Avg Male Rating'] = data_merged_rate.loc[data_merged_rate['gender'] == 'M'].groupby(['movie_id']).mean()['rating_y']
data_merged_rate['Avg Male Rating'] = data_merged_rate['Avg Male Rating'].fillna(0)
# #['rating'] is necessary because I only want to add a column
data_merged_rate['Diff Between Females\' and Males\' Avg Ratings' ] = data_merged_rate['Avg Female Rating']\
- data_merged_rate['Avg Male Rating']
data_merged_rate = data_merged_rate.sort_values(by = 'Diff Between Females\' and Males\' Avg Ratings' ,ascending = True)
#data_merged_rate
print(data_merged_rate.head(n = 3))
#for most male friendly
print("=========================================================================")
print(data_merged_rate.tail(n = 3))
# for most female friendly
```
Movies (movie_id) that were most male-friendly: 3744, 1784, 3624
Movies (movie_id) that were most female-friendly: 1721, 2699, 1792
*Note: if the difference between the female and male average ratings is positive, females on average rated the movie higher than males did (female-friendly); if it is negative, males rated it higher on average.
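The sign convention can be illustrated with hypothetical averages:

```python
# Hypothetical per-movie average ratings by gender
avg = {"Movie A": {"F": 4.2, "M": 3.1}, "Movie B": {"F": 2.5, "M": 4.0}}
diff = {title: r["F"] - r["M"] for title, r in avg.items()}
# positive difference -> female-friendly, negative -> male-friendly
most_female_friendly = max(diff, key=diff.get)
most_male_friendly = min(diff, key=diff.get)
print(most_female_friendly, most_male_friendly)  # Movie A Movie B
```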
## Question 3
Calculate the average rating by genre. Note that a movie can belong to multiple genres. You will have to write some code that parses the `genres` column of the `movies` table. What genre had the highest average rating? What genre had the lowest?
```
# YOUR CODE HERE
#raise NotImplementedError()
# merge the movies to the ratings
data = movies.merge(ratings, on="movie_id")
# get all occurrences of genres in a list
genres = movies["genres"].str.cat(sep="|").split("|")
# get unique genres, convert back to list
genres = list(set(genres))
# add a boolean column to DataFrame for each genre
for genre in genres:
data[genre] = data["genres"].str.contains(genre)
# multiply each genre column by the ratings column,
# then .sum() to get the total of ratings for that genre
total_rating = data[genres].multiply(data['rating'], axis=0).sum()
# adding the booleans should give the number of ratings for each genre
num_rating = data[genres].sum()
# divide to get the average rating
#(total_rating / num_rating).sort_values()
print((total_rating / num_rating).sort_values().tail())
print((total_rating / num_rating).sort_values().head())
# split_genres = movies[ 'genres'].str.cat(sep = '|') # returns a string
# #split_genres
# split_genres = split_genres.split("|") # must concat before doing this step
# split_genres
# distinct_genres_list = []
# #create list of distinct genres
# # get unique genres, convert back to list
# #enres = list(set(genres))
# for genre in split_genres:
# if genre not in distinct_genres_list:
# distinct_genres_list.append(genre)
# data_merged_genre = pd.merge(ratings, movies, on = 'movie_id')
# data_merged_genre = data_merged_genre.sort_index()
# for genre in distinct_genres_list:
# data_merged_genre[genre] = data_merged_genre['genres'].str.contains(genre)
# data_merged_genre
# # multiply each genre column by the ratings column,
# # then .sum() to get the total of ratings for that genre
# total_rating = data_merged_genre[split_genres].multiply(data_merged_genre['rating'], axis=0).sum()
# # adding the booleans should give the number of ratings for each genre
# num_rating = data_merged_genre[genres].sum()
# # divide to get the average rating
# (total_rating / num_rating).sort_values()
# #.str.contains('Drama')
# #data_merged_genre['genres'].str.contains('Drama')
# #use .split?
# # data_merged_genre = data_merged_genre.groupby('genres' )[ ['rating'] ].mean()
# # print(data_merged_genre.sort_values(by = 'rating').tail())
# # print(data_merged_genre.sort_values(by = 'rating').head(n = 50))
```
Ignoring mixed genres, the Animation genre had the highest average rating and the Children's genre had the lowest.
CORRECTION: the Horror genre had the lowest average rating, and Film-Noir the highest.
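The genre-parsing step can also be sketched without pandas: split the pipe-separated `genres` string and accumulate per-genre totals (toy data, not the real ratings):

```python
from collections import defaultdict

# (genres, rating) pairs as they would look after merging movies and ratings
rows = [("Animation|Children's", 5), ("Animation", 4), ("Horror", 2)]

totals = defaultdict(float)
counts = defaultdict(int)
for genres, rating in rows:
    for genre in genres.split("|"):
        totals[genre] += rating
        counts[genre] += 1

avg_by_genre = {g: totals[g] / counts[g] for g in totals}
print(avg_by_genre)  # Animation: 4.5, Children's: 5.0, Horror: 2.0
```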
## Question 4
Formulate a question of your own that you can answer using this MovieLens data. State clearly what your question is and what your findings are. Bonus points are available if you find something interesting!
**Tip:** You may find the `occupation` column of `users` to be a rich source of interesting questions. See the README file (/data/movielens/README) for information about how `occupation` is coded in the data.
```
# YOUR CODE HERE
#raise NotImplementedError()
num_ratings_by_occupation = users.merge(ratings, on = 'user_id').groupby('occupation')[['rating']].count()
num_ratings_by_occupation.columns = ['number of ratings']
num_ratings_by_occupation
```
Q: How many ratings did each occupation group give? Which occupation's users gave the most ratings?
Users with occupation 4 rated the most movies as a group. One could speculate that people with occupation 4 were more willing to rate the movies they watched. It is also possible that the movies they chose tended to provoke strong feelings.
## Submitting this Lab
Now, restart your kernel and re-run your entire notebook from beginning to end. Make sure there are no errors or bugs. When you have verified this, open the Terminal on JupyterHub and type
`nbgrader submit Lab-04-14 --course dlsun`
to submit this lab.