text stringlengths 2.5k 6.39M | kind stringclasses 3
values |
|---|---|
### Our Mission
In this lesson you gained some insight into a number of techniques used to understand how well our model is performing. This notebook is aimed at giving you some practice with the metrics specifically related to classification problems. With that in mind, we will again be looking at the spam dataset from the earlier lessons.
First, run the cell below to prepare the data and instantiate a number of different models.
```
# Import our libraries
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
import tests as t
# Read in our dataset
df = pd.read_table('smsspamcollection/SMSSpamCollection',
sep='\t',
header=None,
names=['label', 'sms_message'])
# Fix our response value
df['label'] = df.label.map({'ham':0, 'spam':1})
# Split our dataset into training and testing data
X_train, X_test, y_train, y_test = train_test_split(df['sms_message'],
df['label'],
random_state=1)
# Instantiate the CountVectorizer method
count_vector = CountVectorizer()
# Fit the training data and then return the matrix
training_data = count_vector.fit_transform(X_train)
# Transform testing data and return the matrix. Note we are not fitting the testing data into the CountVectorizer()
testing_data = count_vector.transform(X_test)
# Instantiate a number of our models
naive_bayes = MultinomialNB()
bag_mod = BaggingClassifier(n_estimators=200)
rf_mod = RandomForestClassifier(n_estimators=200)
ada_mod = AdaBoostClassifier(n_estimators=300, learning_rate=0.2)
svm_mod = SVC()
```
> **Step 1**: Now, fit each of the above models to the appropriate data. Answer the following question to assure that you fit the models correctly.
```
# Fit each of the 4 models
# This might take some time to run
naive_bayes.fit(training_data, y_train)
bag_mod.fit(training_data, y_train)
rf_mod.fit(training_data, y_train)
ada_mod.fit(training_data, y_train)
svm_mod.fit(training_data, y_train)
# The models you fit above were fit on which data?
a = 'X_train'
b = 'X_test'
c = 'y_train'
d = 'y_test'
e = 'training_data'
f = 'testing_data'
# Change models_fit_on to only contain the correct string names
# of values that you oassed to the above models
models_fit_on = {e, c} # update this to only contain correct letters
# Checks your solution - don't change this
t.test_one(models_fit_on)
```
> **Step 2**: Now make predictions for each of your models on the data that will allow you to understand how well our model will extend to new data. Then correctly add the strings to the set in the following cell.
```
# Make predictions using each of your models
preds_nb = naive_bayes.predict(testing_data)
preds_bag = bag_mod.predict(testing_data)
preds_rf = rf_mod.predict(testing_data)
preds_ada = ada_mod.predict(testing_data)
preds_svm = svm_mod.predict(testing_data)
# Which data was used in the predict method to see how well your
# model would work on new data?
a = 'X_train'
b = 'X_test'
c = 'y_train'
d = 'y_test'
e = 'training_data'
f = 'testing_data'
# Change models_predict_on to only contain the correct string names
# of values that you oassed to the above models
models_predict_on = {f} # update this to only contain correct letters
# Checks your solution - don't change this
t.test_two(models_predict_on)
```
Now that you have set up all your predictions, let's get to topics addressed in this lesson - measuring how well each of your models performed. First, we will focus on how each metric was calculated for a single model, and then in the final part of this notebook, you will choose models that are best based on a particular metric.
You will be writing functions to calculate a number of metrics and then comparing the values to what you get from sklearn. This will help you build intuition for how each metric is calculated.
> **Step 3**: As an example of how this will work for the upcoming questions, run the cell below. Fill in the below function to calculate accuracy, and then compare your answer to the built in to assure you are correct.
```
# accuracy is the total correct divided by the total to predict
def accuracy(actual, preds):
'''
INPUT
preds - predictions as a numpy array or pandas series
actual - actual values as a numpy array or pandas series
OUTPUT:
returns the accuracy as a float
'''
return np.sum(preds == actual)/len(actual)
print(accuracy(y_test, preds_nb))
print(accuracy_score(y_test, preds_nb))
print("Since these match, we correctly calculated our metric!")
```
> **Step 4**: Fill in the below function to calculate precision, and then compare your answer to the built in to assure you are correct.
```
# precision is the true positives over the predicted positive values
def precision(actual, preds):
'''
INPUT
(assumes positive = 1 and negative = 0)
preds - predictions as a numpy array or pandas series
actual - actual values as a numpy array or pandas series
OUTPUT:
returns the precision as a float
'''
tp = len(np.intersect1d(np.where(preds==1), np.where(actual==1)))
pred_pos = (preds==1).sum()
return tp/(pred_pos) # calculate precision here
print(precision(y_test, preds_nb))
print(precision_score(y_test, preds_nb))
print("If the above match, you got it!")
```
> **Step 5**: Fill in the below function to calculate recall, and then compare your answer to the built in to assure you are correct.
```
# recall is true positives over all actual positive values
def recall(actual, preds):
'''
INPUT
preds - predictions as a numpy array or pandas series
actual - actual values as a numpy array or pandas series
OUTPUT:
returns the recall as a float
'''
tp = len(np.intersect1d(np.where(preds==1), np.where(actual==1)))
act_pos = (actual==1).sum()
return tp/act_pos # calculate recall here
print(recall(y_test, preds_nb))
print(recall_score(y_test, preds_nb))
print("If the above match, you got it!")
```
> **Step 6**: Fill in the below function to calculate f1-score, and then compare your answer to the built in to assure you are correct.
```
# f1_score is 2*(precision*recall)/(precision+recall))
def f1(actual, preds):
'''
INPUT
preds - predictions as a numpy array or pandas series
actual - actual values as a numpy array or pandas series
OUTPUT:
returns the f1score as a float
'''
tp = len(np.intersect1d(np.where(preds==1), np.where(actual==1)))
pred_pos = (preds==1).sum()
prec = tp/(pred_pos)
act_pos = (actual==1).sum()
recall = tp/act_pos
return 2*(prec*recall)/(prec+recall) # calculate f1-score here
print(f1(y_test, preds_nb))
print(f1_score(y_test, preds_nb))
print("If the above match, you got it!")
```
> **Step 7:** Now that you have calculated a number of different metrics, let's tie that to when we might use one versus another. Use the dictionary below to match a metric to each statement that identifies when you would want to use that metric.
```
# add the letter of the most appropriate metric to each statement
# in the dictionary
a = "recall"
b = "precision"
c = "accuracy"
d = 'f1-score'
seven_sol = {
'We have imbalanced classes, which metric do we definitely not want to use?': c, # letter here
'We really want to make sure the positive cases are all caught even if that means we identify some negatives as positives': a, # letter here
'When we identify something as positive, we want to be sure it is truly positive': b, # letter here
'We care equally about identifying positive and negative cases': d # letter here
}
t.sol_seven(seven_sol)
```
> **Step 8:** Given what you know about the metrics now, use this information to correctly match the appropriate model to when it would be best to use each in the dictionary below.
```
# use the answers you found to the previous questiona, then match the model that did best for each metric
a = "naive-bayes"
b = "bagging"
c = "random-forest"
d = 'ada-boost'
e = "svm"
eight_sol = {
'We have imbalanced classes, which metric do we definitely not want to use?': a, # letter here
'We really want to make sure the positive cases are all caught even if that means we identify some negatives as positives': a, # letter here
'When we identify something as positive, we want to be sure it is truly positive': c, # letter here
'We care equally about identifying positive and negative cases': a # letter here
}
t.sol_eight(eight_sol)
# cells for work
# If you get stuck, also notice there is a solution available by hitting the orange button in the top left
def print_metrics(y_true, preds, model_name=None):
'''
INPUT:
y_true - the y values that are actually true in the dataset (numpy array or pandas series)
preds - the predictions for those values from some model (numpy array or pandas series)
model_name - (str - optional) a name associated with the model if you would like to add it to the print statements
OUTPUT:
None - prints the accuracy, precision, recall, and F1 score
'''
if model_name == None:
print('Accuracy score: ', format(accuracy_score(y_true, preds)))
print('Precision score: ', format(precision_score(y_true, preds)))
print('Recall score: ', format(recall_score(y_true, preds)))
print('F1 score: ', format(f1_score(y_true, preds)))
print('\n\n')
else:
print('Accuracy score for ' + model_name + ' :' , format(accuracy_score(y_true, preds)))
print('Precision score ' + model_name + ' :', format(precision_score(y_true, preds)))
print('Recall score ' + model_name + ' :', format(recall_score(y_true, preds)))
print('F1 score ' + model_name + ' :', format(f1_score(y_true, preds)))
print('\n\n')
# Print Bagging scores
print_metrics(y_test, preds_bag, 'bagging')
# Print Random Forest scores
print_metrics(y_test, preds_rf, 'random forest')
# Print AdaBoost scores
print_metrics(y_test, preds_ada, 'adaboost')
# Naive Bayes Classifier scores
print_metrics(y_test, preds_nb, 'naive bayes')
# SVM Classifier scores
print_metrics(y_test, preds_svm, 'svm')
```
As a final step in this workbook, let's take a look at the last three metrics you saw, f-beta scores, ROC curves, and AUC.
**For f-beta scores:** If you decide that you care more about precision, you should move beta closer to 0. If you decide you care more about recall, you should move beta towards infinity.
> **Step 9:** Using the fbeta_score works similar to most of the other metrics in sklearn, but you also need to set beta as your weighting between precision and recall. Use the space below to show that you can use [fbeta in sklearn](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.fbeta_score.html) to replicate your f1-score from above. If in the future you want to use a different weighting, [this article](http://mlwiki.org/index.php/Precision_and_Recall) does an amazing job of explaining how you might adjust beta for different situations.
```
# import fbeta_score
from sklearn.metrics import fbeta_score
# Show that you can produce the same f1_score results using fbeta_score
print(fbeta_score(y_test, preds_bag, beta=1))
print(f1_score(y_test, preds_bag))
```
> **Step 10:** Building ROC curves in python is a pretty involved process on your own. I wrote the function below to assist with the process and make it easier for you to do so in the future as well. Try it out using one of the other classifiers you created above to see how it compares to the random forest model below.
Run the cell below to build a ROC curve, and retrieve the AUC for the random forest model.
```
# Function for calculating auc and roc
def build_roc_auc(model, X_train, X_test, y_train, y_test):
'''
INPUT:
model - an sklearn instantiated model
X_train - the training data
y_train - the training response values (must be categorical)
X_test - the test data
y_test - the test response values (must be categorical)
OUTPUT:
auc - returns auc as a float
prints the roc curve
'''
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc, roc_auc_score
from scipy import interp
y_preds = model.fit(X_train, y_train).predict_proba(X_test)
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(len(y_test)):
fpr[i], tpr[i], _ = roc_curve(y_test, y_preds[:, 1])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_preds[:, 1].ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
plt.plot(fpr[2], tpr[2], color='darkorange',
lw=2, label='ROC curve (area = %0.2f)' % roc_auc[2])
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.show()
return roc_auc_score(y_test, np.round(y_preds[:, 1]))
# Finding roc and auc for the random forest model
build_roc_auc(rf_mod, training_data, testing_data, y_train, y_test)
# Your turn here - choose another classifier to see how it compares
build_roc_auc(naive_bayes, training_data, testing_data, y_train, y_test)
build_roc_auc(bag_mod, training_data, testing_data, y_train, y_test)
build_roc_auc(ada_mod, training_data, testing_data, y_train, y_test)
```
| github_jupyter |
# Mini Web App Finding Similar Members with the Meetup API
This notebook will present a little application that uses the Meetup.com API to get member info from the Houston Data Science Meetup group.
## Get your API Key
To make this tutorial work, you will need to get an [API key from Meetup][1]. Once you get your key, place it in the api_key.txt file. Do not write any other contents to the fil.
## Reading the API Key into a variable
We open the file and read the key into a variable as a string
[1]: https://secure.meetup.com/meetup_api/key/
```
with open('api_key.txt') as f:
key = f.read()
```
# Using the API
The [Meetup API][1] is quite extensive and has many endpoints to access much of the site's data.
### Get all members of Houston Data Science
We will use the **profiles** endpoint to first access information on each member. The URL name is **Houston-Data-Science**.
### Must loop to get all data
Meetup limits the maximum number of records returned to 200. We write a loop to continually call the API until all the of the records are returned.
### Use `requests` library
The popular `requests` library is used to make our web request. The response is JSON data which is converted to a Python list of dictionaries.
# NOT NECESSARY to do any API calls
The data from the API calls has been stored as a CSV file, so it's not necessary to actually make the following API calls. The second will take an extremely long time to complete.
[1]: https://secure.meetup.com/meetup_api
[2]: https://secure.meetup.com/meetup_api/console/?path=/:urlname/members
```
import requests
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# IGNORE to avoid API Call
members = []
offset = 0
while True:
url = f"https://api.meetup.com/Houston-Data-Science/members?key={key}&offset={offset}"
req = requests.get(url)
cur_list = req.json()
members.extend(cur_list)
if len(cur_list) < 200:
break
offset += 1
offset
len(members)
# Houston Data Science ID
hds_id = 20021425
```
### Iterate through each member
We iterate through each member and extract the member ID, name, and URL of their member photo into a dictionary. We then convert this into a Pandas DataFrame.
```
d = {'id': [], 'name': [], 'photo_url': []}
for member in members:
d['id'].append(member['id'])
d['name'].append(member['name'])
if 'photo' in member:
pho = member['photo']
if 'highres_link' in pho:
d['photo_url'].append(pho['highres_link'])
else:
d['photo_url'].append(pho.get('photo_link', ''))
else:
d['photo_url'].append('')
member_info = pd.DataFrame(d).drop_duplicates()
member_info.to_csv('../data/member_info.csv', index=False)
```
## Read in data from CSV to save time
```
member_info = pd.read_csv('../data/member_info.csv')
member_info.head()
member_info.shape
```
## Collect Groups from each member
The first API call returned the ID for each member. We now use the **profiles** end point to get member data. The following call takes over an hour as it is rate limited.
```
import time
from collections import OrderedDict
mem_groups = OrderedDict()
for i, mem_id in enumerate(member_info['id'].values):
if i % 300 == 0:
print(i)
time.sleep(.3)
url = f'https://api.meetup.com/2/profiles/?member_id={mem_id}&key={key}'
req = requests.get(url)
if req.ok:
try:
data = req.json()
except:
continue
if 'results' in data:
groups = data['results']
mem_groups[mem_id] = []
for group in groups:
mem_groups[mem_id].append(group['group']['name'])
group_map = OrderedDict()
for mid, groups in mem_groups.items():
for group in groups:
if group not in group_map:
group_map[group] = len(group_map)
r = len(mem_groups)
c = len(group_map)
arr = np.zeros((r, c))
arr.shape
for i, (mid, groups) in enumerate(mem_groups.items()):
for group in groups:
arr[i, group_map[group]] = 1
member_group_count = pd.DataFrame(data=arr, index=mem_groups.keys(), columns=group_map.keys())
member_group_count.head()
member_group_count.shape
filt_row = member_group_count.sum(axis='columns') > 5
filt_col = member_group_count.sum() > 100
mem_group_final = member_group_count.loc[filt_row, filt_col]
mem_group_final.shape
mem_group_final.to_csv('../data/member_groups.csv')
```
# Read in the group data here from CSV
```
mem_group_final = pd.read_csv('../data/member_groups.csv', index_col=0)
mem_group_final.head()
corr = mem_group_final.corr()
sns.clustermap(corr, figsize=(20, 20))
```
## Clustermap
A **`clustermap`** creates a heat map and performs hierarchical clustering at the same time. It will rearrange the column order so that the closest clusters of columns will appear together. The clustering is visualized with a dendogram outside of both the x and y axes.
```
# find similar members
corr_mem = mem_group_final.T.corr()
from ipywidgets import interact, interactive, fixed, interact_manual
from IPython.display import display, Image
import ipywidgets as widgets
def get_similar(name):
filt = member_info['name'] == name
df = member_info[filt]
if len(df) == 1:
photo_url = df['photo_url'].values[0]
mem_id = df['id'].values[0]
if photo_url != '':
if not isinstance(photo_url, float):
display(Image(url=photo_url, width=400))
else:
display(f"{name} does not have an uploaded image")
if mem_id in corr_mem.columns:
scores = corr_mem[mem_id].sort_values(ascending=False).iloc[1:6]
for id_, score in scores.items():
filt2 = member_info['id'] == id_
sim_df = member_info[filt2]
sim_name = sim_df['name'].values[0]
sim_photo_url = sim_df['photo_url'].values[0]
display(f'{sim_name} has similarity score {round(score, 2)}')
if not isinstance(sim_photo_url, float):
display(Image(url=sim_photo_url, width=100))
else:
display(f'{name} has not joined enough Meetup groups to find similar members')
elif len(df) > 1:
display("Not a unique name :(")
return None
corr_mem.head()
corr_mem.shape
vc = member_info['name'].value_counts()
uniq_mem = vc[vc == 1].index
idx1 = set(member_info.loc[member_info['name'].isin(uniq_mem), 'id'])
idx2 = set(corr_mem.index)
idx_final = list(idx1 & idx2)
filt = member_info['id'].isin(idx_final)
names = member_info.loc[filt, 'name'].sort_values().tolist()
w = widgets.Dropdown(options=names, description='Name', value=None)
interact(get_similar, name=w);
```
| github_jupyter |
# Week 6 - An introduction to machine learning (Part II) - Exercise and Solution
We'll apply some of the material from the previous lectures to recreating the analysis from a [nature machine intelligence](https://www.nature.com/natmachintell/) paper, ["An interpretable mortality prediction model for COVID-19 patients"](https://www.nature.com/articles/s42256-020-0180-7).
## 0. Setup
You will need to install the [xlrd] (https://xlrd.readthedocs.io/en/latest/) package to complete the Exercise.
To install this packages, launch the "Anaconda Prompt (Anaconda3)" program and run:
`conda install -c anaconda xlrd `
<img src="../img/az_conda_prompt.png">
### Training data
The original training datasets for the paper are linked as [Supplementary data](https://static-content.springer.com/esm/art%3A10.1038%2Fs42256-020-0180-7/MediaObjects/42256_2020_180_MOESM3_ESM.zip). You don't have to download this since we have included the single Excel file we need for this example as `data/time_series_375_preprocess_en.xlsx` in this project. Below we provide code to read the Excel data into a Pandas DataFrame.
```
import datetime
import pandas as pd
TRAIN_PATH = '../data/time_series_375_preprocess_en.xlsx'
RANDOM_SEED=42
def load_training_data(path):
""" Load Excel sheet of measurements from patients (timepandas.DataFrame with MultiIndex ['PATIENT_ID', 'RE_DATE'] (the unique patient identifier and patient sample date, corresponding to columns [0,1] respectively of the loaded worksheet), then retain the last set of measurements made per patient, drop 'Admission time', 'Discharge time', 'gender' and 'age' features, and replace NaNs with -1.
"""
# Specify explicitly what columns we want to load and what their data types are expected to be.
DTYPES = {
'PATIENT_ID': int,
'RE_DATE': str,
'age': int,
'gender': int,
'Admission time': str,
'Discharge time': str,
'outcome': float,
'Hypersensitive cardiac troponinI': float,
'hemoglobin': float,
'Serum chloride': float,
'Prothrombin time': float,
'procalcitonin': float,
'eosinophils(%)': float,
'Interleukin 2 receptor': float,
'Alkaline phosphatase': float,
'albumin': float,
'basophil(%)': float,
'Interleukin 10': float,
'Total bilirubin': float,
'Platelet count': float,
'monocytes(%)': float,
'antithrombin': float,
'Interleukin 8': float,
'indirect bilirubin': float,
'Red blood cell distribution width': float,
'neutrophils(%)': float,
'total protein': float,
'Quantification of Treponema pallidum antibodies': float,
'Prothrombin activity': float,
'HBsAg': float,
'mean corpuscular volume': float,
'hematocrit': float,
'White blood cell count': float,
'Tumor necrosis factorα': float,
'mean corpuscular hemoglobin concentration': float,
'fibrinogen': float,
'Interleukin 1β': float,
'Urea': float,
'lymphocyte count': float,
'PH value': float,
'Red blood cell count': float,
'Eosinophil count': float,
'Corrected calcium': float,
'Serum potassium': float,
'glucose': float,
'neutrophils count': float,
'Direct bilirubin': float,
'Mean platelet volume': float,
'ferritin': float,
'RBC distribution width SD': float,
'Thrombin time': float,
'(%)lymphocyte': float,
'HCV antibody quantification': float,
'D-D dimer': float,
'Total cholesterol': float,
'aspartate aminotransferase': float,
'Uric acid': float,
'HCO3-': float,
'calcium': float,
'Amino-terminal brain natriuretic peptide precursor(NT-proBNP)': float,
'Lactate dehydrogenase': float,
'platelet large cell ratio ': float,
'Interleukin 6': float,
'Fibrin degradation products': float,
'monocytes count': float,
'PLT distribution width': float,
'globulin': float,
'γ-glutamyl transpeptidase': float,
'International standard ratio': float,
'basophil count(#)': float,
'2019-nCoV nucleic acid detection': float,
'mean corpuscular hemoglobin': float,
'Activation of partial thromboplastin time': float,
'High sensitivity C-reactive protein': float,
'HIV antibody quantification': float,
'serum sodium': float,
'thrombocytocrit': float,
'ESR': float,
'glutamic-pyruvic transaminase': float,
'eGFR': float,
'creatinine': float
}
# Specify which string columns should be interpreted as datetimes.
DATETIME_COLUMNS = ['RE_DATE', 'Admission time', 'Discharge time']
return (
pd.read_excel(path, index_col=[0,1], dtype=DTYPES, parse_dates=DATETIME_COLUMNS)
.sort_index()
.groupby('PATIENT_ID').last()
.drop(['Admission time', 'Discharge time'], axis=1)
.drop(['age', 'gender'], axis=1) # removed in later preprocessing step in original paper
)
def remove_columns_with_missing_data(df, threshold=0.2):
""" Remove all columns from DataFrame df where the proportion of missing records is greater than threshold.
"""
return df.dropna(axis=1, thresh=(1.0-threshold)*len(df))
data = load_training_data(path=TRAIN_PATH)
print(data.shape)
data.head()
```
To set things up, as done in the paper, we'll remove all the columns with more than 20% missing data, and separate out our predictors ('X') and response ('y') variables.
```
data = remove_columns_with_missing_data(data).fillna(-1)
X = data.drop('outcome', axis=1)
y = data.outcome.astype(int)
```
## Exercises
### 1. Split data into training and test sets.
### 2. Fit a RandomForestClassifier on the training set.
### 3. Evaluate the classifier performance by calculating the confusion matrix and the [F1 score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) on the test set.
### 4. Plot the feature importances of the fitted classifier (this is basically the main finding of the Nature paper).
### 5. Try running a different type of classifier and/or see how well you can do on the test set by tuning hyperparameters using cross-validation, grid search or otherwise.
| github_jupyter |
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-59152712-8"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-59152712-8');
</script>
# 1D Degenerate Alfven Wave `GiRaFFEfood` Initial Data for `GiRaFFE`
## This module provides another initial data option for `GiRaFFE`, drawn from [this paper](https://arxiv.org/abs/1310.3274) .
**Notebook Status:** <font color='green'><b> Validated </b></font>
**Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](#code_validation). The initial data has validated against the original `GiRaFFE`, as documented [here](Tutorial-Start_to_Finish_UnitTest-GiRaFFEfood_NRPy.ipynb).
### NRPy+ Source Code for this module: [GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests_degen_Alfven_wave.py](../../edit/in_progress/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests_degen_Alfven_wave.py)
## Introduction:
### Degenerate Alfvén Wave:
This is a flat-spacetime test with initial data
\begin{align}
A_x &= 0 \\
A_y &= \left \{ \begin{array}{lll} -0.8/\pi & \mbox{if} & x \leq -0.1/\gamma_\mu \\
-(0.8/\pi) h_1(x) & \mbox{if} & -0.1/\gamma_\mu \leq x \leq 0.1/\gamma_\mu \\
2(\gamma_\mu x - 0.1) & \mbox{if} & x \geq 0.1/\gamma_\mu \end{array} \right.\\
A_z &= \left \{ \begin{array}{lll} -2(\gamma_\mu x + 0.1) & \mbox{if} & x \leq -0.1/\gamma_\mu \\
-(0.8/\pi) h_2(x) & \mbox{if} & -0.1/\gamma_\mu \leq x \leq 0.1/\gamma_\mu \\
-0.8/\pi & \mbox{if} & x \geq 0.1/\gamma_\mu \end{array} \right.
\end{align}
which generates the magnetic field in the wave frame,
\begin{align}
B'^{x'}(x') &= 0.0 \\
B'^y(x') &= 2 \cos(\phi) \\
B'^z(x') &= 2 \sin(\phi), \\
\end{align}
where
\begin{align}
\phi(x') &= \left \{ \begin{array}{lll} 0.0 & \mbox{if} & x' \leq -0.1 \\
2.5 \pi (x'+0.1) & \mbox{if} & -0.1 \leq x' \leq 0.1 \\
0.5 \pi & \mbox{if} & x' \geq 0.1
\end{array} \right.\\
\end{align}
The electric field in the wave frame is then given by
$$E'(x') = 0.$$
These are converted to the grid frame by
\begin{align}
B^x(0,x) = &\ B'^{x'}(\gamma_\mu x) , \\
B^y(0,x) = &\ \gamma_\mu [ B'^y(\gamma_\mu x) - \mu E'^z(\gamma_\mu x) ] , \\
B^z(0,x) = &\ \gamma_\mu [ B'^z(\gamma_\mu x) + \mu E'^y(\gamma_\mu x) ] ,
\end{align}
and
\begin{align}
E^x(0,x) = &\ E'^{x'}(\gamma_\mu x) , \\
E^y(0,x) = &\ \gamma_\mu [ E'^y(\gamma_\mu x) + \mu B'^z(\gamma_\mu x) ] ,\\
E^z(0,x) = &\ \gamma_\mu [ E'^z(\gamma_\mu x) - \mu B'^y(\gamma_\mu x) ],
\end{align}
and the velocity is given by $$\mathbf{v} = \frac{\mathbf{E} \times \mathbf{B}}{B^2}$$ in flat spacetime. Additionally, $h_1(x)=\cos[2.5\pi(\gamma_\mu x + 0.1)]$, $h_2(x) = \sin[2.5\pi(\gamma_\mu x + 0.1)]$, $-1<\mu<1$ is the wave speed relative to the grid frame, and $\gamma_\mu = (1-\mu^2)^{-1/2}$.
For the eventual purpose of testing convergence, any quantity $Q$ evolves as $Q(t,x) = Q(0,x-\mu t)$
See the [Tutorial-GiRaFFEfood_NRPy_Exact_Wald](Tutorial-GiRaFFEfood_NRPy.ipynb) tutorial notebook for more general detail on how this is used.
<a id='toc'></a>
# Table of Contents:
$$\label{toc}$$
This notebook is organized as follows
1. [Step 1](#initializenrpy): Import core NRPy+ modules and set NRPy+ parameters
1. [Step 2](#vector_ak): Set the vector $A_k$
1. [Step 3](#vectors_for_velocity): Set the vectors $B^i$ and $E^i$ for the velocity
1. [Step 4](#vi): Calculate $v^i$
1. [Step 5](#code_validation): Code Validation against `GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests` NRPy+ module
1. [Step 6](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='initializenrpy'></a>
# Step 1: Import core NRPy+ modules and set NRPy+ parameters \[Back to [top](#toc)\]
$$\label{initializenrpy}$$
Here, we will import the NRPy+ core modules and set the reference metric to Cartesian, set commonly used NRPy+ parameters, and set C parameters that will be set from outside the code eventually generated from these expressions. We will also set up a parameter to determine what initial data is set up, although it won't do much yet.
```
# Step 0: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
# Step 0.a: Import the NRPy+ core modules and set the reference metric to Cartesian
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import NRPy_param_funcs as par # NRPy+: Parameter interface
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
par.set_parval_from_str("reference_metric::CoordSystem","Cartesian")
rfm.reference_metric()
# Step 1a: Set commonly used parameters.
thismodule = "GiRaFFEfood_NRPy_1D_tests-degen_Alfven_wave"
```
##### <a id='vector_ak'></a>
# Step 2: Set the vector $A_k$ \[Back to [top](#toc)\]
$$\label{vector_ak}$$
The vector potential is given as
\begin{align}
A_x &= 0 \\
A_y &= \left \{ \begin{array}{lll} -0.8/\pi & \mbox{if} & x \leq -0.1/\gamma_\mu \\
-(0.8/\pi) h_1(x) & \mbox{if} & -0.1/\gamma_\mu \leq x \leq 0.1/\gamma_\mu \\
2(\gamma_\mu x - 0.1) & \mbox{if} & x \geq 0.1/\gamma_\mu \end{array} \right.\\
A_z &= \left \{ \begin{array}{lll} -2(\gamma_\mu x + 0.1) & \mbox{if} & x \leq -0.1/\gamma_\mu \\
-(0.8/\pi) h_2(x) & \mbox{if} & -0.1/\gamma_\mu \leq x \leq 0.1/\gamma_\mu \\
-0.8/\pi & \mbox{if} & x \geq 0.1/\gamma_\mu, \end{array} \right.
\end{align}
where
$$h_1(x)=\cos[2.5\pi(\gamma_\mu x + 0.1)]$$ and $$h_2(x) = \sin[2.5\pi(\gamma_\mu x + 0.1)]$$
However, to take full advantage of NRPy+'s automated function generation capabilities, we want to write this without the `if` statements, replacing them with calls to `fabs()`. To do so, we will use the NRPy+ module `Min_Max_and_Piecewise_Expressions`.
```
mu_AW = par.Cparameters("REAL",thismodule,["mu_AW"], -0.5) # The wave speed
M_PI = par.Cparameters("#define",thismodule,["M_PI"], "")
gammamu = sp.sympify(1)/sp.sqrt(sp.sympify(1)-mu_AW**2)
# We'll use reference_metric.py to define x and y
x = rfm.xx_to_Cart[0]
h1_AW = sp.cos(sp.Rational(5,2)*M_PI*(gammamu*x+sp.Rational(1,10)))
h2_AW = sp.sin(sp.Rational(5,2)*M_PI*(gammamu*x+sp.Rational(1,10)))
```
Now, we can define the vector potential. We will rewrite $A_y$ to make use of the functions provided by `Min_Max_and_Piecewise_Expressions`. As shown below, we make sure that at each boundary, each $\leq$ is paired with a $>$. (This choice is arbitrary, we could just as easily choose $<$ and $\geq$.) This does not change the data since the function is continuous. However, it is necessary for the functions in `Min_Max_and_Piecewise_Expressions` to output the correct results.
\begin{align}
A_x &= 0 \\
A_y &= \left \{ \begin{array}{lll} -0.8/\pi & \mbox{if} & x \leq -0.1/\gamma_\mu \\
-(0.8/\pi) h_1(x) & \mbox{if} & -0.1/\gamma_\mu < x \leq 0.1/\gamma_\mu \\
2(\gamma_\mu x - 0.1) & \mbox{if} & x > 0.1/\gamma_\mu \end{array} \right.\\
A_z &= \left \{ \begin{array}{lll} -2(\gamma_\mu x + 0.1) & \mbox{if} & x \leq -0.1/\gamma_\mu \\
-(0.8/\pi) h_2(x) & \mbox{if} & -0.1/\gamma_\mu < x \leq 0.1/\gamma_\mu \\
-0.8/\pi & \mbox{if} & x > 0.1/\gamma_\mu, \end{array} \right.
\end{align}
```
AD = ixp.zerorank1(DIM=3)
import Min_Max_and_Piecewise_Expressions as noif
bound = sp.Rational(1,10)/gammamu
Ayleft = -sp.Rational(4,5)/M_PI
Aycenter = -sp.Rational(4,5)/M_PI * h1_AW
Ayright = sp.sympify(2)*(gammamu*x-sp.Rational(1,10))
Azleft = -sp.sympify(2)*(gammamu*x+sp.Rational(1,10))
Azcenter = -sp.Rational(4,5)/M_PI * h2_AW
Azright = -sp.Rational(4,5)/M_PI
AD[0] = sp.sympify(0)
AD[1] = noif.coord_leq_bound(x,-bound)*Ayleft\
+noif.coord_greater_bound(x,-bound)*noif.coord_leq_bound(x,bound)*Aycenter\
+noif.coord_greater_bound(x,bound)*Ayright
AD[2] = noif.coord_leq_bound(x,-bound)*Azleft\
+noif.coord_greater_bound(x,-bound)*noif.coord_leq_bound(x,bound)*Azcenter\
+noif.coord_greater_bound(x,bound)*Azright
```
<a id='vectors_for_velocity'></a>
# Step 3: Set the vectors $B^i$ and $E^i$ for the velocity \[Back to [top](#toc)\]
$$\label{vectors_for_velocity}$$
Now, we will set the magnetic and electric fields that we will need to define the initial velocities. First, we need to define, rewriting it with the same matching convention as above ($\leq$ with $>$):
\begin{align}
\phi(x') &= \left \{ \begin{array}{lll} 0.0 & \mbox{if} & x' \leq -0.1 \\
2.5 \pi (x'+0.1) & \mbox{if} & -0.1 < x' \leq 0.1 \\
0.5 \pi & \mbox{if} & x' > 0.1
\end{array} \right.\\
\end{align}
Note that in the definition of $B^i$, we need $f(x')$ where $x'=\gamma_\mu x$.
```
xprime = gammamu*x
bound = sp.Rational(1,10)
phileft = sp.sympify(0)
phicenter = sp.Rational(5,2)*M_PI*(xprime+sp.Rational(1,10))
phiright = sp.Rational(1,2)*M_PI
phi = noif.coord_leq_bound(xprime,-bound)*phileft\
+noif.coord_greater_bound(xprime,-bound)*noif.coord_leq_bound(x,bound)*phicenter\
+noif.coord_greater_bound(xprime,bound)*phiright
```
We will now set the magnetic field in the wave frame:
\begin{align}
B'^{x'}(x') &= 0.0 \\
B'^y(x') &= 2 \cos(\phi) \\
B'^z(x') &= 2 \sin(\phi), \\
\end{align}
```
BpU = ixp.zerorank1()
BpU[0] = sp.sympify(0)
BpU[1] = sp.sympify(2)*sp.cos(phi)
BpU[2] = sp.sympify(2)*sp.sin(phi)
```
Now, we will set the electric field in the wave frame:
$$E'(x') = 0.$$
```
EpU = ixp.zerorank1()
```
Next, we must transform the fields into the grid frame. We'll do the magnetic fields first.
\begin{align}
B^x(0,x) = &\ B'^{x'}(\gamma_\mu x) , \\
B^y(0,x) = &\ \gamma_\mu [ B'^y(\gamma_\mu x) - \mu E'^z(\gamma_\mu x) ] , \\
B^z(0,x) = &\ \gamma_\mu [ B'^z(\gamma_\mu x) + \mu E'^y(\gamma_\mu x) ] ,
\end{align}
```
BU = ixp.zerorank1()
BU[0] = BpU[0]
BU[1] = gammamu*(BpU[1]-mu_AW*EpU[2])
BU[2] = gammamu*(BpU[2]+mu_AW*EpU[1])
```
And now the electric fields:
\begin{align}
E^x(0,x) = &\ E'^{x'}(\gamma_\mu x) , \\
E^y(0,x) = &\ \gamma_\mu [ E'^y(\gamma_\mu x) + \mu B'^z(\gamma_\mu x) ] ,\\
E^z(0,x) = &\ \gamma_\mu [ E'^z(\gamma_\mu x) - \mu B'^y(\gamma_\mu x) ],
\end{align}
```
EU = ixp.zerorank1()
EU[0] = EpU[0]
EU[1] = gammamu*(EpU[1]+mu_AW*BpU[2])
EU[2] = gammamu*(EpU[2]-mu_AW*BpU[1])
```
<a id='vi'></a>
# Step 4: Calculate $v^i$ \[Back to [top](#toc)\]
$$\label{vi}$$
Now, we calculate $$\mathbf{v} = \frac{\mathbf{E} \times \mathbf{B}}{B^2},$$ which is equivalent to $$v^i = [ijk] \frac{E^j B^k}{B^2},$$ where $[ijk]$ is the Levi-Civita symbol and $B^2 = \gamma_{ij} B^i B^j$ is a trivial dot product in flat space.
```
LeviCivitaSymbolDDD = ixp.LeviCivitaSymbol_dim3_rank3()
B2 = sp.sympify(0)
for i in range(3):
# In flat spacetime, gamma_{ij} is just a Kronecker delta
B2 += BU[i]**2 # This is trivial to extend to curved spacetime
ValenciavU = ixp.zerorank1()
for i in range(3):
for j in range(3):
for k in range(3):
ValenciavU[i] += LeviCivitaSymbolDDD[i][j][k] * EU[j] * BU[k] / B2
```
<a id='code_validation'></a>
# Step 5: Code Validation against `GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests` NRPy+ module \[Back to [top](#toc)\]
$$\label{code_validation}$$
Here, as a code validation check, we verify agreement in the SymPy expressions for the `GiRaFFE` Aligned Rotator initial data equations we intend to use between
1. this tutorial and
2. the NRPy+ [`GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests.py`](../edit/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests.py) module.
```
import GiRaFFEfood_NRPy.GiRaFFEfood_NRPy_1D_tests_degen_Alfven_wave as gfho
gfho.GiRaFFEfood_NRPy_1D_tests_degen_Alfven_wave()
def consistency_check(quantity1,quantity2,string):
if quantity1-quantity2==0:
print(string+" is in agreement!")
else:
print(string+" does not agree!")
sys.exit(1)
print("Consistency check between GiRaFFEfood_NRPy tutorial and NRPy+ module:")
for i in range(3):
consistency_check(ValenciavU[i],gfho.ValenciavU[i],"ValenciavU"+str(i))
consistency_check(AD[i],gfho.AD[i],"AD"+str(i))
```
<a id='latex_pdf_output'></a>
# Step 6: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-GiRaFFEfood_NRPy_1D_tests.pdf](Tutorial-GiRaFFEfood_NRPy_1D_tests.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-GiRaFFEfood_NRPy_1D_tests",location_of_template_file=os.path.join(".."))
```
| github_jupyter |
Deep Learning
=============
Assignment 2
------------
Previously in `1_notmnist.ipynb`, we created a pickle with formatted datasets for training, development and testing on the [notMNIST dataset](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html).
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
```
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
```
First reload the data we generated in `1_notmnist.ipynb`.
```
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
```
Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
```
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
```
We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this:
* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:
with graph.as_default():
...
* Then you can run the operations on this graph as many times as you want by calling `session.run()`, providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:
with tf.Session(graph=graph) as session:
...
Let's load all the data into TensorFlow and build the computation graph corresponding to our training:
```
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random values following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
```
Let's run this computation and iterate:
```
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.global_variables_initializer().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
```
Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data into a constant node, we create a `Placeholder` node which will be fed actual data at every call of `session.run()`.
```
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
```
Let's run it:
```
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
```
---
Problem
-------
Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units [nn.relu()](https://www.tensorflow.org/versions/r0.7/api_docs/python/nn.html#relu) and 1024 hidden nodes. This model should improve your validation / test accuracy.
---
```
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
hidden_layer_size = 1024
weights1 = tf.Variable(
tf.truncated_normal([image_size * image_size, hidden_layer_size]))
biases1 = tf.Variable(tf.zeros([hidden_layer_size]))
hidden = tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1)
weights2 = tf.Variable(
tf.truncated_normal([hidden_layer_size, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(hidden, weights2) + biases2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
print("train_prediction", train_prediction.get_shape())
valid_prediction = tf.nn.softmax(
tf.matmul(tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1), weights2) + biases2
)
print("valid_prediction.get_shape()", valid_prediction.get_shape())
test_prediction = tf.nn.softmax(
tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weights1) + biases1), weights2) + biases2
)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step", step, ":", l)
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
```
| github_jupyter |
# Lecture 7: Load/save and structure data
[Download on GitHub](https://github.com/NumEconCopenhagen/lectures-2020)
[<img src="https://mybinder.org/badge_logo.svg">](https://mybinder.org/v2/gh/NumEconCopenhagen/lectures-2020/master?urlpath=lab/tree/07/Load_save_and_structure_data.ipynb)
1. [Pandas dataframes](#Pandas-dataframes)
2. [Reading and writing data](#Reading-and-writing-data)
3. [Summary](#Summary)
You will learn to **load and save data** both to and from offline sources (e.g. CSV or Excel). You will learn about **pandas series and dataframes**, and how to clean, rename, structure and index your data.
**Links:**
1. Official [tutorials](https://pandas.pydata.org/pandas-docs/stable/getting_started/tutorials.html)
2. DataCamp's [pandas' cheat sheet](https://www.datacamp.com/community/blog/python-pandas-cheat-sheet)
```
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('seaborn')
```
<a id="Pandas-dataframes"></a>
# 1. Pandas dataframes
In Pandas, the fundamental object of interest is a pandas dataframe. For example:
```
X = pd.DataFrame(data = [[1,11.7,'Vitus'],[2,13.9,'Maximilian'],[3,14.6,'Bo-Bob']], columns=['id','inc','name'])
X
```
A dataframe is essentially a matrix.
* rows = observations
* columns = variables
```
X.info() # general information
```
**Note:** Show in the middle of some code.
```
from IPython.display import display
print('before\n')
display(X.head()) # first rows in dataset
print('\n\nafter')
```
## 1.1 Indexing ("subsetting")
Choosing a subset of the rows and/or columns of a dataframe is known as "indexing". All pandas dataframes are born with the method `.loc[]`.
* `df.loc[:, ['year']]` selects all rows (indicated by `:`) but only the column (variable) `year`.
* `df.loc[df['year'] == 2002, :]` selects the rows where the variable `year` is equal to 2002 and all columns (indicated by `:`)
* `df.loc[df['year'] == 2002, ['name']]` selects the variable `name` and shows the rows where `year` is equal to 2002.
In general, the syntax is `df.loc[CONDITION, [VARLIST]]`, where `CONDITION` is a vector of logical statements with the same length as the number of rows in the dataframe.
```
X.loc[X['id'] > 1, ['name']]
X.loc[X['id'] > 1] # all variables
```
**Alternatives:**
```
I = X['id'] > 1 # boolean series
X.loc[I, ['name']]
X.loc[X.id > 1, ['name']] # .VAR notation
```
## 1.2 Adding a variable
Variables are added with `df['newvar'] = SOMETHING`.
```
X['year'] = [2003, 2005, 2010]
X
```
**Note:** You cannot write `df.newvar = SOMETHING`. Some of you will forget. I promise.
The *something* can be an expression based on other variables.
```
X['inc_adj'] = X['inc'] / 1.02**(X['year']-2005)
X
```
## 1.3 Assignments to a subset of rows
Use a logical statement to select a subset of rows. Your RHS must then either be:
* a single value (all rows are set to this)
* a list of values with same length as the number of selected rows
```
X
Y = X.copy()
Y.loc[Y['id'] > 1, ['name']] = 'test'
Y
Y = X.copy()
Y.loc[(Y['name'] == 'Vitus') | (Y['year'] == 2005), ['name']] = ['Bib', 'Peter']
Y
Y = X.copy()
J = (Y['name'] == 'Maximilian') | (Y['year'] == 2010)
Y.loc[J, ['name']] = Y.loc[I, ['name']].values*2 # .values is required
Y
```
## 1.4 Copies vs. views
The `.loc[]` method returns a **copy**. Therefore the following cell does not work:.
```
Y = X.copy()
Z = Y.loc[Y['id'] > 1,['name']] # returns a copy
Z = 'test'
Y
```
**Looking** at the data it is natural to do:
```
Y['name']
Y.name
Y[['id','name']]
Y[Y['id'] > 1]
```
Importantly, this **does not work with assignment**:
```
Y = X.copy()
I = Y['id'] > 1
Z = Y['name'] # returns a view (same with Y.name)
Z[I] = 'test'
Y
Y = X.copy()
I = Y['id'] > 1
Z = Y[['id','name']] # returns a copy
Z.loc[I,['name']] = 'test'
Y
Y = X.copy()
I = Y['id'] > 1
Z = Y[I] # returns a copy
Z['name'] = 'test'
Y
```
## 1.5 The index
The first column in the dataset is referred to as the `index` of the dataframe. If you haven't done anything, it is just `[0, 1, 2, ....]`.
```
X.loc[0]
```
You can use many other things as indexes. For example the name:
```
Y = X.set_index('name') # returns a copy
Y # notice name is now below the other variables
Y.loc['Vitus']
```
## 1.6 Series and numpy arrays
When you select an individual variable, it has the data type `series`. Some functions work on a pandas series (e.g. most numpy functions), but it is sometimes nice to extract the underlying numpy objects:
* `df`: pandas dataframe
* `df['variable']`: pandas series
* `df['variabe'].values` (or `.to_numpy()`): Numpy array
```
type(X)
type(X[['year','inc_adj']]) # returns a copy
type(X['year']) # returns a view
type(X['year'].values) # returns a view
```
## 1.7 Calling functions
```
Y = X.copy()
Y
```
Row-by-row:
```
def adj_row_by_row(X):
return X['inc'] / 1.02**(X['year']-2005)
Y['inc_adj_alt1'] = Y.apply(adj_row_by_row,axis=1)
```
Function for numpy arrays:
```
def all_at_once(inc,year):
return inc / 1.02**(year-2005)
Y['inc_adj_alt2'] = all_at_once(Y['inc'].values,Y['year'].values)
```
Funcion for numpy arrays with inplace changes (i.e. a function without any return statement):
```
def all_at_once_inplace(inc,year):
inc[:] = all_at_once(inc,year)
Y['inc_adj_alt3'] = Y['inc']
all_at_once_inplace(Y['inc_adj_alt3'].values,Y['year'].values)
Y # all inc_adj* gives the same result
```
<a id="Reading-and-writing-data"></a>
# 2. Reading and writing data
To make sure that we have the "data" subfolder and that it has the datasets we need, we print its contents:
```
import os
os.listdir('data/')
```
## 2.1 Reading in data
Pandas offers a lot of facilities for reading and writing to different formats. The functions have logical names:
* CSV: `pd.read_csv()`
* SAS: `pd.read_sas()`
* Excel: `pd.read_excel()`
* Stata: `pd.read_stata()`
Whenever we look at larger dataframes, we will be using `df.head(10)` to inspect the first 10 rows, or `df.sample(10)` to look at 10 random rows (when the first 10 are special, for example).
```
# example: raw download from DST
# note: the file must be in a sub folder "data" to the folder where jupyter was launched
filename = 'data/RAS200.xlsx'
pd.read_excel(filename).head(10)
```
### Getting the right columns and rows
**Skipping rows:** Clearly, we should skip the first three rows and the first four columns
```
empl = pd.read_excel(filename, skiprows=2)
empl.head(10)
```
**Dropping columns:** The first couple of columns are not needed and contain only missing values (denoted by `NaN` (Not a Number)), so we will drop those.
```
drop_these = ['Unnamed: 0', 'Unnamed: 1', 'Unnamed: 2', 'Unnamed: 3']
empl.drop(drop_these, axis=1, inplace=True) # axis = 1 -> columns, inplace=True -> changed, no copy made
empl.head(5)
```
> **Alternative:** Use `del empl['Unnamed: 0']`.
### Renaming variables
Let's rename the first variable, which is now called `Unnamed: 4`. This is done using `df.rename(columns=dict)`, where dict must be a Python *dictionary*.
```
empl.rename(columns = {'Unnamed: 4':'municipality'}, inplace=True)
```
We also see that the employment rate in 2008 has been named `2008`. Having a variable that is named a number can cause problems with some functions (and many other programming languages do not even allow it), so let us change their names. To do so, we need to create a dictionary that maps each of the years {2008, ..., 2016} to {e2008, ..., e2016}.
```
myDict = {}
for i in range(2008, 2017): # range goes from 2008 to but not including 2017
myDict[str(i)] = f'e{i}'
myDict
empl.rename(columns = myDict, inplace=True)
empl.head(10)
```
Now we can find the employment rate in the municipality where Anders grew up:
```
empl.loc[empl.municipality == 'Lejre']
```
### Dropping observations that are not actually municipalities
The dataset contains observations like "Region Hovedstaden", which is not a municipality so we want to drop such rows. To do this, we can use the `df['var'].str` functionalities, in particular `df['var'].str.contains('PATTERN')`.
```
I = empl.municipality.str.contains('Region')
empl.loc[I, :]
```
Delete these rows.
```
for val in ['Region', 'Province', 'All Denmark']:
I = empl.municipality.str.contains(val)
empl = empl.loc[I == False] # keep everything else
```
### Summary statistics
To get an overview of employments across municipalities we can use the function `df.describe()`. Note that each observation (municipality) is weighted equally.
```
empl.describe()
```
We can also just get the mean for each year:
```
empl.mean()
```
## 2.2 Long vs. wide datasets: `pd.wide_to_long()`
Often in economic applications, it can be useful to switch between *wide* vs. *long* formats (long is sometimes referred to as *tall*, e.g. in Stata).
This is done by the commands `pd.wide_to_long()` (and `pd.long_to_wide()`).
Many types of analyses are easier to do in one format than in another so it is extremely useful to be able to switch comfortably between formats.
**Common:** Think of a dataset as having an "ID" and a "PERIOD" variable. In our dataset `empl`, the ID variable is `municipality`, and the period variable is `year`.
**Wide dataset:** The default from Statistics Denmark: each row corresponds to an ID and there is a variable for each PERIOD.
**Tall dataset:** There is one row for each combination of (ID, PERIOD).
In general, Pandas will assume that the variables in the *wide* format have a particular structure: namely they are of the form XPERIOD, where X is called the "stub". In our case, the variable names are e.g. `e2011`, so the stub is `e` and the period (for that variable) is `2011`.
```
empl_tall = pd.wide_to_long(empl, stubnames='e', i='municipality', j='year')
empl_tall.head(10)
```
**Note:** The variables `municipality` and `year` are now in the index!! We see that because they are "below" `e` in the `head` overview.
We can **select a specific municipality** using ``.xs``:
```
empl_tall.xs('Lejre',level='municipality')
```
Or ``.loc[]`` in a special way:
```
empl_tall.loc[empl_tall.index.get_level_values('municipality') == 'Lejre', :]
```
We can, alternatively, reset the index, and use `.loc` as normal:
```
empl_tall = empl_tall.reset_index()
empl_tall.loc[empl_tall.municipality == 'Lejre', :]
```
**Teaser:** As a quick teaser for what's to come, here's a cute little plot using the builtin pandas plot function.
```
empl_tall.loc[empl_tall['municipality'] == 'Lejre', :].plot(x='year',y='e');
```
We can even do it interactively:
```
import ipywidgets as widgets
def plot_e(dataframe, municipality):
I = dataframe['municipality'] == municipality
ax=dataframe.loc[I,:].plot(x='year', y='e', style='-o', legend='False')
widgets.interact(plot_e,
dataframe = widgets.fixed(empl_tall),
municipality = widgets.Dropdown(description='Municipality', options=empl_tall.municipality.unique(), value='Lejre')
);
```
## 2.3 Income
Next, we will read in the avg. disposable income for highly educated in each municipality. Here we do the cleaning, renaming and structuring in a few condensed lines.
```
# a. load
inc = pd.read_excel('data/INDKP107.xlsx', skiprows=2)
# b. clean and rename
inc.drop([f'Unnamed: {i}' for i in range(4)], axis=1, inplace=True)
inc.rename(columns = {'Unnamed: 4':'municipality'}, inplace=True) # using list comprehension
inc.rename(columns = {str(i): f'inc{i}' for i in range(2004,2018)}, inplace=True) # usinc dictionary comprehension
# c. drop rows with missing
inc.dropna(inplace=True)
# d. remove non-municipalities
for val in ['Region','Province', 'All Denmark']:
I = inc.municipality.str.contains(val)
inc.drop(inc[I].index, inplace=True) # .index -> get the indexes of the series
inc.head(5)
```
Convert wide -> tall:
```
inc_tall = pd.wide_to_long(df=inc, stubnames='inc', i='municipality', j='year')
inc_tall.reset_index(inplace=True)
inc_tall.head(5)
```
## 2.4 Municipal area
Finally, let's read in a dataset on municipality areas in km$^2$.
```
# a. load
area = pd.read_excel('data/areal.xlsx', skiprows=2)
# b. clean and rename
area.rename(columns = {'Unnamed: 0':'municipality','2019':'km2'}, inplace=True)
# c. drop rows with missing
area.dropna(inplace=True)
# d. remove non-municipalities
for val in ['Region','Province', 'All Denmark']:
I = area.municipality.str.contains(val)
area.drop(area[I].index, inplace=True)
area.head(5)
```
## 2.5 Writing data
As with reading in data, we have the corresponding functions:
* df.to_csv()
* df.to_excel()
* df.to_stata()
* df.to_sas()
* df.to_parquet()
Let's save our dataset to CSV form. We will set `index=False` to avoid saving the index (which does not mean anything here but can in other contexts be an annoying thing).
```
empl_tall.to_csv('data/RAS200_tall.csv', index=False)
inc_tall.to_csv('data/INDKP107_tall.csv', index=False)
area.to_csv('data/area.csv', index=False)
```
<a id="Summary"></a>
# 3. Summary
**This lecture**: We have discussed
1. The generel pandas framework (indexing, assigment, copies vs. views, functions)
2. Loading and saving data
3. Basic data cleaning (renaming, droping etc.)
4. Wide $\leftrightarrow$ long transformations
**Next lecture:** Basic data analysis.
| github_jupyter |
```
import pandas as pd
import numpy as np
import hddm
import sys
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats
%matplotlib inline
pd.options.display.max_columns = None
#generate data
data, params = hddm.generate.gen_rand_data(params={'a': 2, 't': .4, 'v': .5},
size = 500)
#create column called nn_response with choice coded as -1 (lower bound) and 1 (upper bound)
data['nn_response'] = np.where(data.response==0, -1,data.response)
data.head(10)
#run nn_likelihood
m = hddm.HDDMnn(data,include='z')
m.sample(400,burn = 200)
m.print_stats()
samples = m.get_traces()
x = samples.describe()
type(samples)
pd.plotting.scatter_matrix(samples)
#plt.hist(samples['t'])
#plt.hist(samples['a'])
#plt.hist(samples['v'])
#compare to normal HDDM
m2 = hddm.HDDM(data,include='all')
m2.sample(1000,burn = 500)
m2.print_stats()
samples2 = m2.get_traces()
pd.plotting.scatter_matrix(samples2)
import numpy as np
from math import *
def multivariate_t_distribution(x,mu,Sigma,df,d):
'''
Multivariate t-student density:
output:
the density of the given element
input:
x = parameter (d dimensional numpy array or scalar)
mu = mean (d dimensional numpy array or scalar)
Sigma = scale matrix (dxd numpy array)
df = degrees of freedom
d: dimension
'''
Num = gamma(1. * (d + df)/2)
#print(np.dot(np.dot((x - mu), np.linalg.inv(Sigma)).T, (x - mu)))
#print(gamma(1. * (df / 2)) * np.power(df * pi,1. * (d / 2)) * np.power(np.linalg.det(Sigma), 1. / 2) * np.power(1 + (1./df) * np.dot(np.dot((x - mu), np.linalg.inv(Sigma)), (x - mu)), 1. * (d + df) / 2))
Denom = ( gamma(1. * (df / 2)) * np.power(df * pi,1. * (d / 2)) * np.power(np.linalg.det(Sigma), 1. / 2) * np.power(1 + (1./df) * np.dot(np.dot((x - mu),np.linalg.inv(Sigma)), (x - mu).T), 1. * (d + df) / 2))
d = np.divide(1. * Num, Denom)
return d
#written by Enzo Michelangeli, style changes by josef-pktd
# Student's T random variable
def multivariate_t_rvs(m, S, df = np.inf, n=1):
'''generate random variables of multivariate t distribution
Parameters
----------
m : array_like
mean of random variable, length determines dimension of random variable
S : array_like
square array of covariance matrix
df : int or float
degrees of freedom
n : int
number of observations, return random array will be (n, len(m))
Returns
-------
rvs : ndarray, (n, len(m))
each row is an independent draw of a multivariate t distributed
random variable
'''
m = np.asarray(m)
d = len(m)
if df == np.inf:
x = 1.
else:
x = np.random.chisquare(df, n) / df
z = np.random.multivariate_normal(np.zeros(d),S,(n,))
return m + z/np.sqrt(x)[:,None] # same output format as random.multivariate_normal
S = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
m = np.array([1, 2, 3])
X = multivariate_t_rvs(m = m, S = S, df = 1, n = 1000)
out = multivariate_t_distribution(X[:,:], m, S, df = 1, d = 3)
(np.dot((X - m), np.linalg.inv(S))).shape #, (X - m))
```
| github_jupyter |
# Bayesian Linear Regression part 4: Plots

Now I have [priors on the weights](2018-01-03-bayesian-linreg.ipynb) and [observations](2018-01-08-bayesian-linreg-sample.ipynb), and I used this to come up with [the mean and variance of the posterior on the weights](2018-01-09-bayesian-linreg-posterior.ipynb). In this post, I'll show some cool plots.
```
# imports!
import numpy as np
import matplotlib.pyplot as plt
# helper functions you can skip over :D
SAVE = True
def maybe_save_plot(filename):
if SAVE:
plt.tight_layout()
plt.savefig('images/' + filename, bbox_inches="tight")
```
## Set up
You can skip to "sampling from the posterior"! This computes `V_n` again using the code from [the last post](2018-01-09-bayesian-linreg-posterior.ipynb).
```
# Set up the prior
mu_w = 0
mu_b = 0
sigma_w = 0.2
sigma_b = 0.2
w_0 = np.hstack([mu_b, mu_w])[:, None]
V_0 = np.diag([sigma_b, sigma_w])**2
# Get observations
true_sigma_y = 0.1
true_w = np.array([[2, 0.3]]).T
X_in = 2 * np.random.rand(11, 1) - 1
Phi_X_in = np.hstack((
np.ones((X_in.shape[0], 1)), # pad with 1s for the bias term
X_in
))
true_sigma_y = 0.05
noise = true_sigma_y * np.random.randn(X_in.shape[0], 1)
y = Phi_X_in @ true_w + noise
# Compute the posterior
sigma_y = true_sigma_y # I'm going to guess the noise correctly
V0_inv = np.linalg.inv(V_0)
V_n = sigma_y**2 * np.linalg.inv(sigma_y**2 * V0_inv + (Phi_X_in.T @ Phi_X_in))
w_n = V_n @ V0_inv @ w_0 + 1 / (sigma_y**2) * V_n @ Phi_X_in.T @ y
```
#### Quick aside
Above I plot a 2D array to plot multiple lines, which makes matplotlib create a lot of duplicate labels. I'm not sure if plotting matrix is a bad idea to start with, but I did it anyway and used a helper function to deduplicate labels.
```
# hrm, plotting the matrix made for `N` duplicate labels.
# https://stackoverflow.com/questions/26337493/pyplot-combine-multiple-line-labels-in-legend
def get_dedup_labels(plt):
handles, labels = plt.gca().get_legend_handles_labels()
new_handles = []
new_labels = []
for handle, label in zip(handles, labels):
if label not in new_labels:
new_handles.append(handle)
new_labels.append(label)
return new_handles, new_labels
```
## Sampling from the posterior
Much like how I [sampled from the prior]({% post_url 2018-01-03-bayesian-linreg %}), I can sample weights from the posterior.
```
grid_size = 0.01
x_grid = np.arange(-1, 1, grid_size)[:, None]
N = 100
Phi_X = np.hstack((
np.ones((x_grid.shape[0], 1)), # pad with 1s for the bias term
x_grid
))
w = np.random.randn(N, 2) @ np.linalg.cholesky(V_n) + w_n.T
plt.clf()
plt.figure(figsize=(8, 6))
plt.plot(x_grid, Phi_X @ w.T, '-m', alpha=.2, label='weights sampled from posterior')
plt.plot(X_in, y, 'xk', label='observations')
plt.legend(*get_dedup_labels(plt))
maybe_save_plot('2018-01-10-samples') # Graph showing x's for observations, a line from the mean Bayesian prediction, and shaded area of uncertainty.
plt.show()
```
## Prediction with uncertainty
I can also use `V_n` to compute the uncertainty of predictions. The prediction is the true function with some added noise:
$$y = f(\textbf x) + v$$
where \\(v \sim \mathcal N(0, \sigma_y^2)\\). With a little math, I can compute the mean and variance of the prediction posterior's Gaussian distribution. It's [also given in the course notes](http://www.inf.ed.ac.uk/teaching/courses/mlpr/2017/notes/w7a_bayesian_inference_prediction.html#predictions-for-bayesian-linear-regression).
Then I can take the square root of that to get the standard deviation and plot [2 standard deviations](https://en.wikipedia.org/wiki/68–95–99.7_rule) from the mean. In code:
```
grid_size = 0.01
x_grid = np.arange(-1, 1, grid_size)[:, None]
Phi_X = np.hstack((
np.ones((x_grid.shape[0], 1)), # pad with 1s for the bias term
x_grid
))
stdev_pred = np.sqrt(np.sum(np.dot(Phi_X, V_n) * Phi_X, 1)[:, None] + sigma_y**2)
upper_bound = Phi_X @ w_n + 2 * stdev_pred
lower_bound = Phi_X @ w_n - 2 * stdev_pred
plt.clf()
plt.figure(figsize=(8, 6))
plt.plot(X_in, y, 'xk', label='observations')
# I think fill_between wants 1D arrays
plt.fill_between(x_grid[:, 0], lower_bound[:, 0], upper_bound[:, 0], alpha=0.2, label='two standard deviations')
plt.plot(x_grid, Phi_X @ w_n, label='mean prediction')
plt.legend()
maybe_save_plot('2018-01-10-uncertainty') # Graph showing x's for observations, a line from the mean Bayesian prediction, and shaded area of uncertainty.
plt.show()
```
Neat!
If I zoom out like I do below, it's clearer that the shaded area is squeezed around the observations. That's saying there is less uncertainty around where the observations are. That's intuitive; I should be more certain of my prediction around observations.

### Comparison
The difference between these two plots confused me at first but sorting it out was instructive.
In the first plot, I'm sampling from the distribution of the *weights*. I hear sampling from the weights' distribution is not always easy to do. It turns out to be easy when doing Bayesian linear regression using Gaussians for everything.
The second plot shows the distribution of the *prediction*. This is related to the distribution of the weights (equation from [the course notes](http://www.inf.ed.ac.uk/teaching/courses/mlpr/2017/notes/w7a_bayesian_inference_prediction.html#predictions-for-bayesian-linear-regression)):
$$p(y|\mathbf x, \mathcal D) = \int p(y | \mathbf x, \mathbf w) p(\mathbf w|\mathcal D) \, d \mathbf w$$
If I look at a single weight sampled from the weight's posterior, I can plot
\\( p(y|\mathbf x, \mathbf w) \\)
which for each \\(\mathbf x\\) is \\(\mathcal N(y; \mathbf w^{\top} \mathbf x, \sigma_y^2)\\). If I plot it, I get:
```
w = np.random.randn(1, 2) @ np.linalg.cholesky(V_n) + w_n.T
mean_pred = Phi_X @ w.T
plt.clf()
plt.figure(figsize=(8, 6))
upper_bound = mean_pred[:, 0] + 2 * sigma_y
lower_bound = mean_pred[:, 0] - 2 * sigma_y
plt.plot(x_grid, mean_pred[:, 0], '-m', label='weight sampled from posterior')
plt.fill_between(x_grid[:, 0], lower_bound, upper_bound, color='m', alpha=0.2, label='two standard deviations')
plt.plot(X_in, y, 'xk', label='observations')
maybe_save_plot('2018-01-10-sample-with-error') # Graph showing x's for observations, a line for one sample of the weights, and shaded area for uncertainty.
plt.show()
```
To get the prediction, I use the integral, which does a weighted sum (or [expectation](https://en.wikipedia.org/wiki/Expected_value)!) over a bunch (all) of these. Then I get:

### Bonus: basis functions
With linear regression, I can also use basis functions to match even cooler functions.
For fun, I tried polynomials by using a different \\(\Phi \\). The true function was a quadratic. This shows
trying to fit a 5 degree polynomial to it:
model_params = 6 # highest degree + 1
Phi_X_in = np.hstack(X_in**i for i in range(model_params))
Sampling priors gave me lots of squiggles. (It also reminds me of my hair a few years ago!)

I can plot the uncertainty.

I also can add a few more points from the underlying function and see how it changes.

## See Also
- Still thanks to [MLPR](http://www.inf.ed.ac.uk/teaching/courses/mlpr/2017/notes/)!
- I originally posted the bonus [here](https://gist.github.com/jessstringham/827d8582eb4e3e0c26e9b16f6105621a).
| github_jupyter |
```
import pickle as pkl
import pandas as pd
import imodels
import itertools
import os
from imodels.util.evaluate.compare_models import run_comparison
from sklearn.metrics import accuracy_score, f1_score
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
import matplotlib.pyplot as plt
from tqdm import tqdm
%load_ext autoreload
%autoreload 2
# change working directory to project root
if os.getcwd().split('/')[-1] != 'imodels':
os.chdir('..')
MODEL_COMPARISON_PATH = 'tests/test_data/comparison_data/'
MODEL_COMPARISON_FILE = MODEL_COMPARISON_PATH + 'model_comparisons.pkl'
```
# compare static performance of different models
```
df = pkl.load(open(MODEL_COMPARISON_FILE, 'rb'))['df'].round(3)
print('columns', df.columns, 'models', df.index)
```
# complexity-vs-accuracy plots for each model
```
COMPARISON_DATASETS = [
("breast-cancer", 13),
("breast-w", 15),
("credit-g", 31),
("haberman", 43),
("heart", 1574),
("labor", 4),
("vote", 56),
]
METRICS = [
('Acc.', accuracy_score),
('Time', None),
('Complexity', None)
]
def get_comparison_df(estimators):
'''Get results for running multiple estimators
'''
estimator_name = estimators[0][0]
model_comparison_file = MODEL_COMPARISON_PATH + f'{estimator_name}_comparisons.pkl'
if os.path.isfile(model_comparison_file):
result = pkl.load(open(model_comparison_file, 'rb'))['df']
else:
result = run_comparison(COMPARISON_DATASETS, METRICS, estimators, write=False, average=True, verbose=False)
pkl.dump({'df': result}, open(model_comparison_file, 'wb'))
return result
def viz_model(result):
'''Plot acc vs complexity
'''
complexities = result[result.index.str.contains('Complexity')]
accuracies = result[result.index.str.contains('Acc')]
complexity_sort_indices = complexities.argsort()
plt.plot(complexities[complexity_sort_indices], accuracies[complexity_sort_indices])
plt.xlabel('Complexity score')
plt.ylabel('Average accuracy across comparison datasets')
```
## Random Forest
```
est_rf = [
('random_forest', RandomForestClassifier(n_estimators=n, max_depth=d))
for n, d in itertools.product([2, 3, 4], [2, 3])
]
est_gb = [
('gradient_boosting', GradientBoostingClassifier(n_estimators=n, max_depth=d))
for n, d in itertools.product([2, 3, 4], [2, 3])
]
est_skope = [
('skope', imodels.SkopeRulesClassifier(n_estimators=n, max_depth=d))
for n, d in itertools.product([2, 4, 8, 16, 32, 64, 96], [2, 3])
]
est_rulefit = [
('rulefit', imodels.RuleFitClassifier(max_rules=n, tree_size=d))
for n, d in itertools.product([2, 4, 8, 16, 32, 48], [4, 8])
]
est_fplasso = [
('fplasso', imodels.FPLassoClassifier(max_rules=n, maxcardinality=c))
for n, c in itertools.product([2, 4, 8, 16, 32, 48, 96], [2, 3])
]
est_fpskope = [
('fpskope', imodels.FPSkopeClassifier(maxcardinality=c, max_depth_duplication=dd))
for c, dd in itertools.product([2, 3, 4], [1, 2, 3])
]
est_brl = [
('brl', imodels.BayesianRuleListClassifier(listlengthprior=l, maxcardinality=c))
for l, c in itertools.product([2, 4, 8, 16], [2, 3])
]
est_grl = [('grl', imodels.GreedyRuleListClassifier(max_depth=d)) for d in [2, 4, 8, 16]]
est_oner = [('oner', imodels.OneRClassifier(max_depth=d)) for d in [2, 3, 4, 5, 6, 7]]
est_brs = [('brs', imodels.BoostedRulesClassifier(n_estimators=n)) for n in [2, 4, 8, 16, 32]]
ests = [est_rf, est_gb, est_skope, est_rulefit, est_fplasso, est_fpskope, est_brl, est_grl, est_oner, est_brs]
plt.figure(dpi=250)
for est in tqdm(ests):
result = get_comparison_df(est)
complexities = result[result.index.str.contains('Complexity')]
accuracies = result[result.index.str.contains('Acc')]
complexity_sort_indices = complexities.argsort()
plt.plot(complexities[complexity_sort_indices],
accuracies[complexity_sort_indices], label=est[0][0].replace('_', ' '))
plt.xlabel('Complexity score')
plt.ylabel('Average accuracy across comparison datasets')
plt.legend(frameon=False, handlelength=1)
plt.show()
```
| github_jupyter |
# VERIFICATION TESTING
# HER2 One Scanner - Aperio FDA
- 5-Fold (80/20) split, No Holdout Set
- Truth = Categorical from Mean of 7 continuous scores
- Epoch at automatic Stop when loss<.001 change
- LeNet model, 10 layers, Dropout (0.7)
```
import numpy as np
import pandas as pd
import random
from keras.callbacks import EarlyStopping
from PIL import Image
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D, Lambda
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator
from sklearn import metrics
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import Pipeline
from sklearn.cross_validation import train_test_split
from sklearn.metrics import roc_curve, auc, classification_report
import csv
import cv2
import scipy
import os
%matplotlib inline
import matplotlib.pyplot as plt
#For single scanner
BASE_PATH = '/home/diam/Desktop/1Scanner_VerificationTest_HER2data/Aperio_FDA/'
#BASE PATH for working from home:
#BASE_PATH = '/home/OSEL/Desktop/HER2_data_categorical/'
#epochs = 10
batch_size = 32
num_classes = 3
#epochs = 35
```
## Get Data - Practice
```
#This is the version from Ravi's code:
#FDA
#X_FDA = []
#idx_FDA = []
#for index, image_filename in list(enumerate(BASE_PATH)):
# img_file = cv2.imread(BASE_PATH + '/' + image_filename)
# if img_file is not None:
#img_file = smisc.imresize(arr = img_file, size = (600,760,3))
# img_file = smisc.imresize(arr = img_file, size = (120,160,3))
# img_arr = np.asarray(img_file)
# X_FDA.append(img_arr)
# idx_FDA.append(index)
#X_FDA = np.asarray(X_FDA)
#idx_FDA = np.asarray(idx_FDA)
#random.seed(rs)
#random_id = random.sample(idx_FDA, len(idx_FDA)/2)
#random_FDA = []
#for i in random_id:
# random_FDA.append(X_FDA[i])
#random_FDA = np.asarray(random_FDA)
```
## Get Data - Real
```
def get_data(folder):
X = []
y = []
filenames = []
for hclass in os.listdir(folder):
if not hclass.startswith('.'):
if hclass in ["1"]:
label = 1
else: #label must be 1 or 2
if hclass in ["2"]:
label = 2
else:
label = 3
for image_filename in os.listdir(folder + hclass):
filename = folder + hclass + '/' + image_filename
img_file = cv2.imread(folder + hclass + '/' + image_filename)
if img_file is not None:
img_file = scipy.misc.imresize(arr=img_file, size=(120, 160, 3))
img_arr = np.asarray(img_file)
X.append(img_arr)
y.append(label)
filenames.append(filename)
X = np.asarray(X)
y = np.asarray(y)
z = np.asarray(filenames)
return X,y,filenames
X, y, z = get_data(BASE_PATH)
#print(X)
#print(y)
#print(z)
print(len(X))
print(len(y))
print(len(z))
#INTEGER ENCODE
#https://machinelearningmastery.com/how-to-one-hot-encode-sequence-data-in-python/
encoder = LabelEncoder()
y_cat = np_utils.to_categorical(encoder.fit_transform(y))
print(y_cat)
```
### Old Code
```
#encoder = LabelEncoder()
#encoder.fit(y)
#X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=10)
#encoded_y_train = encoder.transform(y_train)
#encoded_y_test = encoder.transform(y_test)
#y_train = np_utils.to_categorical(encoded_y_train)
#y_test = np_utils.to_categorical(encoded_y_test)
#X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=10)
```
## Fit Model with K-Fold X-Val
```
kf = KFold(n_splits = 5, random_state=5, shuffle=True)
print(kf.get_n_splits(y))
print(kf)
#for train_index, test_index in kf.split(y):
# X_train, X_test = X[train_index], X[test_index]
# print(train_index, test_index)
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(y_cat):
fold+=1
print("fold #{}".format(fold))
X_train = X[train]
y_train = y_cat[train]
X_test = X[test]
y_test = y_cat[test]
#encoder = LabelEncoder()
#encoder.fit(y_test)
#y_train = np_utils.to_categorical(encoder.transform(y_train))
#y_test = np_utils.to_categorical(encoder.transform(y_test))
model = Sequential()
model.add(Lambda(lambda x: x * 1./255., input_shape=(120, 160, 3), output_shape=(120, 160, 3)))
model.add(Conv2D(32, (3, 3), input_shape=(120, 160, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten()) # this converts our 3D feature maps to 1D feature vectors
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.7))
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=25, verbose=1, mode='auto')
model.fit(
X_train,
y_train,
validation_data=(X_test,y_test),
callbacks=[monitor],
shuffle=True,
batch_size=batch_size,
verbose=0,
epochs=1000)
pred = model.predict(X_test)
oos_y.append(y_test)
pred = np.argmax(pred,axis=1)
oos_pred.append(pred)
#measure the fold's accuracy
y_compare = np.argmax(y_test,axis=1) #for accuracy calculation
score = metrics.accuracy_score(y_compare, pred)
print("Fold Score (accuracy): {}".format(score))
print(pred)
```
| github_jupyter |
<div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>Introduction to MetPy</h1>
<h3>Unidata Python Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
## Overview:
* **Teaching:** 15 minutes
* **Exercises:** 15 minutes
### Questions
1. What is MetPy?
1. How is MetPy structured?
1. How are units handled in MetPy?
### Objectives
1. <a href="#whatis">What is MetPy?</a>
1. <a href="#units">Units and MetPy</a>
1. <a href="#constants">MetPy Constants</a>
1. <a href="#calculations">MetPy Calculations</a>
<a name="whatis"></a>
## What is MetPy?
MetPy is a modern meteorological toolkit for Python. It is now a maintained project of [Unidata](http://www.unidata.ucar.edu) to serve the academic meteorological community. MetPy consists of three major areas of functionality:

### Plots
As meteorologists, we have many field specific plots that we make. Some of these, such as the Skew-T Log-p require non-standard axes and are difficult to plot in most plotting software. In MetPy we've baked in a lot of this specialized functionality to help you get your plots made and get back to doing science. We will go over making different kinds of plots during the workshop.
### Calculations
Meteorology also has a common set of calculations that everyone ends up programming themselves. This is error-prone and a huge duplication of work! MetPy contains a set of well tested calculations that is continually growing in an effort to be at feature parity with other legacy packages such as GEMPAK.
### File I/O
Finally, there are a number of odd file formats in the meteorological community. MetPy has incorporated a set of readers to help you deal with file formats that you may encounter during your research.
<a name="units"></a>
## Units and MetPy
Early in our scientific careers we all learn about the importance of paying attention to units in our calculations. Unit conversions can still get the best of us and have caused more than one major technical disaster, including the crash and complete loss of the $327 million [Mars Climate Orbiter](https://en.wikipedia.org/wiki/Mars_Climate_Orbiter).
In MetPy, we use the [pint](https://pint.readthedocs.io/en/latest/) library and a custom unit registry to help prevent unit mistakes in calculations. That means that every quantity you pass to MetPy should have units attached, just like if you were doing the calculation on paper! Attaching units is easy:
```
# Import the MetPy unit registry
from metpy.units import units
length = 10.4 * units.inches
width = 20 * units.meters
print(length, width)
```
Don't forget that you can use tab completion to see what units are available! Just about every imaginable quantity is there, but if you find one that isn't, we're happy to talk about adding it.
While it may seem like a lot of trouble, let's compute the area of a rectangle defined by our length and width variables above. Without units attached, you'd need to remember to perform a unit conversion before multiplying or you would end up with an area in inch-meters and likely forget about it. With units attached, the units are tracked for you.
```
area = length * width
print(area)
```
That's great, now we have an area, but it is not in a very useful unit still. Units can be converted using the `.to()` method. While you won't see m$^2$ in the units list, we can parse complex/compound units as strings:
```
area.to('m^2')
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Create a variable named <code>speed</code> with a value of 25 knots.</li>
<li>Create a variable named <code>time</code> with a value of 1 fortnight.</li>
<li>Calculate how many furlongs you would travel in <code>time</code> at <code>speed</code></li>
</ul>
</div>
```
# Your code goes here
```
<button data-toggle="collapse" data-target="#sol1" class='btn btn-primary'>View Solution</button>
<div id="sol1" class="collapse">
<code><pre>
speed = 25 * units.knots
time = 1 * units.fortnight
distance = speed * time
print(distance.to('furlongs'))
</pre></code>
</div>
### Temperature
Temperature units are actually relatively tricky (more like absolutely tricky as you'll see). Temperature is a non-multiplicative unit - they are in a system with a reference point. That means that not only is there a scaling factor, but also an offset. This makes the math and unit book-keeping a little more complex. Imagine adding 10 degrees Celsius to 100 degrees Celsius. Is the answer 110 degrees Celsius or 383.15 degrees Celsius (283.15 K + 373.15 K)? That's why there are delta degrees units in the unit registry for offset units. For more examples and explanation you can watch [MetPy Monday #13](https://www.youtube.com/watch?v=iveJCqxe3Z4).
Let's take a look at how this works and fails:
We would expect this to fail because we cannot add two offset units (and it does fail as an "Ambiguous operation with offset unit").
<pre style='color:#000000;background:#ffffff;'><span style='color:#008c00; '>10</span> <span style='color:#44aadd; '>*</span> units<span style='color:#808030; '>.</span>degC <span style='color:#44aadd; '>+</span> <span style='color:#008c00; '>5</span> <span style='color:#44aadd; '>*</span> units<span style='color:#808030; '>.</span>degC
</pre>
On the other hand, we can subtract two offset quantities and get a delta:
```
10 * units.degC - 5 * units.degC
```
We can add a delta to an offset unit as well:
```
25 * units.degC + 5 * units.delta_degF
```
Absolute temperature scales like Kelvin and Rankine do not have an offset and therefore can be used in addition/subtraction without the need for a delta verion of the unit.
```
273 * units.kelvin + 10 * units.kelvin
273 * units.kelvin - 10 * units.kelvin
```
<div class="alert alert-success">
<b>EXERCISE</b>:
A cold front is moving through, decreasing the ambient temperature of 25 degC at a rate of 2.3 degF every 10 minutes. What is the temperature after 1.5 hours?
</div>
```
# Your code goes here
```
<button data-toggle="collapse" data-target="#sol2" class='btn btn-primary'>View Solution</button>
<div id="sol2" class="collapse">
<code><pre>
temperature_change_rate = -2.3 * units.delta_degF / (10 * units.minutes)
temperature = 25 * units.degC
dt = 1.5 * units.hours
print(temperature + temperature_change_rate * dt)
</pre></code>
</div>
<a href="#top">Top</a>
<hr style="height:2px;">
<a name="constants"></a>
## MetPy Constants
Another common place that problems creep into scientific code is the value of constants. Can you reproduce someone else's computations from their paper? Probably not unless you know the value of all of their constants. Was the radius of the earth 6000 km, 6300km, 6371 km, or was it actually latitude dependent?
MetPy has a set of constants that can be easily accessed and make your calculations reproducible. You can view a [full table](https://unidata.github.io/MetPy/latest/api/generated/metpy.constants.html#module-metpy.constants) in the docs, look at the module docstring with `metpy.constants?` or checkout what's available with tab completion.
```
import metpy.constants as mpconst
mpconst.earth_avg_radius
mpconst.dry_air_molecular_weight
```
You may also notice in the table that most constants have a short name as well that can be used:
```
mpconst.Re
mpconst.Md
```
<a href="#top">Top</a>
<hr style="height:2px;">
<a name="calculations"></a>
## MetPy Calculations
MetPy also encompasses a set of calculations that are common in meteorology (with the goal of have all of the functionality of legacy software like GEMPAK and more). The [calculations documentation](https://unidata.github.io/MetPy/latest/api/generated/metpy.calc.html) has a complete list of the calculations in MetPy.
We'll scratch the surface and show off a few simple calculations here, but will be using many during the workshop.
```
import metpy.calc as mpcalc
import numpy as np
# Make some fake data for us to work with
np.random.seed(19990503) # So we all have the same data
u = np.random.randint(0, 15, 10) * units('m/s')
v = np.random.randint(0, 15, 10) * units('m/s')
print(u)
print(v)
```
Let's use the `wind_direction` function from MetPy to calculate wind direction from these values. Remember you can look at the docstring or the website for help.
```
direction = mpcalc.wind_direction(u, v)
print(direction)
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<li>Calculate the wind speed using the `wind_speed` function.</li>
<li>Print the wind speed in m/s and mph.</li>
</div>
```
# Your code goes here
```
<button data-toggle="collapse" data-target="#sol3" class='btn btn-primary'>View Solution</button>
<div id="sol3" class="collapse">
<code><pre>
speed = mpcalc.wind_speed(u, v)
print(speed)
print(speed.to('mph'))
</pre></code>
</div>
As one final demonstration, we will calculation the dewpoint given the temperature and relative humidity:
```
mpcalc.dewpoint_rh(25 * units.degC, 75 * units.percent)
```
<a href="#top">Top</a>
<hr style="height:2px;">
| github_jupyter |
<a href="https://colab.research.google.com/github/kirubarajan/roft/blob/master/annotation/analysis/research.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Dataset Loading and Cleaning
```
!pip install fsspec gcsfs
!pip install --upgrade matplotlib
import json
import pandas as pd
import tensorflow as tf
import numpy as np
import os
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.stats
from scipy import stats
sns.set_theme(style="whitegrid")
sns.set_palette(sns.color_palette("Set2"))
DATABASE_DUMP_FILE = 'gs://roft_buckups/10-25-21.json'
from google.colab import auth
auth.authenticate_user()
import gspread
from oauth2client.client import GoogleCredentials
gc = gspread.authorize(GoogleCredentials.get_application_default())
from google.colab import data_table
data_table.enable_dataframe_formatter()
with tf.io.gfile.GFile(DATABASE_DUMP_FILE, 'r') as f:
lines = f.readlines()
db = json.loads(lines[1])
def get_df(sql_model='core.annotation'):
df = pd.DataFrame(db)
df = df[df.model == sql_model]
if 'date' in df.columns.values:
df = df.set_index('date')
return pd.json_normalize(df.fields).assign(pk=df.pk.values)
df = pd.DataFrame(db)
print(set(df["model"].tolist()))
```
### Load all the tables
```
annotation_df = get_df()
profile_df = get_df('core.profile')
generation_df = get_df('core.generation')
prompt_df = get_df('core.prompt')
playlist_df = get_df('core.playlist')
decodingstrategy_df = get_df('core.decodingstrategy')
user_df = get_df('auth.user')
```
### Modify column names to avoid duplicates across tables.
```
prompt_df = prompt_df.rename(columns={"body": "prompt_body"})
generation_df = generation_df.rename(columns={"body": "gen_body"})
decodingstrategy_df = decodingstrategy_df.rename(
columns={"name": "dec_strat", "value": "dec_strat_value"})
annotation_df["date"] = pd.to_datetime(annotation_df["date"])
```
### Merge all the relevant tables together.
```
gen_to_playlist = {}
for idx, row in playlist_df.iterrows():
shortname = row["shortname"]
version = row["version"]
generations = row["generations"]
for gen_id in generations:
gen_to_playlist[gen_id] = (shortname, version)
full_df = annotation_df.join(generation_df.set_index('pk'), on='generation')
full_df = full_df.join(prompt_df.set_index('pk'), 'prompt')
full_df = full_df.join(decodingstrategy_df.set_index('pk'), 'decoding_strategy')
full_df = full_df.join(user_df.set_index('pk'), 'annotator')
playlist_names = []
playlist_versions = []
for idx, row in full_df.iterrows():
gen_id = row["generation"]
playlist_info = gen_to_playlist[gen_id]
playlist_names.append(playlist_info[0])
playlist_versions.append(playlist_info[1])
full_df["playlist_name"] = playlist_names
full_df["playlist_version"] = playlist_versions
```
### Filter out annotations not part of Version 2
```
full_df = full_df[full_df.apply(lambda row: row["playlist_version"]=="0.2", axis=1)]
original_df = full_df
```
## Filter out unacceptable users
Methods for filtering out annotations for users who have NOT agreed to have their data analyzed (and filtering out us).
**TO HAVE YOUR PROFILE FILTERED OUT, PLEASE FILL OUT THE GOOGLE FORM AND SPECIFY "NO" TO THE QUESTION ABOUT WHETHER YOU WANT TO PARTICIPATE IN RESEARCH.**
```
SURVEY_SPREADSHEET_ID = '1j9-nqsGFhpKSas_z1-IRKORlVbmjTzJ4_u_q6u-hOvg'
KEY = "ageed_to_research"
worksheet = gc.open_by_key(SURVEY_SPREADSHEET_ID).sheet1
rows = worksheet.get_all_values()
survey_df = pd.DataFrame.from_records(rows[1:], columns=rows[0])
survey_filter_df = survey_df[survey_df[KEY] == 'Yes']
# all the users who GAVE US PERMISSION
users_filter_df = user_df[user_df.username.isin(survey_filter_df["username"].values)]
# Filter all of the data frames that we use for analysis.
full_df = full_df[full_df.annotator.isin(users_filter_df.pk)]
annotation_df = annotation_df[annotation_df.annotator.isin(users_filter_df.pk)]
```
##Filter out annotators we don't like.
ID 4334 used exploits and other scripts to manipulate the site.
The others had over 90% of annotations on the same boundary index.
```
full_df = full_df[~full_df['annotator'].isin([4415, 5349, 4644, 4334])]
```
## Filter out the extra all-human examples from Recipes
```
import random
import collections
recipe_frequencies = collections.Counter(full_df[full_df["playlist_name"]=="Recipes"]["boundary"])
print(recipe_frequencies)
avg_freq_non_final = int(np.round(np.mean([recipe_frequencies[i] for i in range(9)])))
random.seed(2342)
def filter_fn(row):
if (row["playlist_name"] == "Recipes") and (row["boundary"] == 9):
return random.random() < (avg_freq_non_final / recipe_frequencies[9])
else:
return True
full_df = full_df[full_df.apply(filter_fn, axis=1)]
recipe_frequencies = collections.Counter(full_df[full_df["playlist_name"]=="Recipes"]["boundary"])
recipe_frequencies
```
##Add survey results to the DataFrame
```
full_df = full_df.join(survey_df.set_index('username'), 'username')
```
##Process survey responses
Map the familiarity questions to a 1-5 labeling scheme and process the "major" column to a list of major codes.
```
def remap_familiarity_labels(x):
if x == "I've never heard of them.":
return 1
elif x == "I've read about them in the news or a blog post.":
return 2
elif x == "I’ve been excitedly following them.":
return 3
elif x == "I’ve used them before (either with the OpenAI API, HuggingFace Transformers, etc.).":
return 4
else:
return -1
def remap_genre_fam_labels(x):
if x == "Never":
return 1
elif x == "Once to a few times per year":
return 2
elif x == "Once to a few times per month":
return 3
elif x == "Once to a few times per week":
return 4
elif x == "Daily":
return 5
else:
return -1
full_df = full_df.rename(columns={"What did you (or what are you planning to) major/minor in?": "major",
"How often do you consult a recipe when preparing food?": "recipe_familiarity",
'How often do you read news from credible news publishers (Philadelphia Inquirer, Wall Street Journal, New York Times, etc.)?':'news_familiarity',
'How often do you read fiction on the internet (fan fiction, creative writing sub-reddits, ebooks, etc.)?':'stories_familiarity',
'familiarity':'gen_familiarity',
'Did you read the RoFT Guide before you tried the game?': 'read_guide'})
full_df['recipe_familiarity'] = full_df['recipe_familiarity'].apply(remap_genre_fam_labels)
full_df['news_familiarity'] = full_df['news_familiarity'].apply(remap_genre_fam_labels)
full_df['stories_familiarity'] = full_df['stories_familiarity'].apply(remap_genre_fam_labels)
full_df['gen_familiarity'] = full_df['gen_familiarity'].apply(remap_familiarity_labels)
```
## Process Majors
Parse the free text responses into one of 34 different major codes
```
#@title Majors
def process_major(x):
major_labels = []
CIS = ['Computer and Information Science', 'CIS', 'Computer Science', 'Computer science', 'CS', 'Comp sci', 'computer science', 'cis', 'CSCI']
MCIT = ['MICT', 'MCIT', 'Computer and Information Technology', 'Information Technology', 'Computer & Information Technology', 'Computer and information tech', 'mcit', 'OMCIT', 'Computer and Information Tech', 'Computer Information and Technology', 'Computer Information Technology', 'Computer and Info Tech', 'CIT', 'computer information technology', 'Computer Science and information Technology', 'cit']
FIN = ['Finance', 'finance', 'Business Analytics']
ACCT = ['accounting', 'Accounting']
BA = ['Business Administration']
QM = ['quantitative methods']
CMPE = ['Computer Engineering']
PHYS = ['Physics', 'physics']
COM = ['Communications', 'communications']
COG = ['cognitive science', 'Cognitive Science']
CBIO = ['Computational Biology']
ROBO = ['ROBO', 'Robotics']
LING = ['Linguistics', 'LING']
EE = ['Environmental Engineering']
ESE = ['ESE', 'electrical engineering', 'EE']
NETS = ['NETS', 'Networked and Social Systems Engineering', 'Systems Engineering']
DATS = ['Data Science', 'data science', 'DATS']
BIO = ['Biology', 'biology']
ARTH = ['Art History']
HIST = ['History', 'history', 'HIST']
PHIL = ['Philosophy']
ENT = ['Entrepreneurship']
DMD = ['DMD', 'Digital Media Design']
MATH = ['MATH', 'math', 'mathematics', 'Mathematics']
MED = ['medicine']
NEURO = ['neuroscience', 'Neuroscience']
BE = ['BE', 'bioengineering', 'Bioengineering']
CBE = ['Chemical Engineering']
CIV = ['civil engineering', 'Civil Engineer']
MEAM = ['MEAM', 'mechanical engineering', 'Mechanical Engineering']
ECON = ['Economics', 'ECON']
CGGT = ['CGGT', 'Computer Graphics and Game Technology']
SCMP = ['Scientific Computing']
if any(substring in x for substring in CIS):
major_labels.append("CIS")
if any(substring in x for substring in MCIT):
major_labels.append("MCIT")
if any(substring in x for substring in FIN):
major_labels.append("FIN")
if any(substring in x for substring in ACCT):
major_labels.append("ACCT")
if any(substring in x for substring in BA):
major_labels.append("BA")
if any(substring in x for substring in QM):
major_labels.append("QM")
if any(substring in x for substring in PHYS):
major_labels.append("PHYS")
if any(substring in x for substring in COM):
major_labels.append("COM")
if any(substring in x for substring in COG):
major_labels.append("COG")
if any(substring in x for substring in CBIO):
major_labels.append("CBIO")
if any(substring in x for substring in ROBO):
major_labels.append("ROBO")
if any(substring in x for substring in LING):
major_labels.append("LING")
if any(substring in x for substring in EE):
major_labels.append("EE")
if any(substring in x for substring in ESE):
major_labels.append("ESE")
if any(substring in x for substring in NETS):
major_labels.append("NETS")
if any(substring in x for substring in DATS):
major_labels.append("DATS")
if any(substring in x for substring in BIO):
major_labels.append("BIO")
if any(substring in x for substring in ARTH):
major_labels.append("ARTH")
if any(substring in x for substring in HIST):
major_labels.append("HIST")
if any(substring in x for substring in PHIL):
major_labels.append("PHIL")
if any(substring in x for substring in ENT):
major_labels.append("ENT")
if any(substring in x for substring in DMD):
major_labels.append("DMD")
if any(substring in x for substring in MATH):
major_labels.append("MATH")
if any(substring in x for substring in MED):
major_labels.append("MED")
if any(substring in x for substring in NEURO):
major_labels.append("NEURO")
if any(substring in x for substring in BE):
major_labels.append("BE")
if any(substring in x for substring in CIV):
major_labels.append("CIV")
if any(substring in x for substring in MEAM):
major_labels.append("MEAM")
if any(substring in x for substring in ECON):
major_labels.append("ECON")
if any(substring in x for substring in CGGT):
major_labels.append("CGGT")
if any(substring in x for substring in CMPE):
major_labels.append("CMPE")
if any(substring in x for substring in CBE):
major_labels.append("CBE")
if any(substring in x for substring in SCMP):
major_labels.append("SCMP")
if x == 'Engineering':
major_labels.append("ENG")
if x == 'Computer':
major_labels.append("MCIT")
if x == 'AI':
major_labels.append("AI")
if x == 'urban planning':
major_labels.append('URB')
if x == 'Stories':
major_labels.append('STOR')
return major_labels
full_df['major'] = full_df['major'].apply(process_major)
```
## Cleaning up
Rename columns and delete unused columns
```
## TODO: Figure out what the "decoding_strategy" and "dataset" fields actually do -- do they indicate the source generations file? If so, might be able to use to recover CTRL splits
columns_to_drop = ['attention_check', 'dec_strat', 'password', 'last_login', 'is_superuser', 'is_staff', 'first_name', 'last_name', 'email', 'is_active', 'groups', 'user_permissions', 'playlist_version', 'ageed_to_research', 'dataset', 'decoding_strategy']
full_df = full_df.drop(columns_to_drop, axis=1)
full_df = full_df.rename(columns={'system':'model','playlist_name':'dataset', 'boundary':'predicted_boundary_index', 'num_sentences':'true_boundary_index'})
full_df['true_boundary_index'] = full_df['true_boundary_index'] - 1
```
##Reorder Columns
Re-order columns so that they make sense and are easier to read
```
column_order = ['date', 'username', 'dataset', 'model', 'dec_strat_value', 'prompt', 'prompt_body', 'generation', 'gen_body','true_boundary_index', 'predicted_boundary_index', 'points', 'major', 'english', 'read_guide', 'recipe_familiarity', 'news_familiarity', 'stories_familiarity', 'gen_familiarity', 'prompt_index', 'annotator', 'date_joined', 'Timestamp', 'Email Address']
full_df[column_order]
```
# Helper Functions
```
def map_playlist_name(playlist):
"""Converts playlist names to the ones we want to use in the paper."""
if playlist == 'New York Times':
return "News"
elif playlist == 'Presidential Speeches':
return "Speeches"
elif playlist == 'Recipes':
return "Recipes"
elif playlist == 'Short Stories':
return "Stories"
def map_p_value(x):
"Converts float p values to the naames we want to use in the paper."""
if x >= 0:
return "$p={}$".format(x)
else:
return "random"
def save(filename):
plt.tight_layout()
plt.savefig(filename)
plt.show()
```
# Dataset Statistics
## Counts
```
full_df.groupby('dataset').count()
original_df.groupby('playlist_name').count()
```
## Decoding Strategies
```
df = full_df.groupby(['dataset', 'dec_strat_value']).count()
df = df.reset_index()
values = [-1, 0.0, 0.4, 1.0]
def get_dec_strat_counts(dataset):
df2 = df[df["dataset"]==dataset].filter(items=["dataset", "dec_strat_value", "pk"])
counts = [df2[df2["dec_strat_value"]==v]["pk"].tolist() for v in values]
counts = [c[0] if c else 0 for c in counts]
return counts
nytimes_counts = get_dec_strat_counts("New York Times")
df2 = df.filter(items=["dataset", "dec_strat_value", "pk"])
df2 = df2.groupby(['dataset', 'dec_strat_value']).sum().unstack()
df2 = df2.fillna(0)
df2.columns = [str(a[-1]) for a in df2.columns.to_flat_index()]
sums = df2.sum(axis=1)
df2["-1.0"] = df2["-1.0"] / sums
df2["0.0"] = df2["0.0"] / sums
df2["0.4"] = df2["0.4"] / sums
df2["1.0"] = df2["1.0"] / sums
df2 = df2.reset_index()
df2
def decide_label(i):
v = float(values[i])
if v >= 0:
return "p={}".format(v)
else:
return "rand"
def plot(dataset):
print(dataset)
ax = plt.figure(figsize=[12, 4]).subplots(1, 1)
df3 = df2[df2["dataset"]==dataset]
ax = df3.plot(kind='barh', stacked=True, width=0.1, ax=ax)
for i, c in enumerate(ax.containers):
labels = [decide_label(i) if v.get_width() > 0.0 else '' for v in c]
ax.bar_label(c, labels=labels, label_type='center', fontsize=26)
ax.get_legend().remove()
plt.grid(False)
plt.axis('off')
plt.tight_layout()
ax.set_position((0, 0, 1, 1))
ax.xaxis.set_major_locator(matplotlib.ticker.NullLocator())
ax.yaxis.set_major_locator(matplotlib.ticker.NullLocator())
plt.savefig("decoding_dist_{}.pdf".format(dataset.lower().replace(" ", "_")), transparent=True)
plt.show()
return ax
plot("Short Stories")
plot("Recipes")
plot("New York Times")
plot("Presidential Speeches")
values
```
# Analysis
## Mean points
```
def analyze_per_playlist():
info_to_return = []
playlist_names = set(playlist_df["shortname"].tolist())
model_names = set(generation_df["system"].tolist())
for playlist in playlist_names:
for model in model_names:
df = full_df[(full_df["dataset"]==playlist) &
(full_df["model"]==model)]
if len(df) > 0:
info = {"playlist": playlist,
"model": model,
"mean score": np.mean(df["points"]),
"median score": np.median(df["points"]),
"fraction_nonzero": len(df[df["points"] > 0]) / len(df),
"num_annotations": len(df)
}
info_to_return.append(info)
return pd.DataFrame(info_to_return)
analyze_per_playlist()
def analyze_per_decoding_strat():
info_to_return = []
playlist_names = set(playlist_df["shortname"].tolist())
model_names = set(generation_df["system"].tolist())
for playlist in playlist_names:
for model in model_names:
for top_p_value in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
df = full_df[(full_df["dec_strat_value"]==top_p_value) &
(full_df["dataset"]==playlist) &
(full_df["model"]==model)]
if len(df) > 0:
info = {"p-value": top_p_value,
"playlist": playlist,
"model": model,
"mean_score": np.mean(df["points"]),
"std_dev": np.std(df["points"]),
"median_score": np.median(df["points"]),
"fraction_nonzero": len(df[df["points"] > 0]) / len(df),
"num_annotations": len(df),
}
info_to_return.append(info)
return pd.DataFrame(info_to_return)
per_p_df = analyze_per_decoding_strat()
per_p_df
```
### Comparison of XL models across p-values for NYT and Reddit
**Conclusion**: Sampling from full distribution (p=1.0) has worse quality. Argmax sampling (with repetition penalty) seems to be consistently better for text quality -- this is consistent with Daphne's research.
```
filtered_df = full_df[
(full_df["model"]=="gpt2-xl") &
(full_df["dataset"]=="Short Stories") | (full_df["dataset"]=="New York Times") &
(full_df["dec_strat_value"] != -1)]
filtered_df["dec_strat_value"] = filtered_df["dec_strat_value"].apply(map_p_value)
p = sns.barplot(x="dataset", y="points", hue="dec_strat_value", data=filtered_df)
# p.set_title("Comparison of Generation Performance of GPT2-XL across values of top-p")
p.set_xlabel("Dataset", fontsize = 16)
p.set_ylabel("Mean Score", fontsize = 16)
p.legend(loc="lower right").set_title("Method")
plt.tight_layout()
plt.savefig("topp.pdf")
```
### Comparison of GPT2-XL p=0.4 across reddit, nyt, and recipes
**Conclusion:** We see that Recipes are the most difficult, then NYT and short stories. This shows that generation systems struggle with structured text and are better at more open-ended generations (?). Also shows that domain knowledge is important. (although NYT being better than reddit is interesting).
```
filtered_df = full_df[((full_df["model"]=="gpt2-xl") | (full_df["model"]=="ctrl")) &
(full_df["dec_strat_value"]==0.4)]
filtered_df["dec_strat_value"] = filtered_df["dec_strat_value"].apply(map_p_value)
filtered_df["dataset"] = filtered_df["dataset"].apply(map_playlist_name)
p = sns.barplot(x="dataset", y="points", data=filtered_df, order=["News", "Recipes", "Stories", "Speeches"])
# p.set_title("Comparison of Generation Performance of GPT2-XL across Textual Genre")
p.set_xlabel("")
p.set_ylabel("Mean Score", fontsize = 16)
plt.savefig("genre.pdf")
```
### Comparison of Model Size across same dataset and p-value
**Conclusion**: Bigger Models are better (good sanity check, nice to know this is confirmed) -- don't use GPT3 here because we only have 89 annotations
```
filtered_df = per_p_df[(per_p_df["playlist"]=="Short Stories") &
(per_p_df["p-value"]==0.4) &
(per_p_df["model"]!="davinci")]
p = sns.barplot(x="model", y="mean_score", hue="p-value", data=filtered_df)
p.set_title("Comparison of Generation Performance on Short Stories across different size models")
p.set_xlabel("Model", fontsize = 16)
p.set_ylabel("Mean Score", fontsize = 16)
```
### Comparison of fine-tuning performance
```
filtered_df = per_p_df[(per_p_df["playlist"]=="Recipes")]
p = sns.barplot(x="model", y="mean_score", hue="p-value", data=filtered_df)
p.set_title("Comparison of Generation Performance of Fine-tuning on Recipes")
p.set_xlabel("Model", fontsize = 16)
p.set_ylabel("Mean Score", fontsize = 16)
```
###Mean Points for users that have no familiarity with generated text vs. users that do
```
df_familiarity=full_df.groupby(["username", "gen_familiarity"]).points.mean().reset_index()
df_familiarity.rename({"gen_familiarity": "Familiarity with NLG", "points": "Avg points earned"}, axis=1, inplace=True)
sns.violinplot(x="Familiarity with NLG", y="Avg points earned", data=df_familiarity)
plt.ylim([-0.6, 5.5])
```
### Mean Points for native vs. non-native English speakers
```
df_language=full_df.groupby(["username", "english"]).points.mean().reset_index()
df_language.rename({"english": "Native English speaker", "points": "Avg points earned"}, axis=1, inplace=True)
sns.violinplot(x="Native English speaker", y="Avg points earned", data=df_language)
plt.ylim([-0.6, 5.5])
```
##Mean points for users with familiarity in a given domain
TODO
## Point Distributions
### Per playlist
```
import collections
df = full_df[full_df["model"].isin(('ctrl', 'finetuned', 'gpt2-xl'))]
df = df[df["dec_strat_value"] == 0.4]
playlists = set(df["dataset"].tolist())
to_plot = []
for playlist in playlists:
points = df[df["dataset"]==playlist].points.tolist()
points = collections.Counter(points)
heights = np.array(list(points.values())) / sum(points.values())
for point_value, height in zip(points, heights):
to_plot.append({"Points earned": point_value,
"Fraction of annotations":height,
"Domain": map_playlist_name(playlist)})
to_plot = pd.DataFrame(to_plot)
sns.barplot(x="Points earned", y="Fraction of annotations", hue="Domain", data=to_plot)
save("point_distribution.pdf")
```
### Per annotator
```
full_df.groupby('annotator').points.mean().plot.hist(
title='Achieved Points Distribution'
)
```
## Find bad annotators
```
full_df["predicted_boundary_index"]
full_df.groupby('annotator').sum()
def find_problematic_annotators(df):
"""Analyze whether annotators improve in aggregate over k annotations."""
too_many_same_df = []
for annotator in set(df["annotator"].tolist()):
guesses = df[df["annotator"] == annotator]["predicted_boundary_index"].tolist()
# Check of they almost always guessed the same boundary
modal_guess = scipy.stats.mode(guesses)
fraction_modal = modal_guess.count[0] / len(guesses)
if fraction_modal > 0.9:
too_many_same_df.append((annotator, modal_guess, fraction_modal, len(guesses)))
return pd.DataFrame(
too_many_same_df, columns=["annotator", "guess", "fractional_model", "num_annotations"])
df = find_problematic_annotators(full_df)
df
ys = full_df[full_df["annotator"] == 4415].sort_values("date")["predicted_boundary_index"].tolist()
plt.scatter(x=range(len(ys)), y=ys)
plt.xlabel("Annotation")
plt.ylabel("Chosen boundary")
```
## Annotator performance over time
Of the annotators who did at least K annotations, plot their mean score over time
```
def analyze_progress(df, k=50):
"""Analyze whether annotators improve in aggregate over k annotations."""
all_score_series = []
annotators = df[df["pk"] > k].reset_index()["annotator"].tolist()
for annotator in annotators:
annotations = annotation_df[annotation_df["annotator"] == annotator]
score_series = annotations.sort_values("date")["points"][:k].tolist()
all_score_series.append(score_series)
return np.array(all_score_series), len(annotators)
def analyze_and_plot(s, n, k):
print(n)
data = np.array(s)
data = np.mean(data, axis=0)
print("spearmanr: %.2f, %f" % stats.spearmanr(range(k), data))
print("pearsonr: %.2f, %f" % stats.pearsonr(range(k), data))
plt.plot(range(1, k+1), data)
plt.ylabel("Mean score")
plt.xlabel("$n$th annotation")
plt.title("Performance over time")
plt.show()
```
### Analysis of annotators getting better over time (1st Batch)
**Conclusion**: We see no correlation on the first batch of annotators. They do not improve over time
```
k = 50
s, n = analyze_progress(full_df[(full_df['date'] < '2021-10-1')].groupby('annotator').count(), k)
analyze_and_plot(s, n, k)
k = 100
s, n = analyze_progress(full_df[(full_df['date'] < '2021-10-1')].groupby('annotator').count(), k)
analyze_and_plot(s, n, k)
k = 200
s, n = analyze_progress(full_df[(full_df['date'] < '2021-10-1')].groupby('annotator').count(), k)
analyze_and_plot(s, n, k)
```
### Analysis of annotators getting better over time (2nd Batch)
**Conclusion**: We actually see a positive correlation (over 0.3) for k=50, 100, and 200 on the second batch of annotators. They DO actually improve over time. This suggests that, with the correct instructions, annotators may be able to be taught how to improve at detecting generated text.
```
k = 50
s, n = analyze_progress(full_df[(full_df['date'] > '2021-10-1')].groupby('annotator').count(), k)
analyze_and_plot(s, n, k)
k = 100
s, n = analyze_progress(full_df[(full_df['date'] > '2021-10-1')].groupby('annotator').count(), k)
analyze_and_plot(s, n, k)
k = 200
s, n = analyze_progress(full_df[(full_df['date'] > '2021-10-1')].groupby('annotator').count(), k)
analyze_and_plot(s, n, k)
```
## Inter-annotator agreement
### Difference in abilities
```
def analyze_ability_differences(df, k=50):
"""Analyze whether annotators improve in aggregate over k annotations."""
df = df.groupby('annotator').count()
all_score_series = []
annotators = df[df["pk"] >= k].reset_index()["annotator"].tolist()
for annotator in annotators:
annotations = annotation_df[annotation_df["annotator"] == annotator]
score_series = annotations.sort_values("date")["points"][:k].tolist()
all_score_series.append(score_series)
return all_score_series
scores = analyze_ability_differences(full_df, 50)
sum_scores = [sum(s) for s in scores]
print("Mean score:", np.mean(sum_scores))
print("Std score:", np.std(sum_scores))
df = full_df.groupby('annotator').count()
annotators = df[df["pk"] >= 50].reset_index()["annotator"].tolist()
df = full_df[full_df["annotator"].isin(annotators)]
df = df.groupby('annotator').sum()
# df = annotation_df.groupby('generation').boundary.apply(list).reset_index()
# df = df[df.apply(lambda row: len(row["boundary"]) >= 4, axis=1)]
```
### Fraction agreement
For every pair of annotators who annotated the same generaton, what fraction guessed the same boundary?
```
annotations_per_gen = annotation_df.groupby('generation')
num_annotations_per_gen = annotations_per_gen.points.count()
def analyze_fraction_agreements():
generation_ids = set(full_df["generation"].tolist())
annotations_per_gen = full_df.groupby('generation')
overall_num_annotations = 0
overall_num_agreements = 0
x = annotations_per_gen.predicted_boundary_index.value_counts()
for idx, generation in enumerate(generation_ids):
chosen_boundaries = x[generation]
chosen_boundaries = {k: chosen_boundaries[k] for k in chosen_boundaries.keys()}
total_annotations = sum(chosen_boundaries.values())
if total_annotations > 1:
total_agreements = sum(v for v in chosen_boundaries.values() if v > 1)
overall_num_annotations += total_annotations
overall_num_agreements += total_agreements
print("Out of {} total annotations on generations with >1 annotation, {} were in agreement with another annotation on the true boundary position. That is {}".format(
overall_num_annotations, overall_num_agreements, overall_num_agreements/overall_num_annotations
))
analyze_fraction_agreements()
# TODO: Figure out what the baseline of random guessing would be,
```
### Krippendorf's Alpha
```
import nltk.metrics
from nltk.metrics.agreement import AnnotationTask
df = full_df.groupby('generation').predicted_boundary_index.apply(list).reset_index()
df = df[df.apply(lambda row: len(row["predicted_boundary_index"]) >= 4, axis=1)]
annotation_data = []
for idx, row in full_df.iterrows():
coder = row["annotator"]
item = row["generation"]
label = row["predicted_boundary_index"]
annotation_data.append((coder,item,label))
ann_task = AnnotationTask(annotation_data)
print(ann_task.alpha())
```
## Profile Statistics
```
def user_stats(df, name):
data = {"name": name}
data["num_participants"] = len(np.unique(df["annotator"]).tolist())
data["num_annotations"] = len(df)
data["mean_annotations_per_participant"] = data["num_annotations"] / data["num_participants"]
data["mean_points"] = np.mean(df["points"])
data["std_points"] = np.std(df["points"])
return data
all_data = []
all_data.append(user_stats(full_df, "overall"))
all_data.append(user_stats(full_df[(full_df['date'] <= '2021-10-1')], "Section A"))
all_data.append(user_stats(full_df[(full_df['date'] > '2021-10-1')], "Section B"))
pd.DataFrame(all_data)
```
# Time Tracking Analysis
TODO: double check that timestamps are properly merged with the correct annotations. Also find a better way to calculate time delta than shift down by 1 and filter.
```
timing_df = get_df('core.timestamp')
timing_df = timing_df.rename(columns={'date': 'timestamp'}).merge(full_df, left_on='annotation', right_on='pk')
timing_df['timestamp'] = pd.to_datetime(timing_df.timestamp)
timing_df['delta'] = timing_df.timestamp - timing_df.timestamp.shift(1)
timing_df['delta'] = timing_df.delta.dt.components.milliseconds / 1000.0 + timing_df.delta.dt.components.seconds + timing_df.delta.dt.components.minutes * 60.0
timing_df = timing_df[timing_df.delta < 120] # TODO: Do this more properly instead of shift and filter
```
## Median Time Delta
```
timing_df.delta.median()
```
## Correlation Between Time Delta and Decoding Hyperparameter (top-p)
```
timing_df[['dec_strat_value', 'delta']].corr()
```
## Time Delta Distribution
```
timing_df.delta.hist(bins=100, range=(0,10))
#plot( kind='hist', range=(0, 60), bins=30, figsize=(20, 10), title=('Turn Duration Distribution (seconds)') )
```
## Correlation Between Time Delta and Sentence Length
```
timing_df['gen_body_length'] = timing_df.gen_body.str.len()
timing_df.groupby('annotation').agg({'delta': 'sum', 'gen_body_length': 'mean'}).corr()
```
## Correlation Between Time Delta and Points
```
timing_df.groupby('annotation').agg({'delta': 'sum', 'points': 'mean'}).corr()
```
## Comparison of Average Time Deltas Between Text Domains
```
# Median
timing_df.groupby('dataset').agg({'delta': 'median'}).plot.bar()
# Mean
timing_df.groupby('dataset').agg({'delta': 'mean'}).plot.bar()
```
## Comparison of Average Time Deltas Between Models
```
# Median
timing_df.groupby('model').agg({'delta': 'median'}).plot.bar()
# Mean
timing_df.groupby('model').agg({'delta': 'mean'}).plot.bar()
```
| github_jupyter |
We build a multi-layer perceptron with its hidden layers batch normalized, and contrast it with the version without
batch normalization.
We train and evaluate both versions of the multi-layer perceptron on MNIST dataset.
```
import os
import gzip
import numpy as np
import matplotlib.pyplot as plt
import autodiff as ad
from autodiff import initializers
from autodiff import optimizers
random_state = np.random.RandomState(0)
def read_mnist_labels(fn):
with gzip.open(fn, 'rb') as f:
content = f.read()
num_images = int.from_bytes(content[4:8], byteorder='big')
labels = np.zeros((num_images, 10), dtype=np.float32)
indices = np.fromstring(content[8:], dtype=np.uint8)
labels[range(num_images), indices] += 1
return labels
def read_mnist_images(fn):
with gzip.open(fn, 'rb') as f:
content = f.read()
num_images = int.from_bytes(content[4:8], byteorder='big')
height = int.from_bytes(content[8:12], byteorder='big')
width = int.from_bytes(content[12:16], byteorder='big')
images = np.fromstring(content[16:], dtype=np.uint8).reshape((num_images, height, width))
images = images.astype(np.float32) / 255.
return images
```
Make sure you have the downloaded the following 4 files, and place them under the current directory.
```
train_images = read_mnist_images('train-images-idx3-ubyte.gz')
train_labels = read_mnist_labels('train-labels-idx1-ubyte.gz')
test_images = read_mnist_images('t10k-images-idx3-ubyte.gz')
test_labels = read_mnist_labels('t10k-labels-idx1-ubyte.gz')
tni = initializers.TruncatedNormalInitializer(mean=0.0, stddev=0.01, seed=0)
zi = initializers.ZerosInitializer()
oi = initializers.OnesInitializer()
```
Build the version of MLP with batch norm. Note the function `fused_batch_norm` takes the moving statistics as input. They will be updated in training mode, and treated as estimates of population statistics in test mode.
```
def build_batch_norm(is_training=True, epsilon=1e-3, decay=0.997):
inputs = ad.placeholder((None, 784))
labels = ad.placeholder((None, 10))
weight1 = ad.variable((784, 100), tni)
offset1 = ad.variable((100,), zi)
scale1 = ad.variable((100,), oi)
moving_mean1 = ad.variable((100,), zi, trainable=False)
moving_variance1 = ad.variable((100,), oi, trainable=False)
weight2 = ad.variable((100, 100), tni)
offset2 = ad.variable((100,), zi)
scale2 = ad.variable((100,), oi)
moving_mean2 = ad.variable((100,), zi, trainable=False)
moving_variance2 = ad.variable((100,), oi, trainable=False)
weight3 = ad.variable((100, 10), tni)
bias3 = ad.variable((10,), zi)
hidden1 = ad.matmul(inputs, weight1)
hidden1 = ad.fused_batch_norm(
hidden1, scale1, offset1, moving_mean1, moving_variance1,
epsilon=epsilon, decay=decay, is_training=is_training)
hidden1 = ad.sigmoid(hidden1)
hidden2 = ad.matmul(hidden1, weight2)
hidden2 = ad.fused_batch_norm(
hidden2, scale2, offset2, moving_mean2, moving_variance2,
epsilon=epsilon, decay=decay, is_training=is_training)
hidden2 = ad.sigmoid(hidden2)
logits = ad.matmul(hidden2, weight3) + bias3
loss = ad.softmax_cross_entropy_loss(labels, logits)
return inputs, labels, logits, loss
```
Build the version of MLP without batch norm.
```
def build_mlp():
inputs = ad.placeholder((None, 784))
labels = ad.placeholder((None, 10))
weight1 = ad.variable((784, 100), tni)
bias1 = ad.variable((100,), zi)
weight2 = ad.variable((100, 100), tni)
bias2 = ad.variable((100,), zi)
weight3 = ad.variable((100, 10), tni)
bias3 = ad.variable((10,), zi)
hidden1 = ad.matmul(inputs, weight1) + bias1
hidden1 = ad.sigmoid(hidden1)
hidden2 = ad.matmul(hidden1, weight2) + bias2
hidden2 = ad.sigmoid(hidden2)
logits = ad.matmul(hidden2, weight3) + bias3
loss = ad.softmax_cross_entropy_loss(labels, logits)
return inputs, labels, logits, loss
```
We create three separate graphs, which hold the MLP w/ BN in training mode, MLP w/ BN in test mode, and the regular MLP w/o BN.
```
graph_bn = ad.Graph()
with graph_bn.as_default_graph():
(inputs_bn, labels_bn, logits_bn, loss_bn,
) = build_batch_norm(is_training=True)
graph_bn_test = ad.Graph()
with graph_bn_test.as_default_graph():
(inputs_bn_test, labels_bn_test, logits_bn_test, loss_bn_test,
) = build_batch_norm(is_training=False)
graph = ad.Graph()
with graph.as_default_graph():
inputs, labels, logits, loss = build_mlp()
```
Create three `RunTime` instances, so the three graphs can be run separately.
```
# MLP w/ BN in training mode
graph_bn.initialize_variables()
runtime_bn = ad.RunTime()
graph_bn.set_runtime(runtime_bn)
# MLP w/ BN in test mode
graph_bn_test.initialize_variables()
runtime_bn_test = ad.RunTime()
graph_bn_test.set_runtime(runtime_bn_test)
# MLP w/o BN
graph.initialize_variables()
runtime = ad.RunTime()
graph.set_runtime(runtime)
# For BN, get the references to the variable nodes for training and test graph
# so we can assign variable's value in training graph to test graph
weights_bn = graph_bn.get_variables(False)
weights_bn_test = graph_bn_test.get_variables(False)
gd = optimizers.GradientDescentOptimizer(alpha=0.01)
```
As we train both MLPs, we compute the accuracy on test set every 50 mini-batches.
```
iterations = 30000
batch = 50
accuracies_bn = []
accuracies = []
for i in range(iterations):
which = random_state.choice(train_images.shape[0], batch, False)
inputs_val = train_images[which].reshape((-1, 784))
labels_val = train_labels[which]
feed_dict_bn = {inputs_bn: inputs_val, labels_bn: labels_val}
feed_dict = {inputs: inputs_val, labels: labels_val}
with runtime_bn.forward_backward_cycle():
gd.optimize(loss_bn, feed_dict_bn)
with runtime.forward_backward_cycle():
gd.optimize(loss, feed_dict)
# compute test accuracy every 50 mini batches
if i % 50 == 0:
inputs_val = test_images.reshape((-1, 784))
labels_val = test_labels
feed_dict_bn_test = {inputs_bn_test: inputs_val}
feed_dict = {inputs: inputs_val}
# assgin variable's value in training graph to test graph
for w_bn_test, w_bn in zip(weights_bn_test, weights_bn):
w_bn_test.set_val(w_bn.val)
with runtime_bn_test.forward_backward_cycle():
logits_bn_test_val = logits_bn_test.forward(feed_dict_bn_test)
with runtime.forward_backward_cycle():
logits_val = logits.forward(feed_dict)
acc_bn = np.mean(np.argmax(logits_bn_test_val, axis=1) == np.argmax(labels_val, axis=1))
acc = np.mean(np.argmax(logits_val, axis=1) == np.argmax(labels_val, axis=1))
accuracies_bn.append((i, acc_bn))
accuracies.append((i, acc))
accuracies_bn = np.array(accuracies_bn)
accuracies = np.array(accuracies)
```
Test accuracy is plot as a function of training iterations.
**The MLP w/ BN clearly converges faster, and generalizes better the version w/o BN.**
```
plt.plot(accuracies_bn[:, 0], accuracies_bn[:, 1], color='r')
plt.plot(accuracies[:, 0], accuracies[:, 1], color='b')
plt.ylim([0.8, 1.])
plt.legend(['w/ batch norm', 'w/o batch norm'])
plt.xlabel('iterations')
plt.ylabel('test accuracy')
fig = plt.gcf()
fig.set_size_inches(12, 6)
plt.show()
```
| github_jupyter |
# References
I've made use of some great kernels already - check them out and give them an upvote if any of this is useful!
### Preprocessing
- https://www.kaggle.com/christofhenkel/how-to-preprocessing-when-using-embeddings
- https://www.kaggle.com/theoviel/improve-your-score-with-text-preprocessing-v2
### Model Architecture
-- https://www.kaggle.com/tunguz/bi-gru-cnn-poolings-gpu-kernel-version
### Other
- https://www.kaggle.com/dborkan/benchmark-kernel
- https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/discussion/87245
# Import Libraries
```
import gc
import re
import operator
import numpy as np
import pandas as pd
from gensim.models import KeyedVectors
from sklearn import model_selection
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Embedding, Input, Dense, CuDNNGRU,concatenate, Bidirectional, SpatialDropout1D, Conv1D, GlobalAveragePooling1D, GlobalMaxPooling1D
from keras.optimizers import RMSprop, Adam
from keras.models import Model
from keras.callbacks import EarlyStopping
import seaborn as sns
```
# Import Data
Let's have a little look at the data.
```
train = pd.read_csv("../input/jigsaw-unintended-bias-in-toxicity-classification/train.csv")
test = pd.read_csv("../input/jigsaw-unintended-bias-in-toxicity-classification/test.csv")
print("Train shape : ",train.shape)
print("Test shape : ",test.shape)
train.head()
test.head()
```
The extra features in the training set must be for analysing the bias. This is going to be an interesting competition with such a metric!
```
# Only 13GB of ram available, we gotta be careful !
df = pd.concat([train[['id','comment_text']], test], axis=0)
del(train, test)
gc.collect()
```
# Embeddings
To start we'll just take the FastText Common Crawl embeddings. Later, we'll hopefully combine multiple embeddings.
```
ft_common_crawl = '../input/fasttext-crawl-300d-2m/crawl-300d-2M.vec'
embeddings_index = KeyedVectors.load_word2vec_format(ft_common_crawl)
gc.collect()
```
# Preprocessing Text
As with most NLP tasks, we will start by using some pre-trained embeddings for our words. This provides us with a numerical representation of our input that we can use for modelling. Mapping words to embeddings isn't always straight forward, however: the data may not be very tidy.
The first step, then, is to ensure we get as many words mapped to a suitable embedding as possible. To do this, we'll make use of two excellent kernels:
- https://www.kaggle.com/christofhenkel/how-to-preprocessing-when-using-embeddings
- https://www.kaggle.com/theoviel/improve-your-score-with-text-preprocessing-v2
```
def build_vocab(texts):
sentences = texts.apply(lambda x: x.split()).values
vocab = {}
for sentence in sentences:
for word in sentence:
try:
vocab[word] += 1
except KeyError:
vocab[word] = 1
return vocab
def check_coverage(vocab, embeddings_index):
known_words = {}
unknown_words = {}
nb_known_words = 0
nb_unknown_words = 0
for word in vocab.keys():
try:
known_words[word] = embeddings_index[word]
nb_known_words += vocab[word]
except:
unknown_words[word] = vocab[word]
nb_unknown_words += vocab[word]
pass
print('Found embeddings for {:.3%} of vocab'.format(len(known_words) / len(vocab)))
print('Found embeddings for {:.3%} of all text'.format(nb_known_words / (nb_known_words + nb_unknown_words)))
unknown_words = sorted(unknown_words.items(), key=operator.itemgetter(1))[::-1]
return unknown_words
```
We will lower() all words, then look up those that do not appear in lower case in the embeddings but do in upper case, and add them.
```
df['comment_text'] = df['comment_text'].apply(lambda x: x.lower())
gc.collect()
vocab = build_vocab(df['comment_text'])
oov = check_coverage(vocab, embeddings_index)
oov[:10]
gc.collect()
```
Immediately we see contractions are an issue for FastText (such as "was not" -> "wasn't"). Let's try and fix this.
```
contraction_mapping = {"ain't": "is not", "aren't": "are not","can't": "cannot", "'cause": "because", "could've": "could have", "couldn't": "could not", "didn't": "did not", "doesn't": "does not", "don't": "do not", "hadn't": "had not", "hasn't": "has not", "haven't": "have not", "he'd": "he would","he'll": "he will", "he's": "he is", "how'd": "how did", "how'd'y": "how do you", "how'll": "how will", "how's": "how is", "I'd": "I would", "I'd've": "I would have", "I'll": "I will", "I'll've": "I will have","I'm": "I am", "I've": "I have", "i'd": "i would", "i'd've": "i would have", "i'll": "i will", "i'll've": "i will have","i'm": "i am", "i've": "i have", "isn't": "is not", "it'd": "it would", "it'd've": "it would have", "it'll": "it will", "it'll've": "it will have","it's": "it is", "let's": "let us", "ma'am": "madam", "mayn't": "may not", "might've": "might have","mightn't": "might not","mightn't've": "might not have", "must've": "must have", "mustn't": "must not", "mustn't've": "must not have", "needn't": "need not", "needn't've": "need not have","o'clock": "of the clock", "oughtn't": "ought not", "oughtn't've": "ought not have", "shan't": "shall not", "sha'n't": "shall not", "shan't've": "shall not have", "she'd": "she would", "she'd've": "she would have", "she'll": "she will", "she'll've": "she will have", "she's": "she is", "should've": "should have", "shouldn't": "should not", "shouldn't've": "should not have", "so've": "so have","so's": "so as", "this's": "this is","that'd": "that would", "that'd've": "that would have", "that's": "that is", "there'd": "there would", "there'd've": "there would have", "there's": "there is", "here's": "here is","they'd": "they would", "they'd've": "they would have", "they'll": "they will", "they'll've": "they will have", "they're": "they are", "they've": "they have", "to've": "to have", "wasn't": "was not", "we'd": "we would", "we'd've": "we would have", "we'll": "we will", "we'll've": "we will have", "we're": "we are", "we've": "we have", "weren't": "were not", "what'll": "what will", "what'll've": "what will have", "what're": "what are", "what's": "what is", "what've": "what have", "when's": "when is", "when've": "when have", "where'd": "where did", "where's": "where is", "where've": "where have", "who'll": "who will", "who'll've": "who will have", "who's": "who is", "who've": "who have", "why's": "why is", "why've": "why have", "will've": "will have", "won't": "will not", "won't've": "will not have", "would've": "would have", "wouldn't": "would not", "wouldn't've": "would not have", "y'all": "you all", "y'all'd": "you all would","y'all'd've": "you all would have","y'all're": "you all are","y'all've": "you all have","you'd": "you would", "you'd've": "you would have", "you'll": "you will", "you'll've": "you will have", "you're": "you are", "you've": "you have" }
del(vocab,oov)
gc.collect()
def known_contractions(embed):
known = []
for contract in contraction_mapping:
if contract in embed:
known.append(contract)
return known
print("- Known Contractions -")
print(" FastText :")
print(known_contractions(embeddings_index))
def clean_contractions(text, mapping):
specials = ["’", "‘", "´", "`"]
for s in specials:
text = text.replace(s, "'")
text = ' '.join([mapping[t] if t in mapping else t for t in text.split(" ")])
return text
df['comment_text'] = df['comment_text'].apply(lambda x: clean_contractions(x, contraction_mapping))
vocab = build_vocab(df['comment_text'])
oov = check_coverage(vocab, embeddings_index)
oov[:10]
```
Looks like punctuation is the next issue here, so let's sort it out.
```
del(vocab,oov)
gc.collect()
punct = "/-'?!.,#$%\'()*+-/:;<=>@[\\]^_`{|}~" + '""“”’' + '∞θ÷α•à−β∅³π‘₹´°£€\×™√²—–&'
def unknown_punct(embed, punct):
unknown = ''
for p in punct:
if p not in embed:
unknown += p
unknown += ' '
return unknown
print(unknown_punct(embeddings_index, punct))
punct_mapping = {"_":" ", "`":" "}
def clean_special_chars(text, punct, mapping):
for p in mapping:
text = text.replace(p, mapping[p])
for p in punct:
text = text.replace(p, f' {p} ')
return text
df['comment_text'] = df['comment_text'].apply(lambda x: clean_special_chars(x, punct, punct_mapping))
vocab = build_vocab(df['comment_text'])
oov = check_coverage(vocab, embeddings_index)
oov[:100]
```
There is a lot of words here that just aren't going to have any embeddings. We could go further and try to correct mispellings, but that is likely a small improvement we can worry about when we're trying to improve the model further.
```
del(vocab,oov)
gc.collect()
```
## Swears
Let's replace any swear words we don't have an embedding for with something we do ;)
```
swear_words = [
' 4r5e ',
' 5h1t ',
' 5hit ',
' a55 ',
' anal ',
' anus ',
' ar5e ',
' arrse ',
' arse ',
' ass ',
' ass-fucker ',
' asses ',
' assfucker ',
' assfukka ',
' asshole ',
' assholes ',
' asswhole ',
' a_s_s ',
' b!tch ',
' b00bs ',
' b17ch ',
' b1tch ',
' ballbag ',
' balls ',
' ballsack ',
' bastard ',
' beastial ',
' beastiality ',
' bellend ',
' bestial ',
' bestiality ',
' biatch ',
' bitch ',
' bitcher ',
' bitchers ',
' bitches ',
' bitchin ',
' bitching ',
' bloody ',
' blow job ',
' blowjob ',
' blowjobs ',
' boiolas ',
' bollock ',
' bollok ',
' boner ',
' boob ',
' boobs ',
' booobs ',
' boooobs ',
' booooobs ',
' booooooobs ',
' breasts ',
' buceta ',
' bugger ',
' bum ',
' bunny fucker ',
' butt ',
' butthole ',
' buttmuch ',
' buttplug ',
' c0ck ',
' c0cksucker ',
' carpet muncher ',
' cawk ',
' chink ',
' cipa ',
' cl1t ',
' clit ',
' clitoris ',
' clits ',
' cnut ',
' cock ',
' cock-sucker ',
' cockface ',
' cockhead ',
' cockmunch ',
' cockmuncher ',
' cocks ',
' cocksuck ',
' cocksucked ',
' cocksucker ',
' cocksucking ',
' cocksucks ',
' cocksuka ',
' cocksukka ',
' cok ',
' cokmuncher ',
' coksucka ',
' coon ',
' cox ',
' crap ',
' cum ',
' cummer ',
' cumming ',
' cums ',
' cumshot ',
' cunilingus ',
' cunillingus ',
' cunnilingus ',
' cunt ',
' cuntlick ',
' cuntlicker ',
' cuntlicking ',
' cunts ',
' cyalis ',
' cyberfuc ',
' cyberfuck ',
' cyberfucked ',
' cyberfucker ',
' cyberfuckers ',
' cyberfucking ',
' d1ck ',
' damn ',
' dick ',
' dickhead ',
' dildo ',
' dildos ',
' dink ',
' dinks ',
' dirsa ',
' dlck ',
' dog-fucker ',
' doggin ',
' dogging ',
' donkeyribber ',
' doosh ',
' duche ',
' dyke ',
' ejaculate ',
' ejaculated ',
' ejaculates ',
' ejaculating ',
' ejaculatings ',
' ejaculation ',
' ejakulate ',
' f u c k ',
' f u c k e r ',
' f4nny ',
' fag ',
' fagging ',
' faggitt ',
' faggot ',
' faggs ',
' fagot ',
' fagots ',
' fags ',
' fanny ',
' fannyflaps ',
' fannyfucker ',
' fanyy ',
' fatass ',
' fcuk ',
' fcuker ',
' fcuking ',
' feck ',
' fecker ',
' felching ',
' fellate ',
' fellatio ',
' fingerfuck ',
' fingerfucked ',
' fingerfucker ',
' fingerfuckers ',
' fingerfucking ',
' fingerfucks ',
' fistfuck ',
' fistfucked ',
' fistfucker ',
' fistfuckers ',
' fistfucking ',
' fistfuckings ',
' fistfucks ',
' flange ',
' fook ',
' fooker ',
' fuck ',
' fucka ',
' fucked ',
' fucker ',
' fuckers ',
' fuckhead ',
' fuckheads ',
' fuckin ',
' fucking ',
' fuckings ',
' fuckingshitmotherfucker ',
' fuckme ',
' fucks ',
' fuckwhit ',
' fuckwit ',
' fudge packer ',
' fudgepacker ',
' fuk ',
' fuker ',
' fukker ',
' fukkin ',
' fuks ',
' fukwhit ',
' fukwit ',
' fux ',
' fux0r ',
' f_u_c_k ',
' gangbang ',
' gangbanged ',
' gangbangs ',
' gaylord ',
' gaysex ',
' goatse ',
' God ',
' god-dam ',
' god-damned ',
' goddamn ',
' goddamned ',
' hardcoresex ',
' hell ',
' heshe ',
' hoar ',
' hoare ',
' hoer ',
' homo ',
' hore ',
' horniest ',
' horny ',
' hotsex ',
' jack-off ',
' jackoff ',
' jap ',
' jerk-off ',
' jism ',
' jiz ',
' jizm ',
' jizz ',
' kawk ',
' knob ',
' knobead ',
' knobed ',
' knobend ',
' knobhead ',
' knobjocky ',
' knobjokey ',
' kock ',
' kondum ',
' kondums ',
' kum ',
' kummer ',
' kumming ',
' kums ',
' kunilingus ',
' l3itch ',
' labia ',
' lmfao ',
' lust ',
' lusting ',
' m0f0 ',
' m0fo ',
' m45terbate ',
' ma5terb8 ',
' ma5terbate ',
' masochist ',
' master-bate ',
' masterb8 ',
' masterbat3 ',
' masterbate ',
' masterbation ',
' masterbations ',
' masturbate ',
' mo-fo ',
' mof0 ',
' mofo ',
' mothafuck ',
' mothafucka ',
' mothafuckas ',
' mothafuckaz ',
' mothafucked ',
' mothafucker ',
' mothafuckers ',
' mothafuckin ',
' mothafucking ',
' mothafuckings ',
' mothafucks ',
' mother fucker ',
' motherfuck ',
' motherfucked ',
' motherfucker ',
' motherfuckers ',
' motherfuckin ',
' motherfucking ',
' motherfuckings ',
' motherfuckka ',
' motherfucks ',
' muff ',
' mutha ',
' muthafecker ',
' muthafuckker ',
' muther ',
' mutherfucker ',
' n1gga ',
' n1gger ',
' nazi ',
' nigg3r ',
' nigg4h ',
' nigga ',
' niggah ',
' niggas ',
' niggaz ',
' nigger ',
' niggers ',
' nob ',
' nob jokey ',
' nobhead ',
' nobjocky ',
' nobjokey ',
' numbnuts ',
' nutsack ',
' orgasim ',
' orgasims ',
' orgasm ',
' orgasms ',
' p0rn ',
' pawn ',
' pecker ',
' penis ',
' penisfucker ',
' phonesex ',
' phuck ',
' phuk ',
' phuked ',
' phuking ',
' phukked ',
' phukking ',
' phuks ',
' phuq ',
' pigfucker ',
' pimpis ',
' piss ',
' pissed ',
' pisser ',
' pissers ',
' pisses ',
' pissflaps ',
' pissin ',
' pissing ',
' pissoff ',
' poop ',
' porn ',
' porno ',
' pornography ',
' pornos ',
' prick ',
' pricks ',
' pron ',
' pube ',
' pusse ',
' pussi ',
' pussies ',
' pussy ',
' pussys ',
' rectum ',
' retard ',
' rimjaw ',
' rimming ',
' s hit ',
' s.o.b. ',
' sadist ',
' schlong ',
' screwing ',
' scroat ',
' scrote ',
' scrotum ',
' semen ',
' sex ',
' sh!t ',
' sh1t ',
' shag ',
' shagger ',
' shaggin ',
' shagging ',
' shemale ',
' shit ',
' shitdick ',
' shite ',
' shited ',
' shitey ',
' shitfuck ',
' shitfull ',
' shithead ',
' shiting ',
' shitings ',
' shits ',
' shitted ',
' shitter ',
' shitters ',
' shitting ',
' shittings ',
' shitty ',
' skank ',
' slut ',
' sluts ',
' smegma ',
' smut ',
' snatch ',
' son-of-a-bitch ',
' spac ',
' spunk ',
' s_h_i_t ',
' t1tt1e5 ',
' t1tties ',
' teets ',
' teez ',
' testical ',
' testicle ',
' tit ',
' titfuck ',
' tits ',
' titt ',
' tittie5 ',
' tittiefucker ',
' titties ',
' tittyfuck ',
' tittywank ',
' titwank ',
' tosser ',
' turd ',
' tw4t ',
' twat ',
' twathead ',
' twatty ',
' twunt ',
' twunter ',
' v14gra ',
' v1gra ',
' vagina ',
' viagra ',
' vulva ',
' w00se ',
' wang ',
' wank ',
' wanker ',
' wanky ',
' whoar ',
' whore ',
' willies ',
' willy ',
' xrated ',
' xxx '
]
replace_with_fuck = []
for swear in swear_words:
if swear[1:(len(swear)-1)] not in embeddings_index:
replace_with_fuck.append(swear)
replace_with_fuck = '|'.join(replace_with_fuck)
replace_with_fuck
def handle_swears(text):
text = re.sub(replace_with_fuck, ' fuck ', text)
return text
df['comment_text'] = df['comment_text'].apply(lambda x: handle_swears(x))
gc.collect()
```
Let's split the data back into train and test
```
train = df.iloc[:1804874,:]
test = df.iloc[1804874:,:]
train.head()
del(df)
gc.collect()
```
# Further Preparation
```
train.head()
train_orig = pd.read_csv("../input/jigsaw-unintended-bias-in-toxicity-classification/train.csv")
train_orig.head()
train = pd.concat([train,train_orig[['target']]],axis=1)
train.head()
del(train_orig)
gc.collect()
```
Convert target to binary flag
```
train['target'] = np.where(train['target'] >= 0.5, True, False)
```
Split into train/validation sets
```
train_df, validate_df = model_selection.train_test_split(train, test_size=0.1)
print('%d train comments, %d validate comments' % (len(train_df), len(validate_df)))
```
Tokenize the text
```
MAX_NUM_WORDS = 100000
TOXICITY_COLUMN = 'target'
TEXT_COLUMN = 'comment_text'
# Create a text tokenizer.
tokenizer = Tokenizer(num_words=MAX_NUM_WORDS)
tokenizer.fit_on_texts(train_df[TEXT_COLUMN])
# All comments must be truncated or padded to be the same length.
MAX_SEQUENCE_LENGTH = 256
def pad_text(texts, tokenizer):
return pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=MAX_SEQUENCE_LENGTH)
```
Create our embedding matrix
```
gc.collect()
EMBEDDINGS_DIMENSION = 300
embedding_matrix = np.zeros((len(tokenizer.word_index) + 1,EMBEDDINGS_DIMENSION))
num_words_in_embedding = 0
for word, i in tokenizer.word_index.items():
if word in embeddings_index.vocab:
embedding_vector = embeddings_index[word]
embedding_matrix[i] = embedding_vector
num_words_in_embedding += 1
train_text = pad_text(train_df[TEXT_COLUMN], tokenizer)
train_labels = train_df[TOXICITY_COLUMN]
validate_text = pad_text(validate_df[TEXT_COLUMN], tokenizer)
validate_labels = validate_df[TOXICITY_COLUMN]
gc.collect()
```
# Model Architecture
Adding dropout / 1d conv / concatenated poolings based on the architecture presented @ https://www.kaggle.com/tunguz/bi-gru-cnn-poolings-gpu-kernel-version
```
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedding_layer = Embedding(len(tokenizer.word_index) + 1,
EMBEDDINGS_DIMENSION,
weights=[embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH,
trainable=False)
x = embedding_layer(sequence_input)
x = SpatialDropout1D(0.2)(x)
x = Bidirectional(CuDNNGRU(64, return_sequences=True))(x)
x = Conv1D(64, kernel_size = 2, padding = "valid", kernel_initializer = "he_uniform")(x)
avg_pool1 = GlobalAveragePooling1D()(x)
max_pool1 = GlobalMaxPooling1D()(x)
x = concatenate([avg_pool1, max_pool1])
preds = Dense(1, activation='sigmoid')(x)
model = Model(sequence_input, preds)
model.summary()
model.compile(loss='binary_crossentropy',
optimizer=Adam(),
metrics=['acc'])
```
# Model Training
```
BATCH_SIZE = 1024
NUM_EPOCHS = 100
model.fit(
train_text,
train_labels,
batch_size=BATCH_SIZE,
epochs=NUM_EPOCHS,
validation_data=(validate_text, validate_labels),
callbacks = [EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=3)])
```
# Predict & Submit
Let's submit this as our first submission. Once we have a reasonable pipeline setup, we can move on to looking at the competition metric in more detail.
```
submission = pd.read_csv('../input/jigsaw-unintended-bias-in-toxicity-classification/sample_submission.csv', index_col='id')
submission['prediction'] = model.predict(pad_text(test[TEXT_COLUMN], tokenizer))
submission.reset_index(drop=False, inplace=True)
submission.head()
submission.to_csv('submission.csv', index=False)
```
| github_jupyter |
(nm_ill_conditioning_roundoff_errors)=
# Ill-conditioning and roundoff errors
## Ill-conditioned matrices
The conditioning (or lack of, i.e. the ill-conditioning) of matrices we are trying to invert is incredibly important for the success of any algorithm.
As long as the matrix is non-singular, i.e. \\(\det(A)\ne 0\\), then an inverse exists, and a linear system with that \\(A\\) has a unique solution. What happens when we consider a matrix that is nearly singular, i.e. \\(\det(A)\\) is very small?
```{index} Matrix norm
```
Well smallness is a relative term and so we need to ask the question of how large or small $\det(A)$ is compared to something. That something is the **norm** of the matrix.
```{margin} Note
Norms are always in absolute terms, therefore, they are always positive. We will use \\(||||\\) to symbolise a norm of a matrix.
```
Matrices come in all shape and sizes, and their determinants come in all kinds of values. We know that a ill conditioned matrix has a determinant that is small in absolute terms, but the size of determinants is a relative thing, and we need some kind of comparison to determine what is "small" and what is "large". Thus, we can create such a reference calculating the norms of the matrix. In this notebook, we will explore how to find the norm and how does the norm relate to the ill conditioning of the matrix.
## Vector norms
```{index} Vector norms
```
Just as for vectors \\(\pmb{v}\\) (assumed as a \\(n\times 1\\) column vector) where we have multiple possible norms to help us decide quantify the magnitude of a vector:
$$
||\pmb{v}||_2 = \sqrt{v_1^2 + v_2^2 + \ldots + v_n^2} = \left(\sum_{i=1}^n v_i^2 \right)^{1/2}, \quad{\textrm{the two-norm or Euclidean norm}}\\\\\\
||\pmb{v}||_1 = |v_1| + |v_2| + \ldots + |v_n| = \sum_{i=1}^n |v_i|, \quad{\textrm{the one-norm or taxi-cab norm}}\\\\\\
||\pmb{v}||_{\infty} = \max\{|v_1|,|v_2|, \ldots, |v_n| = \max_{i=1}^n |v_i|,\quad{\textrm{the max-norm or infinity norm}}
$$
## Matrix norms
```{index} Matrix norms
```
We can define measures of the size of matrices, e.g. for \\(A\\) which for complete generality we will assume is of shape \\(m\times n\\):
$$
||A||_F = \left(\sum_{i=1}^m \sum_{j=1}^n A_{ij}^2 \right)^{1/2}, \quad{\textrm{the matrix Euclidean or Frobenius norm}}\\\\\\
||A||_{\infty} = \max_{i=1}^m \sum_{j=1}^n|A_{i,j}|, \quad{\textrm{the maximum absolute row-sum norm}}\\\\\\
$$
Note that while these norms give different results (in both the vector and matrix cases), they are consistent or equivalent in that they are always within a constant factor of one another (a result that is true for finite-dimensional or discrete problems as here). This means we don't really need to worry too much about which norm we're using.
Let's evaluate some examples.
```
import numpy as np
import scipy.linalg as sl
A = np.array([[10., 2., 1.],
[6., 5., 4.],
[1., 4., 7.]])
print("A =", A)
# The Frobenius norm (default)
# equivalent to sl.norm(A)
print("SciPy norm = ", sl.norm(A, 'fro'))
# The maximum absolute row-sum
print("Maximum absolute row-sum = ", sl.norm(A,np.inf))
# The maximum absolute column-sum
print("Maximum absolute column-sum", sl.norm(A,1))
# The two-norm - note not the same as the Frobenius norm
# also termed the spectral norm
print("SciPy spectral norm =", sl.norm(A,2))
# Spectral norm definition
print("Spectral norm by hand =", np.sqrt(np.real((np.max(sl.eigvals( A.T @ A))))))
```
## Norm implementation
We will write some code to explicitly compute the two matrix norms defined mathematically above (i.e. the Frobenius and the maximum absolute row-sum norms) and compare against the values found above using in-built scipy functions.
```
def frob(A):
m, n = A.shape
squsum = 0.
for i in range(m):
for j in range(n):
squsum += A[i,j]**2
return np.sqrt(squsum)
def mars(A):
m, n = A.shape
maxarsum = 0.
for i in range(m):
arsum = np.sum(np.abs(A[i]))
maxarsum = arsum if arsum > maxarsum else maxarsum
return maxarsum
A = np.array([[10., 2., 1.],
[6., 5., 4.],
[1., 4., 7.]])
print("A =", A)
print("Are our norms the same as SciPy?",
frob(A) == sl.norm(A,'fro') and mars(A) == sl.norm(A,np.inf))
```
## Matrix conditioning
The (ill-)conditioning of a matrix is measured with the matrix condition number:
\\[\textrm{cond}(A) = \|A\|\|A^{-1}\|.\\]
If this is close to one then \\(A\\) is termed well-conditioned; the value increases with the degree of ill-conditioning, reaching infinity for a singular matrix.
Let's evaluate the condition number for the matrix above.
```
A = np.array([[10., 2., 1.],[6., 5., 4.],[1., 4., 7.]])
print("A =", A)
print("SciPy cond(A) =", np.linalg.cond(A))
print("Default condition number uses matrix two-norm =", sl.norm(A,2)*sl.norm(sl.inv(A),2))
print("sl.norm(A,2)*sl.norm(sl.inv(A),2) =", sl.norm(A,2)*sl.norm(sl.inv(A),2))
print("SciPy Frobenius cond(A) = ", np.linalg.cond(A,'fro'))
print("sl.norm(A,'fro')*sl.norm(sl.inv(A),'fro') =", sl.norm(A,'fro')*sl.norm(sl.inv(A),'fro'))
```
The condition number is expensive to compute, and so in practice the relative size of the determinant of the matrix can be gauged based on the magnitude of the entries of the matrix.
### Example
We know that a singular matrix does not result in a unique solution to its corresponding linear matrix system. But what are the consequences of near-singularity (ill-conditioning)?
Consider the following example
\\[
\left(
\begin{array}{cc}
2 & 1 \\\\\\
2 & 1 + \epsilon \\\\\\
\end{array}
\right)\left(
\begin{array}{c}
x \\\\\\
y \\\\\\
\end{array}
\right) = \left(
\begin{array}{c}
3 \\\\\\
0 \\\\\\
\end{array}
\right)
\\]
When \\(\epsilon=0\\) the two columns/rows are not linear independent, and hence the determinant of this matrix is zero, the condition number is infinite, and the linear system does not have a solution (as the two equations would be telling us the contradictory information that \\(2x+y\\) is equal to 3 and is also equal to 0).
Let's consider a range of values \\(\epsilon\\) and calculate matrix deteterminant and condition number:
```
A = np.array([[2.,1.],
[2.,1.]])
b = np.array([3.,0.])
print("Matrix is singular, det(A) = ", sl.det(A))
for i in range(3):
A[1,1] += 0.001
epsilon = A[1,1]-1.0
print("Epsilon = %g, det(A) = %g, cond(A) = %g." % (epsilon, sl.det(A), np.linalg.cond(A)),
"inv(A)*b =", sl.inv(A) @ b)
```
We find for \\(\epsilon=0.001\\) that \\(\det(A)=0.002\\) (i.e. quite a lot smaller than the other coefficients in the matrix) and \\(\textrm{cond}(A)\approx 5000\\).
Change to \\(\epsilon=0.002\\) causes 100% change in both components of the solution. This is the consequence of the matrix being ill-conditioned - we should not trust the numerical solution to ill-conditioned problems.
A way to see this is to recognise that computers do not perform arithmetic exactly - they necessarily have to [truncate numbers](http://www.mathwords.com/t/truncating_a_number.htm) at a certain number of significant figures, performing multiple operations with these truncated numbers can lead to an erosion of accuracy. Often this is not a problem, but these so-called [roundoff](http://mathworld.wolfram.com/RoundoffError.html) errors in algorithms generating \\(A\\), or operating on \\(A\\) as in Gaussian elimination, will lead to small inaccuracies in the coefficients of the matrix. Hence, in the case of ill-conditioned problems, will fall foul of the issue seen above where a very small error in an input to the algorithm led to a far larger error in an output.
## Roundoff errors
```{index} Roundoff errors
```
```{margin} Note
For some examples of catastrophic failures due to round off errors see [Prof. Kees Vuik](https://profs.info.uaic.ro/~ancai/CN/bibliografie/CN_disasters.htm).
```
As an example, consider the mathematical formula
\\[f(x)=(1-x)^{10}.\\]
We can relatively easily expand this out by hand
\\[f(x)=1- 10x + 45x^2 - 120x^3 + 210x^4 - 252x^5 + 210x^6 - 120x^7 + 45x^8 - 10x^9 + x^{10}.\\]
Mathematically these two expressions for \\(f(x)\\) are identical; when evaluated by a computer different operations will be performed, which should give the same answer. For numbers \\(x\\) away from \\(1\\) these two expressions do return (pretty much) the same answer.
However, for \\(x\\) close to 1 the second expression adds and subtracts individual terms of increasing size which should largely cancel out, but they don't to sufficient accuracy due to round off errors; these errors accumulate with more and more operations, leading a [loss of significance](https://en.wikipedia.org/wiki/Loss_of_significance).
```
import matplotlib.pyplot as plt
def f1(x):
return (1. - x)**10
def f2(x):
return (1. - 10.*x + 45.*x**2 - 120.*x**3 +
210.*x**4 - 252.*x**5 + 210.*x**6 -
120.*x**7 + 45.*x**8 - 10.*x**9 + x**10)
xi = np.linspace(0, 2, 1000)
fig, axes = plt.subplots(1, 3, figsize=(14, 3))
ax1 = axes[0]
ax2 = axes[1]
ax3 = axes[2]
ax1.plot(xi, f1(xi), label = "unexpanded")
ax1.plot(xi, f2(xi), label = "expanded")
ax1.legend(loc="best")
ax1.set_ylabel("$f(x)$", fontsize=14)
ax2.plot(xi, 1.-f1(xi)/f2(xi) * 100, label="Relative\ndifference\nin %")
ax2.legend(loc="best")
ax2.set_xlabel("x", fontsize=14)
ax2.set_ylabel(r"$1-\frac{unexpanded}{expanded}$", fontsize=14)
ax3.set_xlim(0.75, 1.25)
ax3.plot(xi, 1.-f1(xi)/f2(xi) * 100, label="Relative\ndifference\nin %")
ax3.legend(loc="best")
ax3.set_ylabel(r"$1-\frac{unexpanded}{expanded}$", fontsize=14)
plt.suptitle("Comparison of $(1-x)^{10}$ expansion", fontsize=14)
plt.subplots_adjust(wspace=0.4)
plt.show()
```
As we can see on the graph, for most of the domain, i.e. far away from 1.0, the expansion is almost the same as the unexpanded version. Near \\(x=1\\), the expansion creates huge errors in terms of relative difference.
### Algorithm stability
The susceptibility for a numerical algorithm to dampen (inevitable) errors, rather than to magnify them as we have seen in examples above, is termed stability. This is a concern for numerical linear algebra as considered here, as well as for the numerical solution of differential equations. In that case you don't want small errors to grow and accumulate as you propagate the solution to an ODE or PDE forward in time say. If your algorithm is not inherently stable, or has other limitation, you need to understand and appreciate this, as it can cause catastrophic failures!
| github_jupyter |
```
# nlp with recurrent neural networks
# autocheck word complete grammer check translation chatbot
# sentiment analysis / character generation
# bag of words implementation
def bag_of_words(text):
# find the words
words = text.lower().split(' ')
bag = {}
vocab = {}
word_encoding = 1
for word in words:
if word in vocab:
encoding = vocab[word]
else:
vocab[word] = word_encoding
encoding = word_encoding
word_encoding += 1
if encoding in bag:
bag[encoding] += 1
else:
bag[encoding] = 1
return bag, vocab
text = 'TF is Awesome TF'
bag, vocab = bag_of_words(text)
print(bag)
print(vocab)
# embedding turns words into vectors.
# the vector has a defined length and shows similarity to other words.
# embedding layer tries to learn relations between words
# so that more similar words have closer vectors (with lower angles).
# pretrained word embeddings are availble also.
# imdb reviews
# alreadly numberized by community
from keras.datasets import imdb
from keras.preprocessing import sequence
import tensorflow as tf
import os
import numpy as np
VOCAB_SIZE = 88584
MAXLEN = 250
BATCH_SIZE = 64
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words = VOCAB_SIZE)
train_data[0]
len(train_data[0]), len(train_data[1])
# padding all sequences to the length of 250. adds zeros to front
train_data = sequence.pad_sequences(train_data, MAXLEN)
test_data = sequence.pad_sequences(test_data, MAXLEN)
len(train_data[0]), len(train_data[1])
# model
inputs = tf.keras.Input(shape=(250,))
embed = tf.keras.layers.Embedding(VOCAB_SIZE, 32)(inputs)
lstm1 = tf.keras.layers.LSTM(32)(embed)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(lstm1)
model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
model.summary()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(train_data, train_labels, epochs=10, validation_split=0.2)
results = model.evaluate(test_data, test_labels)
results
# making predictions
word_index = imdb.get_word_index()
def encode_text(text):
tokens = tf.keras.preprocessing.text.text_to_word_sequence(text)
print(tokens)
tokens = [word_index[word] if word in word_index else 0 for word in tokens]
# works on list of lists, so the first one is only required.
return sequence.pad_sequences([tokens], MAXLEN)[0]
text = 'that movie was just amazing, so amazing'
encoded = encode_text(text)
print(encoded)
import numpy as np
def predict(text):
encoded_text = encode_text(text)
pred = np.zeros((1, 250))
pred[0] = encoded_text
# list of lists
result = model.predict(pred)
print(result[0])
positive_review = 'That movie was so awesome! I really loved it and would watch it again because it was amazingly great'
predict(positive_review)
negative_review = '''That movie sucked. I hated it and wouldn't watch it again. Was one of the worst things I've ever watched'''
predict(negative_review)
```
| github_jupyter |
```
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger("exchangelib").setLevel(logging.WARNING)
```
# Connecting melusine to an Outlook Exchange mailbox
The main use-case for Melusine is **email routing**. Melusine mostly focuses on the Machine Learning aspects of email routing, however, in order to make routing effective, ML models need to be connected to a mailbox.
To connect Melusine to a mailbox and process emails, possible options are:
**Option 1: (Online processing) - Exposing the ML models through an API**
With this option, Melusine is used solely to predict target folders, the action of moving emails from a folder to another (or from a mailbox to another) is taken care of by an email processing system. The email processing system is typically run by the company's IT department.
> Example: An email processing system sends a request to the Melusine API. The request contains the email content and associated metadata while the API response contains the predicted target folder for the email. Based on the API response, the email processing system is responsible for effectively moving the email in the right folder.
**Option 2: (Batch processing) - Connecting Melusine to a mailbox using a python email client**
With this option, a script is scheduled to regularly collect the emails in the inbox, predict the target folders and move the emails to the predicted folders. In this scenario, the emails are moved to the right folders directly from the python code, it is not necessary to interact with an email processing system.
> Everyday at 8 a.m, a script is run. The script uses the `ExchangeConnector` to load the emails in an Exchange mailbox, then the Melusine ML functionalities are used to run prediction on each email and finally the `ExchangeConnector` is used again to move the emails to their predicted target folder.
This tutorial demonstrates how the Melusine `ExchangeConnector` can help you with end-to-end email routing. The ExchangeConnector uses the `exchangelib` package behind the scene.
```
>>> pip install exchangelib
```
# Routing process
The process imagined for email routing using Melusine is the following:
* Emails are received on the mailbox mymailbox@maif.fr
* Melusine is used to predict the target folder for the incoming emails
* The `ExchangeConnector` is used to move the emails to the predicted target folders
Since ML models are not perfect, some emails might be misclassified. When that happens, consumers of the mailbox are encouraged to move the emails to the appropriate "correction folder".
The emails in the correction folders will constitute training data for future model trainings and thus improve the model.
# The ExchangeConnector
The Melusine `ExchangeConnector` is instanciated with the following arguments:
* `mailbox_address`: Email address of the mailbox (ex: mymailbox@maif.fr). By default, the login address is used
* `credentials`: ExchangeLib credentials
* `config`: ExchangeLib configuration
* `routing_folder_path`: Path to the folder that contains the routed emails
* `correction_folder_path`: Path to the folder that contains the corrected emails
* `done_folder_path`: Path to the folder that contains "Done" emails (emails that have already been processed)
* `target_column`: When routing, name of the DataFrame column containing target folders"target" (Default: target)
* `account_args`: Extra arguments to instantiate an ExchangeLib Account object
* `sender_address`: Email address to be used to send emails
## Exchange authentification
Authentification methods may differ depending on the user context.
This tutorial uses a **Basic Authentification** which works for most personal Outlook Exchange Accounts.
Other types of authentification methods are shown below but if it doesn't work for you,
you should investigate the `exchangelib` [documentation](https://ecederstrand.github.io/exchangelib/#setup-and-connecting).
```
from exchangelib import Credentials, Configuration, FaultTolerance
from melusine.connectors import ExchangeConnector
authentification_method = "basic"
```
### Basic Authentification
Connect to an outlook mailbox using a login and a password
```
if authentification_method == "basic":
# Parameters
my_mailbox_address = "mymailbox@maif.fr"
my_sender_address = my_mailbox_address
my_password = "melusineisawesome"
max_wait = 60
# Exchangelib configurations
credentials = Credentials(my_mailbox_address, my_password)
config = Configuration(
retry_policy=FaultTolerance(max_wait=max_wait),
credentials=credentials,
)
# Instantiate connector
connector = ExchangeConnector(
credentials=credentials,
config=config,
mailbox_address=my_mailbox_address,
sender_address=my_sender_address,
)
```
### Basic Authentification by Delegation
```
from exchangelib import DELEGATE, NTLM
if authentification_method == "delegate":
# Parameters
my_mailbox_address = "mymailbox@maif.fr"
my_sender_address = my_mailbox_address
my_password = "melusineisawesome"
max_wait = 60
account_args = {
autodiscover=False,
access_type=DELEGATE,
}
# Exchangelib configurations
credentials = Credentials(my_mailbox_address, my_password)
config = Configuration(
retry_policy=FaultTolerance(max_wait=max_wait),
credentials=credentials,
server=my_server,
auth_type=NTLM,
)
# Instantiate connector
connector = ExchangeConnector(
credentials=credentials,
config=config,
mailbox_address=my_mailbox_address,
sender_address=my_sender_address,
account_args=account_args
)
```
### OAuth2 Authentification
```
from exchangelib import OAUTH2, OAuth2Credentials
if authentification_method == "oauth2":
# Parameters
my_mailbox_address = "mymailbox@maif.fr"
my_sender_address = my_mailbox_address
my_client_id = "my_client_id"
my_client_secret = "my_client_secret"
my_tenant_id = "my_tenant_id"
max_wait = 60
account_args = {
autodiscover=False,
access_type=DELEGATE,
}
# Exchangelib configurations
credentials = OAuth2Credentials(
client_id=my_client_id, client_secret=my_client_secret, tenant_id=my_tenant_id
)
config = Configuration(
retry_policy=FaultTolerance(max_wait=max_wait),
credentials=credentials,
auth_type=OAUTH2,
)
# Instantiate connector
connector = ExchangeConnector(
credentials=credentials,
config=config,
mailbox_address=my_mailbox_address,
sender_address=my_sender_address,
)
```
# Send fake emails
In this section a set of fake emails are sent to the mailbox. The fake emails have _"[Melusine Test]"_ as a header to make sure they are not confused with your real emails.
In the following sections, Melusine and the `ExchangeConnector` will be used to route these emails.
## Send emails
The `send_email` method is used to send emails.
```
fake_emails = [
{
"header": "[Melusine Test]",
"body": "This should go to folder Test1"
},
{
"header": "[Melusine Test]",
"body": "This should go to folder Test2"
},
{
"header": "[Melusine Test]",
"body": "This should go to folder Test3"
}
]
for email_dict in fake_emails:
connector.send_email(
to=[my_mailbox_address],
header=email_dict["header"],
body=email_dict["body"],
attachments=None
)
```
**Expected output:**
You should receive 3 emails in your mailbox
# Create folders
In the email routing scenario considered, the following folders are needed:
**Target folders**
These are the folders where the routed emails will be stored.
* `Inbox / ROUTING / Test1`
* `Inbox / ROUTING / Test2`
* `Inbox / ROUTING / Test3`
**Correction folders**
When an email is erroneously routed to a target folder, mailbox consumers can move the email to the appropriate "Correction folder".
* `Inbox / CORRECTION / Test1`
* `Inbox / CORRECTION / Test2`
* `Inbox / CORRECTION / Test3`
**Done folder**
Once the emails in the correction folders have been processed (ex: for model re-training), the correction folders can be flushed by moving all the emails in the Done folder.
* `Inbox / DONE`
## Setup ROUTING folder structure
```
# Print path to the default routing folder (We will update it later)
f"Default ROUTING folder path : '{connector.routing_folder_path}'"
# Create the base routing folder
connector.create_folders(["ROUTING"], base_folder_path=None)
# Create the routing subfolders
connector.create_folders(["Test1", "Test2", "Test3"], base_folder_path="ROUTING")
# Setup the routing folder path
connector.routing_folder_path = "ROUTING"
f"Updated ROUTING folder path :'{connector.routing_folder_path}'"
# Print folder structure
print(connector.routing_folder.tree())
```
**Expected output:**
<pre>
ROUTING
├── Test1
├── Test2
└── Test3
</pre>
## Setup the CORRECTION folder structure
```
f"Default CORRECTION folder path :'{connector.correction_folder_path}'"
# Create the base CORRECTION folder at the inbox root
connector.create_folders(["CORRECTION"], base_folder_path=None)
# Create the correction subfolders
connector.create_folders(["Test1", "Test2", "Test3"], base_folder_path="CORRECTION")
# Setup the correction folder path
connector.correction_folder_path = "CORRECTION"
f"Updated CORRECTION folder path :'{connector.correction_folder_path}'"
# Print folder structure
print(connector.correction_folder.tree())
```
**Expected output:**
<pre>
CORRECTION
├── Test1
├── Test2
└── Test3
</pre>
## Setup the DONE folder
```
# Create the DONE folder at the inbox root
connector.create_folders(["DONE"], base_folder_path=None)
# Setup the done folder path
connector.done_folder_path = "DONE"
f"Updated DONE folder path :'{connector.done_folder_path}'"
# Print folder structure
print(connector.mailbox_account.inbox.tree())
```
**Expected output:**
<pre>
Boîte de réception
├── ROUTING
│ ├── Test1
│ ├── Test2
│ └── Test3
├── CORRECTION
│ ├── Test1
│ ├── Test2
│ └── Test3
└── DONE
</pre>
# Load emails
Before emails can be routed, we need to load the content of new emails.
The `get_emails` method loads the content of a mailbox folder (by default: the inbox folder).
```
df_emails = connector.get_emails(max_emails=50, ascending=False)
# Pick only test emails
mask = df_emails["header"] == "[Melusine Test]"
df_emails = df_emails[mask].copy()
# reverse order
df_emails = df_emails.reindex(index=df_emails.index[::-1])
df_emails.drop(["message_id"], axis=1)
```
**Expected output:**
| | message_id | body | header | date | from | to | attachment |
|---:|:--------------------------------------------------------------------------------|:---------|:----------------|:--------------------------|:-----------------------------|:---------------------------------|:-------------|
| 61 | <1> | This should go to folder Test1 | [Melusine Test] | 2021-05-04T19:07:56+00:00 | mymailbox@maif.fr | ['mymailbox@maif.fr'] | |
| 62 | <2> | This should go to folder Test2 | [Melusine Test] | 2021-05-04T19:07:55+00:00 | mymailbox@maif.fr | ['mymailbox@maif.fr'] | |
| 63 | <3> | This should go to folder Test3 | [Melusine Test] | 2021-05-04T19:07:56+00:00 | mymailbox@maif.fr | ['mymailbox@maif.fr'] | |
# Predict target folders
Melusine offers a variety of models (CNN, RNN, Transformers, etc) to predict the destination folder of an email. However, this tutorial focuses on the exchange connector so the ML model prediction part is mocked. Feel free to check out the `tutorial08_full_pipeline_detailed.ipynb` to see how to use Melusine for ML predictions.
```
import re
def mock_predictions(emails):
# Use a regex to find the target folder
emails["target"] = "Test" + emails["body"].str.extract(r"Test(\d)")
# Introduce a missclassification
emails.loc[0, "target"] = "Test2"
return emails
df_emails = mock_predictions(df_emails)
df_emails[["header", "body", "target"]]
```
**Expected output:**
| | header | body | target |
|---:|:----------------|:-------------------------------|:---------|
| 76 | [Melusine Test] | This should go to folder Test1 | Test1 |
| 77 | [Melusine Test] | This should go to folder Test2 | Test2 |
| 78 | [Melusine Test] | This should go to folder Test3 | Test2 |
As you can see, there is a prediction error, an email was incorrectly classified as Test2
# Route emails
Now that we have predicted the target folders for each email, we use the `ExchangeConnector` to move the emails in the mailbox.
The `route_emails` does exactly that. Its argument are:
classified_emails,
on_error="warning",
id_column="message_id",
target_column="target",
* `classified_emails`: The DataFrame containing the emails and their predicted target folder
* `raise_missing_folder_error`: If activated, an error is raised when the target folder does not exist in the mailbox. Otherwise, a warning is printed and the emails are left in the inbox.
* `id_column`: Name of the DataFrame column containing the message ID
* `target_column`: Name of the DataFrame column containing the target folder
```
connector.route_emails(df_emails)
connector.get_emails(base_folder_path="ROUTING/Test2")[["header", "body"]]
```
**Expected output:**
| | message_id | body | header | date | from | to | attachment |
|---:|:--------------------------------------------------------------------------------|:---------|:----------------|:--------------------------|:-----------------------------|:---------------------------------|:-------------|
| 61 | <1> | This should go to folder Test1 | [Melusine Test] | 2021-05-04T19:07:56+00:00 | mymailbox@maif.fr | ['mymailbox@maif.fr'] | |
| 62 | <2> | This should go to folder Test2 | [Melusine Test] | 2021-05-04T19:07:55+00:00 | mymailbox@maif.fr | ['mymailbox@maif.fr'] | |
Two emails have been routed to the folder `Test2` !
# Make corrections
## Move emails to correction folders
Corrections should be made by the mailbox consumers directly in the mailbox.
Go to your mailbox and move the emails that says:
**"This should go to folder Test1"**
(currently in the Test2 folder)
To the correction folder `CORRECTION/Test1`
## Load corrected data
```
df_corrections = connector.get_corrections()
df_corrections
```
**Expected output:**
| | message_id | body | header | date | from | to | attachment |
|---:|:--------------------------------------------------------------------------------|:---------|:----------------|:--------------------------|:-----------------------------|:---------------------------------|:-------------|
| 61 | <1> | This should go to folder Test1 | [Melusine Test] | 2021-05-04T19:07:56+00:00 | mymailbox@maif.fr | ['mymailbox@maif.fr'] | |
The emails loaded from the correction folder can now be used to train a new ML model !
# Move corrected emails to the "Done" folder
```
connector.move_to_done(df_corrections["message_id"])
```
# Conclusion
With the `ExchangeConnector` you should be able to easily implement email routing for your mailbox using Melusine !
**Hint :** If you like Melusine, don't forget to add a star on [GitHub](https://github.com/MAIF/melusine)
| github_jupyter |
# BepiColombo First Venus Swingby Hands-On Lesson
Virtual SPICE Training for BepiColombo, July 21-22, 2020
## Overview
In this lesson you will develop a series of simple programs that
demonstrate the usage of SpiceyPy to compute a variety of different
geometric quantities applicable to experiments carried out by BepiColombo
MPO during the first Venus swingby.
You may find it useful to consult the permuted index, the headers of
various source modules, and several Required Reading documents available at
the NAIF site.
## Find the time of the Venus Swingby using WebGeocalc
First of all exercise the usage of WebGeocalc by finding the closest approach of BEPICOLOMBO MPO to Venus. Use the ESA SPCIE Service WebGeocalc instance for this purpose.
* Use an extended Time Window from "2020-07-10" UTC to "2024-10-10" UTC. Is this the first Venus swingby according to the current kernels?
* If not find a way using the same Calculation to obtan the closest approach of the first Venus swingby. Once you have obtained it, save the resulting time.
* With the resulting time, compute the distance to Venus using Cosmographia and cross check with your own program if the distance is correct.
## Visualize the Venus swingby using SPICE-Enhanced Cosmographia and SPOT
Start SPICE-enhanced Cosmographia and load the BepiColombo Scenario. Then choose the time of the Venus Closest Approach for the first Venus Swingby and display the MPO-Venus distance, does it correspond to the distance that you have obtained with WebGeocalc and with your program?
You can also use the public instance of SPOT to visualize the first Venus swingby, you can access it from here: http://bepicolombo.esac.esa.int/itl-viewer/venus_flyby_1/, is the distance the same as well?
## Visualize the Venus MERTIS Observation with Cosmographia and SPOT
Using SPOT access "Sensors FoV" and activate "MERTIS_TIR" then click on "View", you will now see the visualization using a view direction parallel to the MERTIS TIR Field-of-View, use the time slider to check when Venus will be in the Field-of-View.
Afterwards using Cosmographia, load the MERTIS TIR sensor configuration file (on top of the appropriate BepiColombo scenario) and try to replicate the same view-point in the same way that it has been shown during the WebGeocalc and Cosmographia lecture.
## Intersecting Vectors with an Ellipsoid (bis)
Write a program given an input UTC time string that computes the intersection of the MPO MERTIS TIR
boresight and field of view (FOV) boundary vectors with the surface of Venus.
The program presents each point of intersection as
* Planetocentric (latitudinal) coordinates in the IAU_VENUS frame.
For each of the sensor FOV boundary and boresight vectors, if an
intersection is found, the program displays the results of the above
computations, otherwise it indicates no intersection exists.
Use this program to compute values at the following times:
* "2020 OCT 13 15:45" UTC
* "2020 OCT 14 05:05" UTC
* "2020 OCT 14 22:15" UTC
Can you explain the different results?
## Getting serious: Compute the time intervals when Venus is in the MERTIS TIR Field-of-View.
Compute the time intervals when Venus is visible by the MERTIS_TIR_SPACE Field-of-View in the time frame of the first Venus swingby using the Geometry Finder Sub-System. Print the resulting time windows.
If you feel lost you might find it useful to search for documentation either in the header of the corresponding SPICE API or in the documentation of SpiceyPy: https://spiceypy.readthedocs.io/en/master/event_finding.html
Double-check you results with WebGeocalc and with Cosmographia and/or SPOT, do you see any differences? If so, can you explain why?
## Extra Credit: Compute the Solar Electrical Propulsion arcs (time intervals when the SEP is on)
One of the latest features added to the BepiColombo SPICE Kernel Dataset is the possibility to compute the intervals when the Solar Electrical Propulsion is ON. This is not a straightforward computation with SPICE nor you have the kernels available in the GitHub repository, therefore in order to do so follow these steps:
* Identify and download/retrieve the appropriate kernels to compute the SEP intervals and add them to the meta-kernel or FURNSH them
* Identify a kernel that will provide you with indications on how to compute the SEP periods.
* Write a program to obtain the time intervals as suggested by the before-mentioned kernel.
* Double-check your results using WebGeocalc
TIP: The Solar Electrical Propulsion frames are mounted on MTM, not on MPO nor MMO.
| github_jupyter |
<table> <tr>
<td style="background-color:#ffffff;">
<a href="http://qworld.lu.lv" target="_blank"><img src="..\images\qworld.jpg" width="25%" align="left"> </a></td>
<td style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
prepared by <a href="http://abu.lu.lv" target="_blank">Abuzer Yakaryilmaz</a> (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>)
</td>
</tr></table>
<table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
<h2>Quantum Teleportation</h2>
[Watch Lecture](https://youtu.be/4PYeoqALKHk)
<hr>
_**Prepare a few pieces of papers**_
- _**to draw the circuit of the following protocol step by step and**_
- _**to solve some of tasks requiring certain calculations.**_
<hr>
Asja wants to send a qubit to Balvis by using only classical communication.
Let $ \ket{v} = \myvector{a\\b} \in \mathbb{R}^2 $ be the quantum state.
_Discussion:_ If Asja has many copies of this qubit, then she can collect the statistics based on these qubits and obtain an approximation of $ a $ and $ b $, say $ \tilde{a} $ and $\tilde{b}$, respectively. After this, Asja can send $ \tilde{a} $ and $\tilde{b}$ by using many classical bits, the number of which depends on the precision of the amplitudes.
On the other hand, If Asja and Balvis share the entangaled qubits in state $ \sqrttwo\ket{00} + \sqrttwo\ket{11} $ in advance, then it is possible for Balvis to create $ \ket{v} $ in his qubit after receiving two bits of information from Asja.
<h3> Protocol </h3>
The protocol uses three qubits as specified below:
<img src='../images/quantum_teleportation_qubits.png' width="25%" align="left">
Asja has two qubits and Balvis has one qubit.
Asja's quantum message (key) is $ \ket{v} = \myvector{a\\b} = a\ket{0} + b\ket{1} $.
The entanglement between Asja's second qubit and Balvis' qubit is $ \sqrttwo\ket{00} + \sqrttwo\ket{11} $.
So, the quantum state of the three qubits is
$$ \mypar{a\ket{0} + b\ket{1}}\mypar{\sqrttwo\ket{00} + \sqrttwo\ket{11}}
= \sqrttwo \big( a\ket{000} + a \ket{011} + b\ket{100} + b \ket{111} \big). $$
<h4> CNOT operator by Asja </h4>
Asja applies CNOT gate to her qubits where $q[2]$ is the control qubit and $q[1]$ is the target qubit.
<h3>Task 1</h3>
Calculate the new quantum state after this CNOT operator.
<a href="B56_Quantum_Teleportation_Solutions.ipynb#task1">click for our solution</a>
<h3>Hadamard operator by Asja</h3>
Asja applies Hadamard gate to $q[2]$.
<h3>Task 2</h3>
Calculate the new quantum state after this Hadamard operator.
Verify that the resulting quantum state can be written as follows:
$$
\frac{1}{2} \ket{00} \big( a\ket{0}+b\ket{1} \big) +
\frac{1}{2} \ket{01} \big( a\ket{1}+b\ket{0} \big) +
\frac{1}{2} \ket{10} \big( a\ket{0}-b\ket{1} \big) +
\frac{1}{2} \ket{11} \big( a\ket{1}-b\ket{0} \big) .
$$
<a href="B56_Quantum_Teleportation_Solutions.ipynb#task2">click for our solution</a>
<h3> Measurement by Asja </h3>
Asja measures her qubits. With probability $ \frac{1}{4} $, she can observe one of the basis states.
Depeding on the measurement outcomes, Balvis' qubit is in the following states:
<ol>
<li> "00": $ \ket{v_{00}} = a\ket{0} + b \ket{1} $ </li>
<li> "01": $ \ket{v_{01}} = a\ket{1} + b \ket{0} $ </li>
<li> "10": $ \ket{v_{10}} = a\ket{0} - b \ket{1} $ </li>
<li> "11": $ \ket{v_{11}} = a\ket{1} - b \ket{0} $ </li>
</ol>
As can be observed, the amplitudes $ a $ and $ b $ are "transferred" to Balvis' qubit in any case.
If Asja sends the measurement outcomes, then Balvis can construct $ \ket{v} $ exactly.
<h3>Task 3</h3>
Asja sends the measurement outcomes to Balvis by using two classical bits: $ x $ and $ y $.
For each $ (x,y) $ pair, determine the quantum operator(s) that Balvis can apply to obtain $ \ket{v} = a\ket{0}+b\ket{1} $ exactly.
<a href="B56_Quantum_Teleportation_Solutions.ipynb#task3">click for our solution</a>
<h3> Task 4 </h3>
Create a quantum circuit with three qubits as described at the beginning of this notebook and three classical bits.
Implement the protocol given above until Asja makes the measurements (included).
- The state of $q[2]$ can be set by the rotation with a randomly picked angle.
- Remark that Balvis does not make the measurement.
At this point, read the state vector of the circuit by using "statevector_simulator".
_When a circuit having measurement is simulated by "statevector_simulator", the simulator picks one of the outcomes, and so we see one of the states after the measurement._
Verify that the state of Balvis' qubit is in one of these: $ \ket{v_{00}}$, $ \ket{v_{01}}$, $ \ket{v_{10}}$, and $ \ket{v_{11}}$.
Guess the measurement outcome obtained by "statevector_simulator".
```
#
# your code is here
#
```
<a href="B56_Quantum_Teleportation_Solutions.ipynb#task4">click for our solution</a>
<h3> Task 5 </h3>
Implement the protocol above by including the post-processing part done by Balvis, i.e., the measurement results by Asja are sent to Balvis and then he may apply $ X $ or $ Z $ gates depending on the measurement results.
We use the classically controlled quantum operators.
Since we do not make measurement on $ q[2] $, we define only 2 classical bits, each of which can also be defined separated.
q = QuantumRegister(3)
c2 = ClassicalRegister(1,'c2')
c1 = ClassicalRegister(1,'c1')
qc = QuantumCircuit(q,c1,c2)
...
qc.measure(q[1],c1)
...
qc.x(q[0]).c_if(c1,1) # x-gate is applied to q[0] if the classical bit c1 is equal to 1
Read the state vector and verify that Balvis' state is $ \myvector{a \\ b} $ after the post-processing.
```
#
# your code is here
#
```
<a href="B56_Quantum_Teleportation_Solutions.ipynb#task5">click for our solution</a>
<!--
<h3> Task 6 (optional) </h3>
Observe that Balvis can also t
Create a quantum circuit with four qubits and four classical bits.
Assume that Asja has the first two qubits (number 3 and 2) and Balvis has the last two qubits (number 1 and 0).
Create an entanglement between qubits 2 and 1.
Implement the protocol (the state of the qubit can be set by a rotation with randomly picked angle):
- If Asja teleports a qubit, then set the state of qubit 3.
- If Balvis teleports a qubit, then set the state of qubit 0.
-->
| github_jupyter |
# **Tugas Besar 2**
## Kelompok 8:
1. 16520289 Gagas Praharsa Bahar
2. 16520299 Malik Akbar Hashemi Rafsanjani
3. 16520309 Alifia Rahmah
4. 16520319 Ng Kyle
## Sumber data:
Trending YouTube Video Statistics (US Videos) - Mitchell J ([Kaggle](https://www.kaggle.com/datasnaek/youtube-new))
### Data utama : USvideos.csv (59.85 MB)
Sumber data dengan ukuran 40949 baris x 16 kolom
### Data penunjang : US_category_id.json (8.3 KB)
referensi category_id dengan category_name yang akan membantu pada pengolahan data selanjutnya.
## **Inisialisasi**
```
# import module yang dibutuhkan
import numpy as np
import pandas as pd
import json
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LinearRegression
from pandas.plotting import scatter_matrix
# link file csv dan json
url_csv = 'https://github.com/malikrafsan/test/raw/main/USvideos.csv'
url_json = 'https://github.com/malikrafsan/Tugas-Besar2-KU1102/raw/main/US_category_id.json'
# Membaca file csv dan json
df = pd.read_csv(url_csv)
data = pd.read_json(url_json)
# Membuat list berisi tuple pasangan category_id dan category_name
lst = []
for i in range(len(data["items"])):
lst.append((data["items"][i]["id"],data["items"][i]["snippet"]["title"]))
# Menambahkan kolom "category_name" dan mengisinya sesuai category_id
df.insert(len(df.columns),'category_name','')
for i in range(len(df)):
for j in range(len(lst)):
if int(df["category_id"][i]) == int(lst[j][0]):
df["category_name"][i] = lst[j][1]
df.head()
```
## **Karakteristik data**
```
#Ukuran Data
print("Banyak Baris :", len(df))
print("Banyak Kolom :", len(df.columns))
#Mencari data unik
for i in range (len(df.columns)):
print("Data unik kolom", df.columns[i],"ada sebanyak", len(df.iloc[:,i].unique()))
#Mencari range atau member category tiap kolom time series dan numerikal
num_date = ["trending_date","publish_time","views","likes","dislikes"]
for i in num_date:
print("Kolom", i, "memiliki range :", df[i].min(), "--",df[i].max())
print()
#Mencari nilai unik kategorikal
kategorikal = ["category_id", "category_name"]
for i in kategorikal:
print("Member pada kolom", i, "adalah:")
for j in df[i].unique():
print(j)
print()
df.info()
```
# **Data Preprocessing**
```
# delete kolom yang tidak diperlukan
df.drop('video_id', inplace=True, axis=1) # tidak dibutuhkan
df.drop('thumbnail_link', inplace=True, axis=1) # tidak dibutuhkan
df.drop('description', inplace=True, axis=1) # tidak dibutuhkan
df.drop('category_id', inplace=True, axis=1) # digantikan category_name
df.drop('tags', inplace=True, axis=1) # tidak dibutuhkan
# mengubah tipe data publish_time dan trending_date menjadi time series
df['publish_time'] = pd.to_datetime(df['publish_time'], format='%Y-%m-%dT%H:%M:%S.%fZ')
df["trending_date"]=pd.to_datetime(df["trending_date"],format="%y.%d.%m")
df.head()
df.info()
```
# **Data Analysis**
### Statistik umum setiap atribut data numerik
```
# informasi singkat data numerik
df.describe()
# Rata-rata
df.mean()
# Standar deviasi
df.std()
#Ekstremum minimum
df.min(numeric_only=True)
#Ekstremum maksimum
df.max(numeric_only=True)
#persentil
persentil = [0.1,0.25,0.5,0.75,0.9]
for i in persentil:
print("Persentil", i*100,"%")
print(df.quantile(i))
print()
```
### Statistik data Channel
```
# tabel frekuensi daftar 10 channel yang pernah masuk trending (paling sering)
df['channel_title'].value_counts()[:10]
```
### Statistik data Category
```
# statistik yang dikelompokkan berdasarkan category_name
frekuensi_kategori = df.groupby("category_name").sum()
frekuensi_kategori
# tabel frekuensi category-category yang pernah masuk trending
# dan seberapa sering video dengan kategori tersebut masuk
df['category_name'].value_counts()
```
### Statistik data Views
```
# banyak views terbanyak dan tersedikit
print("View terbanyak:", df["views"].max())
print("View tersedikit:", df["views"].min())
# Video dengan banyak views terbesar
df.loc[df['views'] == df['views'].max()]
# Video dengan banyak views terkecil
df.loc[df['views'] == df['views'].min()]['title']
```
### Statistik data Likes
```
# banyak likes terbanyak dan tersedikit
print("Like terbanyak:", df["likes"].max())
print("Like tersedikit:", df["likes"].min())
# video dengan banyak likes terbesar
df.loc[df['likes'] == df['likes'].max()]
# video dengan banyak likes terkecil
df.loc[df['likes'] == df['likes'].min()]
```
### Statistik data Dislikes
```
# banyak dislike terbanyak dan tersedikit
print("Dislike terbanyak:", df["dislikes"].max())
print("Dislike tersedikit:", df["dislikes"].min())
# video dengan dislike terbanyak
df.loc[df['dislikes'] == df['dislikes'].max()]
# video dengan dislike tersedikit
df.loc[df['dislikes'] == df['dislikes'].min()]
```
### Statistik data Comment counts
```
# banyak komentar terbanyak dan tersedikit
print("Comment counts terbanyak:", df["comment_count"].max())
print("Comment counts tersedikit:", df["comment_count"].min())
# video dengan komentar terbanyak
df.loc[df['comment_count'] == df['comment_count'].max()]
# video dengan komentar tersedikit
df.loc[df['comment_count'] == df['comment_count'].min()]
# video dengan komentar tersedikit, meskipun comments_disabled false
df.loc[(df['comment_count'] == df['comment_count'].min()) & (df['comments_disabled'] == False)]
```
## Statistik data Publish Time
```
# pengelompokan data berdasar hari di-publish
df["publish_time"].apply(lambda x: x.strftime('%A')).value_counts().to_frame().reset_index()
```
## Statistik data Trending Date
```
# pengelompokan data berdasar bulan trending
trending = df.groupby([(df.trending_date.dt.year),(df.trending_date.dt.month)]).sum()
trending
```
# **Visualisasi Data**
### 10 Youtube channel teratas yang paling sering muncul di Trending Videos
```
# horizontal bar plot dari 10 youtube channel teratas yang paling sering muncul di trending
df['channel_title'].value_counts().head(10).sort_values().plot(kind='barh', figsize=(10,8))
plt.title("10 Youtube Channel teratas yang paling sering muncul di Trending Videos", size=(15))
plt.grid(alpha=0.4)
plt.show()
```
Data diatas menunjukkan channel yang videonya paling banyak masuk Trending, dengan channel nomor 1 adalah ESPN.
Berdasarkan grafik diatas, channel yang paling sering masuk Trending biasanya adalah channel yang kontennya rutin dan banyak penggemar, seperti basket dan reality show.
### Jumlah video berdasarkan kategori
```
# horizontal bar plot dari jumlah video berdasarkan kategori
df['category_name'].value_counts().sort_values().plot(kind='barh', figsize=(10,8))
plt.title("Jumlah Video Berdasarkan Kategori", size=15)
plt.grid(alpha=0.4)
plt.show()
```
Grafik diatas adalah grafik jumlah video berdasarkan kategori, dengan kategori dengan jumlah video terbanyak adalah Entertainment.
Insight yang didapatkan dari grafik ini adalah kategori entertainment paling sering masuk trending, yang disusul dengan Music. Ini artinya banyak pengguna YouTube yang menonton untuk mendapatkan konten hiburan/mendengar musik.
### Perbandingan Jumlah Views Setiap Kategori
```
# line plot perbandingan jumlah nilai views setiap kategori
df.groupby(["trending_date","category_name"])["views"].sum().unstack().plot(kind="line",figsize=(17,8),title="Perbandingan jumlah nilai views setiap kategori")
plt.xlabel("Tanggal")
plt.ylabel("Views")
plt.show()
```
Dari grafik diatas, diketahui bahwa kategori Music naik mendominasi Trending dimulai dari bulan April 2018. Sebelum April 2018, kategori Music dan Entertainment bersaing untuk mendapatkan views terbanyak pada video yang trending.
```
# line plot perbandingan jumlah nilai views setiap kategori pada tahun 2017
df.loc[df["trending_date"].dt.year == 2017].groupby(["trending_date","category_name"])["views"].sum().unstack().plot(kind="line",figsize=(17,8),title="Perbandingan jumlah nilai views setiap kategori pada tahun 2017")
plt.xlabel("Tanggal")
plt.ylabel("Views")
plt.show()
```
Berbeda dengan keseluruhan data, pada tahun 2017 kategori Entertainment lebih mendominasi.
```
# line plot perbandingan jumlah nilai views setiap kategori pada tahun 2018
df.loc[df["trending_date"].dt.year == 2018].groupby(["trending_date","category_name"])["views"].sum().unstack().plot(kind="line",figsize=(17,8),title="Perbandingan jumlah nilai views setiap kategori pada tahun 2018")
plt.xlabel("Tanggal")
plt.ylabel("Views")
plt.show()
```
### Perbandingan Jumlah Views Setiap Kategori
```
# line plot perbandingan data tiap bulannya
trending.plot(kind="line", figsize = (17,8), title='Perbandingan data tiap bulan')
```
### Hubungan keseluruhan-bagian
```
# pie chart
plt.figure(figsize=(10,10))
df.groupby('category_name').sum()['views'].plot(kind='pie', title='Perbandingan Jumlah Views Setiap Kategori', legend=True)
# horizontal stacked bar chart Perbandingan Jumlah Views, Likes, dan Comments Setiap Kategori
df.groupby(['category_name']).sum().plot(kind='barh', y=['views', 'likes','dislikes', 'comment_count'], stacked=True)
trending.plot(kind = "barh", stacked = True, y = ["views", "likes", "comment_count","dislikes"])
trending.plot(kind = "barh", stacked = True, y = ["likes", "comment_count","dislikes"])
```
### Visualisasi korelasi views dan likes
```
x = df["views"].values.reshape(-1,1)
y = df["likes"].values.reshape(-1,1)
regresi = LinearRegression().fit(x,y)
hasil = regresi.predict(x)
sns.scatterplot(x=df["views"], y=df["likes"])
plt.plot(x,hasil, color="red")
plt.xlabel("Views")
plt.ylabel("Likes")
plt.title("Plot Views vs Likes")
print("Koefisien korelasi antara views dan likes adalah " + str(df["views"].corr(df["likes"])))
```
Data diatas menunjukkan adanya korelasi positif antara views dan likes. Warna garis dibuat merah agar terlihat secara jelas garis regresi dari data.
Insight yang didapat adalah apabila views naik, maka likes juga akan cenderung naik.
```
sns.jointplot(data=df, x="views", y="likes",hue = "category_name", kind="kde", figsize = (18,8))
```
Dapat dilihat semua berkumpul pada satu daerah
### Korelasi antar kolom data
```
correlation_list = ['views', 'likes', 'dislikes', 'comment_count']
hm_data = df[correlation_list].corr()
display(hm_data)
```
Tabel diatas merupakan tabel korelasi antar variabel kuantitatif. Korelasi terbesar ditunjukkan oleh like-view dengan nilai R 0.849.
Insight yang didapatkan dari tabel korelasi ini adalah apabila views tinggi, maka kecenderungan likes dan comment akan naik pula. Dan apabila likes atau dislikes tinggi, kecenderungan untuk mendapatkan comment naik juga.
Yang dapat disimpulkan adalah, video trending biasanya viral dengan banyak likes yang berbanding lurus dengan views.
```
plt.figure(figsize=(8,6))
sns.heatmap(hm_data, annot=True);
```
Diatas adalah heatmap dari data korelasi yang telah dipaparkan diatas, agar lebih mudah dilihat dalam bentuk visual.
```
scatter_matrix(df[correlation_list], alpha=0.2, figsize=(6, 6), diagonal='kde')
```
| github_jupyter |
# RadarCOVID-Report
## Data Extraction
```
import datetime
import json
import logging
import os
import shutil
import tempfile
import textwrap
import uuid
import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np
import pandas as pd
import pycountry
import retry
import seaborn as sns
%matplotlib inline
current_working_directory = os.environ.get("PWD")
if current_working_directory:
os.chdir(current_working_directory)
sns.set()
matplotlib.rcParams["figure.figsize"] = (15, 6)
extraction_datetime = datetime.datetime.utcnow()
extraction_date = extraction_datetime.strftime("%Y-%m-%d")
extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)
extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d")
extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H")
current_hour = datetime.datetime.utcnow().hour
are_today_results_partial = current_hour != 23
```
### Constants
```
from Modules.ExposureNotification import exposure_notification_io
spain_region_country_code = "ES"
germany_region_country_code = "DE"
default_backend_identifier = spain_region_country_code
backend_generation_days = 7 * 2
daily_summary_days = 7 * 4 * 3
daily_plot_days = 7 * 4
tek_dumps_load_limit = daily_summary_days + 1
```
### Parameters
```
environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER")
if environment_backend_identifier:
report_backend_identifier = environment_backend_identifier
else:
report_backend_identifier = default_backend_identifier
report_backend_identifier
environment_enable_multi_backend_download = \
os.environ.get("RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD")
if environment_enable_multi_backend_download:
report_backend_identifiers = None
else:
report_backend_identifiers = [report_backend_identifier]
report_backend_identifiers
environment_invalid_shared_diagnoses_dates = \
os.environ.get("RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES")
if environment_invalid_shared_diagnoses_dates:
invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(",")
else:
invalid_shared_diagnoses_dates = []
invalid_shared_diagnoses_dates
```
### COVID-19 Cases
```
report_backend_client = \
exposure_notification_io.get_backend_client_with_identifier(
backend_identifier=report_backend_identifier)
@retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10))
def download_cases_dataframe():
return pd.read_csv("https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv")
confirmed_df_ = download_cases_dataframe()
confirmed_df_.iloc[0]
confirmed_df = confirmed_df_.copy()
confirmed_df = confirmed_df[["date", "new_cases", "iso_code"]]
confirmed_df.rename(
columns={
"date": "sample_date",
"iso_code": "country_code",
},
inplace=True)
def convert_iso_alpha_3_to_alpha_2(x):
try:
return pycountry.countries.get(alpha_3=x).alpha_2
except Exception as e:
logging.info(f"Error converting country ISO Alpha 3 code '{x}': {repr(e)}")
return None
confirmed_df["country_code"] = confirmed_df.country_code.apply(convert_iso_alpha_3_to_alpha_2)
confirmed_df.dropna(inplace=True)
confirmed_df["sample_date"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True)
confirmed_df["sample_date"] = confirmed_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_df.sort_values("sample_date", inplace=True)
confirmed_df.tail()
confirmed_days = pd.date_range(
start=confirmed_df.iloc[0].sample_date,
end=extraction_datetime)
confirmed_days_df = pd.DataFrame(data=confirmed_days, columns=["sample_date"])
confirmed_days_df["sample_date_string"] = \
confirmed_days_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_days_df.tail()
def sort_source_regions_for_display(source_regions: list) -> list:
if report_backend_identifier in source_regions:
source_regions = [report_backend_identifier] + \
list(sorted(set(source_regions).difference([report_backend_identifier])))
else:
source_regions = list(sorted(source_regions))
return source_regions
report_source_regions = report_backend_client.source_regions_for_date(
date=extraction_datetime.date())
report_source_regions = sort_source_regions_for_display(
source_regions=report_source_regions)
report_source_regions
def get_cases_dataframe(source_regions_for_date_function, columns_suffix=None):
source_regions_at_date_df = confirmed_days_df.copy()
source_regions_at_date_df["source_regions_at_date"] = \
source_regions_at_date_df.sample_date.apply(
lambda x: source_regions_for_date_function(date=x))
source_regions_at_date_df.sort_values("sample_date", inplace=True)
source_regions_at_date_df["_source_regions_group"] = source_regions_at_date_df. \
source_regions_at_date.apply(lambda x: ",".join(sort_source_regions_for_display(x)))
source_regions_at_date_df.tail()
#%%
source_regions_for_summary_df_ = \
source_regions_at_date_df[["sample_date", "_source_regions_group"]].copy()
source_regions_for_summary_df_.rename(columns={"_source_regions_group": "source_regions"}, inplace=True)
source_regions_for_summary_df_.tail()
#%%
confirmed_output_columns = ["sample_date", "new_cases", "covid_cases"]
confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns)
for source_regions_group, source_regions_group_series in \
source_regions_at_date_df.groupby("_source_regions_group"):
source_regions_set = set(source_regions_group.split(","))
confirmed_source_regions_set_df = \
confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy()
confirmed_source_regions_group_df = \
confirmed_source_regions_set_df.groupby("sample_date").new_cases.sum() \
.reset_index().sort_values("sample_date")
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df.merge(
confirmed_days_df[["sample_date_string"]].rename(
columns={"sample_date_string": "sample_date"}),
how="right")
confirmed_source_regions_group_df["new_cases"] = \
confirmed_source_regions_group_df["new_cases"].clip(lower=0)
confirmed_source_regions_group_df["covid_cases"] = \
confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round()
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[confirmed_output_columns]
confirmed_source_regions_group_df = confirmed_source_regions_group_df.replace(0, np.nan)
confirmed_source_regions_group_df.fillna(method="ffill", inplace=True)
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[
confirmed_source_regions_group_df.sample_date.isin(
source_regions_group_series.sample_date_string)]
confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df)
result_df = confirmed_output_df.copy()
result_df.tail()
#%%
result_df.rename(columns={"sample_date": "sample_date_string"}, inplace=True)
result_df = confirmed_days_df[["sample_date_string"]].merge(result_df, how="left")
result_df.sort_values("sample_date_string", inplace=True)
result_df.fillna(method="ffill", inplace=True)
result_df.tail()
#%%
result_df[["new_cases", "covid_cases"]].plot()
if columns_suffix:
result_df.rename(
columns={
"new_cases": "new_cases_" + columns_suffix,
"covid_cases": "covid_cases_" + columns_suffix},
inplace=True)
return result_df, source_regions_for_summary_df_
confirmed_eu_df, source_regions_for_summary_df = get_cases_dataframe(
report_backend_client.source_regions_for_date)
confirmed_es_df, _ = get_cases_dataframe(
lambda date: [spain_region_country_code],
columns_suffix=spain_region_country_code.lower())
```
### Extract API TEKs
```
raw_zip_path_prefix = "Data/TEKs/Raw/"
base_backend_identifiers = [report_backend_identifier]
multi_backend_exposure_keys_df = \
exposure_notification_io.download_exposure_keys_from_backends(
backend_identifiers=report_backend_identifiers,
generation_days=backend_generation_days,
fail_on_error_backend_identifiers=base_backend_identifiers,
save_raw_zip_path_prefix=raw_zip_path_prefix)
multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"]
multi_backend_exposure_keys_df.rename(
columns={
"generation_datetime": "sample_datetime",
"generation_date_string": "sample_date_string",
},
inplace=True)
multi_backend_exposure_keys_df.head()
early_teks_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.rolling_period < 144].copy()
early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6
early_teks_df[early_teks_df.sample_date_string != extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
early_teks_df[early_teks_df.sample_date_string == extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[
"sample_date_string", "region", "key_data"]]
multi_backend_exposure_keys_df.head()
active_regions = \
multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
active_regions
multi_backend_summary_df = multi_backend_exposure_keys_df.groupby(
["sample_date_string", "region"]).key_data.nunique().reset_index() \
.pivot(index="sample_date_string", columns="region") \
.sort_index(ascending=False)
multi_backend_summary_df.rename(
columns={"key_data": "shared_teks_by_generation_date"},
inplace=True)
multi_backend_summary_df.rename_axis("sample_date", inplace=True)
multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int)
multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days)
multi_backend_summary_df.head()
def compute_keys_cross_sharing(x):
teks_x = x.key_data_x.item()
common_teks = set(teks_x).intersection(x.key_data_y.item())
common_teks_fraction = len(common_teks) / len(teks_x)
return pd.Series(dict(
common_teks=common_teks,
common_teks_fraction=common_teks_fraction,
))
multi_backend_exposure_keys_by_region_df = \
multi_backend_exposure_keys_df.groupby("region").key_data.unique().reset_index()
multi_backend_exposure_keys_by_region_df["_merge"] = True
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_df.merge(
multi_backend_exposure_keys_by_region_df, on="_merge")
multi_backend_exposure_keys_by_region_combination_df.drop(
columns=["_merge"], inplace=True)
if multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1:
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_combination_df[
multi_backend_exposure_keys_by_region_combination_df.region_x !=
multi_backend_exposure_keys_by_region_combination_df.region_y]
multi_backend_exposure_keys_cross_sharing_df = \
multi_backend_exposure_keys_by_region_combination_df \
.groupby(["region_x", "region_y"]) \
.apply(compute_keys_cross_sharing) \
.reset_index()
multi_backend_cross_sharing_summary_df = \
multi_backend_exposure_keys_cross_sharing_df.pivot_table(
values=["common_teks_fraction"],
columns="region_x",
index="region_y",
aggfunc=lambda x: x.item())
multi_backend_cross_sharing_summary_df
multi_backend_without_active_region_exposure_keys_df = \
multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier]
multi_backend_without_active_region = \
multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
multi_backend_without_active_region
exposure_keys_summary_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.region == report_backend_identifier]
exposure_keys_summary_df.drop(columns=["region"], inplace=True)
exposure_keys_summary_df = \
exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame()
exposure_keys_summary_df = \
exposure_keys_summary_df.reset_index().set_index("sample_date_string")
exposure_keys_summary_df.sort_index(ascending=False, inplace=True)
exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True)
exposure_keys_summary_df.head()
```
### Dump API TEKs
```
tek_list_df = multi_backend_exposure_keys_df[
["sample_date_string", "region", "key_data"]].copy()
tek_list_df["key_data"] = tek_list_df["key_data"].apply(str)
tek_list_df.rename(columns={
"sample_date_string": "sample_date",
"key_data": "tek_list"}, inplace=True)
tek_list_df = tek_list_df.groupby(
["sample_date", "region"]).tek_list.unique().reset_index()
tek_list_df["extraction_date"] = extraction_date
tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour
tek_list_path_prefix = "Data/TEKs/"
tek_list_current_path = tek_list_path_prefix + f"/Current/RadarCOVID-TEKs.json"
tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json"
tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json"
for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]:
os.makedirs(os.path.dirname(path), exist_ok=True)
tek_list_base_df = tek_list_df[tek_list_df.region == report_backend_identifier]
tek_list_base_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json(
tek_list_current_path,
lines=True, orient="records")
tek_list_base_df.drop(columns=["extraction_date_with_hour"]).to_json(
tek_list_daily_path,
lines=True, orient="records")
tek_list_base_df.to_json(
tek_list_hourly_path,
lines=True, orient="records")
tek_list_base_df.head()
```
### Load TEK Dumps
```
import glob
def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame:
extracted_teks_df = pd.DataFrame(columns=["region"])
file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json"))))
if limit:
file_paths = file_paths[:limit]
for file_path in file_paths:
logging.info(f"Loading TEKs from '{file_path}'...")
iteration_extracted_teks_df = pd.read_json(file_path, lines=True)
extracted_teks_df = extracted_teks_df.append(
iteration_extracted_teks_df, sort=False)
extracted_teks_df["region"] = \
extracted_teks_df.region.fillna(spain_region_country_code).copy()
if region:
extracted_teks_df = \
extracted_teks_df[extracted_teks_df.region == region]
return extracted_teks_df
daily_extracted_teks_df = load_extracted_teks(
mode="Daily",
region=report_backend_identifier,
limit=tek_dumps_load_limit)
daily_extracted_teks_df.head()
exposure_keys_summary_df_ = daily_extracted_teks_df \
.sort_values("extraction_date", ascending=False) \
.groupby("sample_date").tek_list.first() \
.to_frame()
exposure_keys_summary_df_.index.name = "sample_date_string"
exposure_keys_summary_df_["tek_list"] = \
exposure_keys_summary_df_.tek_list.apply(len)
exposure_keys_summary_df_ = exposure_keys_summary_df_ \
.rename(columns={"tek_list": "shared_teks_by_generation_date"}) \
.sort_index(ascending=False)
exposure_keys_summary_df = exposure_keys_summary_df_
exposure_keys_summary_df.head()
```
### Daily New TEKs
```
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply(
lambda x: set(sum(x, []))).reset_index()
tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True)
tek_list_df.head()
def compute_teks_by_generation_and_upload_date(date):
day_new_teks_set_df = tek_list_df.copy().diff()
try:
day_new_teks_set = day_new_teks_set_df[
day_new_teks_set_df.index == date].tek_list.item()
except ValueError:
day_new_teks_set = None
if pd.isna(day_new_teks_set):
day_new_teks_set = set()
day_new_teks_df = daily_extracted_teks_df[
daily_extracted_teks_df.extraction_date == date].copy()
day_new_teks_df["shared_teks"] = \
day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set))
day_new_teks_df["shared_teks"] = \
day_new_teks_df.shared_teks.apply(len)
day_new_teks_df["upload_date"] = date
day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True)
day_new_teks_df = day_new_teks_df[
["upload_date", "generation_date", "shared_teks"]]
day_new_teks_df["generation_to_upload_days"] = \
(pd.to_datetime(day_new_teks_df.upload_date) -
pd.to_datetime(day_new_teks_df.generation_date)).dt.days
day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0]
return day_new_teks_df
shared_teks_generation_to_upload_df = pd.DataFrame()
for upload_date in daily_extracted_teks_df.extraction_date.unique():
shared_teks_generation_to_upload_df = \
shared_teks_generation_to_upload_df.append(
compute_teks_by_generation_and_upload_date(date=upload_date))
shared_teks_generation_to_upload_df \
.sort_values(["upload_date", "generation_date"], ascending=False, inplace=True)
shared_teks_generation_to_upload_df.tail()
today_new_teks_df = \
shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.upload_date == extraction_date].copy()
today_new_teks_df.tail()
if not today_new_teks_df.empty:
today_new_teks_df.set_index("generation_to_upload_days") \
.sort_index().shared_teks.plot.bar()
generation_to_upload_period_pivot_df = \
shared_teks_generation_to_upload_df[
["upload_date", "generation_to_upload_days", "shared_teks"]] \
.pivot(index="upload_date", columns="generation_to_upload_days") \
.sort_index(ascending=False).fillna(0).astype(int) \
.droplevel(level=0, axis=1)
generation_to_upload_period_pivot_df.head()
new_tek_df = tek_list_df.diff().tek_list.apply(
lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()
new_tek_df.rename(columns={
"tek_list": "shared_teks_by_upload_date",
"extraction_date": "sample_date_string",}, inplace=True)
new_tek_df.tail()
shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \
[["upload_date", "shared_teks"]].rename(
columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_teks_uploaded_on_generation_date",
})
shared_teks_uploaded_on_generation_date_df.head()
estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \
.groupby(["upload_date"]).shared_teks.max().reset_index() \
.sort_values(["upload_date"], ascending=False) \
.rename(columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_diagnoses",
})
invalid_shared_diagnoses_dates_mask = \
estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates)
estimated_shared_diagnoses_df[invalid_shared_diagnoses_dates_mask] = 0
estimated_shared_diagnoses_df.head()
```
### Hourly New TEKs
```
hourly_extracted_teks_df = load_extracted_teks(
mode="Hourly", region=report_backend_identifier, limit=25)
hourly_extracted_teks_df.head()
hourly_new_tek_count_df = hourly_extracted_teks_df \
.groupby("extraction_date_with_hour").tek_list. \
apply(lambda x: set(sum(x, []))).reset_index().copy()
hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \
.sort_index(ascending=True)
hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff()
hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply(
lambda x: len(x) if not pd.isna(x) else 0)
hourly_new_tek_count_df.rename(columns={
"new_tek_count": "shared_teks_by_upload_date"}, inplace=True)
hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[
"extraction_date_with_hour", "shared_teks_by_upload_date"]]
hourly_new_tek_count_df.head()
hourly_summary_df = hourly_new_tek_count_df.copy()
hourly_summary_df.set_index("extraction_date_with_hour", inplace=True)
hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index()
hourly_summary_df["datetime_utc"] = pd.to_datetime(
hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H")
hourly_summary_df.set_index("datetime_utc", inplace=True)
hourly_summary_df = hourly_summary_df.tail(-1)
hourly_summary_df.head()
```
### Official Statistics
```
import requests
import pandas.io.json
official_stats_response = requests.get("https://radarcovid.covid19.gob.es/kpi/statistics/basics")
official_stats_response.raise_for_status()
official_stats_df_ = pandas.io.json.json_normalize(official_stats_response.json())
official_stats_df = official_stats_df_.copy()
official_stats_df["date"] = pd.to_datetime(official_stats_df["date"], dayfirst=True)
official_stats_df.head()
official_stats_column_map = {
"date": "sample_date",
"applicationsDownloads.totalAcummulated": "app_downloads_es_accumulated",
"communicatedContagions.totalAcummulated": "shared_diagnoses_es_accumulated",
}
accumulated_suffix = "_accumulated"
accumulated_values_columns = \
list(filter(lambda x: x.endswith(accumulated_suffix), official_stats_column_map.values()))
interpolated_values_columns = \
list(map(lambda x: x[:-len(accumulated_suffix)], accumulated_values_columns))
official_stats_df = \
official_stats_df[official_stats_column_map.keys()] \
.rename(columns=official_stats_column_map)
official_stats_df["extraction_date"] = extraction_date
official_stats_df.head()
official_stats_path = "Data/Statistics/Current/RadarCOVID-Statistics.json"
previous_official_stats_df = pd.read_json(official_stats_path, orient="records", lines=True)
previous_official_stats_df["sample_date"] = pd.to_datetime(previous_official_stats_df["sample_date"], dayfirst=True)
official_stats_df = official_stats_df.append(previous_official_stats_df)
official_stats_df.head()
official_stats_df = official_stats_df[~(official_stats_df.shared_diagnoses_es_accumulated == 0)]
official_stats_df.sort_values("extraction_date", ascending=False, inplace=True)
official_stats_df.drop_duplicates(subset=["sample_date"], keep="first", inplace=True)
official_stats_df.head()
official_stats_stored_df = official_stats_df.copy()
official_stats_stored_df["sample_date"] = official_stats_stored_df.sample_date.dt.strftime("%Y-%m-%d")
official_stats_stored_df.to_json(official_stats_path, orient="records", lines=True)
official_stats_df.drop(columns=["extraction_date"], inplace=True)
official_stats_df = confirmed_days_df.merge(official_stats_df, how="left")
official_stats_df.sort_values("sample_date", ascending=False, inplace=True)
official_stats_df.head()
official_stats_df[accumulated_values_columns] = \
official_stats_df[accumulated_values_columns] \
.astype(float).interpolate(limit_area="inside")
official_stats_df[interpolated_values_columns] = \
official_stats_df[accumulated_values_columns].diff(periods=-1)
official_stats_df.drop(columns="sample_date", inplace=True)
official_stats_df.head()
```
### Data Merge
```
result_summary_df = exposure_keys_summary_df.merge(
new_tek_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
official_stats_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = confirmed_eu_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df = confirmed_es_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string)
result_summary_df = result_summary_df.merge(source_regions_for_summary_df, how="left")
result_summary_df.set_index(["sample_date", "source_regions"], inplace=True)
result_summary_df.drop(columns=["sample_date_string"], inplace=True)
result_summary_df.sort_index(ascending=False, inplace=True)
result_summary_df.head()
with pd.option_context("mode.use_inf_as_na", True):
result_summary_df = result_summary_df.fillna(0).astype(int)
result_summary_df["teks_per_shared_diagnosis"] = \
(result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case"] = \
(result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(result_summary_df.shared_diagnoses_es / result_summary_df.covid_cases_es).fillna(0)
result_summary_df.head(daily_plot_days)
def compute_aggregated_results_summary(days) -> pd.DataFrame:
aggregated_result_summary_df = result_summary_df.copy()
aggregated_result_summary_df["covid_cases_for_ratio"] = \
aggregated_result_summary_df.covid_cases.mask(
aggregated_result_summary_df.shared_diagnoses == 0, 0)
aggregated_result_summary_df["covid_cases_for_ratio_es"] = \
aggregated_result_summary_df.covid_cases_es.mask(
aggregated_result_summary_df.shared_diagnoses_es == 0, 0)
aggregated_result_summary_df = aggregated_result_summary_df \
.sort_index(ascending=True).fillna(0).rolling(days).agg({
"covid_cases": "sum",
"covid_cases_es": "sum",
"covid_cases_for_ratio": "sum",
"covid_cases_for_ratio_es": "sum",
"shared_teks_by_generation_date": "sum",
"shared_teks_by_upload_date": "sum",
"shared_diagnoses": "sum",
"shared_diagnoses_es": "sum",
}).sort_index(ascending=False)
with pd.option_context("mode.use_inf_as_na", True):
aggregated_result_summary_df = aggregated_result_summary_df.fillna(0).astype(int)
aggregated_result_summary_df["teks_per_shared_diagnosis"] = \
(aggregated_result_summary_df.shared_teks_by_upload_date /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case"] = \
(aggregated_result_summary_df.shared_diagnoses /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(aggregated_result_summary_df.shared_diagnoses_es /
aggregated_result_summary_df.covid_cases_for_ratio_es).fillna(0)
return aggregated_result_summary_df
aggregated_result_with_7_days_window_summary_df = compute_aggregated_results_summary(days=7)
aggregated_result_with_7_days_window_summary_df.head()
last_7_days_summary = aggregated_result_with_7_days_window_summary_df.to_dict(orient="records")[1]
last_7_days_summary
aggregated_result_with_14_days_window_summary_df = compute_aggregated_results_summary(days=13)
last_14_days_summary = aggregated_result_with_14_days_window_summary_df.to_dict(orient="records")[1]
last_14_days_summary
```
## Report Results
```
display_column_name_mapping = {
"sample_date": "Sample\u00A0Date\u00A0(UTC)",
"source_regions": "Source Countries",
"datetime_utc": "Timestamp (UTC)",
"upload_date": "Upload Date (UTC)",
"generation_to_upload_days": "Generation to Upload Period in Days",
"region": "Backend",
"region_x": "Backend\u00A0(A)",
"region_y": "Backend\u00A0(B)",
"common_teks": "Common TEKs Shared Between Backends",
"common_teks_fraction": "Fraction of TEKs in Backend (A) Available in Backend (B)",
"covid_cases": "COVID-19 Cases (Source Countries)",
"shared_teks_by_generation_date": "Shared TEKs by Generation Date (Source Countries)",
"shared_teks_by_upload_date": "Shared TEKs by Upload Date (Source Countries)",
"shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date (Source Countries)",
"shared_diagnoses": "Shared Diagnoses (Source Countries – Estimation)",
"teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis (Source Countries)",
"shared_diagnoses_per_covid_case": "Usage Ratio (Source Countries)",
"covid_cases_es": "COVID-19 Cases (Spain)",
"app_downloads_es": "App Downloads (Spain – Official)",
"shared_diagnoses_es": "Shared Diagnoses (Spain – Official)",
"shared_diagnoses_per_covid_case_es": "Usage Ratio (Spain)",
}
summary_columns = [
"covid_cases",
"shared_teks_by_generation_date",
"shared_teks_by_upload_date",
"shared_teks_uploaded_on_generation_date",
"shared_diagnoses",
"teks_per_shared_diagnosis",
"shared_diagnoses_per_covid_case",
"covid_cases_es",
"app_downloads_es",
"shared_diagnoses_es",
"shared_diagnoses_per_covid_case_es",
]
summary_percentage_columns= [
"shared_diagnoses_per_covid_case_es",
"shared_diagnoses_per_covid_case",
]
```
### Daily Summary Table
```
result_summary_df_ = result_summary_df.copy()
result_summary_df = result_summary_df[summary_columns]
result_summary_with_display_names_df = result_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
result_summary_with_display_names_df
```
### Daily Summary Plots
```
result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \
.droplevel(level=["source_regions"]) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar(
title=f"Daily Summary",
rot=45, subplots=True, figsize=(15, 30), legend=False)
ax_ = summary_ax_list[0]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.95)
_ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist()))
for percentage_column in summary_percentage_columns:
percentage_column_index = summary_columns.index(percentage_column)
summary_ax_list[percentage_column_index].yaxis \
.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))
```
### Daily Generation to Upload Period Table
```
display_generation_to_upload_period_pivot_df = \
generation_to_upload_period_pivot_df \
.head(backend_generation_days)
display_generation_to_upload_period_pivot_df \
.head(backend_generation_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping)
fig, generation_to_upload_period_pivot_table_ax = plt.subplots(
figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df)))
generation_to_upload_period_pivot_table_ax.set_title(
"Shared TEKs Generation to Upload Period Table")
sns.heatmap(
data=display_generation_to_upload_period_pivot_df
.rename_axis(columns=display_column_name_mapping)
.rename_axis(index=display_column_name_mapping),
fmt=".0f",
annot=True,
ax=generation_to_upload_period_pivot_table_ax)
generation_to_upload_period_pivot_table_ax.get_figure().tight_layout()
```
### Hourly Summary Plots
```
hourly_summary_ax_list = hourly_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.plot.bar(
title=f"Last 24h Summary",
rot=45, subplots=True, legend=False)
ax_ = hourly_summary_ax_list[-1]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.9)
_ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist()))
```
### Publish Results
```
github_repository = os.environ.get("GITHUB_REPOSITORY")
if github_repository is None:
github_repository = "pvieito/Radar-STATS"
github_project_base_url = "https://github.com/" + github_repository
display_formatters = {
display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case_es"]: lambda x: f"{x:.2%}" if x != 0 else "",
}
general_columns = \
list(filter(lambda x: x not in display_formatters, display_column_name_mapping.values()))
general_formatter = lambda x: f"{x}" if x != 0 else ""
display_formatters.update(dict(map(lambda x: (x, general_formatter), general_columns)))
daily_summary_table_html = result_summary_with_display_names_df \
.head(daily_plot_days) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.to_html(formatters=display_formatters)
multi_backend_summary_table_html = multi_backend_summary_df \
.head(daily_plot_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(formatters=display_formatters)
def format_multi_backend_cross_sharing_fraction(x):
if pd.isna(x):
return "-"
elif round(x * 100, 1) == 0:
return ""
else:
return f"{x:.1%}"
multi_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(
classes="table-center",
formatters=display_formatters,
float_format=format_multi_backend_cross_sharing_fraction)
multi_backend_cross_sharing_summary_table_html = \
multi_backend_cross_sharing_summary_table_html \
.replace("<tr>","<tr style=\"text-align: center;\">")
extraction_date_result_summary_df = \
result_summary_df[result_summary_df.index.get_level_values("sample_date") == extraction_date]
extraction_date_result_hourly_summary_df = \
hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour]
covid_cases = \
extraction_date_result_summary_df.covid_cases.item()
shared_teks_by_generation_date = \
extraction_date_result_summary_df.shared_teks_by_generation_date.item()
shared_teks_by_upload_date = \
extraction_date_result_summary_df.shared_teks_by_upload_date.item()
shared_diagnoses = \
extraction_date_result_summary_df.shared_diagnoses.item()
teks_per_shared_diagnosis = \
extraction_date_result_summary_df.teks_per_shared_diagnosis.item()
shared_diagnoses_per_covid_case = \
extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item()
shared_teks_by_upload_date_last_hour = \
extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int)
display_source_regions = ", ".join(report_source_regions)
if len(report_source_regions) == 1:
display_brief_source_regions = report_source_regions[0]
else:
display_brief_source_regions = f"{len(report_source_regions)} 🇪🇺"
def get_temporary_image_path() -> str:
return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png")
def save_temporary_plot_image(ax):
if isinstance(ax, np.ndarray):
ax = ax[0]
media_path = get_temporary_image_path()
ax.get_figure().savefig(media_path)
return media_path
def save_temporary_dataframe_image(df):
import dataframe_image as dfi
df = df.copy()
df_styler = df.style.format(display_formatters)
media_path = get_temporary_image_path()
dfi.export(df_styler, media_path)
return media_path
summary_plots_image_path = save_temporary_plot_image(
ax=summary_ax_list)
summary_table_image_path = save_temporary_dataframe_image(
df=result_summary_with_display_names_df)
hourly_summary_plots_image_path = save_temporary_plot_image(
ax=hourly_summary_ax_list)
multi_backend_summary_table_image_path = save_temporary_dataframe_image(
df=multi_backend_summary_df)
generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image(
ax=generation_to_upload_period_pivot_table_ax)
```
### Save Results
```
report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-"
result_summary_df.to_csv(
report_resources_path_prefix + "Summary-Table.csv")
result_summary_df.to_html(
report_resources_path_prefix + "Summary-Table.html")
hourly_summary_df.to_csv(
report_resources_path_prefix + "Hourly-Summary-Table.csv")
multi_backend_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Summary-Table.csv")
multi_backend_cross_sharing_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Cross-Sharing-Summary-Table.csv")
generation_to_upload_period_pivot_df.to_csv(
report_resources_path_prefix + "Generation-Upload-Period-Table.csv")
_ = shutil.copyfile(
summary_plots_image_path,
report_resources_path_prefix + "Summary-Plots.png")
_ = shutil.copyfile(
summary_table_image_path,
report_resources_path_prefix + "Summary-Table.png")
_ = shutil.copyfile(
hourly_summary_plots_image_path,
report_resources_path_prefix + "Hourly-Summary-Plots.png")
_ = shutil.copyfile(
multi_backend_summary_table_image_path,
report_resources_path_prefix + "Multi-Backend-Summary-Table.png")
_ = shutil.copyfile(
generation_to_upload_period_pivot_table_image_path,
report_resources_path_prefix + "Generation-Upload-Period-Table.png")
```
### Publish Results as JSON
```
def generate_summary_api_results(df: pd.DataFrame) -> list:
api_df = df.reset_index().copy()
api_df["sample_date_string"] = \
api_df["sample_date"].dt.strftime("%Y-%m-%d")
api_df["source_regions"] = \
api_df["source_regions"].apply(lambda x: x.split(","))
return api_df.to_dict(orient="records")
summary_api_results = \
generate_summary_api_results(df=result_summary_df)
today_summary_api_results = \
generate_summary_api_results(df=extraction_date_result_summary_df)[0]
summary_results = dict(
backend_identifier=report_backend_identifier,
source_regions=report_source_regions,
extraction_datetime=extraction_datetime,
extraction_date=extraction_date,
extraction_date_with_hour=extraction_date_with_hour,
last_hour=dict(
shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour,
shared_diagnoses=0,
),
today=today_summary_api_results,
last_7_days=last_7_days_summary,
last_14_days=last_14_days_summary,
daily_results=summary_api_results)
summary_results = \
json.loads(pd.Series([summary_results]).to_json(orient="records"))[0]
with open(report_resources_path_prefix + "Summary-Results.json", "w") as f:
json.dump(summary_results, f, indent=4)
```
### Publish on README
```
with open("Data/Templates/README.md", "r") as f:
readme_contents = f.read()
readme_contents = readme_contents.format(
extraction_date_with_hour=extraction_date_with_hour,
github_project_base_url=github_project_base_url,
daily_summary_table_html=daily_summary_table_html,
multi_backend_summary_table_html=multi_backend_summary_table_html,
multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html,
display_source_regions=display_source_regions)
with open("README.md", "w") as f:
f.write(readme_contents)
```
### Publish on Twitter
```
enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER")
github_event_name = os.environ.get("GITHUB_EVENT_NAME")
if enable_share_to_twitter and github_event_name == "schedule" and \
(shared_teks_by_upload_date_last_hour or not are_today_results_partial):
import tweepy
twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"]
twitter_api_auth_keys = twitter_api_auth_keys.split(":")
auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1])
auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3])
api = tweepy.API(auth)
summary_plots_media = api.media_upload(summary_plots_image_path)
summary_table_media = api.media_upload(summary_table_image_path)
generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path)
media_ids = [
summary_plots_media.media_id,
summary_table_media.media_id,
generation_to_upload_period_pivot_table_image_media.media_id,
]
if are_today_results_partial:
today_addendum = " (Partial)"
else:
today_addendum = ""
def format_shared_diagnoses_per_covid_case(value) -> str:
if value == 0:
return "–"
return f"≤{value:.2%}"
display_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=shared_diagnoses_per_covid_case)
display_last_14_days_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case"])
display_last_14_days_shared_diagnoses_per_covid_case_es = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case_es"])
status = textwrap.dedent(f"""
#RadarCOVID – {extraction_date_with_hour}
Today{today_addendum}:
- Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour)
- Shared Diagnoses: ≤{shared_diagnoses:.0f}
- Usage Ratio: {display_shared_diagnoses_per_covid_case}
Last 14 Days:
- Usage Ratio (Estimation): {display_last_14_days_shared_diagnoses_per_covid_case}
- Usage Ratio (Official): {display_last_14_days_shared_diagnoses_per_covid_case_es}
Info: {github_project_base_url}#documentation
""")
status = status.encode(encoding="utf-8")
api.update_status(status=status, media_ids=media_ids)
```
| github_jupyter |
```
%%time
import malaya
isu_kerajaan = [
'Kenyataan kontroversi Setiausaha Agung Barisan Nasional (BN), Datuk Seri Mohamed Nazri Aziz berhubung sekolah vernakular merupakan pandangan peribadi beliau',
'Timbalan Presiden UMNO, Datuk Seri Mohamad Hasan berkata, kenyataan tersebut tidak mewakili pendirian serta pandangan UMNO \n\nkerana parti itu menghormati serta memahami keperluan sekolah vernakular dalam negara',
'"Saya ingin menegaskan dua perkara penting',
'Pertama pendirian beliau tersebut adalah pandangan peribadi yang tidak mewakili pendirian dan pandangan UMNO',
'"Kedua UMNO sebagai sebuah parti sangat menghormati dan memahami keperluan sekolah vernakular di Malaysia',
'UMNO berpendirian sekolah jenis ini perlu terus wujud di negara kita," katanya dalam satu kenyataan akhbar malam ini',
'Mohamed Nazri semalam menjelaskan, kenyataannya mengenai sekolah jenis kebangsaan Cina dan Tamil baru-baru ini disalah petik pihak media',
'Kata Nazri dalam kenyataannya itu, beliau menekankan bahawa semua pihak perlu menghormati hak orang Melayu dan bumiputera',
'Mohamad yang menjalankan tugas-tugas Presiden UMNO berkata, UMNO konsisten dengan pendirian itu dalam mengiktiraf kepelbagaian bangsa dan etnik termasuk hak untuk beragama serta mendapat pendidikan',
'Menurut beliau, persefahaman dan keupayaan meraikan kepelbagaian itu menjadi kelebihan dan kekuatan UMNO dan BN selama ini',
'Kata beliau, komitmen UMNO dan BN berhubung perkara itu dapat dilihat dengan jelas dalam bentuk sokongan infrastruktur, pengiktirafan dan pemberian peruntukan yang diperlukan',
'"Saya berharap isu ini tidak dipolitikkan secara tidak bertanggungjawab oleh mana-mana pihak terutama dengan cara yang tidak menggambarkan pendirian sebenar UMNO dan BN," katanya',
'Beliau turut menegaskan Mohamed Nazri telah mengambil pertanggungjawaban dengan membuat penjelasan maksud sebenarnya ucapanny di Semenyih, Selangor tersebut',
]
isu_string = '\n\n\n\nDUA legenda hebat dan ‘The living legend’ ini sudah memartabatkan bidang muzik sejak lebih tiga dekad lalu. Jika Datuk Zainal Abidin, 59, dikenali sebagai penyanyi yang memperjuangkan konsep ‘world music’, Datuk Sheila Majid, 55, pula lebih dikenali dengan irama jazz dan R&B.\n\nNamun, ada satu persamaan yang mengeratkan hubungan mereka kerana sama-sama mencintai bidang muzik sejak dulu.\n\nKetika ditemui dalam sesi fotografi yang diatur di Balai Berita, baru-baru ini, Zainal berkata, dia lebih ‘senior’ daripada Sheila kerana bermula dengan kumpulan Headwind sebelum menempa nama sebagai penyanyi solo.\n\n“Saya mula berkawan rapat dengan Sheila ketika sama-sama bernaung di bawah pengurusan Roslan Aziz Productions (RAP) selepas membina karier sebagai artis solo.\n\n“Namun, selepas tidak lagi bernaung di bawah RAP, kami juga membawa haluan karier seni masing-masing selepas itu,” katanya.\n\nJusteru katanya, dia memang menanti peluang berganding dengan Sheila dalam satu konsert.\n\nPenyanyi yang popular dengan lagu Hijau dan Ikhlas Tapi Jauh itu mengakui mereka memang ada keserasian ketika bergandingan kerana membesar pada era muzik yang sama.\n\n“Kami memang meminati bidang muzik dan saling memahami antara satu sama lain. Mungkin kerana kami berdua sudah berada pada tahap di puncak karier muzik masing-masing.\n\n“Saya bersama Sheila serta Datuk Afdlin Shauki akan terbabit dalam satu segmen yang ditetapkan.\n\n“Selain persembahan solo, saya juga berduet dengan Sheila dan Afdlin dalam segmen interaktif ini. Setiap penyanyi akan menyampaikan enam hingga tujuh lagu setiap seorang sepanjang konsert yang berlangsung tiga hari ini,” katanya.\n\nBagi Sheila pula, dia memang ada terbabit dengan beberapa persembahan bersama Zainal cuma tiada publisiti ketika itu.\n\n“Kami pernah terbabit dengan showcase dan majlis korporat sebelum ini. Selain itu, Zainal juga terbabit dengan Konsert Legenda yang membabitkan jelajah empat lokasi sebelum ini.\n\n“Sebab itu, saya sukar menolak untuk bekerjasama dengannya dalam Festival KL Jamm yang dianjurkan buat julung kali dan berkongsi pentas dalam satu konsert bertaraf antarabangsa,” katanya.\n\n\n\nFESTIVAL KL Jamm bakal menggabungkan pelbagai genre muzik seperti rock, hip hop, jazz dan pop dengan lebih 100 persembahan, 20 ‘showcase’ dan pameran.\n\nKonsert berbayar\n\n\n\nMewakili golongan anak seni, Sheila menaruh harapan semoga Festival KL Jamm akan menjadi platform buat artis yang sudah ada nama dan artis muda untuk membuat persembahan, sekali gus sama-sama memartabatkan industri muzik tempatan.\n\nMenurut Sheila, dia juga mencadangkan lebih banyak tempat diwujudkan untuk menggalakkan artis muda membuat persembahan, sekali gus menggilap bakat mereka.\n\n“Berbanding pada zaman saya dulu, artis muda sekarang tidak banyak tempat khusus untuk mereka menyanyi dan menonjolkan bakat di tempat awam.\n\n“Rata-rata hanya sekadar menyanyi di laman Instagram dan cuma dikenali menerusi satu lagu. Justeru, bagaimana mereka mahu buat showcase kalau hanya dikenali dengan satu lagu?” katanya.\n\nPada masa sama, Sheila juga merayu peminat tempatan untuk sama-sama memberi sokongan pada penganjuran festival KL Jamm sekali gus mencapai objektifnya.\n\n“Peminat perlu ubah persepsi negatif mereka dengan menganggap persembahan artis tempatan tidak bagus.\n\n“Kemasukan artis luar juga perlu dilihat dari sudut yang positif kerana kita perlu belajar bagaimana untuk menjadi bagus seperti mereka,” katanya.\n\nSementara itu, Zainal pula berharap festival itu akan mendidik orang ramai untuk menonton konsert berbayar serta memberi sokongan pada artis tempatan.\n\n“Ramai yang hanya meminati artis tempatan tetapi tidak mahu mengeluarkan sedikit wang untuk membeli tiket konsert mereka.\n\n“Sedangkan artis juga menyanyi untuk kerjaya dan ia juga punca pendapatan bagi menyara hidup,” katanya.\n\nFestival KL Jamm bakal menghimpunkan barisan artis tempatan baru dan nama besar dalam konsert iaitu Datuk Ramli Sarip, Datuk Afdlin Shauki, Zamani, Amelina, Radhi OAG, Dr Burn, Santesh, Rabbit Mac, Sheezy, kumpulan Bunkface, Ruffedge, Pot Innuendo, artis dari Kartel (Joe Flizzow, Sona One, Ila Damia, Yung Raja, Faris Jabba dan Abu Bakarxli) dan Malaysia Pasangge (artis India tempatan).\n\nManakala, artis antarabangsa pula membabitkan J Arie (Hong Kong), NCT Dream (Korea Selatan) dan DJ Sura (Korea Selatan).\n\nKL Jamm dianjurkan Music Unlimited International Sdn Bhd dan bakal menggabungkan pelbagai genre muzik seperti rock, hip hop, jazz dan pop dengan lebih 100 persembahan, 20 ‘showcase’, pameran dan perdagangan berkaitan.\n\nFestival tiga hari itu bakal berlangsung di Pusat Pameran dan Perdagangan Antarabangsa Malaysia (MITEC), Kuala Lumpur pada 26 hingga 28 April ini.\n\nMaklumat mengenai pembelian tiket dan keterangan lanjut boleh melayari www.kljamm.com.'
```
We also can give a string, Malaya will always split a string by into multiple sentences.
Important parameters,
1. `top_k`, number of summarized strings.
2. `important_words`, number of important words.
## List available skip-thought models
```
malaya.summarize.available_skipthought()
```
* ``'lstm'`` - LSTM skip-thought deep learning model trained on news dataset. Hopefully we can train on wikipedia dataset.
* ``'residual-network'`` - CNN residual network with Bahdanau Attention skip-thought deep learning model trained on wikipedia dataset.
We use TextRank for scoring algorithm.
## Encoder summarization
We leverage the power of deep encoder models like skip-thought, BERT and XLNET to do extractive summarization for us.
#### Use skip-thought
```
lstm = malaya.summarize.deep_skipthought(model = 'lstm')
encoder = malaya.summarize.encoder(lstm)
encoder.summarize(isu_kerajaan, important_words = 10)
```
Problem with skip-thought models, `top-words` suggested are really not good, because skip-thought trained to leverage sentence level, not word level. How about XLNET or BERT? Lets we try XLNET.
```
xlnet = malaya.transformer.load(model = 'xlnet', size = 'base')
encoder = malaya.summarize.encoder(xlnet)
encoder.summarize(isu_kerajaan, important_words = 10, method = 'mean')
```
Much much better!
## Train LSA model
Important parameters,
1. `vectorizer`, vectorizer technique. Allowed values:
* ``'bow'`` - Bag of Word.
* ``'tfidf'`` - Term frequency inverse Document Frequency.
* ``'skip-gram'`` - Bag of Word with skipping certain n-grams.
2. `ngram`, n-grams size to train a corpus.
3. `important_words`, number of important words.
4. `top_k`, number of summarized strings.
```
malaya.summarize.lsa(isu_kerajaan,important_words=10)
```
We can use `tfidf` as vectorizer.
```
malaya.summarize.lsa(isu_kerajaan,important_words=10, ngram = (1,3), vectorizer = 'tfidf')
```
We can use `skip-gram` as vectorizer, and can override `skip` value.
```
malaya.summarize.lsa(isu_kerajaan,important_words=10, ngram = (1,3), vectorizer = 'skip-gram', skip = 3)
malaya.summarize.lsa(isu_string,important_words=10)
```
## Train LDA model
```
malaya.summarize.lda(isu_kerajaan,important_words=10)
malaya.summarize.lda(isu_string,important_words=10, vectorizer = 'skip-gram')
```
## Load doc2vec summarization
We need to load word vector provided by Malaya. `doc2vec` does not return `top-words`, so parameter `important_words` cannot be use.
Important parameters,
1. `aggregation`, aggregation function to accumulate word vectors. Default is `mean`.
* ``'mean'`` - mean.
* ``'min'`` - min.
* ``'max'`` - max.
* ``'sum'`` - sum.
* ``'sqrt'`` - square root.
#### Using word2vec
I will use `load_news`, word2vec from wikipedia took a very long time for my noob laptop,
```
embedded_news = malaya.wordvector.load_news(256)
w2v_wiki = malaya.wordvector.load(embedded_news['nce_weights'],
embedded_news['dictionary'])
malaya.summarize.doc2vec(w2v_wiki, isu_kerajaan, soft = False, top_k = 5)
```
| github_jupyter |
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Gena/hillshade.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Gena/hillshade.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Gena/hillshade.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Gena/hillshade.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
from ee_plugin.contrib import palettes
dem = ee.Image("JAXA/ALOS/AW3D30_V1_1").select('MED')
dem = dem.updateMask(dem.gt(0))
palette = palettes.cb['Pastel1'][7]
#palette = ['black', 'white']
rgb = dem.visualize(**{'min': 0, 'max': 5000, 'palette': palette })
hsv = rgb.unitScale(0, 255).rgbToHsv()
extrusion = 30
weight = 0.7
hs = ee.Terrain.hillshade(dem.multiply(extrusion), 315, 35).unitScale(10, 250).resample('bicubic')
hs = hs.multiply(weight).add(hsv.select('value').multiply(1 - weight))
hsv = hsv.addBands(hs.rename('value'), ['value'], True)
rgb = hsv.hsvToRgb()
Map.setCenter(0, 28, 2.5)
Map.addLayer(rgb, {}, 'ALOS DEM', True)
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
| github_jupyter |
# Xopt class, TNK test function
This is the class method for running Xopt.
TNK function
$n=2$ variables:
$x_i \in [0, \pi], i=1,2$
Objectives:
- $f_i(x) = x_i$
Constraints:
- $g_1(x) = -x_1^2 -x_2^2 + 1 + 0.1 \cos\left(16 \arctan \frac{x_1}{x_2}\right) \le 0$
- $g_2(x) = (x_1 - 1/2)^2 + (x_2-1/2)^2 \le 0.5$
```
from xopt import Xopt
import matplotlib.pyplot as plt
from botorch.utils.multi_objective.pareto import is_non_dominated
%matplotlib inline
import os
SMOKE_TEST = os.environ.get('SMOKE_TEST')
# To see log messages
from xopt import output_notebook
output_notebook()
```
The `Xopt` object can be instantiated from a JSON or YAML file, or a dict, with the proper structure.
Here we will make one
```
import yaml
# Make a proper input file.
YAML="""
xopt: {output_path: null}
algorithm:
name: mobo
options:
ref: [1.4, 1.4]
n_initial_samples: 5
n_steps: 10
generator_options:
batch_size: 4
simulation:
name: test_TNK
evaluate: xopt.tests.evaluators.TNK.evaluate_TNK
vocs:
name: TNK_test
description: null
simulation: test_TNK
templates: null
variables:
x1: [0, 3.14159]
x2: [0, 3.14159]
objectives: {y1: MINIMIZE, y2: MINIMIZE}
constraints:
c1: [GREATER_THAN, 0]
c2: ['LESS_THAN', 0.5]
linked_variables: {}
constants: {a: dummy_constant}
"""
config = yaml.safe_load(YAML)
# Optional: Connect the function directly
#from xopt.evaluators.test_TNK import evaluate_TNK
#config['simulation']['evaluate'] = evaluate_TNK
if SMOKE_TEST:
config['algorithm']['options']['n_steps'] = 3
config['algorithm']['options']['generator_options']['num_restarts'] = 2
config['algorithm']['options']['generator_options']['raw_samples'] = 2
X = Xopt(config)
X
```
# Run MOBO
MOBO is designed to run in serial or parallel
```
# Pick one of these
from concurrent.futures import ThreadPoolExecutor as PoolExecutor
#from concurrent.futures import ProcessPoolExecutor as PoolExecutor
executor = PoolExecutor()
# This will also work.
#executor=None
%%time
X.run(executor=executor)
```
# Plot
```
fig, ax = plt.subplots()
# get results and get valid observations
results = X.results
train_y = results['objectives']
valid_y = train_y[results['feasibility'].flatten()]
# plot results
ax.plot(train_y[:, 0], train_y[:, 1], '.')
ax.set_ylabel('$f_2$')
ax.set_xlabel('$f_1$')
# highlight Pareto front, ONLY using valid observations (note botorch assumes maximization when determing dominant points)
non_dom = is_non_dominated(-valid_y)
ax.plot(valid_y[:,0][non_dom],valid_y[:,1][non_dom],'C1o')
plt.show()
# Cleanup
!rm results.json
```
| github_jupyter |
# 範例 : (Kaggle)房價預測
# [教學目標]
- 以下用房價預測資料, 展示特徵篩選的作法
# [範例重點]
- 觀察相關係數過濾法的寫作方式(In[2], Out[2], In[4], Out[4]), 以及對線性迴歸與梯度提升機有什麼影響 (In[5]~In[8], Out[5]~Out[8])
- 觀察L1 嵌入法的寫作方式(In[9]~In[11], Out[9]~Out[11]), 以及對線性迴歸與梯度提升機有什麼影響 (In[12], Out[12], In[13], Out[13])
```
# 做完特徵工程前的所有準備
import pandas as pd
import numpy as np
import copy
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
data_path = 'data/'
df = pd.read_csv(data_path + 'house_train.csv.gz')
train_Y = np.log1p(df['SalePrice'])
df = df.drop(['Id'] , axis=1)
df.head()
# 計算df整體相關係數, 並繪製成熱圖
import seaborn as sns
import matplotlib.pyplot as plt
corr = df.corr()
sns.heatmap(corr)
plt.show()
# 記得刪除 SalePrice
df = df.drop(['SalePrice'] , axis=1)
#只取 int64, float64 兩種數值型欄位, 存於 num_features 中
num_features = []
for dtype, feature in zip(df.dtypes, df.columns):
if dtype == 'float64' or dtype == 'int64':
num_features.append(feature)
print(f'{len(num_features)} Numeric Features : {num_features}\n')
# 削減文字型欄位, 只剩數值型欄位
df = df[num_features]
df = df.fillna(-1)
MMEncoder = MinMaxScaler()
df.head()
# 篩選相關係數大於 0.1 或小於 -0.1 的特徵
high_list = list(corr[(corr['SalePrice']>0.1) | (corr['SalePrice']<-0.1)].index)
high_list.pop(-1)
print(high_list)
# 原始特徵 + 線性迴歸
train_X = MMEncoder.fit_transform(df)
estimator = LinearRegression()
cross_val_score(estimator, train_X, train_Y, cv=5).mean()
# 高相關性特徵 + 線性迴歸
train_X = MMEncoder.fit_transform(df[high_list])
cross_val_score(estimator, train_X, train_Y, cv=5).mean()
# 原始特徵 + 梯度提升樹
train_X = MMEncoder.fit_transform(df)
estimator = GradientBoostingRegressor()
cross_val_score(estimator, train_X, train_Y, cv=5).mean()
# 高相關性特徵 + 梯度提升樹
train_X = MMEncoder.fit_transform(df[high_list])
cross_val_score(estimator, train_X, train_Y, cv=5).mean()
from sklearn.linear_model import Lasso
L1_Reg = Lasso(alpha=0.001)
train_X = MMEncoder.fit_transform(df)
L1_Reg.fit(train_X, train_Y)
L1_Reg.coef_
L1_mask = list((L1_Reg.coef_>0) | (L1_Reg.coef_<0))
df.columns[L1_mask]
from itertools import compress
L1_mask = list((L1_Reg.coef_>0) | (L1_Reg.coef_<0))
L1_list = list(compress(list(df), list(L1_mask)))
L1_list
# L1_Embedding 特徵 + 線性迴歸
train_X = MMEncoder.fit_transform(df[L1_list])
estimator = LinearRegression()
cross_val_score(estimator, train_X, train_Y, cv=5).mean()
# L1_Embedding 特徵 + 梯度提升樹
train_X = MMEncoder.fit_transform(df[L1_list])
estimator = GradientBoostingRegressor()
cross_val_score(estimator, train_X, train_Y, cv=5).mean()
```
# 作業1
* 鐵達尼生存率預測中,試著變更兩種以上的相關係數門檻值,觀察預測能力是否提升?
# 作業2
* 續上題,使用 L1 Embedding 做特徵選擇(自訂門檻),觀察預測能力是否提升?
| github_jupyter |
# Galaxy Catalog Plots
This notebook reads the [LSST DM galaxy catalog](http://weaklensingdeblending.readthedocs.org/en/latest/catalog.html), calculates some size and shape parameters, and makes plots to summarize the inputs to the `simulate` program. Plots to summarize the `simulate` outputs are generated in a [separate notebook](http://weaklensingdeblending.readthedocs.org/en/latest/notebooks.html).
## Initialization
```
%pylab inline
import sys
sys.path.append('..')
import descwl
```
## Read Galaxy Catalog
```
reader = descwl.catalog.Reader('../OneDegSq.fits')
catalog = reader.table
num_galaxies = len(catalog)
```
## Calculate Derived Quantities
```
beta = np.zeros(num_galaxies)
q_disk = np.zeros(num_galaxies)
q_bulge = np.zeros(num_galaxies)
hlr_disk = np.zeros(num_galaxies)
hlr_bulge = np.zeros(num_galaxies)
# Disk + bulge < 1 when an AGN component is present.
total_flux = catalog['fluxnorm_disk']+catalog['fluxnorm_bulge']+catalog['fluxnorm_agn']
frac_disk = catalog['fluxnorm_disk']/total_flux
frac_bulge = catalog['fluxnorm_bulge']/total_flux
# Disk and bulge position angles are the same when both components are present.
has_disk = (catalog['fluxnorm_disk'] > 0)
has_bulge = (catalog['fluxnorm_bulge'] > 0)
has_both = np.logical_and(has_disk,has_bulge)
assert np.array_equal(catalog['pa_disk'][has_both],catalog['pa_bulge'][has_both])
beta[has_disk] = np.radians(catalog['pa_disk'][has_disk])
beta[has_bulge] = np.radians(catalog['pa_bulge'][has_bulge])
# Calculate size and shape parameters for each component.
q_disk[has_disk] = catalog['b_d'][has_disk]/catalog['a_d'][has_disk]
q_bulge[has_bulge] = catalog['b_b'][has_bulge]/catalog['a_b'][has_bulge]
hlr_disk[has_disk] = np.sqrt(catalog['a_d'][has_disk]*catalog['b_d'][has_disk])
hlr_bulge[has_bulge] = np.sqrt(catalog['a_b'][has_bulge]*catalog['b_b'][has_bulge])
# Fraction of sources with disk + bulge mixtures
num_mixed = np.count_nonzero(np.logical_and(has_disk,has_bulge))
print '%d of %d galaxies (%.1f%%) have disk+bulge mixtures' % (
num_mixed,num_galaxies,100.*num_mixed/num_galaxies)
Q = np.zeros((num_galaxies,2,2))
for i,row in enumerate(catalog):
if has_disk[i]:
Q[i] += frac_disk[i]*descwl.model.sersic_second_moments(1,hlr_disk[i],q_disk[i],beta[i])
if has_bulge[i]:
Q[i] += frac_bulge[i]*descwl.model.sersic_second_moments(1,hlr_bulge[i],q_bulge[i],beta[i])
sigma_m,sigma_p,a,b,beta2,e1,e2 = descwl.model.moments_size_and_shape(Q)
emag = np.sqrt(e1**2+e2**2)
erms = lambda mask: 0.5*(np.std(e1[mask])+np.std(e2[mask]))
ri_color = catalog['r_ab']-catalog['i_ab']
```
## Reproduce Fig 1 of Chang 2013
Chang 2013 is available at http://arxiv.org/abs/1305.0793 and published as [Mon. Not. R. Astron. Soc. 434, 2121 (Sep. 2013)](http://dx.doi.org/10.1093/mnras/stt1156). Note that there was an [erratum](http://dx.doi.org/10.1093/mnras/stu2553) submitted Oct 2014 (v3 on arxiv) that changes Fig.1(c).
Note that the subplots below are in a slightly different order than in the paper, to simplify the layout, but the labels (a-f) match those used in the paper.
```
nbins = 50;
fig1abce = plt.figure(figsize=(10,8));
plt.subplot(2,2,1);
plt.hist(catalog['redshift'],bins=nbins,range=(0,6),histtype='step',color='black',label='redshift');
plt.xlabel('(a) Galaxy Redshift');
plt.ylabel('Galaxies / %.2f' % (6./nbins));
plt.legend();
plt.subplot(2,2,2);
plt.hist(catalog['r_ab'],bins=nbins,range=(16,30),histtype='step',color='black',label='$r_{AB}$');
plt.hist(catalog['i_ab'],bins=nbins,range=(16,30),histtype='step',color='red',label='$i_{AB}$');
plt.xlabel('(b) Galaxy AB Magnitude');
plt.ylabel('Galaxies / %.2f' % ((30.-16.)/nbins));
plt.legend(loc = 'upper left');
plt.subplot(2,2,3);
plt.hist(sigma_p,bins=nbins,range=(0,3),histtype='step',color='black',label='second-moment radius');
plt.xlabel('(c) Galaxy second moment radius (arcseconds)');
plt.ylabel('Galaxies / %.2f' % (3./nbins));
plt.legend();
plt.subplot(2,2,4);
plt.hist(e1,bins=nbins,range=(-1,+1),histtype='step',color='black',label='$\epsilon_1$');
plt.hist(e2,bins=nbins,range=(-1,+1),histtype='step',color='red',label='$\epsilon_2$');
plt.xlabel('(e) Galaxy ellipticity');
plt.ylabel('Galaxies / %.2f' % (2./nbins));
plt.legend();
plt.tight_layout();
fig1d = plt.figure(figsize=(8,4));
plt.xlim(-0.2,3);
plt.ylim(-0.1,1.5);
plt.scatter(hlr_disk,hlr_bulge,s=2,c=frac_bulge,lw=0,rasterized=True);
plt.xlabel('(d) Disk half-light radius (arcsec)');
plt.ylabel('Bulge half-light radius (arcsec)');
plt.colorbar(label='bulge-to-total flux ratio',pad=0.01);
nbins = 50;
fig1f = plt.figure(figsize=(6.5,4));
plt.hist(emag,bins=nbins,range=(0,1),histtype='step',color='black');
plt.xlabel('(f) Ellipticity magnitude $|\epsilon|$');
plt.ylabel('Galaxies / %.2f' % (1./nbins));
plt.tight_layout();
```
## Prepare Figure for Paper
```
area = 60.*60. # sq. arcmin.
wgt=np.empty(num_galaxies)
wgt[:]=1./area
print 'Total catalog galaxies %ld = %.3f /sq.arcmin.' % (num_galaxies,num_galaxies/area)
gold = (catalog['i_ab'] < 25.3)
notgold = np.logical_not(gold)
num_gold = np.count_nonzero(gold)
print 'Gold sample density is %.3f /sq.arcmin.' % (num_gold/area)
print ' Gold RMS ellipticity %.3f' % erms(gold)
cut1,cut2=0.0,0.1
disk_only = (frac_bulge<=cut1)
big_bulge = (frac_bulge>cut2)
disk_bulge = np.logical_not(np.logical_or(disk_only,big_bulge))
gold_disk_only = np.logical_and(gold,disk_only)
gold_big_bulge = np.logical_and(gold,big_bulge)
gold_disk_bulge = np.logical_and(gold,disk_bulge)
stats = lambda mask,ntot: (np.count_nonzero(mask),100.*np.count_nonzero(mask)/ntot)
print '# disk-only = %6ld (%.1f%%)' % stats(disk_only,num_galaxies)
print '# disk-bulge = %6ld (%.1f%%)' % stats(disk_bulge,num_galaxies)
print '# big-bulge = %6ld (%.1f%%)' % stats(big_bulge,num_galaxies)
print '# gold disk-only = %6ld (%.1f%%)' % stats(gold_disk_only,num_gold)
print '# gold disk-bulge = %6ld (%.1f%%)' % stats(gold_disk_bulge,num_gold)
print '# gold big-bulge = %6ld (%.1f%%)' % stats(gold_big_bulge,num_gold)
print ' Gold disk-only RMS ellipticity %.3f' % erms(gold_disk_only)
print 'Gold disk-bulge RMS ellipticity %.3f' % erms(gold_disk_bulge)
print ' Gold big-bulge RMS ellipticity %.3f' % erms(gold_big_bulge)
fig = plt.figure(figsize=(15,12))
nrow,ncol=3,3
# magnitude
plt.subplot(nrow,ncol,1)
abmin,abmax=20.,29.
nbins=18
plt.hist(catalog['r_ab'],bins=nbins,range=(abmin,abmax),histtype='step',
weights=wgt,color='red',label='$r_{AB}$')
plt.hist(catalog['i_ab'],bins=nbins,range=(abmin,abmax),histtype='step',
weights=wgt,color='blue',label='$i_{AB}$')
plt.xlim(abmin,abmax)
plt.ylim(0.,62.)
plt.xlabel('Catalog AB magnitude')
plt.ylabel('Galaxies / sq.arcmin. / (%.1f mag)' % ((abmax-abmin)/nbins))
plt.legend(loc = 'upper left')
plt.annotate('',xy=(25.3,0.),xytext=(25.3,50.),xycoords='data',textcoords='data',
arrowprops={'arrowstyle':'-','color':'goldenrod'})
plt.annotate('(a)',xy=(0.5,0.9),xycoords='axes fraction',fontsize='large')
# redshift
plt.subplot(nrow,ncol,2)
zmax=5.
nbins=25
plt.hist(catalog['redshift'],bins=nbins,range=(0,zmax),histtype='step',color='black',
weights=wgt,label='All')
plt.hist(catalog['redshift'][gold],bins=25,range=(0,5),histtype='stepfilled',
weights=wgt[gold],facecolor='goldenrod',color='black',label='i < 25.3')
plt.xlabel('Catalog redshift')
plt.ylabel('Galaxies / sq.arcmin. / ($\Delta z = %.1f$)' % (zmax/nbins))
plt.legend()
plt.annotate('(b)',xy=(0.5,0.9),xycoords='axes fraction',fontsize='large')
# size
plt.subplot(nrow,ncol,3)
rmax=1.5
nbins=30
plt.hist(sigma_m,bins=nbins,range=(0,rmax),histtype='step',color='black',
weights=wgt,label='All');
plt.hist(sigma_m[gold],bins=nbins,range=(0,rmax),histtype='stepfilled',
facecolor='goldenrod',color='black',weights=wgt[gold],label='i < 25.3');
plt.xlim(0.,rmax)
plt.xlabel('Catalog size $\sigma_{-} = |Q|^{1/4}$ (arcsec.)')
plt.ylabel('Galaxies / sq. arcmin. / ($\Delta\sigma_{-}=%.1f$)' % (rmax/nbins))
plt.legend()
plt.annotate('(c)',xy=(0.5,0.9),xycoords='axes fraction',fontsize='large')
# bulge fraction
plt.subplot(nrow,ncol,4)
nbins=58
plt.hist(frac_bulge,bins=nbins,range=(-0.06,1.06),histtype='step',color='black',
weights=wgt,label='All')
plt.hist(frac_bulge[gold],bins=nbins,range=(-0.06,1.06),histtype='stepfilled',color='goldenrod',
weights=wgt[gold],label='i < 25.3')
plt.yscale('log')
plt.xlim(-0.06,1.06)
plt.ylim(5e-2,5e2)
plt.xlabel('Bulge-to-total flux fraction')
plt.ylabel('Galaxies / sq.armin. / %.2f' % (1./nbins))
plt.annotate('',xy=(cut2,5e-2),xytext=(cut2,5e2),xycoords='data',textcoords='data',
arrowprops={'arrowstyle':'-','color':'blue'})
plt.legend()
plt.annotate('(d)',xy=(0.5,0.9),xycoords='axes fraction',fontsize='large')
# ellipticity magnitude
plt.subplot(nrow,ncol,5)
emax=0.68
nbins=17
plt.hist(
(emag[gold_disk_only],emag[gold_disk_bulge],emag[gold_big_bulge]),
bins=nbins,range=(0,emax),histtype='step',color=('red','blue','green'),
weights=(wgt[gold_disk_only],wgt[gold_disk_bulge],wgt[gold_big_bulge]),
stacked=False,label=('disk','mixed','bulge'))
plt.xlim(0,emax)
plt.xlabel('Catalog ellipticity magnitude $|\epsilon|$')
plt.ylabel('Galaxies / sq.arcmin. / ($\Delta\epsilon=%.2f$)' % (emax/nbins))
plt.legend()
plt.annotate('(e) i < 25.3',xy=(0.4,0.9),xycoords='axes fraction',fontsize='large')
# color-shape correlations
plt.subplot(nrow,ncol,6)
plt.scatter(catalog['i_ab'],ri_color,vmin=0.,vmax=0.8,
s=2,c=sigma_m,lw=0,rasterized=True)
plt.colorbar(label='Catalog size $\sigma_{-}$ (arcsec.)',pad=0.01)
plt.xlim(abmin,abmax)
ri_min,ri_max = -0.6,+1.8
plt.ylim(ri_min,ri_max)
plt.xlabel('Catalog $i_{AB}$ magnitude')
plt.ylabel('Catalog color $\Delta (r-i)_{AB}$')
plt.annotate('(f) All',xy=(0.1,0.9),xycoords='axes fraction',fontsize='large')
#
plt.subplot(nrow,ncol,7)
w=disk_only
plt.scatter(catalog['i_ab'][w],ri_color[w],vmin=0.,vmax=0.7,
s=2,c=emag[w],lw=0,rasterized=True)
plt.colorbar(label='Ellipticity magnitude $|\epsilon|$',pad=0.01)
plt.xlim(abmin,abmax)
plt.ylim(ri_min,ri_max)
plt.xlabel('Catalog $i_{AB}$ magnitude')
plt.ylabel('Catalog color $\Delta (r-i)_{AB}$')
plt.annotate('(g) Disk',xy=(0.1,0.9),xycoords='axes fraction',fontsize='large')
#
plt.subplot(nrow,ncol,8)
w=disk_bulge
plt.scatter(catalog['i_ab'][w],ri_color[w],vmin=0.,vmax=0.7,
s=2,c=emag[w],lw=0,rasterized=True)
plt.colorbar(label='Ellipticity magnitude $|\epsilon|$',pad=0.01)
plt.xlim(abmin,abmax)
plt.ylim(ri_min,ri_max)
plt.xlabel('Catalog $i_{AB}$ magnitude')
plt.ylabel('Catalog color $\Delta (r-i)_{AB}$')
plt.annotate('(h) Mixed',xy=(0.1,0.9),xycoords='axes fraction',fontsize='large')
#
plt.subplot(nrow,ncol,9)
w=big_bulge
plt.scatter(catalog['i_ab'][w],ri_color[w],vmin=0.,vmax=0.7,
s=2,c=emag[w],lw=0,rasterized=True)
plt.colorbar(label='Ellipticity magnitude $|\epsilon|$',pad=0.01)
plt.xlim(abmin,abmax)
plt.ylim(ri_min,ri_max)
plt.xlabel('Catalog $i_{AB}$ magnitude')
plt.ylabel('Catalog color $\Delta (r-i)_{AB}$')
plt.annotate('(i) Bulge',xy=(0.1,0.9),xycoords='axes fraction',fontsize='large')
#
plt.tight_layout();
fig.savefig('output/catalog_plots.pdf');
```
| github_jupyter |
```
import pandas as pd
import numpy as np
import sentencepiece as spm
import nltk
import ast
from nltk.tokenize import sent_tokenize
from nltk.tokenize import word_tokenize
import time
ncbi_com_0 = pd.read_csv("data/ncbi_comm_use_000000000000.csv")
ncbi_com_1 = pd.read_csv("data/ncbi_comm_use_000000000001.csv")
ncbi_non_com_0 = pd.read_csv("data/ncbi_non_comm_use_000000000000.csv")
ncbi_non_com_1 = pd.read_csv("data/ncbi_non_comm_use_000000000001.csv")
ncbi_com_0.shape
ncbi_com_1.shape
ncbi_non_com_0.shape
```
```
df1 = ncbi_com_0
df1.shape
df2 = pd.concat([df1, ncbi_com_1])
df2.shape
df3 = pd.concat([df2, ncbi_non_com_0])
df3.shape
df4 = pd.concat([df3, ncbi_non_com_1])
df4.shape
df1.info()
df1.isnull().sum()
df1.head(1)
df2.head(2)
ncbi_com_1['Body'][0]
ncbi_com_1['Body'][1]
```
# Data Preprocessing
```
def remove_newline_char(text):
text = text.replace("\n", " ")
return text
def nltk_sent_tokenize(text):
text = sent_tokenize(text)
return text
def contains_coronavirus(text):
if "coronavirus" in text.lower():
return 1
else:
return 0
def contains_COVID(text):
if "COVID" in text:
return 1
else:
return 0
def preprocess(df):
# remove rows that have null Body
df = df[~df['Body'].isnull()]
df['Body'] = df['Body'].apply(remove_newline_char)
df['Body_sents'] = df['Body'].apply(nltk_sent_tokenize)
df['Body_tokens'] = df['Body'].apply(word_tokenize)
df['len_body'] = df['Body_tokens'].apply(lambda x: len(x))
df['has_coronavirus'] = df['Body'].apply(contains_coronavirus)
df['has_COVID'] = df['Body'].apply(contains_COVID)
df['len_sents'] = df['Body_sents'].apply(lambda x: len(x))
return df
```
# Build and save corpus
```
def build_raw_corpus(df):
raw_corpus = []
for i, row in df.iterrows():
raw_corpus += row['Body_sents']
return raw_corpus
def save_corpus_as_txt(filename, corpus):
with open(filename, 'w') as f:
for sent in corpus:
f.write(sent)
f.write('\n')
f.close()
def build_tokenizer_input(df, filename):
raw_corpus = build_raw_corpus(df)
save_corpus_as_txt(filename, raw_corpus)
```
# Train SentencePiece tokenizer
```
def train_tokenizer(model_prefix, input_file, vocab_size):
spm.SentencePieceTrainer.train('--model_prefix={} --input={} --vocab_size={}'.format(model_prefix,
input_file, vocab_size))
```
# Load model
```
def load_model(model_file):
sp = spm.SentencePieceProcessor()
sp.Load(model_file)
return sp
```
# Tokenize text
```
def sp_tokenize(model, text):
tokenized_text = model.EncodeAsPieces(text)
return tokenized_text
```
# Experiments
```
# rows: 5958
# vocab_size=5000
# preprocess data
t1 = time.time()
df1 = preprocess(df1)
t2 = time.time()
print ("Time:", (t2-t1)/60)
# build corpus
input_file_1 = "sample_input_1.txt"
t1 = time.time()
raw_corpus_1 = build_tokenizer_input(df1, input_file_1)
t2 = time.time()
print ("Time:", (t2-t1)/60)
# train sp tokenizer
model_prefix_1 = "m1"
vocab_size = 5000
t1 = time.time()
train_tokenizer(model_prefix_1, input_file_1, vocab_size)
t2 = time.time()
print ("Time:", (t2-t1)/60)
# load model
model_file_1 = model_prefix_1 + ".model"
sp1 = load_model(model_file_1)
# tokenize text
text = "This is a novel coronavirus disease."
tokenized_text = sp_tokenize(sp1, text)
tokenized_text
```
# Some EDA on data
```
df1.head()
df1.has_coronavirus.value_counts()
df1.has_COVID.value_counts()
np.mean(df1['len_body'])
np.mean(df1['len_sents'])
```
| github_jupyter |
$$ \text{LaTeX command declarations here.}
\newcommand{\R}{\mathbb{R}}
\renewcommand{\vec}[1]{\mathbf{#1}}
$$
# EECS 445: Machine Learning
## Hands On 05: Linear Regression II
* Instructor: **Zhao Fu, Valli, Jacob Abernethy and Jia Deng**
* Date: September 26, 2016
### Review: Maximum Likelihood
Suppose we have a set of observed data $D$ and we want to evaluate a parameter setting $w$:
$$p(w|D) = \frac{p(D|w)p(w)}{p(D)}$$
$$p(D) = \sum_{w}{p(D|w)p(w)}$$
We call $p(D|w)$ as the likelihood function. Then we have $p(w|D) \propto p(D|w)p(w)$. Suppose $p(w)$ is the same for all $w$, we can only choose $w$ to maximize likelihood $p(D|w)$, which is to maximize the log-likelihood $\log{p(D|w)}$.
### Review Problem: Maximum Likelihood Estimation
We have observed data $x_1, \cdots, x_n$ drawn from Bernoulli distribution:
$$p(x) = \begin{cases}
\theta & \quad \text{if } x = 1\\
1 - \theta & \quad \text{if } x = 0\\
\end{cases}$$
(a) What is the likelihood function based on $\theta$?
(b) What is the log-likelihood function?
(c) Compute estimated $\theta$ to maximize the log-likelihood function.
### Review: Maximum Likelihood
Suppose we have a set of observed data $D$ and we want to evaluate a parameter setting $w$:
$$p(w|D) = \frac{p(D|w)p(w)}{p(D)}$$
$$p(D) = \sum_{w}{p(D|w)p(w)}$$
We call $p(D|w)$ as the likelihood function. Then we have $p(w|D) \propto p(D|w)p(w)$. Suppose $p(w)$ is the same for all $w$, we can only choose $w$ to maximize likelihood $p(D|w)$, which is to maximize the log-likelihood $\log{p(D|w)}$.
### Solution 1: Maximum Likelihood Estimation
(a) $$
\begin{array}{ll}
L(\theta) &= p(D|\theta) = p(x_1, \dots, x_n|\theta) \\
&= \prod_{i}{p(x_i)} = \theta^{\sum{\mathbb{1}(x_i = 1)}}(1 - \theta)^{\sum{\mathbb{1}(x_i = 0)}} \\
&= \theta^{k}(1 - \theta)^{n - k}
\end{array}
$$
where $k$ is the number of $1$s from the observed data.
(b) $$\log{L(\theta)} = k\log(\theta) + (n - k)\log(1 - \theta)$$
### Solution 1: Maximum Likelihood Estimation
(c) Set the derivative of $log(L(\theta))$ to zero we have
$$
\frac{\mathrm{d}log(L(\theta))}{\mathrm{d}\theta} = \frac{k}{\theta} - \frac{n - k}{1 - \theta} = 0 \\
\frac{k}{\theta} = \frac{n - k}{1 - \theta} \\
\theta = \frac{k}{n}
$$.
### Problem 2: Maximum Likelihood Estimation for Gaussian Distribution
We have observed data $x_1, \cdots, x_n$ drawn from Normal distribution:
$$\mathcal{N}(x|\mu, \sigma^2) = \frac{1}{(2\pi \sigma^2)^\frac{1}{2}} \exp{(-\frac{1}{2\sigma^2}(x - \mu)^2)}$$
(a) What is the likelihood function based on $\mu$ and $\sigma^2$?
(b) What is the log-likelihood function?
(c) Compute estimated parameters $\mu$ and $\sigma^2$ to maximize the log-likelihood function.
### Solution 2
We have observed data $x_1, \cdots, x_n$ drawn from Normal distribution:
$$\mathcal{N}(x|\mu, \sigma^2) = \frac{1}{(2\pi \sigma^2)^\frac{1}{2}} \exp{(-\frac{1}{2\sigma^2}(x - \mu)^2)}$$
(a) What is the log-likelihood function?
**Answer**: $-(n/2)\log \sigma - \sum_{i=1}^n\frac{1}{2\sigma^2}(x_i - \mu)^2$
(b) Compute estimated parameters $\mu$ and $\sigma^2$ to maximize the log-likelihood function.
**Answer**:
* $\mu_{\text{MLE}} = \frac{1}{n} \sum_{i=1}^n x_i$
* $\sigma^2_{\text{MLE}} = \frac{1}{n} \sum_{i=1}^n (x_i - \mu_{\text{MLE}})^2$
## Regularized Linear Regression
### Regularized Least Squares: Objective Function
- Recall the objective function we minimizes in last lecture is
$$
E(\vec{w}) = \frac12 \sum_{n=1}^N \left( \vec{w}^T \phi(\vec{x}_n) - t_n \right)^2
$$
- To penalize the large coefficients, we will add one penalization/regularization term to it and minimize them altogether.
$$
E(\vec{w}) = \underbrace{ \frac12 \sum_{n=1}^N \left( \vec{w}^T \phi(\vec{x}_n) - t_n \right)^2 }_{E_D(\vec{w})}+ \underbrace{\boxed{\frac{\lambda}{2} \left \| \vec{w} \right \|^2}}_{E_W(\vec{w})}
$$
of which $E_D(\vec{w})$ represents the term of sum of squared errors and $E_W(\vec{w})$ is the regularization term.
- $\lambda$ is the regularization coefficient.
- If $\lambda$ is large, $E_{\vec{W}}(\vec{w})$ will dominate the objective function. As a result we will focus more on minimizing $E_W(\vec{w})$ and the resulting solution $\vec{w}$ tends to have smaller norm and the $E_D(\vec{w})$ term will be larger.
### Regularized Least Squares: Derivation
- Based on what we have derived in last lecture, we could write the objective function as
$$
\begin{aligned}
E(\vec{w})
&= \frac12 \sum_{n=1}^N \left( \vec{w}^T \phi(\vec{x}_n) - t_n \right)^2 + \frac{\lambda}{2} \left \| \vec{w} \right \|^2
\end{aligned}
$$
**Exercise**: Derive the gradient in element-wise to verify the above result, i.e. using $\phi(\vec{x}_n)_d$ and $w_d$ to represent $E(w_1, w_2, \dots, w_D)$ and derive $\frac{\partial E}{\partial w_d}$. Suppose $\phi(\vec{x_n}) \in \mathbb{R}^D$.
### Regularized Least Squares: Solution
- Based on what we have derived in last lecture, we could write the objective function as
$$
\begin{aligned}
E(\vec{w}) = \frac{1}{2}\sum_{n = 1}^{N}{(\sum_{d=1}^{D}{w_d\phi_d(\vec{x}_n)} - t_n)^2} + \frac{\lambda}{2}\sum_{d=1}^{D}{w_d^2} \\
\frac{\partial E}{\partial w_d} = \sum_{n = 1}^{N}{\phi_d(\vec{x}_n)(\sum_{d=1}^{D}{w_d\phi_d(\vec{x}_n)} - t_n)} + \lambda w_d \\
\frac{\partial E}{\partial w_d} = \sum_{n = 1}^{N}{\phi_d(\vec{x}_n)(\vec{w}^T\phi(\vec{x}_n) - t_n)} + \lambda w_d
\end{aligned}
$$
- The gradient is
$$
\begin{aligned}
\nabla_{\vec{w}} E(\vec{w})
&= \Phi^T \Phi \vec{w} - \Phi^T \vec{t} + \lambda \vec{w}\\
&= (\Phi^T \Phi + \lambda I)\vec{w} - \Phi^T \vec{t}
\end{aligned}
$$
- Setting the gradient to 0, we will get the solution
$$
\boxed{ \hat{\vec{w}}=(\Phi^T \Phi + \lambda I)^{-1} \Phi^T \vec{t} }
$$
### Regularized Least Squares: Closed Form
In the solution to ordinary least squares which is $\hat{\vec{w} }=(\Phi^T \Phi)^{-1} \Phi^T \vec{t}$, we cannot guarantee $\Phi^T \Phi$ is invertible. But in regularized least squares, if $\lambda > 0$, $\Phi^T \Phi + \lambda I$ is always invertible.
**Exercise**: To be invertible, a matrix needs to be full rank. Argue that $\Phi^T \Phi + \lambda I$ is full rank by characterizing its $p$ eigenvalues in terms of the singular values of $\Phi$ and $\lambda$.
### Solution:
Suppose $\Phi = U^T\Lambda V$ which is SVD of $\Phi$, we have $\Phi^T\Phi = V^T\Lambda^2V$.
Then we have $(\Phi^T\Phi + \lambda I)V^T = V^T(\Lambda^2 + \lambda I)$.
The $i^{th}$ eigenvalue of $(\Phi^T\Phi + \lambda I)$ is $\lambda_i^2 + \lambda > 0$ where $\lambda_i$ is the singular value for $\Phi$.
Then $\det{(\Phi^T\Phi + \lambda I)} = \prod{(\lambda_i^2 + \lambda)} > 0$, which means $\Phi^T\Phi + \lambda I$ is invertable.
### Regularized Least Squares: Different Norms
- The $\ell^p$ norm of a vector $\vec{x}$ is defined as
$$
\left \| \vec{x} \right \|_p = (\sum_{j=1}^{M} |x_j|^p)^\frac{1}{p}
$$
- For the regularized least squares above, we used $\ell^2$ norm. We could also use other $\ell^p$ norms for different regularizers and the objective function becomes
$$
E(\vec{w}) = \frac12 \sum_{n=1}^N \left( \vec{w}^T \phi(\vec{x}_n) - t_n \right)^2 + \frac{\lambda}{2} \left \| \vec{w} \right \|_p^p
$$
**Exercise**: Derive the element-wise gradient for the above $\ell^p$ norm regularized energy function.
### Regularized Least Squares: Summary
- Simple modification of linear regression
- $\ell^2$ Regularization controls the tradeoff between *fitting error* and *complexity*.
- Small $\ell^2$ regularization results in complex models, but with risk of overfitting
- Large $\ell^2$ regularization results in simple models, but with risk of underfitting
- It is important to find an optimal regularization that *balances* between the two
## Probablistic Interpretation of Least Squares Regression
- We have showed derived the solution to least squares regression by minimizing objective function. Now we will provide a probablistic perspective. Specifically, we will show the solution to **regular least squares** is just the **maximum likelihood** estimate of $\vec{w}$ and the solution to **regularized least squares** is the **Maximum a Posteriori** estimate.
### Some Background
- Gaussian Distribution
$$
\mathcal{N}(x, \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[ \frac{(x-\mu)^2}{2\sigma^2} \right]
$$
- **Maximum Likelihood Estimation** and **Maximum a Posteriori Estimation (MAP)**
- For distribution $t \sim p(t|\theta)$. $\theta$ is some unknown parameter (like mean or variance) to be estimated.
- Given observation $\vec{t} = (t_1, t_2, \dots, t_N)$,
- The Maximum Likelihood Estimator is
$$
\theta_{ML} = \arg \max \prod_{n=1}^N p(t_n | \theta)
$$
- If we have some prior knowledge about $\theta$, the MAP estimator is
$$
\theta_{MAP} = \arg \max \prod_{n=1}^N p(\theta | t_n) \quad (\text{Posteriori Probability of } \theta)
$$
### Maximum Likelihood Estimator $\vec{w}_{ML}$
- We assume the **signal+noise** model of single data $(\vec{x}, t)$ is
$$
\begin{gathered}
t = \vec{w}^T \phi(\vec{x}) + \epsilon \\
\epsilon \sim \mathcal{N}(0, \beta^{-1})
\end{gathered}
$$
of which $\vec{w}^T \phi(\vec{x})$ is the true model, $\epsilon$ is the perturbation/randomness.
- Since $\vec{w}^T \phi(\vec{x})$ is deterministic/non-random, we have
$$
t \sim \mathcal{N}(\vec{w}^T \phi(\vec{x}), \beta^{-1})
$$
**Exercise**:
* Derive the likelihood function for a single data $p(t_n|\vec{x}_n,\vec{w},\beta)$.
* Derive the complete log likelihood function for the whole dataset $\ln p(\vec{t}|\mathcal{X},\vec{w},\beta)$.
* Using maximum likelihood to estimate parameter $\vec{w}$.
### Maximum Likelihood Estimator $\vec{w}_{ML}$
- The **likelihood function** of $t$ is just **probability density function (PDF)** of $t$
$$
p(t|\vec{x},\vec{w},\beta) = \mathcal{N}(t|\vec{w}^T \phi(\vec{x}),\beta^{-1})
$$
- For inputs $\mathcal{X}=(\vec{x}_1, \dots, \vec{x}_n)$ and target values $\vec{t}=(t_1,\dots,t_n)$, the data likelihood is
$$
p(\vec{t}|\mathcal{X},\vec{w},\beta)
= \prod_{n=1}^N p(t_n|\vec{x}_n,\vec{w},\beta)
= \prod_{n=1}^N \mathcal{N}(t_n|\vec{w}^T\phi(\vec{x}_n),\beta^{-1})
$$
- **Notation Clarification**
- $p(t|x,w,\beta)$ it the PDF of $t$ whose distribution is parameterized by $x,\vec{w},\beta$.
- $\mathcal{N}(\vec{w}^T \phi(\vec{x}), \beta^{-1})$ is Gaussian distribution with **mean** $\vec{w}^T \phi(\vec{x})$ and **variance** $\beta^{-1}$.
- $\mathcal{N}(t|\vec{w}^T \phi(\vec{x}),\beta^{-1})$ is the PDF of $\vec{t}$ which has Gaussian distribution $\mathcal{N}(\vec{w}^T \phi(\vec{x}), \beta^{-1})$
### Maximum Likelihood Estimator $\vec{w}_{ML}$: Derivation
- Single data likelihood is
$$
p(t_n|\vec{x}_n,\vec{w},\beta)
= \mathcal{N}(t_n|\vec{w}^T\phi(\vec{x}_n),\beta^{-1})
= \frac{1}{\sqrt{2 \pi \beta^{-1}}} \exp \left \{ - \frac{1}{2 \beta^{-1}} (t_n - \vec{w}^T \phi(x_n))^2 \right \}
$$
- Single data log-likelihood is
$$
\ln p(t_n|\vec{x}_n,\vec{w},\beta) = - \frac12 \ln 2 \pi \beta^{-1} - \frac{\beta}{2} (\vec{w}^T \phi(x_n) - t_n)^2
$$
We use logarithm because maximizer of $f(x)$ is the same as maximizer of $\log f(x)$. Logarithm can convert product to summation which makes life easier.
- Complete data log-likelohood is
$$
\begin{aligned}
\ln p(\vec{t}|\mathcal{X},\vec{w},\beta)
&= \ln \left[ \prod_{n=1}^N p(t_n|\vec{x}_n,\vec{w},\beta) \right] = \sum_{n=1}^N \ln p(t_n|\vec{x}_n,\vec{w},\beta) \\
&= \sum_{n=1}^N \left[ - \frac12 \ln 2 \pi \beta^{-1} - \frac{\beta}{2} (\vec{w}^T \phi(x_n) - t_n)^2 \right]
\end{aligned}
$$
- Maximum likelihood estimate $\vec{w}_{ML}$ is
$$
\begin{aligned}
\vec{w}_{ML}
&= \underset{\vec{w}}{\arg \max} \ln p(\vec{t}|\mathcal{X},\vec{w},\beta) \\
&= \underset{\vec{w}}{\arg \max} \sum_{n=1}^N \left[ - \frac12 \ln 2 \pi \beta^{-1} - \frac{\beta}{2} (\vec{w}^T \phi(x_n) - t_n)^2 \right] \\
&= \underset{\vec{w}}{\arg \max} \sum_{n=1}^N \left[ - \frac{\beta}{2} (\vec{w}^T \phi(x_n) - t_n)^2 \right] \\
&= \underset{\vec{w}}{\arg \min} \sum_{n=1}^N \left[(\vec{w}^T \phi(x_n) - t_n)^2 \right]
\end{aligned}
$$
- Familiar? Recall the objective function we minimized in least squares is $E(\vec{w}) = \frac12 \sum_{n=1}^N \left( \vec{w}^T \phi(\vec{x}_n) - t_n \right)^2$, so we could conclude that
$$
\boxed{\vec{w}_{ML} = \hat{\vec{w}}_{LS} = \Phi^\dagger \vec{t}}
$$
### MAP Estimator $\vec{w}_{MAP}$
- The **MAP estimator** is obtained by
$$
\begin{aligned}
\vec{w}_{MAP}
&= \arg \max p(\vec{w}|\vec{t}, \mathcal{X},\beta) & & (\text{Posteriori Probability})\\
&= \arg \max \frac{p(\vec{w}, \vec{t}, \mathcal{X},\beta)}{p(\mathcal{X}, t, \beta)} \\
&= \arg \max \frac{p(\vec{t}|\vec{w}, \mathcal{X},\beta) p(\vec{w}, \mathcal{X}, \beta)}{p(\mathcal{X}, t, \beta)} \\
&= \arg \max p(\vec{t}|\vec{w}, \mathcal{X},\beta) p(\vec{w}, \mathcal{X}, \beta) & & (p(X, t, \beta) \text{ is irrelevant to} \ \vec{w})\\
&= \arg \max p(\vec{t}|\vec{w}, \mathcal{X},\beta) p(\vec{w}) p(\mathcal{X}) p(\beta) & & (\text{Independence}) \\
&= \arg \max p(\vec{t}|\vec{w}, \mathcal{X},\beta) p(\vec{w}) & & (\text{Likelihood} \times \text{Prior})
\end{aligned}
$$
We are just using **Bayes Theorem** for the above steps.
- The only difference from ML estimator is we have an extra term of PDF of $\vec{w}$. This is the **prior belief** of $\vec{w}$. Here, we assume,
$$
\vec{w} \sim \mathcal{N}(\vec{0}, \alpha^{-1}I)
$$
**Exercise**: Derive the MAP Estimator of $\vec{w}$ and compare the solution with regularized linear regression. What is $\lambda$ in this case?
### MAP Estimator $\vec{w}_{MAP}$: Derivation
- $\vec{w} \sim \mathcal{N}(\vec{0}, \alpha^{-1}I)$ is **multivariate Gaussian** which has PDF
$$
p(\vec{w}) = \frac{1}{\left( \sqrt{2 \pi \alpha^{-1}} \right)^N} \exp \left \{ -\frac{1}{2 \alpha^{-1}} \sum_{n=1}^N w_n^2 \right \}
$$
- So the MAP estimator is
$$
\begin{aligned}
\vec{w}_{MAP}
&= \underset{\vec{w}}{\arg \max} \ p(\vec{t}|\vec{w}, \mathcal{X},\beta) p(\vec{w}) = \underset{\vec{w}}{\arg \max} \left[\ln p(\vec{t}|\vec{w}, \mathcal{X},\beta) + \ln p(\vec{w}) \right] \\
&= \underset{\vec{w}}{\arg \min} \left[ \sum_{n=1}^N \frac{\beta}{2} (\vec{w}^T \phi(x_n) - t_n)^2 + \frac{\alpha}{2} \sum_{n=1}^N w_n^2 \right] \\
&= \underset{\vec{w}}{\arg \min} \left[ \sum_{n=1}^N \frac12 (\vec{w}^T \phi(x_n) - t_n)^2 + \frac12 \frac{\alpha}{\beta} \left \| \vec{w} \right \|^2 \right]
\end{aligned}
$$
- Exactly the objective in regularized least squares! So
$$
\boxed{ \vec{w}_{MAP} = \hat{\vec{w}}=\left(\Phi^T \Phi + \frac{\alpha}{\beta} I\right)^{-1} \Phi^T \vec{t} }
$$
- Compared with $\ell^2$ norm regularized least square, we have $\lambda = \frac{\alpha}{\beta}$.
### Problem 5a: MAP estimation for Linear Regression with unusual Prior
Assume we have $n$ vectors $\vec{x}_1, \cdots, \vec{x}_n$. We also assume that for each $\vec{x}_i$ we have observed a *target* value $t_i$, where
$$
\begin{gather}
t_i = \vec{w}^T \vec{x_i} + \epsilon \\
\epsilon \sim \mathcal{N}(0, \beta^{-1})
\end{gather}
$$
where $\epsilon$ is the "noise term".
(a) Quick quiz: what is the likelihood *given* $\vec{w}$? That is, what's $p(t_i | \vec{x}_i, \vec{w})$?
**Answer**: $p(t_i | \vec{x}_i, \vec{w}) = \mathcal{N}(t_i|\vec{w}^\top \vec{x_i}, \beta^{-1}) = \frac{1}{(2\pi \sigma^2)^\frac{1}{2}} \exp{(-\frac{\beta}{2}(t_i - \vec{w}^\top \vec{x_i})^2)}$
### Problem 5: MAP estimation for Linear Regression with unusual Prior
Assume we have $n$ vectors $\vec{x}_1, \cdots, \vec{x}_n$. We also assume that for each $\vec{x}_i$ we have observed a *target* value $t_i$, sampled IID. We will also put a *prior* on $\vec{w}$, using PSD matrix $\Sigma$.
$$
\begin{gather}
t_i = \vec{w}^T \vec{x_i} + \epsilon \\
\epsilon \sim \mathcal{N}(0, \beta^{-1}) \\
\vec{w} \sim \mathcal{N}(0, \Sigma)
\end{gather}
$$
Note: the difference here is that our prior is a multivariate gaussian with *non-identity* covariance! Also we let $\mathcal{X} = \{\vec{x}_1, \cdots, \vec{x}_n\}$
(a) Compute the log posterior function, $\log p(\vec{w}|\vec{t}, \mathcal{X},\beta)$
*Hint*: use Bayes' Rule
(b) Compute the MAP estimate of $\vec{w}$ for this model
*Hint*: the solution is very similar to the MAP estimate for a gaussian prior with identity covariance
| github_jupyter |
```
ALPHABET = [' ', 'e', 't', 'a', 'i', 'o', 's', 'n', 'r', 'h', 'l', 'd', 'c', 'm', 'u', 'f', 'g', 'y', 'b', 'w', 'p',\
'.', 'v', ',', 'k', "'", '/', '>', '<', '-', '"', 'j', 'x', ')', '(', '!', 'z', 'q', '0', '1', '?', ':',\
'9', '2', '*', ';', '3', '5', '8', '4', '7', '&', '6', 'é', '\x96', '`', '$', '\x85', '_', '%', '=', '#',\
'UNK']
from random import random, choice
def noise_generator(string, noise_level, chars=ALPHABET+['']):
noised = ""
for c in string:
if random() > noise_level:
noised += c
else:
# if random() < noise_level:
noised += choice(chars)
assert len(noised) == len(string)
return noised
text = """Spanish is the second most spoken language in the United States of America. Forty-five million Hispanophones speak Spanish as their first, second or heritage language,[1] and there are six million Spanish language students in the United States.[2] This makes the United States the third-largest Hispanophone country in the world after Mexico and Colombia, and before Spain. Spanish has both the largest number of native language Romance speakers and native Indo-European language speakers in the world.[3] About half of all American Spanish speakers also assessed themselves as speaking English "very well" in the 2000 U.S. Census.[4]
There are more Spanish-speakers in the United States than speakers of French, German, Italian, Hawaiian, varieties of Chinese and Native American languages combined. According to the 2012 American Community Survey conducted by the U.S. Census Bureau, Spanish is the primary language spoken at home by 38.3 million people aged five or older, more than twice that of 1990.[5][6]
The Spanish language has been present in what is now the United States since the 16th and 17th centuries, with the arrival of Spanish colonization in North America. Colonizers settled in areas that would later become the states of Florida, Texas, Colorado, New Mexico, Arizona, Nevada, Utah, and California. The Spanish explorers explored areas of 42 future U.S. states leaving behind a varying range of Hispanic legacy in the North American continent. Western regions of the Louisiana Territory were also under Spanish rule between 1763 and 1800, after the French and Indian War, further extending the Spanish influence throughout the modern-day United States of America.
After the incorporation of these areas into the United States in the first half of the 19th century, the Spanish language was later reinforced in the country by the acquisition of Puerto Rico in 1898. Later waves of emigration from Mexico, Cuba, El Salvador and elsewhere in Hispanic America to the United States beginning in the second half of the 19th century to the present-day have strengthened the role of the Spanish language in the country. Today, Hispanics are one of the fastest growing ethnic groups in the United States, thus increasing the use and importance of American Spanish in the United States."""
noise_generator(text, 0.1)
text = text[:1024].lower()
text
noise_generator(text, 0.01)
noise_generator(text, 0.02)
noise_generator(text, 0.05)
noise_generator(text, 0.2)
```
Нужно будет делать довольно мелкую сетку. Для начала можно попробовать несколько точек: [0, 0.01, 0.05, 0.1, 0.2]
| github_jupyter |
# 2A.ml - Déterminer la vitesse moyenne des vélib
Ce notebook explicite une solution pour calculer la vitesse moyenne des velib sachant qu'on ne connaît que l'état des stations à intervalle réguliers.
```
%matplotlib inline
```
Même si je propose quelques jeux de données, il est possible de créer le sien en s'inspirant du code suivant : [Récupérer les données Velib et les visualiser](http://www.xavierdupre.fr/app/manydataapi/helpsphinx/notebooks/api_velib_jcdecaux.html#apivelibjcdecauxrst). Le premier lien décrit les données plus en détail : elles sont constituées des décomptes des vélos et places disponibles pour chaque station d'une même ville et pour chaque minute. La méthode proposée ici est celle des **appariements** décrites par le premier lien. L'algorithme peut être décrit en deux étapes :
1. Construction d'une base d'événements : vélos reposés et retirés.
2. Appariement des vélos retirés avec les vélos reposés.
La première partie ne pose pas de difficulté particulière. On peut juste penser à retirer les premiers vélos réposés qui ne pourront pas être appariés de toutes façons. On fait de même pour les derniers vélos retirés.
La seconde partie est constitutée de deux éléments :
- un coût d'appariement
- la minimisation de l'appariement
Pour le coût, on peut y mettre à peu près toutes les contraintes imaginables (vitesse trop grande, durée trop grande ou trop petite). Pour la seconde, le code optimise de façon très naïve : on part d'un appariement aléatoire, on tire deux appariements $a_1 \rightarrow b_1$ et $a_2 \rightarrow b_2$. On inverse : $a_1 \rightarrow b_2$ et $a_2 \rightarrow b_1$. Si l'appariement est moins coûteux, on garde. Le code complet est disponible dans le module *velib_trajectories*.
```
from pyquickhelper.loghelper import str2datetime
from ensae_projects.datainc import besancon_df
import pandas
jeu = besancon_df()
df = pandas.read_csv(jeu, sep="\t", encoding="utf8")
df ["collect_date"] = df.apply( lambda r: str2datetime(r["collect_date"]),axis=1)
df.head()
```
**On vérifie les données pour la première date :**
```
from manydataapi.velib import DataCollectJCDecaux as DataVelibCollect
dt = df["file"][0]
subset = df [ df["file"] == dt ]
fig,ax,plt = DataVelibCollect.draw(subset, figsize=(16,6))
ax.set_title("Besançon - {0} - {1} stations".format(dt.replace("besancon.","") \
.replace(".txt","").replace("_", " "),
len(subset)));
```
**On calcule les événements (1 vélo apparu, 1 vélo disparu) :**
```
import ensae_projects.challenge.velib_trajectories as velib
events = list(sorted(velib.enumerate_events(df)))
print("nb events",len(events))
events [:2]
```
**On calcule le meilleur appariement (ça prend un peu de temps).** On ne prend que 200 itérations mais il en faudrait plus.
```
params = velib.ParemetreCoutTrajet()
print(params)
mindist, moyenne, appariement, positif, negatif = velib.appariement(events, iter=50, params=params)
print("vitesse moyenne", moyenne)
```
La vitesse évolue encore et il faudrait faire tourner l'algorithme plus longtemps et le recoder de façon plus efficace. Voyons déjà ce que cela donne en terme de déplacement.
```
import matplotlib.pyplot as plt
app = [(positif[a], negatif[b] ) for a,b in appariement]
plt.figure(figsize=(16,6))
for deb,fin in app :
x = [deb[3], fin[3]]
y = [deb[4], fin[4]]
if x[0] > 0 and y[0] and x[1] > 0 and y[1] > 0 :
# on enlève les trajets aberrants
plt.plot(x,y,color="b")
```
Difficile de dire sur ce simple dessin si cela est sensé. Il faudrait faire tourner l'algorithme plus longtemps et le recoder de façon plus efficace.
| github_jupyter |
# LMS filter and ADALINE algorithm
In this first project you will implement a Least Mean Square (LMS) error filter by using the Adaptive Linear Neuron (ADALINE) algorithm. This algorithm is a class of adaptive filter used to mimic a desired filter by finding the filter coefficients that relate to producing the least mean square of the error signal (difference between the desired and the actual signal). It is a stochastic gradient descent method in that the filter is only adapted based on the error at the current time. It was invented in 1960 by Stanford University professor Bernard Widrow and his first Ph.D. student, Ted Hoff.
To fully understand the concepts of this filter I recommend you to watch the following lecture by professor Widrow:
```
from IPython.display import HTML
# Youtube
HTML('<iframe width="560" height="315" \\\
src="https://www.youtube.com/embed/hc2Zj55j1zU" \\\
title="YouTube video player" frameborder="0" allow="accelerometer; \\\
autoplay; clipboard-write; encrypted-media; \\\
gyroscope; picture-in-picture" allowfullscreen></iframe>')
```
After watching professor Widrow's lecture, you can also take a look at [this paper](https://isl.stanford.edu/~widrow/papers/j1975thecomplex.pdf) to fully understand the LMS filter and ADALINE algorithm. A good summary on how the algorithm works is presented in section 2 of [this page](https://www.clear.rice.edu/elec422/1999/nsekila/LMSAlgorithm.htm).
## Problem: Implement a LMS filter and ADALINE algorithm to find the coefficients of a Digital Filter
The main problem of this project is to implement the LMS filter and ADALINE algorithm to obtain the coefficients of a filter response $w[n]$ given the input $x[n]$ and the convolution $y[n] = x[n]*w[n]$. Since this is a supervised machine learning algorithm, you will train your model with a buffered version of the input $x[n]$ and compare your calculations with the expected outputs $y[n]$. After many iterations, you will notice that the coefficients tend to converge to the desired ones of the filter while reducing the error.
## Part 1: Defining the Algorithm
In this first part you will have to create a block diagram of the LMS filter and ADALINE algorithm based on the information given to you in the previous section, to do so, you can use any available tool that you want to create your block diagram as a png image.
[comment]: <> (Your image should be called algorithm.png and be saved in the folder Images.)
<img src="Images/algorithm.png" alt="Block Diagram" width="300"/>
## Part 2: Create a buffer of streamed data
To understand how data is processed by the filter you need to think about a buffer $Z$ that will be filled with some input data. First suppose that your buffer is of size 5 and is completely empty, for example full of zeros, and you have a vector $x$ also of size 5 with some data. Now let's see Figure 1 and understand what happens on every time step.
<img src="Images/buffer.png" alt="buffer" width="300"/>
You can see that the data inside of $Z$ is moved and there are two special cases:
1. Data inside $Z$ is fully loaded (green color).
2. Data inside $Z$ is loaded and emptied (orange and green colors)
With this image in mind, you can think that a filter processes a chunk of data given by the buffer on a specific time step instead of a recursive manner. This way you have a training data for a period of time that can be used in our LMS filter and ADALINE algorithm.
Now it is your turn to create a function called `get_buffer` that generates a buffer matrix $Z$. This buffer matrix can be in fully loaded or a loaded and emptied form.
```
import numpy as np
import matplotlib.pyplot as plt
def get_buffer(x, buffer_size=5, form='fl'):
"""
Function that generates a buffer matrix with a fully loaded or loaded and emptied form.
Parameters:
x (numpy array): Array of numbers representing the input signal.
buffer_size (int): Size of buffer.
form (string): String that represent the form of the Z matrix.
Can be 'fl' for fully loaded or 'lae' for loaded and emptied.
By default fully loaded is selected.
Returns:
Z (numpy array): Matrix with a fully loaded or loaded and emptied form.
"""
return None
```
Now test your `get_buffer` function with the following code and check if it matches your expectations.
```
test = np.arange(0,16)
print("Fully Loaded Form")
print(get_buffer(test, buffer_size=8, form='fl'), "\n")
print("Loaded and Emptied Form")
print(get_buffer(test, buffer_size=4, form='lae'))
```
## Part 3: Implement convolution
Now that you have a way to represent a vector $x$ as a buffer matrix $Z$ you will have to use it to implement a convolution as a matrix product between $Z$ and $w$, where $w$ is a vector of the coefficients of a filter response.
Your results should match the usage of the `numpy` function `convolve` as follows:
1. When using $Z_{lae}$ results should match `np.convolve(x, w, mode='full')`.
2. When using $Z_{fl}$ results should match `np.convolve(x, w, mode='full')[0:-w.shape[0]+1]`
```
x = np.array([-5, 1, -3, 2, 4, 0, 1, -7, 9, 3, -5, -6, 8, 4, 3, 0, 1, -4])
w = np.array([7, -3, 1, 4, -9, -2])
# Add your first test here: compare using Z_lae
Z_lae = None
conv_lae = None
numpy_conv_lae = None
print("Convolution using Z_lea \n {}".format(conv_lae))
print("Convolution using numpy_lea \n {}".format(numpy_conv_lae))
print("Comparison using Z_lea and numpy is same?: {} \n".format((conv_lae==numpy_conv_lae).all()))
# Add your second test here: compare using Z_fl
Z_fl = None
conv_fl = None
numpy_conv_fl = None
print("Convolution using Z_fl \n {}".format(conv_fl))
print("Convolution using numpy_fl \n {}".format(numpy_conv_fl))
print("Comparison using Z_fl and numpy is same?: {}".format((conv_fl==numpy_conv_fl).all()))
```
## Part 4: Implement LMS filter and ADALINE algorithm
Now it is time to implement your LMS filter and ADALINE algorithm, in order to do so you need to create a function called `adaline_filter` which takes as arguments the following values:
* `X` which is a matrix in fully loaded or loaded and emptied form.
* `w` an initial vector for the estimated filter.
* `y_hat` the expected output vector for the filter, sometimes called ground truth.
* `alpha` is the learning rate or convergence factor (step size).
* `epochs` is the number of iterations.
As outputs you will have:
* `w` which is an updated version of the initial input vector $w$.
* `loss` is an array vector that stores the mean square error loss function, MSE, for every epoch or iteration, you can read more about the MSE loss function [here](https://en.wikipedia.org/wiki/Mean_squared_error).
```
def adaline_filter(X, w, y_hat, alpha=0.0005, epochs=100):
"""
Function that generates a buffer matrix with a fully loaded or loaded and emptied form.
Parameters:
X (numpy array): Matrix in fully loaded or loaded and emptied form.
w (numpy array): Initial vector for the estimated filter.
y_hat (numpy array): Expected output vector for the filter, sometimes called ground truth.
alpha (float): Learning rate or convergence factor (step size).
epochs (int): Number of iterations.
Returns:
w (numpy array): Updated version of the initial input vector w.
loss (numpy array): Array vector that stores the mean square error loss function for every
epoch or iteration.
"""
return None, None
```
To test your algorithm an input vector $x$ and the ground truth vector `y_hat` are given, you will have to estimate your $w$ vector using the `adaline_filter` function.
```
x = np.array([1, 2, -3, 4, -6, 2, 4, -1, 7, 4, 8, 6, -1, 0, 3, -9, -7])
y_hat = np.array([4, 10, -7, 9, -23, 13, -4, 32, 12, 21, 58, 21, 18, -12, 9, -15, -45, -32, 26, 3, -14])
w = np.array([0, 0, 0, 0, 0])
# Add your code here
Z = None
# Add your code here
w_est, loss_mse = None, None
print("Estimated w values are: {}".format(w_est))
plt.plot(loss_mse, label="Learning = 0.0005")
plt.title("MSE Loss vs. Iteration");
plt.xlabel("Iteration")
plt.ylabel("MSE Loss")
plt.grid("on")
plt.legend();
```
## Part 5: Plot Results with Different Learning Rates
In this part you will compare and plot your results by using three different learning rates `alpha` and `epochs = 200` in the same figure. For this you need to use the following values:
* `alpha = 0.002`
* `alpha = 0.005`
* `alpha = 0.0006`
```
# Add your code here
w1, loss1 = None, None
w2, loss2 = None, None
w3, loss3 = None, None
plt.plot(loss1, label="Learning = 0.0002");
plt.plot(loss2, label="Learning = 0.0005");
plt.plot(loss3, label="Learning = 0.00006");
plt.title("MSE Loss vs. Iteration");
plt.xlabel("Iteration")
plt.ylabel("MSE Loss")
plt.grid("on")
plt.legend();
```
## Part 6: Apply your LMS filter and ADALINE algorithm
In this part you will use your algorithm to find the coefficients of a filter based on the expected results of the output.
For this problem an input signal with three different tones of $30, 50$ and $150Hz$ is sampled at $800Hz$, added to the input signal there's also noise.
```
# Signal Generation
np.random.seed(123)
fc_1 = 50
fc_2 = 150
fc_3 = 30
# Add your code here
fs = None
Ts = None
t = np.arange(0,0.10,Ts)
signal_1 = np.sin(2*np.pi*fc_1*t)
signal_2 = np.sin(2*np.pi*fc_2*t)
signal_3 = np.sin(2*np.pi*fc_3*t)
noise = np.random.rand(signal_1.shape[0])
# Input signal
x = signal_1 + signal_2 + signal_3 + noise
# Expected output signal
y_hat = signal_1 + signal_3
# Initial filter coefficients
w = np.zeros(51)
plt.plot(x);
plt.stem(x, use_line_collection=True);
plt.title("Input Signal");
plt.xlabel("sample")
plt.ylabel("amplitude")
plt.grid("on");
# Find the coefficients of the filter
Z = None
w, loss = None, None
```
Plot the results of your filter against the expected signal
```
conv = np.convolve(x, w, mode='full')
plt.plot(conv, label="Estimate");
plt.plot(signal_1 + signal_3, label="Expected");
plt.title("Comparison between convolutions");
plt.xlabel("sample")
plt.ylabel("amplitude")
plt.legend()
plt.grid("on");
```
## Part 7: Conclusions
You are at the end of this project, to finalize, write three conclusions of what you have learned.
| github_jupyter |
# Managed Spot Training for XGBoost
This notebook shows usage of SageMaker Managed Spot infrastructure for XGBoost training. Below we show how Spot instances can be used for the 'algorithm mode' and 'script mode' training methods with the XGBoost container.
[Managed Spot Training](https://docs.aws.amazon.com/sagemaker/latest/dg/model-managed-spot-training.html) uses Amazon EC2 Spot instance to run training jobs instead of on-demand instances. You can specify which training jobs use spot instances and a stopping condition that specifies how long Amazon SageMaker waits for a job to run using Amazon EC2 Spot instances.
In this notebook we will perform XGBoost training as described [here](). See the original notebook for more details on the data.
## Prerequisites
Ensuring the latest sagemaker sdk is installed. For a major version upgrade, there might be some apis that may get deprecated.
```
!pip install -qU awscli boto3 sagemaker
```
### Setup variables and define functions
```
%%time
import io
import os
import boto3
import sagemaker
import urllib
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket here if you wish.
bucket = sagemaker.Session().default_bucket()
prefix = 'sagemaker/DEMO-xgboost-spot'
# customize to your bucket where you have would like to store the data
```
### Fetching the dataset
```
%%time
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
sagemaker.Session().upload_data(FILE_DATA, bucket=bucket, key_prefix=prefix+'/train')
```
### Obtaining the latest XGBoost container
We obtain the new container by specifying the framework version (0.90-1). This version specifies the upstream XGBoost framework version (0.90) and an additional SageMaker version (1). If you have an existing XGBoost workflow based on the previous (0.72) container, this would be the only change necessary to get the same workflow working with the new container.
```
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(region, 'xgboost', '1.0-1')
```
### Training the XGBoost model
After setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes few minutes.
To run our training script on SageMaker, we construct a sagemaker.xgboost.estimator.XGBoost estimator, which accepts several constructor arguments:
* __entry_point__: The path to the Python script SageMaker runs for training and prediction.
* __role__: Role ARN
* __hyperparameters__: A dictionary passed to the train function as hyperparameters.
* __train_instance_type__ *(optional)*: The type of SageMaker instances for training. __Note__: This particular mode does not currently support training on GPU instance types.
* __sagemaker_session__ *(optional)*: The session used to train on Sagemaker.
```
hyperparameters = {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:squarederror",
"num_round":"50"}
instance_type = 'ml.m5.2xlarge'
output_path = 's3://{}/{}/{}/output'.format(bucket, prefix, 'abalone-xgb')
content_type = "libsvm"
```
If Spot instances are used, the training job can be interrupted, causing it to take longer to start or finish. If a training job is interrupted, a checkpointed snapshot can be used to resume from a previously saved point and can save training time (and cost).
To enable checkpointing for Managed Spot Training using SageMaker XGBoost we need to configure three things:
1. Enable the `train_use_spot_instances` constructor arg - a simple self-explanatory boolean.
2. Set the `train_max_wait constructor` arg - this is an int arg representing the amount of time you are willing to wait for Spot infrastructure to become available. Some instance types are harder to get at Spot prices and you may have to wait longer. You are not charged for time spent waiting for Spot infrastructure to become available, you're only charged for actual compute time spent once Spot instances have been successfully procured.
3. Setup a `checkpoint_s3_uri` constructor arg - this arg will tell SageMaker an S3 location where to save checkpoints. While not strictly necessary, checkpointing is highly recommended for Manage Spot Training jobs due to the fact that Spot instances can be interrupted with short notice and using checkpoints to resume from the last interruption ensures you don't lose any progress made before the interruption.
Feel free to toggle the `train_use_spot_instances` variable to see the effect of running the same job using regular (a.k.a. "On Demand") infrastructure.
Note that `train_max_wait` can be set if and only if `train_use_spot_instances` is enabled and must be greater than or equal to `train_max_run`.
```
import time
job_name = 'DEMO-xgboost-spot-' + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())
print("Training job", job_name)
train_use_spot_instances = True
train_max_run = 3600
train_max_wait = 7200 if train_use_spot_instances else None
checkpoint_s3_uri = ('s3://{}/{}/checkpoints/{}'.format(bucket, prefix, job_name) if train_use_spot_instances
else None)
print("Checkpoint path:", checkpoint_s3_uri)
estimator = sagemaker.estimator.Estimator(container,
role,
hyperparameters=hyperparameters,
train_instance_count=1,
train_instance_type=instance_type,
train_volume_size=5, # 5 GB
output_path=output_path,
sagemaker_session=sagemaker.Session(),
train_use_spot_instances=train_use_spot_instances,
train_max_run=train_max_run,
train_max_wait=train_max_wait,
checkpoint_s3_uri=checkpoint_s3_uri
);
train_input = sagemaker.s3_input(s3_data='s3://{}/{}/{}'.format(bucket, prefix, 'train'), content_type='libsvm')
estimator.fit({'train': train_input}, job_name=job_name)
```
### Savings
Towards the end of the job you should see two lines of output printed:
- `Training seconds: X` : This is the actual compute-time your training job spent
- `Billable seconds: Y` : This is the time you will be billed for after Spot discounting is applied.
If you enabled the `train_use_spot_instances`, then you should see a notable difference between `X` and `Y` signifying the cost savings you will get for having chosen Managed Spot Training. This should be reflected in an additional line:
- `Managed Spot Training savings: (1-Y/X)*100 %`
## Enabling checkpointing for script mode
An additional mode of operation is to run customizable scripts as part of the training and inference jobs. See [this notebook](./xgboost_abalone_dist_script_mode.ipynb) for details on how to setup script mode.
Here we highlight the specific changes that would enable checkpointing and use Spot instances.
Checkpointing in the framework mode for SageMaker XGBoost can be performed using two convenient functions:
- `save_checkpoint`: this returns a callback function that performs checkpointing of the model for each round. This is passed to XGBoost as part of the [`callbacks`](https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.train) argument.
- `load_checkpoint`: This is used to load existing checkpoints to ensure training resumes from where it previously stopped.
Both functions take the checkpoint directory as input, which in the below example is set to `/opt/ml/checkpoints`.
The primary arguments that change for the `xgb.train` call are
1. `xgb_model`: This refers to the previous checkpoint (saved from a previously run partial job) obtained by `load_checkpoint`. This would be `None` if no previous checkpoint is available.
2. `callbacks`: This contains a function that performs the checkpointing
Updated script looks like the following.
---------
```
CHECKPOINTS_DIR = '/opt/ml/checkpoints' # default location for Checkpoints
callbacks = [save_checkpoint(CHECKPOINTS_DIR)]
prev_checkpoint, n_iterations_prev_run = load_checkpoint(CHECKPOINTS_DIR)
bst = xgb.train(
params=train_hp,
dtrain=dtrain,
evals=watchlist,
num_boost_round=(args.num_round - n_iterations_prev_run),
xgb_model=prev_checkpoint,
callbacks=callbacks
)
```
### Using the SageMaker XGBoost Estimator
The XGBoost estimator class in the SageMaker Python SDK allows us to run that script as a training job on the Amazon SageMaker managed training infrastructure. We’ll also pass the estimator our IAM role, the type of instance we want to use, and a dictionary of the hyperparameters that we want to pass to our script.
```
from sagemaker.session import s3_input
from sagemaker.xgboost.estimator import XGBoost
job_name = 'DEMO-xgboost-regression-' + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())
print("Training job", job_name)
checkpoint_s3_uri = ('s3://{}/{}/checkpoints/{}'.format(bucket, prefix, job_name) if train_use_spot_instances
else None)
print("Checkpoint path:", checkpoint_s3_uri)
xgb_script_mode_estimator = XGBoost(
entry_point="abalone.py",
hyperparameters=hyperparameters,
image_name=container,
role=role,
train_instance_count=1,
train_instance_type=instance_type,
framework_version="0.90-1",
output_path="s3://{}/{}/{}/output".format(bucket, prefix, "xgboost-script-mode"),
train_use_spot_instances=train_use_spot_instances,
train_max_run=train_max_run,
train_max_wait=train_max_wait,
checkpoint_s3_uri=checkpoint_s3_uri
)
```
Training is as simple as calling `fit` on the Estimator. This will start a SageMaker Training job that will download the data, invoke the entry point code (in the provided script file), and save any model artifacts that the script creates. In this case, the script requires a `train` and a `validation` channel. Since we only created a `train` channel, we re-use it for validation.
```
xgb_script_mode_estimator.fit({'train': train_input, 'validation': train_input}, job_name=job_name)
```
| github_jupyter |
# Child Nutrition Calculator
### Input the required information about child
```
# personal info -> input name, age, gender, height, weight
def personalInfoChild():
Name = input("Enter your Name: ")
Age = int(input("Enter your Age: "))
Gender = input("Enter your Gender: ")
height = float(input("Enter your height in inches: "))
weight = float(input("Enter your weight pounds (lb): "))
return Name, Age, Gender, height, weight
# Food Consumption of child
def foodConsumed():
foodConsumedByChild = {}
for i in range(6):
print("******************************************")
food = input("Enter food name: ")
intake = int(input("Enter food intake in gm: "))
foodConsumedByChild[food] = intake
return foodConsumedByChild
print("Enter your basic information")
Name, Age, Gender, height, weight = personalInfoChild()
print("Enter food intakes in gm")
foodConsumedByChild = foodConsumed()
foodConsumedByChild
```
### Method to calculate BMI
```
def BMI(weight, height):
# 703 for converting in SI units
bmi = (weight / (height ** 2)) * 703
return bmi
```
### Method to calculate calories intake
```
def rateOfFood(food):
print("************************************")
print("Enter rate details of ", food)
calories = int(input("Enter calories: "))
intake = int(input("Enter food intake: "))
rate = (calories / intake)
print("Rate is", rate)
return rate
def totalCaloriesIntake(foodConsumedByChild):
totalCaloriesConsumed = 0
for food in foodConsumedByChild.keys():
rate = rateOfFood(food)
caloriesOfFoodIntake = rate * foodConsumedByChild[food]
totalCaloriesConsumed += caloriesOfFoodIntake
return totalCaloriesConsumed
```
### Calculating respective BMI and Calories Intake
```
# BMI
bmiOfChild = BMI(weight, height)
print("BMI of Child is bmiOfChild ", bmiOfChild)
# Calories Intake
totalCaloriesConsumedByChild = totalCaloriesIntake(foodConsumedByChild)
print("Total Calories Intake of Child is ", totalCaloriesConsumedByChild)
```
# HW
* Go through code and imporve it on your own
- instead of only 6 food what if I want more ?
- you can make food rate per 100gm
* Print output in correct format
# Some git commands
1. Clone a repo - go to destination folder and open respective CMD / Terminal and write git clone url
2. check git status -- changed files in red
3. add - git add .
4. check git status again -- changed files in green
5. commit -> git commit -m "your Message"
6. pushing -> git push / git push -f
| github_jupyter |
```
import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold, StratifiedKFold
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import RandomForestClassifier
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.svm import SVC
from sklearn.svm import LinearSVR
from sklearn.svm import SVR
from matplotlib.ticker import MaxNLocator
import matplotlib.pyplot as plt
%matplotlib inline
from collections import namedtuple
ORIGIN_BLOCKGROUP = 'o_bg'
GEO_ID = 'geoid'
AREA = 'area'
BLOCKGROUP_MODE_CHOICE = 'block_group'
BLOCKGROUP_AFFORDABILITY = 'key'
BLOCKGROUP = 'blockgroup'
### The following is new features list
DURATION = 'google_duration'
DISTANCE = 'trip_path_distance'
HOUSEHOLD_SIZE = 'hhsize'
RENT = 'hh_rent'
HOME_OWN = 'hh_own'
DEPART_TIME = 'depart_time'
NUMBER_CHILDREN = 'numchildren'
NUMBER_VEHICLE = 'vehicle_count'
FEMALE = 'gender_female'
MALE = 'gender_male'
WEEKDAY_AM = 'weekday_am'
WEEKDAY_MID = 'weekday_mid'
WEEKDAY_PM = 'weekday_pm'
WEEKDAY_LATE = 'weekday_late'
RESIDENCY_UNDER5 = 'residency_under5'
RESIDENCY_OVER5 = 'residency_over5'
INCOME_UNDER25 = 'income_under25'
INCOME_75_100 = 'income_75_100'
INCOME_OVER100 = 'income_over100'
RACE_WHITE = 'race_white'
TRIP_WEIGHT = 'trip_weight_revised'
DRIVE_ALONE_THRESHOLD = 'drive_alone_threshold'
DRIVE_ALONE = 'drive_alone'
###
# and TRIP_WEIGHT when summarizing the results
WEIGHTED_VALUES = 'weighted_values'
#rename:
MODE_INDEX = 'mode_index'
SCALED = 'scaled'
MODE_CHOIDE_SCORE = 'mode_choice_score'
AFFORDABILITY_SCORE = 'affordability_score'
ABOVE_MEDIAN = 'prop_driving_above_median'
RELATIVE_SCALED = 'relative_scaled'
FEATURES_LIST = [DURATION, DISTANCE, HOUSEHOLD_SIZE, RENT, HOME_OWN, DEPART_TIME,
NUMBER_CHILDREN, NUMBER_VEHICLE, FEMALE, MALE,
WEEKDAY_AM, WEEKDAY_MID, WEEKDAY_PM, WEEKDAY_LATE,
RESIDENCY_UNDER5, RESIDENCY_OVER5, INCOME_UNDER25, INCOME_75_100, INCOME_OVER100,
RACE_WHITE, TRIP_WEIGHT]
SCORES_LIST = [MODE_CHOIDE_SCORE, AFFORDABILITY_SCORE]
OUTCOME_LIST = [DRIVE_ALONE, DRIVE_ALONE_THRESHOLD]
tr = pd.read_csv('df_Trip_Features.csv', dtype={ORIGIN_BLOCKGROUP: str})
tr.rename(columns={ORIGIN_BLOCKGROUP: BLOCKGROUP}, inplace=True)
tr.head()
len(tr[BLOCKGROUP].unique())
# load features/covariates and normalize the values
mode_choice = pd.read_csv('true_final/final_mode_choice_081418.csv',
dtype={BLOCKGROUP_MODE_CHOICE: 'str'})[[BLOCKGROUP_MODE_CHOICE, MODE_INDEX]]
mode_choice.rename(columns = {BLOCKGROUP_MODE_CHOICE: BLOCKGROUP,
MODE_INDEX: MODE_CHOIDE_SCORE}, inplace=True)
affordability = pd.read_csv('true_final/final_affordability_081418.csv',
dtype={BLOCKGROUP_AFFORDABILITY: 'str'})[[BLOCKGROUP_AFFORDABILITY, RELATIVE_SCALED]]
affordability.rename(columns = {BLOCKGROUP_AFFORDABILITY: BLOCKGROUP,
RELATIVE_SCALED: AFFORDABILITY_SCORE}, inplace=True)
features_set = pd.merge(left=mode_choice, right=affordability, on=BLOCKGROUP)
features_set.head()
```
Now we should be ready for Prediction
```
def fit_and_evaluate_continuous(input_model, title_label, input_dat, output_dat, scoring_method):
"""
This function runs linear regression, random forests regression, support vector regression
using continuous outcome
"""
input_model.fit(input_dat, output_dat)
mode_scores = cross_val_score(input_model, input_dat, output_dat, scoring=scoring_method, cv=5)
print("the next are " + scoring_method + " for each K")
print(np.abs(mode_scores))
print(scoring_method + " (Average / Std): %0.2f (+/- %0.2f)" %
(np.abs(mode_scores).mean(), np.abs(mode_scores).std() * 2) + "\n")
def fit_and_evaluate_binary(input_model, title_label, input_dat, output_dat):
"""
This function runs logistic regression, random forests classifier, support vector classifier
using binary outcome
"""
input_model.fit(input_dat, output_dat)
mode_scores = cross_val_score(input_model, input_dat,
output_dat,
scoring='accuracy', cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
#print("Accuracy for each K")
#print(np.abs(mode_scores), "\n")
print("Summary: " + title_label + " with 5-fold CV Accuracy: %0.2f (+/- %0.2f)" %
(np.abs(mode_scores).mean(), np.abs(mode_scores).std() * 2))
print("\n")
return ([np.abs(mode_scores).mean(), np.abs(mode_scores).std()])
def run_and_diagnose(input_scores, input_psrc, output_binary_dat, score_title, psrc_title):
"""
This function applies three classification methods to the two different models;
1. Mode Choice & Affordability => Driving Alone (binary)
2. All available PSRC raw features => Driving Alone (binary)
and then print the accuracy of prediction results and draw plots.
"""
n_groups = 3
logreg_score = fit_and_evaluate_binary(LogisticRegression(),
score_title + " Logistidc Regression", input_scores, output_binary_dat)
rfc_score = fit_and_evaluate_binary(RandomForestClassifier(max_depth=100, random_state=0),
score_title + " Random Forest Classifier", input_scores, output_binary_dat)
svc_score = fit_and_evaluate_binary(LinearSVC(),
score_title + " Support Vector Classifier with Linear Kernel", input_scores, output_binary_dat)
score_means = list(zip(logreg_score, rfc_score, svc_score))[0]
score_vars = list(zip(logreg_score, rfc_score, svc_score))[1]
logreg_psrc = fit_and_evaluate_binary(LogisticRegression(),
psrc_title + " Logistidc Regression", input_psrc, output_binary_dat)
rfc_psrc = fit_and_evaluate_binary(RandomForestClassifier(max_depth=100, random_state=0),
psrc_title + " Random Forest Classifier", input_psrc, output_binary_dat)
svc_psrc = fit_and_evaluate_binary(LinearSVC(),
psrc_title + " Support Vector Classifier with Linear Kernel", input_psrc, output_binary_dat)
psrc_means = list(zip(logreg_psrc, rfc_psrc, svc_psrc))[0]
psrc_vars = list(zip(logreg_psrc, rfc_psrc, svc_psrc))[1]
# The following draws a grouped bar plot comparing different algorithms & models
fig, ax = plt.subplots()
index = np.arange(n_groups)
bar_width = 0.35
opacity = 0.4
error_config = {'ecolor': '0.3'}
rects1 = ax.bar(index, score_means, bar_width,
alpha=opacity, color='b',
yerr=score_vars, error_kw=error_config,
label=score_title)
rects2 = ax.bar(index + bar_width, psrc_means, bar_width,
alpha=opacity, color='r',
yerr=psrc_vars, error_kw=error_config,
label=psrc_title)
#ax.set_xlabel('Group')
ax.set_ylabel('Accuracy')
ax.set_title(score_title + " vs. " + psrc_title + " Accuracy Comparison")
ax.set_xticks(index + bar_width / 2)
ax.set_xticklabels(('Logistic Regression', 'Random Forest', 'Support Vector Classifier'))
ax.legend(loc=9, bbox_to_anchor=(0.5, -0.1))
fig.tight_layout()
plt.show()
idx_column = ['logistic', 'randomforest', 'svc']
score_means = pd.Series(score_means, index = idx_column)
psrc_means = pd.Series(psrc_means, index = idx_column)
res = pd.DataFrame({score_title: score_means, psrc_title:psrc_means})
return(res)
merged = pd.merge(left=features_set, right=tr, how='inner', on=BLOCKGROUP)
merged.set_index(BLOCKGROUP, inplace=True)
# normalize data except the binary outcome
normalized_merged = merged[[DRIVE_ALONE, DRIVE_ALONE_THRESHOLD]].copy()
normalized_merged[FEATURES_LIST] = (merged[FEATURES_LIST]- merged[FEATURES_LIST].mean()) / merged[FEATURES_LIST].std()
normalized_merged[SCORES_LIST] = (merged[SCORES_LIST]- merged[SCORES_LIST].mean()) / merged[SCORES_LIST].std()
normalized_merged[DRIVE_ALONE] = (merged[DRIVE_ALONE]- merged[DRIVE_ALONE].mean()) / merged[DRIVE_ALONE].std()
normalized_merged.head()
# Now run the result
run_and_diagnose(normalized_merged[SCORES_LIST], normalized_merged[FEATURES_LIST], normalized_merged[DRIVE_ALONE_THRESHOLD],
'ModeChoice & Affordability', 'Raw PSRC')
from sklearn.pipeline import Pipeline
# to examine principal components
pca=PCA(n_components=4)
#pca.fit(normalized_res)
#print(pca.components_)
pca.fit_transform(tr[FEATURES_LIST].values)
print(pca.explained_variance_ratio_)
pd.set_option('display.max_columns', 30)
pd.DataFrame(pca.components_, columns=tr[FEATURES_LIST].columns, index = ['PC-1','PC-2', 'PC-3', 'PC-4'])
FEATURES_LIST2 = [DURATION, DISTANCE, HOUSEHOLD_SIZE, NUMBER_VEHICLE, HOME_OWN]
run_and_diagnose(normalized_merged[SCORES_LIST], normalized_merged[FEATURES_LIST2], normalized_merged[DRIVE_ALONE_THRESHOLD],
'ModeChoice & Affordability', 'Raw PSRC (Five Features)')
# This shows Feature list + scores perform the best
run_and_diagnose(normalized_merged[SCORES_LIST],
normalized_merged[FEATURES_LIST2], normalized_merged[DRIVE_ALONE_THRESHOLD],
'PSRC five', 'PSRC Five')
# we may use and modify the following if we want Altair plots..
"""
import altair as alt
from altair.expr import datum, if_
from vega_datasets import data
source = data.population.url
alt.Chart(source).mark_bar(stroke='transparent').encode(
alt.X('gender:N', scale=alt.Scale(rangeStep=12), axis=alt.Axis(title='')),
alt.Y('sum(people):Q', axis=alt.Axis(title='population', grid=False)),
color=alt.Color('gender:N', scale=alt.Scale(range=["#EA98D2", "#659CCA"])),
column='age:O'
).configure_view(
stroke='transparent'
).configure_axis(
domainWidth=0.8
).transform_filter(
datum.year == 2000
).transform_calculate(
'gender', if_(datum.sex == 2, 'Female', 'Male')
)
"""
```
| github_jupyter |
# Introduction to Deep Learning with PyTorch
In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
## Neural Networks
Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.
<img src="assets/simple_neuron.png" width=400px>
Mathematically this looks like:
$$
\begin{align}
y &= f(w_1 x_1 + w_2 x_2 + b) \\
y &= f\left(\sum_i w_i x_i +b \right)
\end{align}
$$
With vectors this is the dot/inner product of two vectors:
$$
h = \begin{bmatrix}
x_1 \, x_2 \cdots x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{bmatrix}
$$
## Tensors
It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.
<img src="assets/tensor_examples.svg" width=600px>
With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
```
# First, import PyTorch
import torch
def activation(x):
""" Sigmoid activation function
Arguments
---------
x: torch.Tensor
"""
return 1/(1+torch.exp(-x))
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 3 random normal variables
features = torch.randn((1, 5))
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
# and a true bias term
bias = torch.randn((1, 1))
```
Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:
`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.
Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.
PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
## ===========================================================
## Calculate the output of this network using the weights and bias tensors
Using dot product, summation, and the sigmoid function to generate output for a single neural network
```
features
weights
bias
output1 = activation(torch.sum(features * weights) + bias)
```
#### -----------------------------------------------------------------------------
```
output1
```
#### ------------------------------------------------------------------------
## ===========================================================
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.
Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error
```python
>> torch.mm(features, weights)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-15d592eb5279> in <module>()
----> 1 torch.mm(features, weights)
RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```
As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.
**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.
There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).
* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.
I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.
> **Exercise**: Calculate the output of our little network using matrix multiplication.
```
## Calculate the output of this network using matrix multiplication
```
## ===========================================================
## Calculate the output of network using matrix multiplication
```
features
weights
reshaped_weights = weights.view(5,1)
reshaped_weights
matmul_features_weights = torch.mm(features, reshaped_weights)
matmul_features_weights
```
#### ------------------------------------------------------------------------
```
output2 = activation(matmul_features_weights + bias)
output2
```
#### ------------------------------------------------------------------------
checking for equality of outputs using inner product and matrix multiplication
```
bool(output1==output2)
```
## ===========================================================
### Stack them up!
That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.
<img src='assets/multilayer_diagram_weights.png' width=450px>
The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated
$$
\vec{h} = [h_1 \, h_2] =
\begin{bmatrix}
x_1 \, x_2 \cdots \, x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_{11} & w_{12} \\
w_{21} &w_{22} \\
\vdots &\vdots \\
w_{n1} &w_{n2}
\end{bmatrix}
$$
The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply
$$
y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)
$$
```
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 3 random normal variables
features = torch.randn((1, 3))
# Define the size of each layer in our network
n_input = features.shape[1] # Number of input units, must match number of input features
n_hidden = 2 # Number of hidden units
n_output = 1 # Number of output units
# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)
# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
```
# ======================================================
> # **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
```
features
W1
W2
B1
B2
hidden_layer = activation(torch.mm(features, W1) + B1)
output = activation(torch.mm(hidden_layer, W2) + B2)
```
### --------------------------------------------------------
```
output
```
### --------------------------------------------------------
# ======================================================
If you did this correctly, you should see the output `tensor([[ 0.3171]])`.
The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.
## Numpy to Torch and back
Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
```
import numpy as np
a = np.random.rand(4,3)
a
b = torch.from_numpy(a)
b
b.numpy()
```
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
```
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)
# Numpy array matches new values from Tensor
a
```
| github_jupyter |
# DiscreteDP Example: Mine Management
**Daisuke Oyama**
*Faculty of Economics, University of Tokyo*
From Miranda and Fackler, <i>Applied Computational Economics and Finance</i>, 2002,
Section 7.6.1
```
%matplotlib inline
import itertools
import numpy as np
from scipy import sparse
import matplotlib.pyplot as plt
from quantecon.markov import DiscreteDP
```
The model is formulated with finite horizon in Section 7.2.1,
but solved with infinite horizon in Section 7.6.1.
Here we follow the latter.
```
price = 1 # Market price of ore
sbar = 100 # Upper bound of ore stock
beta = 0.9 # Discount rate
n = sbar + 1 # Number of states
m = sbar + 1 # Number of actions
# Cost function
c = lambda s, x: x**2 / (1+s)
```
## Product formulation
This approch sets up the reward array `R` and the transition probability array `Q`
as a 2-dimensional array of shape `(n, m)`
and a 3-simensional array of shape `(n, m, n)`, respectively,
where the reward is set to $-\infty$ for infeasible state-action pairs
(and the transition probability distribution is arbitrary for those pairs).
Reward array:
```
R = np.empty((n, m))
for s, x in itertools.product(range(n), range(m)):
R[s, x] = price * x - c(s, x) if x <= s else -np.inf
```
(Degenerate) transition probability array:
```
Q = np.zeros((n, m, n))
for s, x in itertools.product(range(n), range(m)):
if x <= s:
Q[s, x, s-x] = 1
else:
Q[s, x, 0] = 1 # Arbitrary
```
Set up the dynamic program as a `DiscreteDP` instance:
```
ddp = DiscreteDP(R, Q, beta)
```
Solve the optimization problem with the `solve` method,
which by defalut uses the policy iteration algorithm:
```
res = ddp.solve()
```
The number of iterations:
```
res.num_iter
```
The controlled Markov chain is stored in `res.mc`.
To simulate:
```
nyrs = 15
spath = res.mc.simulate(nyrs+1, init=sbar)
spath
```
Draw the graphs:
```
wspace = 0.5
hspace = 0.3
fig = plt.figure(figsize=(12, 8+hspace))
fig.subplots_adjust(wspace=wspace, hspace=hspace)
ax0 = plt.subplot2grid((2, 4), (0, 0), colspan=2)
ax1 = plt.subplot2grid((2, 4), (0, 2), colspan=2)
ax2 = plt.subplot2grid((2, 4), (1, 1), colspan=2)
ax0.plot(res.v)
ax0.set_xlim(0, sbar)
ax0.set_ylim(0, 60)
ax0.set_xlabel('Stock')
ax0.set_ylabel('Value')
ax0.set_title('Optimal Value Function')
ax1.plot(res.sigma)
ax1.set_xlim(0, sbar)
ax1.set_ylim(0, 25)
ax1.set_xlabel('Stock')
ax1.set_ylabel('Extraction')
ax1.set_title('Optimal Extraction Policy')
ax2.plot(spath)
ax2.set_xlim(0, nyrs)
ax2.set_ylim(0, sbar)
ax2.set_xticks(np.linspace(0, 15, 4, endpoint=True))
ax2.set_xlabel('Year')
ax2.set_ylabel('Stock')
ax2.set_title('Optimal State Path')
plt.show()
```
## State-action pairs formulation
This approach assigns the rewards and transition probabilities
only to feaslbe state-action pairs,
setting up `R` and `Q` as a 1-dimensional array of length `L`
and a 2-dimensional array of shape `(L, n)`, respectively.
In particular, this allows us to formulate `Q` in
[scipy sparse matrix format](http://docs.scipy.org/doc/scipy/reference/sparse.html).
We need the arrays of feasible state and action indices:
```
S = np.arange(n)
X = np.arange(m)
# Values of remaining stock in the next period
S_next = S.reshape(n, 1) - X.reshape(1, m)
# Arrays of feasible state and action indices
s_indices, a_indices = np.where(S_next >= 0)
# Number of feasible state-action pairs
L = len(s_indices)
L
s_indices
a_indices
```
Reward vector:
```
R = np.empty(L)
for i, (s, x) in enumerate(zip(s_indices, a_indices)):
R[i] = price * x - c(s, x)
```
(Degenerate) transition probability array,
where we use the [scipy.sparse.lil_matrix](http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.lil_matrix.html) format,
while any format will do
(internally it will be converted to the [scipy.sparse.csr_matrix](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html) format):
```
Q = sparse.lil_matrix((L, n))
it = np.nditer((s_indices, a_indices), flags=['c_index'])
for s, x in it:
i = it.index
Q[i, s-x] = 1
```
Alternatively, one can construct `Q` directly as a scipy.sparse.csr_matrix as follows:
```
# data = np.ones(L)
# indices = s_indices - a_indices
# indptr = np.arange(L+1)
# Q = sparse.csr_matrix((data, indices, indptr), shape=(L, n))
```
Set up the dynamic program as a `DiscreteDP` instance:
```
ddp_sp = DiscreteDP(R, Q, beta, s_indices, a_indices)
```
Solve the optimization problem with the `solve` method,
which by defalut uses the policy iteration algorithm:
```
res_sp = ddp_sp.solve()
```
Number of iterations:
```
res_sp.num_iter
```
Simulate the controlled Markov chain:
```
nyrs = 15
spath_sp = res_sp.mc.simulate(nyrs+1, init=sbar)
```
Draw the graphs:
```
wspace = 0.5
hspace = 0.3
fig = plt.figure(figsize=(12, 8+hspace))
fig.subplots_adjust(wspace=wspace, hspace=hspace)
ax0 = plt.subplot2grid((2, 4), (0, 0), colspan=2)
ax1 = plt.subplot2grid((2, 4), (0, 2), colspan=2)
ax2 = plt.subplot2grid((2, 4), (1, 1), colspan=2)
ax0.plot(res_sp.v)
ax0.set_xlim(0, sbar)
ax0.set_ylim(0, 60)
ax0.set_xlabel('Stock')
ax0.set_ylabel('Value')
ax0.set_title('Optimal Value Function')
ax1.plot(res_sp.sigma)
ax1.set_xlim(0, sbar)
ax1.set_ylim(0, 25)
ax1.set_xlabel('Stock')
ax1.set_ylabel('Extraction')
ax1.set_title('Optimal Extraction Policy')
ax2.plot(spath_sp)
ax2.set_xlim(0, nyrs)
ax2.set_ylim(0, sbar)
ax2.set_xticks(np.linspace(0, 15, 4, endpoint=True))
ax2.set_xlabel('Year')
ax2.set_ylabel('Stock')
ax2.set_title('Optimal State Path')
plt.show()
```
| github_jupyter |
<a href="https://colab.research.google.com/github/Kabongosalomon/Cat-vs-Dog-Classifier/blob/master/Helper_Colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Load My data
```
! git clone https://github.com/Kabongosalomon/Cat-vs-Dog-Classifier.git
# Loaf helper file
link = "https://drive.google.com/file/d/1Cn0B9Zr2irUnZcHqODT9IilGHf9fZ61R/view?usp=sharing"
_, id_t = link.split('d/')
id = id_t.split('/')[0]
print ("Loading file ...")
print (id) # Verify that you have everything after '='
# Install the PyDrive wrapper & import libraries.
# This only needs to be done once per notebook.
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
# This only needs to be done once per notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
file_id = id
downloaded = drive.CreateFile({'id':file_id})
downloaded.FetchMetadata(fetch_all=True)
downloaded.GetContentFile(downloaded.metadata['title'])
print ("Completed")
!ls
```
## Pre-process
```
!unzip -q Cat_Dog_data.zip # Unzip the downloaded file
!mkdir ./Cat-vs-Dog-Classifier/data # Create a data directory
!cp -r Cat_Dog_data/ Cat-vs-Dog-Classifier/data/ # Copy the unziped file in the created repository
!rm -r ./Cat_Dog_data/ # Remove Both the unziped file
!rm -r ./Cat_Dog_data.zip # Remove Both the zip file
!rm adc.json
```
## Runing the main.py
```
import os
os.chdir("Cat-vs-Dog-Classifier") # Thanks to https://stackoverflow.com/questions/37644441/python-run-script-in-all-subdirectories/37644536
!ls -a data/Cat_Dog_data/train
# Train on CNN_classifier_1
# !CUDA_LAUNCH_BLOCKING=1 python main.py # used many for debug purpose
! python main.py
# Train on CNN_classifier_2
# !CUDA_LAUNCH_BLOCKING=1 python main.py # used many for debug purpose
! python main.py
```
<h2>About the Authors:</h2>
<a href="https://salomonkabongo.wixsite.com/datascientist">Salomon Kabongo KABENAMUALU</a>, Master degree student at <a href="https://aimsammi.org/">the African Masters in Machine Intelligence (AMMI Ghana)</a> his research focused on the use machine learning technique in the field of Natural Language Processing.
Copyright © 2020. This notebook and its source code are released under the terms of the <a href="https://www.apache.org/licenses/LICENSE-2.0">Apache License 2.0</a>.
| github_jupyter |
# Logistic Regression with L2 regularization
```
import pandas as pd
products = pd.read_csv('amazon_baby_subset.csv')
products.head()
import json
with open('important_words.json', 'r') as f:
important_words = json.load(f)
important_words = [str(s) for s in important_words]
products = products.fillna({'review':''})
products.head()
import string
products = products.fillna({'review':''})
intab = string.punctuation
outtab = '\0'*len(string.punctuation)
table = str.maketrans(intab, outtab)
def remove_punctuation(text):
return text.translate(table)
products['review_clean'] = products['review'].apply(remove_punctuation)
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
products.head()
```
## Train-Validation split
```
train_index = pd.read_json('module-4-assignment-train-idx.json', typ='series')
validation_index = pd.read_json('module-4-assignment-validation-idx.json', typ='series')
train_data, validation_data = products.iloc[train_index.values], products.iloc[validation_index.values]
len(train_data)
len(validation_data)
```
## Convert Frame to NumPy array
```
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.as_matrix()
label_sarray = data_sframe[label]
label_array = label_sarray.as_matrix()
return(feature_matrix, label_array)
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
# YOUR CODE HERE
score = np.dot(feature_matrix,coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
# YOUR CODE HERE
predictions = 1/(1+np.exp(-score))
# return predictions
return predictions
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
# Compute the dot product of errors and feature
## YOUR CODE HERE
derivative = np.dot(errors,feature)
# add L2 penalty term for any feature that isn't the intercept.
if not feature_is_constant:
## YOUR CODE HERE
derivative = derivative - 2*l2_penalty*coefficient
return derivative
def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
return lp
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in range(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
## YOUR CODE HERE
predictions = predict_probability(feature_matrix,coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in range(len(coefficients)): # loop over each coefficient
is_intercept = (j == 0)
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
## YOUR CODE HERE
derivative = feature_derivative_with_L2(errors, feature_matrix[:,j], coefficients[j], l2_penalty, is_intercept)
# add the step size times the derivative to the current coefficient
## YOUR CODE HERE
coefficients[j] += step_size*derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
print('iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp))
return coefficients
coefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=0, max_iter=501)
# run with L2 = 4
coefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=4, max_iter=501)
# run with L2 = 10
coefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=10, max_iter=501)
# run with L2 = 1e2
coefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e2, max_iter=501)
# run with L2 = 1e3
coefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e3, max_iter=501)
# run with L2 = 1e5
coefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e5, max_iter=501)
```
## Compare coefficients
```
table = pd.DataFrame({'word': ['(intercept)'] + important_words})
def add_coefficients_to_table(coefficients, column_name):
table[column_name] = coefficients
return table
add_coefficients_to_table(coefficients_0_penalty, 'coefficients [L2=0]')
add_coefficients_to_table(coefficients_4_penalty, 'coefficients [L2=4]')
add_coefficients_to_table(coefficients_10_penalty, 'coefficients [L2=10]')
add_coefficients_to_table(coefficients_1e2_penalty, 'coefficients [L2=1e2]')
add_coefficients_to_table(coefficients_1e3_penalty, 'coefficients [L2=1e3]')
add_coefficients_to_table(coefficients_1e5_penalty, 'coefficients [L2=1e5]')
positive_words = table.sort_values('coefficients [L2=0]',ascending = False)['word'][0:5]
negative_words = table.sort_values('coefficients [L2=0]',ascending = True)['word'][0:5]
positive_words
negative_words
import matplotlib.pyplot as plt
%matplotlib notebook
plt.rcParams['figure.figsize'] = 10, 6
def make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list):
cmap_positive = plt.get_cmap('Reds')
cmap_negative = plt.get_cmap('Blues')
xx = l2_penalty_list
plt.plot(xx, [0.]*len(xx), '--', lw=1, color='k')
table_positive_words = table[table['word'].isin(positive_words)]
table_negative_words = table[table['word'].isin(negative_words)]
del table_positive_words['word']
del table_negative_words['word']
for i in range(len(positive_words)):
color = cmap_positive(0.8*((i+1)/(len(positive_words)*1.2)+0.15))
plt.plot(xx, table_positive_words[i:i+1].values.flatten(),
'-', label=positive_words.iloc[i], linewidth=4.0, color=color)
for i in range(len(negative_words)):
color = cmap_negative(0.8*((i+1)/(len(negative_words)*1.2)+0.15))
plt.plot(xx, table_negative_words[i:i+1].values.flatten(),
'-', label=negative_words.iloc[i], linewidth=4.0, color=color)
plt.legend(loc='best', ncol=3, prop={'size':16}, columnspacing=0.5)
plt.axis([1, 1e5, -1, 2])
plt.title('Coefficient path')
plt.xlabel('L2 penalty ($\lambda$)')
plt.ylabel('Coefficient value')
plt.xscale('log')
plt.rcParams.update({'font.size': 18})
plt.tight_layout()
make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list=[0, 4, 10, 1e2, 1e3, 1e5])
```
## Measuring accuracy
```
def get_classification_accuracy(feature_matrix, sentiment, coefficients):
scores = np.dot(feature_matrix, coefficients)
apply_threshold = np.vectorize(lambda x: 1. if x > 0 else -1.)
predictions = apply_threshold(scores)
num_correct = (predictions == sentiment).sum()
accuracy = num_correct / len(feature_matrix)
return accuracy
train_accuracy = {}
train_accuracy[0] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_0_penalty)
train_accuracy[4] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_4_penalty)
train_accuracy[10] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_10_penalty)
train_accuracy[1e2] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e2_penalty)
train_accuracy[1e3] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e3_penalty)
train_accuracy[1e5] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e5_penalty)
validation_accuracy = {}
validation_accuracy[0] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_0_penalty)
validation_accuracy[4] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_4_penalty)
validation_accuracy[10] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_10_penalty)
validation_accuracy[1e2] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e2_penalty)
validation_accuracy[1e3] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e3_penalty)
validation_accuracy[1e5] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e5_penalty)
# Build a simple report
for key in sorted(validation_accuracy.keys()):
print("L2 penalty = %g" % key)
print("train accuracy = %s, validation_accuracy = %s" % (train_accuracy[key], validation_accuracy[key]))
print("--------------------------------------------------------------------------------")
```
| github_jupyter |
```
import matplotlib.pyplot as plt
import numpy as np
from tqdm import tqdm
%matplotlib inline
import datetime
import cPickle as pickle
import csv
import numpy as np
import random
import sys
maxInt = sys.maxsize
decrement = True
while decrement:
# decrease the maxInt value by factor 10
# as long as the OverflowError occurs.
decrement = False
try:
csv.field_size_limit(maxInt)
except OverflowError:
maxInt = int(maxInt/10)
decrement = True
```
# get term-/document-frequency
```
csv_reader = csv.reader(open('../data/raw/NELA-18/train.tsv', 'r'), delimiter='\t')
tkn2tf = {}
len_heads = [] #1
len_paras = [] #2
cnt_paras = [] #3
len_bodys = [] #4
# csv data: 0:id, 1:head, 2:body, 3:label
print datetime.datetime.now().isoformat()
for n, row in enumerate(csv_reader):
if (n+1) % 100000 == 0: print n+1,
head = row[1].lower().strip()
for tkn in head.split():
if tkn in tkn2tf: tkn2tf[tkn] += 1
else: tkn2tf[tkn] = 1
len_heads.append(len(head.split())) #1
body = row[2].lower().strip()
tkn_para = []
for para in body.split('<eop>'):
if para and para != ' ':
_para = para + '<eop>'
len_para = len(_para.split())
len_paras.append(len_para) #2
tkn_para.append(_para)
cnt_paras.append(len(tkn_para)) #3
body_split = []
for tkn in body.split():
if tkn in tkn2tf: tkn2tf[tkn] += 1
else: tkn2tf[tkn] = 1
body_split.append(tkn)
len_bodys.append(len(body_split)) #4
print n+1, 'Done'
print datetime.datetime.now().isoformat()
print 'voca size :', len(tkn2tf)
sorted_token = sorted(tkn2tf.items(), key=lambda kv: kv[1], reverse=True)
tkn2idx = {}
for idx, (tkn, _) in tqdm(enumerate(sorted_token)):
tkn2idx[tkn] = idx + 2
tkn2idx['<UNK>'] = 1
tkn2idx[''] = 0
if len(tkn2idx) == len(tkn2tf)+2:
print len(tkn2idx), 'No problem'
print
print 'Show top-10 tkn:'
for tkn, freq in sorted_token[:10]:
print tkn,':',freq
print ''
with open('../data/nela-18/whole/dic_mincut0.txt', 'wb') as f:
for key in tkn2idx.keys():
f.write(key+'\n')
tkn2tf_mincut5 = {}
for tkn, tf in tkn2tf.items():
if tf < 8:
continue
tkn2tf_mincut5[tkn] = tf
print 'voca size :', len(tkn2tf_mincut5)
tkn2tf_mincut5['<EOS>'] = tkn2tf_mincut5['<eos>']
tkn2tf_mincut5['<EOP>'] = tkn2tf_mincut5['<eop>']
del tkn2tf_mincut5['<eos>']
del tkn2tf_mincut5['<eop>']
import operator
sorted_voca = sorted(tkn2tf_mincut5.items(), key=operator.itemgetter(1))
list_voca_mincut = []
list_voca_mincut.append('') # PAD
list_voca_mincut.append('<UNK>') # UNK
list_voca_mincut.append('<EOS>') # EOS
list_voca_mincut.append('<EOP>') # EOP
for word, idx in sorted_voca:
if word=='<UNK>' or word=='<EOP>' or word=='<EOS>':
print("existing word", word)
continue
else:
list_voca_mincut.append(word)
len(list_voca_mincut)
with open('../data/nela-18/whole/dic_mincutN.txt', 'wb') as f:
for i in range(len(list_voca_mincut)):
f.write(list_voca_mincut[i]+'\n')
dic_voca = {}
for voca in list_voca_mincut:
dic_voca[voca] = len(dic_voca)
print(dic_voca[''], dic_voca['<UNK>'], dic_voca['<EOS>'], dic_voca['<EOP>'])
with open('../data/nela-18/whole/dic_mincutN.pkl', 'wb') as f:
pickle.dump(dic_voca, f)
```
#### for data processing
```
import copy
dic_voca_lower = copy.deepcopy(dic_voca)
dic_voca_lower['<eos>'] = dic_voca_lower['<EOS>']
dic_voca_lower['<eop>'] = dic_voca_lower['<EOP>']
del dic_voca_lower['<EOS>']
del dic_voca_lower['<EOP>']
len(dic_voca_lower)
print(dic_voca_lower[''], dic_voca_lower['<UNK>'], dic_voca_lower['<eos>'], dic_voca_lower['<eop>'])
```
## stats
```
import csv
import sys
import numpy as np
data= []
with open('../data/raw/NELA-18/train.tsv', 'r') as f:
data_csv = csv.reader(f, delimiter='\t')
for row in data_csv:
data.append(row)
def print_info(data):
print("mean", np.average(data))
print("std", np.std(data))
print("max", np.max(data))
print("95.xx coverage", np.average(data) + 2*np.std(data) )
print("99.73 coverage", np.average(data) + 3*np.std(data) )
print("99.95 coverage", np.average(data) + 3.5*np.std(data) )
print("99.99 coverage", np.average(data) + 4*np.std(data) )
head = [x[1].strip() for x in data]
head_len = [len(x.split()) for x in head]
print('head_len')
print_info(head_len)
body = [x[2].strip() for x in data]
body_len = [len(x.split()) for x in body ]
print('body_len')
print_info(body_len)
context_len = [len(x.split('<EOP>')) for x in body]
print('context_len')
print_info(context_len)
body_sentence = []
for sent in body:
sent = sent.split('<EOP>')
body_sentence.extend(sent)
body_len = [ len(x.split()) for x in body_sentence ]
print('body_sentence')
print_info(body_len)
```
# encode to numpy
```
def fit_length(data, max_len_t, max_len_b):
data_t, data_b = data
list_zeros = np.zeros(max_len_b, 'int32').tolist()
fl_data_t = []
for datum in data_t:
try:
datum = list(datum)
except:
pass
_len = len(datum)
if _len >= max_len_t:
fl_data_t.append( datum[:max_len_t] )
else:
fl_data_t.append( datum + list_zeros[:(max_len_t-_len)] )
fl_data_b = []
for datum in data_b:
try:
datum = list(datum)
except:
pass
_len = len(datum)
if _len >= max_len_b:
fl_data_b.append( datum[:max_len_b] )
else:
fl_data_b.append( datum + list_zeros[:(max_len_b-_len)] )
np_data_t = np.asarray(fl_data_t, dtype='int32')
np_data_b = np.asarray(fl_data_b, dtype='int32')
data = [np_data_t, np_data_b]
return data
csv_reader = csv.reader(open('../data/raw/NELA-18/train.tsv', 'r'), delimiter='\t')
print datetime.datetime.now().isoformat()
ids = []
heads = []
bodys = []
labels = []
for n, row in enumerate(csv_reader):
if (n+1) % 10000 == 0: print n+1,
ids.append(row[0])
labels.append(int(row[3]))
head = []
for tkn in row[1].lower().strip().split():
if tkn in dic_voca_lower:
head.append(dic_voca_lower[tkn])
else:
head.append(1)
heads.append(head)
body = []
for tkn in row[2].lower().strip().split():
if tkn in dic_voca_lower:
body.append(dic_voca_lower[tkn])
else:
body.append(1)
bodys.append(body)
print n+1, 'Done'
print datetime.datetime.now().isoformat() # ~5 mins
print datetime.datetime.now().isoformat()
[np_heads, np_bodys] = fit_length([heads, bodys], 25, 2800)
print datetime.datetime.now().isoformat() # ~3 mins
print datetime.datetime.now().isoformat()
t_trainpath = '../data/nela-18/whole/train/train_title.npy'
np.save(t_trainpath, np_heads)
b_trainpath = '../data/nela-18/whole/train/train_body.npy'
np.save(b_trainpath, np_bodys)
l_trainpath = '../data/nela-18/whole/train/train_label.npy'
np.save(l_trainpath, labels)
print datetime.datetime.now().isoformat()
```
# devset
```
csv_reader = csv.reader(open('../data/raw/NELA-18/dev.tsv', 'r'), delimiter='\t')
print datetime.datetime.now().isoformat()
ids_dev = []
heads_dev = []
bodys_dev = []
labels_dev = []
for n, row in enumerate(csv_reader):
if (n+1) % 10000 == 0: print n+1,
ids_dev.append(row[0])
labels_dev.append(int(row[3]))
head = []
for tkn in row[1].lower().strip().split():
if tkn in dic_voca_lower:
head.append(dic_voca_lower[tkn])
else:
head.append(1)
heads_dev.append(head)
body = []
for tkn in row[2].lower().strip().split():
if tkn in dic_voca_lower:
body.append(dic_voca_lower[tkn])
else:
body.append(1)
bodys_dev.append(body)
print n+1, 'Done'
print datetime.datetime.now().isoformat()
print datetime.datetime.now().isoformat()
[np_heads_dev, np_bodys_dev] = fit_length([heads_dev, bodys_dev], 25, 2800)
print datetime.datetime.now().isoformat() # ~3 mins
print datetime.datetime.now().isoformat()
t_trainpath = '../data/nela-18/whole/dev/dev_title.npy'
np.save(t_trainpath, np_heads_dev)
b_trainpath = '../data/nela-18/whole/dev/dev_body.npy'
np.save(b_trainpath, np_bodys_dev)
l_trainpath = '../data/nela-18/whole/dev/dev_label.npy'
np.save(l_trainpath, labels_dev)
print datetime.datetime.now().isoformat()
```
# testset
```
csv_reader = csv.reader(open('../data/raw/NELA-18/test_type_0.tsv', 'r'), delimiter='\t')
print datetime.datetime.now().isoformat()
ids_dev = []
heads_dev = []
bodys_dev = []
labels_dev = []
for n, row in enumerate(csv_reader):
if (n+1) % 10000 == 0: print n+1,
ids_dev.append(row[0])
labels_dev.append(int(row[3]))
head = []
for tkn in row[1].lower().strip().split():
if tkn in dic_voca_lower:
head.append(dic_voca_lower[tkn])
else:
head.append(1)
heads_dev.append(head)
body = []
for tkn in row[2].lower().strip().split():
if tkn in dic_voca_lower:
body.append(dic_voca_lower[tkn])
else:
body.append(1)
bodys_dev.append(body)
print n+1, 'Done'
print datetime.datetime.now().isoformat()
print datetime.datetime.now().isoformat()
[np_heads_dev, np_bodys_dev] = fit_length([heads_dev, bodys_dev], 25, 2800)
print datetime.datetime.now().isoformat() # ~3 mins
print datetime.datetime.now().isoformat()
t_trainpath = '../data/nela-18/whole/test/test_title.npy'
np.save(t_trainpath, np_heads_dev)
b_trainpath = '../data/nela-18/whole/test/test_body.npy'
np.save(b_trainpath, np_bodys_dev)
l_trainpath = '../data/nela-18/whole/test/test_label.npy'
np.save(l_trainpath, labels_dev)
print datetime.datetime.now().isoformat()
```
# debugset
```
print datetime.datetime.now().isoformat()
t_trainpath = '../data/nela-18//whole/debug/debug_title.npy'
np.save(t_trainpath, np_heads_dev[:200])
b_trainpath = '../data/nela-18/whole/debug/debug_body.npy'
np.save(b_trainpath, np_bodys_dev[:200])
l_trainpath = '../data/nela-18/whole/debug/debug_label.npy'
np.save(l_trainpath, labels_dev[:200])
print datetime.datetime.now().isoformat()
with open('../data/nela-18/whole/dic_mincutN.txt') as f:
test_list_voca = f.readlines()
test_list_voca = [x.strip() for x in test_list_voca]
from nlp_vocab import Vocab
tt = Vocab(test_list_voca)
print(tt.index2sent(np_heads_dev[10]))
```
## 아래는 아직 진행 안함
# para ver.
```
SEED = 448
random.seed(SEED)
csv_reader = csv.reader(open('version2/data_para_train.csv', 'r'))
print datetime.datetime.now().isoformat()
data = []
true_data = []
for n, row in enumerate(csv_reader):
if (n+1) % 100000 == 0: print n+1,
if row[3] == "1":
data.append(row)
else:
true_data.append(row)
random.shuffle(true_data)
data += true_data[:len(data)]
print datetime.datetime.now().isoformat()
ids_para = []
heads_para = []
bodys_para = []
labels_para = []
for n, row in enumerate(data):
if (n+1) % 10000 == 0: print n+1,
ids_para.append(row[0])
labels_para.append(int(row[3]))
head = []
for tkn in row[1].split():
if tkn in tkn2idx_mincut5:
head.append(tkn2idx_mincut5[tkn])
else:
head.append(1)
heads_para.append(head)
body = []
for tkn in row[2].split():
if tkn in tkn2idx_mincut5:
body.append(tkn2idx_mincut5[tkn])
else:
body.append(1)
bodys_para.append(body)
print n+1, ': Done'
print datetime.datetime.now().isoformat()
print datetime.datetime.now().isoformat()
[np_heads_para, np_bodys_para] = fit_length([heads_para, bodys_para], 49, 170)
print 'numpy: Done'
print datetime.datetime.now().isoformat() # ~3 mins
print datetime.datetime.now().isoformat()
t_trainpath = 'nps/train_para_head_mincut5'
np.save(t_trainpath, np_heads_para)
b_trainpath = 'nps/train_para_body_mincut5'
np.save(b_trainpath, np_bodys_para)
l_trainpath = 'nps/train_para_label_mincut5'
np.save(l_trainpath, labels_para)
print 'save: Done'
print datetime.datetime.now().isoformat()
import numpy as np
l_trainpath = np.load('nps/train_para_label_mincut5.npy')
l_trainpath.shape
csv_reader = csv.reader(open('version2/data_para_dev.csv', 'r'))
print datetime.datetime.now().isoformat()
ids_para_dev = []
heads_para_dev = []
bodys_para_dev = []
labels_para_dev = []
for n, row in enumerate(csv_reader):
if (n+1) % 10000 == 0: print n+1,
ids_para_dev.append(row[0])
labels_para_dev.append(int(row[3]))
head = []
for tkn in row[1].split():
if tkn in tkn2idx_mincut5:
head.append(tkn2idx_mincut5[tkn])
else:
head.append(1)
heads_para_dev.append(head)
body = []
for tkn in row[2].split():
if tkn in tkn2idx_mincut5:
body.append(tkn2idx_mincut5[tkn])
else:
body.append(1)
bodys_para_dev.append(body)
print n+1, 'Done'
print datetime.datetime.now().isoformat()
print datetime.datetime.now().isoformat()
[np_heads_para_dev, np_bodys_para_dev] = fit_length([heads_para_dev, bodys_para_dev], 49, 170)
print datetime.datetime.now().isoformat() # ~3 mins
print datetime.datetime.now().isoformat()
t_trainpath = 'nps/valid_para_head_mincut5'
np.save(t_trainpath, np_heads_para_dev)
b_trainpath = 'nps/valid_para_body_mincut5'
np.save(b_trainpath, np_bodys_para_dev)
l_trainpath = 'nps/valid_para_label_mincut5'
np.save(l_trainpath, labels_para_dev)
print datetime.datetime.now().isoformat()
```
# testset
```
csv_reader = csv.reader(open('version2/data_whole_test.csv', 'r'))
print datetime.datetime.now().isoformat()
ids_test = []
heads_test = []
bodys_test = []
labels_test = []
for n, row in enumerate(csv_reader):
if (n+1) % 10000 == 0: print n+1,
ids_test.append(row[0])
labels_test.append(int(row[3]))
head = []
for tkn in row[1].split():
if tkn in tkn2idx_mincut5:
head.append(tkn2idx_mincut5[tkn])
else:
head.append(1)
heads_test.append(head)
body = []
for tkn in row[2].split():
if tkn in tkn2idx_mincut5:
body.append(tkn2idx_mincut5[tkn])
else:
body.append(1)
bodys_test.append(body)
print n+1, 'Done'
print datetime.datetime.now().isoformat()
print datetime.datetime.now().isoformat()
[np_heads_test, np_bodys_test] = fit_length([heads_test, bodys_test], 49, 1200)
print datetime.datetime.now().isoformat() # ~3 mins
print datetime.datetime.now().isoformat()
t_trainpath = 'nps/test_whole_head_mincut5'
np.save(t_trainpath, np_heads_test)
b_trainpath = 'nps/test_whole_body_mincut5'
np.save(b_trainpath, np_bodys_test)
l_trainpath = 'nps/test_whole_label_mincut5'
np.save(l_trainpath, labels_test)
print datetime.datetime.now().isoformat()
```
# test stats.
```
csv_reader = csv.reader(open('version2/data_whole_test.csv', 'r'))
len_heads_test = [] #1
len_paras_test = [] #2
cnt_paras_test = [] #3
len_bodys_test = [] #4
labels_test = []
print datetime.datetime.now().isoformat()
for n, row in enumerate(csv_reader):
if (n+1) % 100000 == 0: print n+1,
labels_test.append(int(row[3]))
head = row[1]
len_heads_test.append(len(head.split())) #1
body = row[2]
tkn_para = []
for para in body.split('<EOP>'):
if para and para != ' ':
_para = para + '<EOP>'
len_para = len(_para.split())
len_paras_test.append(len_para) #2
tkn_para.append(_para)
cnt_paras_test.append(len(tkn_para)) #3
body_split = body.split()
len_bodys_test.append(len(body_split)) #4
print n+1, 'Done'
print datetime.datetime.now().isoformat()
#1
len_titles = np.array(len_heads_test)
print len_titles.tolist().count(1)
print np.max(len_titles), np.min(len_titles), np.mean(len_titles), np.std(len_titles)
len_t = len(len_titles)
cnt_t = sum(len_titles <= 49)
print cnt_t, len_t, cnt_t*1.0/len_t
#2
len_paras = np.array(len_paras_test)
print len_paras.tolist().count(1)
print np.max(len_paras), np.min(len_paras), np.mean(len_paras), np.std(len_paras)
len_p = len(len_paras)
cnt_p = sum(len_paras <= 170)
print cnt_p, len_p, cnt_p*1.0/len_p
#3
cnt_para = np.array(cnt_paras_test)
print cnt_para.tolist().count(1)
print np.max(cnt_para), np.min(cnt_para), np.mean(cnt_para), np.std(cnt_para), np.median(cnt_para)
len_cp = len(cnt_para)
cnt_cp = sum(cnt_para <= 20)
print cnt_cp, len_cp, cnt_cp*1.0/len_cp
#4
len_bodys = np.array(len_bodys_test)
print len_bodys.tolist().count(2)
print np.max(len_bodys), np.min(len_bodys), np.mean(len_bodys), np.std(len_bodys)
len_b = len(len_bodys)
cnt_b = sum(len_bodys <= 1200)
print cnt_b, len_b, cnt_b*1.0/len_b
plt.figure(1)
plt.hist(len_paras, range=[0, 500], normed=False, bins=500)
tkn2df = {}
for tkn in tkn2tf.keys():
tkn2df[tkn] = 0
csv_reader = csv.reader(open('final_final/data_whole_training.csv', 'r'))
print datetime.datetime.now().isoformat()
for n, row in enumerate(csv_reader):
if (n+1) % 100000 == 0: print n+1,
tmp_tkn = []
head = row[1]
body = row[2]
doc = ' '.join([head, body])
for tkn in doc.split():
if tkn in tmp_tkn:
continue
else:
tkn2df[tkn] += 1
tmp_tkn.append(tkn)
print n, 'Done'
print datetime.datetime.now().isoformat()
```
| github_jupyter |
## 1. The most Nobel of Prizes
<p><img style="float: right;margin:5px 20px 5px 1px; max-width:250px" src="https://s3.amazonaws.com/assets.datacamp.com/production/project_309/img/Nobel_Prize.png"></p>
<p>The Nobel Prize is perhaps the worlds most well known scientific award. Except for the honor, prestige and substantial prize money the recipient also gets a gold medal showing Alfred Nobel (1833 - 1896) who established the prize. Every year it's given to scientists and scholars in the categories chemistry, literature, physics, physiology or medicine, economics, and peace. The first Nobel Prize was handed out in 1901, and at that time the Prize was very Eurocentric and male-focused, but nowadays it's not biased in any way whatsoever. Surely. Right?</p>
<p>Well, we're going to find out! The Nobel Foundation has made a dataset available of all prize winners from the start of the prize, in 1901, to 2016. Let's load it in and take a look.</p>
```
# Loading in required libraries
library(tidyverse)
# Reading in the Nobel Prize data
nobel <- read_csv("datasets/nobel.csv")
# Taking a look at the first couple of winners
head(nobel)
```
## 2. So, who gets the Nobel Prize?
<p>Just looking at the first couple of prize winners, or Nobel laureates as they are also called, we already see a celebrity: Wilhelm Conrad Röntgen, the guy who discovered X-rays. And actually, we see that all of the winners in 1901 were guys that came from Europe. But that was back in 1901, looking at all winners in the dataset, from 1901 to 2016, which sex and which country is the most commonly represented? </p>
<p>(For <em>country</em>, we will use the <code>birth_country</code> of the winner, as the <code>organization_country</code> is <code>NA</code> for all shared Nobel Prizes.)</p>
```
# Counting the number of (possibly shared) Nobel Prizes handed
# out between 1901 and 2016
nobel %>% count()
# Counting the number of prizes won by male and female recipients.
nobel %>%
count(sex)
# Counting the number of prizes won by different nationalities.
nobel %>%
count(birth_country) %>%
arrange(desc(n)) %>%
head(20)
```
## 3. USA dominance
<p>Not so surprising perhaps: the most common Nobel laureate between 1901 and 2016 was a man born in the United States of America. But in 1901 all the laureates were European. When did the USA start to dominate the Nobel Prize charts?</p>
```
# Calculating the proportion of USA born winners per decade
prop_usa_winners <- nobel %>%
mutate(
usa_born_winner = birth_country == "United States of America",
decade = floor(year / 10) * 10
) %>%
group_by(decade) %>%
summarize(proportion = mean(usa_born_winner, na.rm = TRUE))
# Display the proportions of USA born winners per decade
prop_usa_winners
```
## 4. USA dominance, visualized
<p>A table is OK, but to <em>see</em> when the USA started to dominate the Nobel charts we need a plot!</p>
```
# Setting the size of plots in this notebook
options(repr.plot.width=7, repr.plot.height=4)
# Plotting USA born winners
ggplot(prop_usa_winners, aes(x = decade, y = proportion)) +
geom_line() +
geom_point() +
scale_y_continuous(labels = scales::percent, limits = 0:1, expand = c(0, 0))
```
## 5. What is the gender of a typical Nobel Prize winner?
<p>So the USA became the dominating winner of the Nobel Prize first in the 1930s and has kept the leading position ever since. But one group that was in the lead from the start, and never seems to let go, are <em>men</em>. Maybe it shouldn't come as a shock that there is some imbalance between how many male and female prize winners there are, but how significant is this imbalance? And is it better or worse within specific prize categories like physics, medicine, literature, etc.?</p>
```
# Calculating the proportion of female laureates per decade
prop_female_winners <- nobel %>%
mutate(
female_winner = sex == "Female",
decade = floor(year / 10) * 10
) %>%
group_by(decade, category) %>%
summarize(proportion = mean(female_winner, na.rm = TRUE))
# Plotting the proportion of female laureates per decade
ggplot(prop_female_winners, aes(x = decade, y = proportion, color = category)) +
geom_line() +
geom_point() +
scale_y_continuous(labels = scales::percent, limits = 0:1, expand = c(0, 0))
```
## 6. The first woman to win the Nobel Prize
<p>The plot above is a bit messy as the lines are overplotting. But it does show some interesting trends and patterns. Overall the imbalance is pretty large with physics, economics, and chemistry having the largest imbalance. Medicine has a somewhat positive trend, and since the 1990s the literature prize is also now more balanced. The big outlier is the peace prize during the 2010s, but keep in mind that this just covers the years 2010 to 2016.</p>
<p>Given this imbalance, who was the first woman to receive a Nobel Prize? And in what category?</p>
```
# Picking out the first woman to win a Nobel Prize
nobel %>%
filter(sex == "Female") %>%
top_n(1, desc(year))
```
## 7. Repeat laureates
<p>For most scientists/writers/activists a Nobel Prize would be the crowning achievement of a long career. But for some people, one is just not enough, and there are few that have gotten it more than once. Who are these lucky few? (Having won no Nobel Prize myself, I'll assume it's just about luck.)</p>
```
# Selecting the laureates that have received 2 or more prizes.
nobel %>%
count(full_name) %>%
filter(n > 1)
```
## 8. How old are you when you get the prize?
<p>The list of repeat winners contains some illustrious names! We again meet Marie Curie, who got the prize in physics for discovering radiation and in chemistry for isolating radium and polonium. John Bardeen got it twice in physics for transistors and superconductivity, Frederick Sanger got it twice in chemistry, and Linus Carl Pauling got it first in chemistry and later in peace for his work in promoting nuclear disarmament. We also learn that organizations also get the prize as both the Red Cross and the UNHCR have gotten it twice.</p>
<p>But how old are you generally when you get the prize?</p>
```
# Loading the lubridate package
library(lubridate)
# Calculating the age of Nobel Prize winners
nobel_age <- nobel %>%
mutate(age = year - year(birth_date))
# Plotting the age of Nobel Prize winners
ggplot(nobel_age, aes(x = age, y = year)) +
geom_point() +
geom_smooth()
```
## 9. Age differences between prize categories
<p>The plot above shows us a lot! We see that people use to be around 55 when they received the price, but nowadays the average is closer to 65. But there is a large spread in the laureates' ages, and while most are 50+, some are very young.</p>
<p>We also see that the density of points is much high nowadays than in the early 1900s -- nowadays many more of the prizes are shared, and so there are many more winners. We also see that there was a disruption in awarded prizes around the Second World War (1939 - 1945). </p>
<p>Let's look at age trends within different prize categories.</p>
```
# Same plot as above, but faceted by the category of the Nobel Prize
ggplot(nobel_age, aes(x = age, y = year)) +
geom_point() +
geom_smooth(se = FALSE) +
facet_wrap(~ category)
```
## 10. Oldest and youngest winners
<p>Another plot with lots of exciting stuff going on! We see that both winners of the chemistry, medicine, and physics prize have gotten older over time. The trend is strongest for physics: the average age used to be below 50, and now it's almost 70. Literature and economics are more stable, and we also see that economics is a newer category. But peace shows an opposite trend where winners are getting younger! </p>
<p>In the peace category we also a winner around 2010 that seems exceptionally young. This begs the questions, who are the oldest and youngest people ever to have won a Nobel Prize?</p>
```
# The oldest winner of a Nobel Prize as of 2016
nobel_age %>% top_n(1, age)
# The youngest winner of a Nobel Prize as of 2016
nobel_age %>% top_n(1, desc(age))
```
## 11. You get a prize!
<p><img style="float: right;margin:20px 20px 20px 20px; max-width:200px" src="https://s3.amazonaws.com/assets.datacamp.com/production/project_309/img/paint_nobel_prize.png"></p>
<p>Hey! You get a prize for making it to the very end of this notebook! It might not be a Nobel Prize, but I made it myself in paint so it should count for something. But don't despair, Leonid Hurwicz was 90 years old when he got his prize, so it might not be too late for you. Who knows.</p>
<p>Before you leave, what was again the name of the youngest winner ever who in 2014 got the prize for "[her] struggle against the suppression of children and young people and for the right of all children to education"?</p>
```
# The name of the youngest winner of the Nobel Prize as of 2016
youngest_winner <- "Malala Yousafzai"
```
| github_jupyter |
<a href="https://colab.research.google.com/github/agemagician/CodeTrans/blob/main/prediction/single%20task/api%20generation/small_model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Install the library and download the pretrained models
```
print("Installing dependencies...")
%tensorflow_version 2.x
!pip install -q t5==0.6.4
import functools
import os
import time
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
import tensorflow.compat.v1 as tf
import tensorflow_datasets as tfds
import t5
!wget "https://www.dropbox.com/sh/kjoqdpj7e16dny9/AADdvjWVFckCgNQN-AqMKhiDa?dl=1" -O vocabulary.zip
!unzip vocabulary.zip
!rm vocabulary.zip
!wget "https://www.dropbox.com/sh/8dxden58rkczqg9/AADkgZtA6d-RAI2wKL9pavyFa?dl=1" -O api_gen.zip
!unzip api_gen.zip
!rm api_gen.zip
```
## Set sentencepiece model
```
from t5.data.sentencepiece_vocabulary import SentencePieceVocabulary
vocab_model_path = 'code_spm_unigram_40M.model'
vocab = SentencePieceVocabulary(vocab_model_path, extra_ids=100)
print("Vocab has a size of %d\n" % vocab.vocab_size)
```
## Set the preprocessors and the task registry for the t5 model
```
def api_gen_dataset_fn(split, shuffle_files=False):
del shuffle_files
ds = tf.data.TextLineDataset(api_gen_path[split])
ds = ds.map(
functools.partial(tf.io.decode_csv, record_defaults=["", ""], field_delim="\t", use_quote_delim=False),
num_parallel_calls=tf.data.experimental.AUTOTUNE
)
ds = ds.map(lambda *ex: dict(zip(["code", "docstring"], ex)))
return ds
def api_gen_preprocessor(ds):
def normalize_text(text):
return text
def to_inputs_and_targets(ex):
return {
"inputs": tf.strings.join(["description for api: ", normalize_text(ex["code"])]),
"targets": normalize_text(ex["docstring"])
}
return ds.map(to_inputs_and_targets, num_parallel_calls=tf.data.experimental.AUTOTUNE)
t5.data.TaskRegistry.remove('api_gen')
t5.data.TaskRegistry.add(
"api_gen",
dataset_fn=api_gen_dataset_fn,
output_features={
"inputs": t5.data.utils.Feature(vocabulary=vocab),
"targets": t5.data.utils.Feature(vocabulary=vocab),
},
splits=["train", "validation"],
text_preprocessor=[api_gen_preprocessor],
postprocess_fn=t5.data.postprocessors.lower_text,
metric_fns=[t5.evaluation.metrics.bleu, t5.evaluation.metrics.accuracy, t5.evaluation.metrics.rouge],
)
```
## Set t5 small model
```
MODEL_DIR = "small"
model_parallelism = 1
train_batch_size = 256
tf.io.gfile.makedirs(MODEL_DIR)
model = t5.models.MtfModel(
model_dir=MODEL_DIR,
tpu=None,
tpu_topology=None,
model_parallelism=model_parallelism,
batch_size=train_batch_size,
sequence_length={"inputs": 512, "targets": 512},
mesh_shape="model:1,batch:1",
mesh_devices=["GPU:0"],
learning_rate_schedule=0.003,
save_checkpoints_steps=5000,
keep_checkpoint_max=None,
iterations_per_loop=100,
)
```
## Api Generation
### Give the description for api
```
description = "parse the uses licence node of this package, if any, and returns the license definition if theres" #@param {type:"raw"}
```
### Parsing and Tokenization
```
import nltk
nltk.download('punkt')
from nltk.tokenize import word_tokenize
def englishTokenizer(sentence):
result = []
tokens = word_tokenize(sentence)
for t in tokens:
if( not len(t)>50):
result.append(t)
return ' '.join(result)
tokenized_description = englishTokenizer(description)
print("tokenized description: " + tokenized_description)
```
### Record the description with the prefix to a txt file
```
descriptions = [tokenized_description]
inputs_path = 'input.txt'
with tf.io.gfile.GFile(inputs_path, "w") as f:
for c in descriptions:
f.write("description for api: %s\n" % c)
predict_outputs_path = 'MtfModel-output.txt'
```
### Running the model with the best checkpoint to generating api for the given description
```
model.batch_size = 8 # Min size for small model on v2-8 with parallelism 1.
model.predict(
input_file="input.txt",
output_file=predict_outputs_path,
checkpoint_steps=840000,
beam_size=4,
vocabulary=vocab,
# Select the most probable output token at each step.
temperature=0,
)
```
### Api Generation Result
```
prediction_file = "MtfModel-output.txt-840000"
print("\nPredictions using checkpoint 840000:\n" )
with tf.io.gfile.GFile(prediction_file) as f:
for c, d in zip(descriptions, f):
if c:
print("Description: " + c + '\n')
print("Generated api: " + d)
```
| github_jupyter |
```
import sys
import numpy as np
from collections import Counter
sys.path.append('../scales_project/')
from utils import simulate_EPR
from importlib import reload
reload(simulate_EPR)
simul
import matplotlib as mpl
def setup_mpl():
mpl.rc('font', size=20)
mpl.rcParams['legend.fontsize'] = 'small'
mpl.rcParams['legend.fontsize'] = 'small'
mpl.rcParams['xtick.labelsize'] = 'small'
mpl.rcParams['ytick.labelsize'] = 'small'
mpl.rcParams['font.family']='Helvetica 45 Light'
mpl.rcParams['xtick.major.pad']='12'
mpl.rcParams['ytick.major.pad']='12'
mpl.rcParams['lines.linewidth'] = 2
mpl.rcParams['xtick.major.width'] = 2
mpl.rcParams['ytick.major.width'] = 2
mpl.rcParams['xtick.minor.width'] = 2
mpl.rcParams['ytick.minor.width'] = 2
mpl.rcParams['xtick.major.size'] = 6
mpl.rcParams['ytick.major.size'] = 6
mpl.rcParams['xtick.minor.size'] = 3
mpl.rcParams['ytick.minor.size'] = 3
mpl.rcParams['axes.linewidth'] = 2
mpl.rcParams['ytick.direction'] = 'in'
mpl.rcParams['xtick.direction'] = 'in'
mpl.rcParams['xtick.top']=True
mpl.rcParams['ytick.right']=True
mpl.rcParams['mathtext.default']='regular'
mpl.rcParams['xtick.major.pad']='4'
mpl.rcParams['ytick.major.pad']='4'
mpl.rcParams['axes.labelpad']= 2
alpha = 0.6
to_rgba = mpl.colors.ColorConverter().to_rgba
setup_mpl()
reload(simulate_EPR)
all_records = []
model = simulate_EPR.EPRModel(p_beta=0.55)
model.reset()
model.run_simulation(1000)
all_records.append(model.records)
model.records_1 = model.records
locs = [model.get_coordinates()[i] for i in np.array(model.records)[:,0]]
distances = np.linalg.norm(np.array(locs[1:]) - np.array(locs[:-1]),axis = 1)
%pylab inline
plt.figure(figsize = (10,10))
plt.scatter(np.array(list(model.d.values()))[:,0],np.array(list(model.d.values()))[:,1])
sys.path.append('../scales_project/utils/')
from utils import scale_by_scale_optim
from utils import utils
locs = np.array(model.records_1)[:,0]
stop_coords = [model.d[i] for i in locs]
my_split = scale_by_scale_optim.ScalesOptim(np.array(locs),
np.array(stop_coords),
distance_func=utils.haversine,
min_dist = 1.2,
nprocs = 1,
verbose=True,
bootstrap = True,
information_criterion = None,
siglvl= 0.05)
final_series, final_scales, likelihoods, criterion_s, final_sizes, final_proba_dist, alphas = my_split.find_best_scale()
from matplotlib.collections import LineCollection
fig,ax = plt.subplots(figsize = (20,20))
x,y = np.array(list(model.d.values()))[:,0],np.array(list(model.d.values()))[:,1]
xy = np.array(list(model.d.values()))
segments = list(zip(xy[:-1],xy[1:]))
coll = LineCollection(segments, lw = 0.5, color = 'k')
ax.add_collection(coll)
ax.autoscale()
#ax.scatter(x,y, s = 1, color = 'k',zorder = 100)
ax.axis('off')
plt.show()
model.beta
plt.figure(figsize = (10,5))
utils.plot_scales_histogram(np.array(final_series).astype(int),
np.array(stop_coords),
log_dist=False,
density=True,
distance_func=utils.haversine)
plt.ylabel('pdf')
plt.xlabel('distance (m)')
```
| github_jupyter |
```
#Adapted from the method described in
#Bhatt, Samir, Edward C. Holmes, and Oliver G. Pybus. 2011. “The Genomic Rate of Molecular Adaptation of the Human Influenza A Virus.” Molecular Biology and Evolution 28 (9): 2443–51.
#and
#Bhatt, Samir, Aris Katzourakis, and Oliver G. Pybus. 2010. “Detecting Natural Selection in RNA Virus Populations Using Sequence Summary Statistics.” Infection, Genetics and Evolution: Journal of Molecular Epidemiology and Evolutionary Genetics in Infectious Diseases 10 (3): 421–30.
import math
import json
import random
from os import path
import pandas as pd
import numpy as np
from Bio import SeqIO
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio import AlignIO
from Bio.Align import MultipleSeqAlignment
from Bio.Align import AlignInfo
import matplotlib.pyplot as plt
import matplotlib.lines as mlines
import seaborn as sns
from scipy import stats
def frequency_binning(x, midfreq_high=0.75, midfreq_low=0.15):
"""
Given the frequency of a polymorphism, return a bin (fixed, high, medium, or low)
Default mid-frequency bin is 0.15-0.75
This can be manually changed by suppling midfreq_high and midfreq_low arguments
"""
#nan frequencies are when there is no sequence coverage at the given position
if math.isnan(x):
f_bin = float('nan')
else:
if x == 1.0:
f_bin = 'f'
elif x>=midfreq_high:
f_bin = 'h'
elif x<midfreq_high and x>=midfreq_low:
f_bin = 'm'
elif x<midfreq_low:
f_bin='l'
return f_bin
def walk_through_sites(outgroup_seq, outgroup_aa_seq, alignment_seqs, midfreq_high, midfreq_low):
"""
Finds differences between the outgroup sequence and each sequence in the alignment
Determines whether these differences are synonymous or nonsynonymous
Skips sites that are ambiguous
Returns freq_bins: a list of polymorphic frequency bin (fixed, high, medium, low) at each site
Returns replacement_score: a list of replacement score at each site
Returns ingroup_bases: a list of lists. All nts that are observed at each site
"""
#at each site, count number of viruses with polymorphism
count_polymorphic = np.zeros(len(outgroup_seq))
#at each site, count totaly number of viruses
count_total_unambiguous = np.zeros(len(outgroup_seq))
count_replacement_mutations = np.zeros(len(outgroup_seq))
count_silent_mutations = np.zeros(len(outgroup_seq))
#at each site, list of nucleotide from each virus
ingroup_bases = [[] for x in range(len(outgroup_seq))]
for seq in alignment_seqs:
if len(seq) != len(outgroup_seq):
print(seq)
elif len(seq) == len(outgroup_seq):
for pos in range(len(outgroup_seq)):
outgroup_nt = str(outgroup_seq[pos])
virus_nt = str(seq[pos])
#skip ambiguous sites
if virus_nt != 'N':
if outgroup_nt != 'N':
ingroup_bases[pos].append(virus_nt)
count_total_unambiguous[pos]+=1
if virus_nt != outgroup_nt:
count_polymorphic[pos]+=1
#determine silent or replacement
codon = math.floor(pos/3)
codon_pos = pos-(codon*3)
if codon_pos == 0:
codon_nt = virus_nt+outgroup_seq[pos+1:(pos+3)]
elif codon_pos == 1:
codon_nt = outgroup_seq[pos-1]+virus_nt+outgroup_seq[pos+1]
elif codon_pos == 2:
codon_nt = outgroup_seq[(pos-2):(pos)]+virus_nt
if isinstance(codon_nt, str):
codon_nt = Seq(codon_nt)
codon_aa = codon_nt.translate()
outgroup_aa = outgroup_aa_seq[codon]
if outgroup_aa != 'X':
if codon_aa != outgroup_aa:
count_replacement_mutations[pos]+=1
elif codon_aa == outgroup_aa:
count_silent_mutations[pos]+=1
polymorphic_frequencies = count_polymorphic/count_total_unambiguous
replacement_score = count_replacement_mutations/count_total_unambiguous
freq_bins = [frequency_binning(x, midfreq_high, midfreq_low) for x in polymorphic_frequencies]
return freq_bins, replacement_score, ingroup_bases
def determine_site_type(outgroup, ingroup):
"""
Determines site type (ala Bhatt et al, 2010)
Site type depends on whether there are polymorphisms at a given site and, if so, how many different nts are observed
"""
ingroup_bases_nan = set(ingroup)
#remove 'nan's
ingroup_bases = {x for x in ingroup_bases_nan if pd.notna(x)}
if len(ingroup_bases) == 0:
site_type = None
elif len(ingroup_bases) != 0:
#all ingroup bases are identical
if len(ingroup_bases) == 1:
if outgroup in ingroup_bases:
site_type = 1
elif outgroup not in ingroup_bases:
site_type = 2
#2 different bases in ingroup
elif len(ingroup_bases) == 2:
if outgroup in ingroup_bases:
site_type = 3
elif outgroup not in ingroup_bases:
site_type = 4
#3 different bases in ingroup
elif len(ingroup_bases) == 3:
if outgroup in ingroup_bases:
site_type = 5
elif outgroup not in ingroup_bases:
site_type = 6
#4 different bases in ingroup
elif len(ingroup_bases) == 4:
site_type = 7
return site_type
def fixation_polymorphism_score(outgroup, ingroup):
"""
Returns Fi=fixation score and Pi=polymorphism score based on site type (ala Bhatt et al, 2010)
"""
site_type = determine_site_type(outgroup, ingroup)
if site_type == None:
Fi = float('nan')
Pi = float('nan')
if site_type == 1:
Fi = 0
Pi = 0
elif site_type == 2:
Fi = 1
Pi = 0
elif site_type in [3,5,7]:
Fi = 0
Pi = 1
elif site_type == 4:
Fi = 0.5
Pi = 0.5
elif site_type == 6:
Fi = (1/3)
Pi = (2/3)
return Fi, Pi
def assign_fi_pi(outgroup_seq, ingroup_bases):
"""
Returns list of Fi and Pi at each position in the sequence
"""
#at each site, record Fi
Fi_all = np.zeros(len(outgroup_seq))
#at each site, record Pi
Pi_all = np.zeros(len(outgroup_seq))
for pos in range(len(outgroup_seq)):
outgroup_nt = outgroup_seq[pos]
ingroup_nts = ingroup_bases[pos]
Fi, Pi = fixation_polymorphism_score(outgroup_nt, ingroup_nts)
Fi_all[pos] = Fi
Pi_all[pos] = Pi
return Fi_all, Pi_all
def separate_clades(cov, gene):
"""
If a virus should be divided into multiple clades, find which clade a strain belongs to from augur output
file and return dataframe specifing the strain name and clade
"""
if path.exists('../'+str(cov)+'/results/clades_'+str(gene)+'.json'):
clade_file = '../'+str(cov)+'/results/clades_'+str(gene)+'.json'
# if gene =='spike' or gene == 's1' or gene == 's2':
# clade_file = '../'+str(cov)+'/results/clades_spike.json'
else:
clade_file = '../'+str(cov)+'/results/clades_full.json'
clade_lists = []
with open(clade_file, "r") as clade_handle:
clades = json.load(clade_handle)
for node, v in clades['nodes'].items():
if 'NODE' not in node:
clade_lists.append({'clade':v['clade_membership'],
'strain':node})
clade_df = pd.DataFrame(clade_lists)
return clade_df
def subset_viruses(cov, gene, window, clade, min_seqs, year_max=None, year_min= None):
"""
Read in data from metafile, alignment file and reference file
Partition data based on year the virus isolate was sequenced
Temporal data will be grouped into time windows (default window is 3yrs), and only windows with more than
min_seqs (default is 2) will be considered in the analysis
Range of years considered by the analysis can be specified by year_max and year_min
Also calculates the outgroup (or founder) sequence as the consensus sequence at the first time point
"""
# input_file_outgroup = '../'+str(cov)+'/auspice/seasonal_corona_'+str(cov)+'_'+str(gene)+'_root-sequence.json'
input_file_alignment = '../'+str(cov)+'/results/aligned_'+str(cov)+'_'+str(gene)+'.fasta'
metafile = '../'+str(cov)+'/results/metadata_'+str(cov)+'_'+str(gene)+'.tsv'
#Subset data based on time windows
meta = pd.read_csv(metafile, sep = '\t')
meta.drop(meta[meta['date']=='?'].index, inplace=True)
meta.dropna(subset=['date'], inplace=True)
meta['year'] = meta['date'].str[:4].astype('int')
if year_max:
meta.drop(meta[meta['year']>year_max].index, inplace=True)
if year_min:
meta.drop(meta[meta['year']<year_min].index, inplace=True)
date_range = meta['year'].max() - meta['year'].min()
if clade!= None:
clade_df = separate_clades(cov, gene)
meta = meta.merge(clade_df, on='strain')
meta.drop(meta[meta['clade']!=clade].index, inplace=True)
#Group viruses by time windows
virus_time_subset = {}
if window == 'all':
years = str(meta['year'].min()) + '-' + str(meta['year'].max())
virus_time_subset[years] = meta['strain'].tolist()
else:
date_window_start = meta['year'].min()
date_window_end = meta['year'].min() + window
while date_window_end <= meta['year'].max():
years = str(date_window_start) + '-' + str(date_window_end)
strains = meta[(meta['year']>=date_window_start) & (meta['year']<date_window_end)]['strain'].tolist()
virus_time_subset[years] = strains
#non-overlapping
# date_window_end += window
# date_window_start+= window
#sliding window
date_window_end += 1
date_window_start += 1
#Only use time points with enough data:
virus_time_subset = {k:v for k,v in virus_time_subset.items() if len(v)>=min_seqs}
year_windows = []
seqs_in_window = []
#Find outgroup sequence from strains at first time point(to make consensus from)
first_window = True
first_window_strains = []
first_window_sequences = []
alignment_time_subset = {}
for years, subset_viruses in virus_time_subset.items():
year_windows.append(years)
seqs_in_window.append(len(subset_viruses))
alignment_time_subset[years] = []
#make consensus sequence at first time point
if first_window == True:
first_window_strains+=subset_viruses
first_window = False
with open(input_file_alignment, "r") as aligned_handle:
for virus in SeqIO.parse(aligned_handle, "fasta"):
if virus.id in first_window_strains:
first_window_sequences.append(virus)
if virus.id in subset_viruses:
alignment_time_subset[years].append(virus.seq)
first_window_alignment = MultipleSeqAlignment(first_window_sequences)
outgroup_seq = AlignInfo.SummaryInfo(first_window_alignment).dumb_consensus(ambiguous ='N')
outgroup_aa_seq = outgroup_seq.translate()
return virus_time_subset, alignment_time_subset, outgroup_seq, outgroup_aa_seq, year_windows, seqs_in_window
def bootstrap_alignment(bootstrap_codon_order, sequences):
"""
for each time point, create sample alignment of same size as emperical alignment
"""
bootstrap_alignment_seqs = []
for virus_seq in sequences:
virus_seq_str = str(virus_seq)
virus_codons = [virus_seq_str[i:i+3] for i in range(0, len(virus_seq_str), 3)]
bootstrap_virus = ''.join([virus_codons[x] for x in bootstrap_codon_order])
bootstrap_alignment_seqs.append(bootstrap_virus)
return bootstrap_alignment_seqs
def bootstrap_ancestral(outgroup_seq):
"""
Sample codons from emperical ancestral sequence with replacement
"""
outgroup_seq_str = str(outgroup_seq)
#sample codons with replacement
ancestral_codons = [outgroup_seq_str[i:i+3] for i in range(0, len(outgroup_seq_str), 3)]
bootstrap_codon_order = random.choices(range(len(ancestral_codons)), k=len(ancestral_codons))
bootstrap_ancestral_seq = ''.join([ancestral_codons[x] for x in bootstrap_codon_order])
bootstrap_ancestral_seq = Seq(bootstrap_ancestral_seq)
return bootstrap_ancestral_seq, bootstrap_codon_order
def make_bootstrap_dataset(outgroup_seq, alignment_time_subset):
"""
Return a bootstrapped ancestral sequence (outgroup) and bootstrapped alignment of sequences
"""
bootstrap_ancestral_seq, bootstrap_codon_order = bootstrap_ancestral(outgroup_seq)
bootstrap_ancestral_seq_aa = bootstrap_ancestral_seq.translate()
bootstrap_alignment_seqs = {}
for years, sequences in alignment_time_subset.items():
bootstrap_sequences = bootstrap_alignment(bootstrap_codon_order, sequences)
bootstrap_alignment_seqs[years] = bootstrap_sequences
return bootstrap_ancestral_seq, bootstrap_ancestral_seq_aa, bootstrap_alignment_seqs
def calc_site_stats(alignment_sequences, outgroup_seq, outgroup_aa_seq, midfreq_high, midfreq_low):
"""
Runs functions to determine frequency bins, fixations scores, polymorphism scores,
replacement scores and silent scores at each site
"""
#Find percent polymorphism at each site
#Also determine whether polymorphism is silent or replacement
#initiate lists to record all time windows
frequency_bins = []
fixation_scores = []
polymorphism_scores = []
replacement_scores = []
silent_scores = []
for years, alignment_seqs in alignment_sequences.items():
#calculate stats for each window separately
freq_bins, replacement_score, ingroup_bases = walk_through_sites(outgroup_seq, outgroup_aa_seq,
alignment_seqs,
midfreq_high, midfreq_low)
Fi_all, Pi_all = assign_fi_pi(outgroup_seq, ingroup_bases)
silent_score = 1-replacement_score
frequency_bins.append(freq_bins)
fixation_scores.append(Fi_all)
polymorphism_scores.append(Pi_all)
replacement_scores.append(replacement_score)
silent_scores.append(silent_score)
return frequency_bins, fixation_scores, polymorphism_scores, replacement_scores, silent_scores
def calc_m_ratio(cov, gene, window, clade, min_seqs, midfreq_high, midfreq_low, bootstrap, year_max=None, year_min= None):
"""
Calculate the M ratio, needed for calculating 'a' in bhatt_estimators
M=rm/sm
Not expected to vary through time provided that long-term effective population sizes remain sufficiently large
For each gene, M is calculated by combining site count among time points
"""
if gene=='spike' or gene=='s1':
(virus_time_subset, alignment_time_subset,
outgroup_seq, outgroup_aa_seq,
year_windows, seqs_in_window) = subset_viruses(cov, 's2', 'all',
clade, min_seqs, year_max, year_min)
if bootstrap:
input_file_alignment = '../'+str(cov)+'/results/aligned_'+str(cov)+'_s2.fasta'
(bootstrap_ancestral_seq, bootstrap_ancestral_seq_aa,
bootstrap_alignment_seqs) = make_bootstrap_dataset(outgroup_seq, alignment_time_subset)
else:
(virus_time_subset, alignment_time_subset,
outgroup_seq, outgroup_aa_seq,
year_windows, seqs_in_window) = subset_viruses(cov, gene, 'all',
clade, min_seqs, year_max, year_min)
if bootstrap:
input_file_alignment = '../'+str(cov)+'/results/aligned_'+str(cov)+'_'+str(gene)+'.fasta'
(bootstrap_ancestral_seq, bootstrap_ancestral_seq_aa,
bootstrap_alignment_seqs) = make_bootstrap_dataset(outgroup_seq, alignment_time_subset)
if bootstrap:
(frequency_bins,
fixation_scores, polymorphism_scores,
replacement_scores, silent_scores) = calc_site_stats(bootstrap_alignment_seqs,
bootstrap_ancestral_seq, bootstrap_ancestral_seq_aa,
midfreq_high, midfreq_low)
else:
(frequency_bins,
fixation_scores, polymorphism_scores,
replacement_scores, silent_scores) = calc_site_stats(alignment_time_subset,
outgroup_seq, outgroup_aa_seq, midfreq_high, midfreq_low)
sm = 0
rm = 0
for site in range(len(frequency_bins[0])):
freq_bin = frequency_bins[0][site]
if freq_bin == 'm':
sm+= (polymorphism_scores[0][site]*silent_scores[0][site])
rm+= (polymorphism_scores[0][site]*replacement_scores[0][site])
if sm ==0:
sm = 0.00000000000000001
m_ratio = rm/sm
return m_ratio
def bhatt_estimators(cov, gene, outgroup_seq, frequency_bins, year_windows, fixation_scores, polymorphism_scores, replacement_scores, silent_scores, m_ratio):
"""
Calculate sf, rf, sh, rh, sm, rm, sl, rl (bhatt estimators)
Then calculate al, ah, af
Number of adaptive substitutions = af + ah
Calculate these values for each time window and return:
window_midpoint: middle year of time window
adaptive_substitutions: list of adaptive substitutions estimated for each time window
adaptive_substitutions_per_codon: list of adaptive substitutions per codon estimated for each time window
rate_of_adaptation: fit linear regression of adaptive_subtitutions_per_codon vs time
"""
#Initiate lists to store a values
window_midpoint = []
adaptive_substitutions = []
#for each window, calculate bhatt estimators
for years_window in range(len(frequency_bins)):
window_start = int(year_windows[years_window][0:4])
window_end = int(year_windows[years_window][-4:])
window_midpoint.append((window_start + window_end)/2)
sf = 0
rf = 0
sh = 0
rh = 0
sm = 0
rm = 0
sl = 0
rl = 0
#calculate number of sites in different catagories (defined by polymorphic freq at that site)
window_freq_bins = frequency_bins[years_window]
for site in range(len(window_freq_bins)):
freq_bin = window_freq_bins[site]
#ignore sites with no polymorphisms?
if freq_bin!='nan':
if freq_bin == 'f':
sf+= (fixation_scores[years_window][site]*silent_scores[years_window][site])
rf+= (fixation_scores[years_window][site]*replacement_scores[years_window][site])
elif freq_bin == 'h':
sh+= (polymorphism_scores[years_window][site]*silent_scores[years_window][site])
rh+= (polymorphism_scores[years_window][site]*replacement_scores[years_window][site])
elif freq_bin == 'm':
sm+= (polymorphism_scores[years_window][site]*silent_scores[years_window][site])
rm+= (polymorphism_scores[years_window][site]*replacement_scores[years_window][site])
elif freq_bin == 'l':
sl+= (polymorphism_scores[years_window][site]*silent_scores[years_window][site])
rl+= (polymorphism_scores[years_window][site]*replacement_scores[years_window][site])
# print(year_windows[years_window])
# print(sf, rf, sh, rh, sm, rm, sl, rl)
#Calculate equation 1: number of nonneutral sites
al = rl - sl*m_ratio
ah = rh - sh*m_ratio
af = rf - sf*m_ratio
#set negative a values to zero
if al < 0:
al = 0
if ah < 0:
ah = 0
if af < 0:
af = 0
# print(al, ah, af)
#Calculate the number and proportion of all fixed or high-freq sites that have undergone adaptive change
number_adaptive_substitutions = af + ah
adaptive_substitutions.append(number_adaptive_substitutions)
# proportion_adaptive_sites = (af + ah)/(rf +rh)
gene_length = len(outgroup_seq)
adaptive_substitutions_per_codon = [x/gene_length for x in adaptive_substitutions]
if len(window_midpoint)!=0:
rate_of_adaptation, intercept, r_value, p_value, std_err = stats.linregress(window_midpoint, adaptive_substitutions_per_codon)
else:
rate_of_adaptation = 0
return window_midpoint, adaptive_substitutions, adaptive_substitutions_per_codon, rate_of_adaptation
def calc_bhatt_a(cov, gene, window, clade, min_seqs, midfreq_high, midfreq_low, bootstrap, year_max=None, year_min=None):
"""
Run all functions to calculate 'a'
If data should be bootstrapped, also make bootstrapped outgroups and alignments and calculate 'a' on these
"""
#Get virus subset
(virus_time_subset, alignment_time_subset,
outgroup_seq, outgroup_aa_seq, year_windows, seqs_in_window) = subset_viruses(cov, gene,
window, clade, min_seqs,
year_max, year_min)
# print(alignment_time_subset, [len(alignment_time_subset[x]) for x in alignment_time_subset.keys()], seqs_in_window)
#Calculate frequencies for emperical data
(frequency_bins,
fixation_scores, polymorphism_scores,
replacement_scores, silent_scores) = calc_site_stats(alignment_time_subset, outgroup_seq,
outgroup_aa_seq, midfreq_high, midfreq_low)
#calculate m ratio
m_ratio = calc_m_ratio(cov, gene, window,
clade, min_seqs, midfreq_high, midfreq_low, False,
year_max, year_min)
#calculate bhatt estimators
(window_midpoint, adaptive_substitutions,
adaptive_substitutions_per_codon, rate_of_adaptation) = bhatt_estimators(cov, gene, outgroup_seq,
frequency_bins, year_windows,
fixation_scores, polymorphism_scores,
replacement_scores, silent_scores, m_ratio)
n_bootstraps = 100
bootstrap_count = 0
bootstrap_adaptive_substitutions = []
bootstrap_adaptive_substitutions_per_codon = []
bootstrap_rate_of_adaptation = []
if bootstrap:
while bootstrap_count < n_bootstraps:
bootstrap_count+=1
if bootstrap_count%100 == 0:
print(f'{bootstrap_count} bootstraps done for {cov} {gene}')
#Get bootstrapped ancestral seq and alignment
input_file_alignment = '../'+str(cov)+'/results/aligned_'+str(cov)+'_'+str(gene)+'.fasta'
(bootstrap_ancestral_seq, bootstrap_ancestral_seq_aa,
bootstrap_alignment_seqs) = make_bootstrap_dataset(outgroup_seq, alignment_time_subset)
#Calculate frequencies for bootstrap data
(bootstrap_frequency_bins,
bootstrap_fixation_scores, bootstrap_polymorphism_scores,
bootstrap_replacement_scores, bootstrap_silent_scores) = calc_site_stats(bootstrap_alignment_seqs,
bootstrap_ancestral_seq,
bootstrap_ancestral_seq_aa,
midfreq_high, midfreq_low)
#Calculate m ratio
bootstrap_m_ratio = calc_m_ratio(cov, gene, window,
clade, min_seqs, midfreq_high, midfreq_low, True,
year_max, year_min)
#calculate bhatt estimators
(bs_window_midpoint, bs_adaptive_substitutions,
bs_adaptive_substitutions_per_codon,
bs_rate_of_adaptation) = bhatt_estimators(cov, gene, bootstrap_ancestral_seq,
bootstrap_frequency_bins, year_windows,
bootstrap_fixation_scores,
bootstrap_polymorphism_scores,
bootstrap_replacement_scores, bootstrap_silent_scores,
bootstrap_m_ratio)
#add these bootstrap values to list
bootstrap_adaptive_substitutions.append(bs_adaptive_substitutions)
bootstrap_adaptive_substitutions_per_codon.append(bs_adaptive_substitutions_per_codon)
bootstrap_rate_of_adaptation.append(bs_rate_of_adaptation)
if bootstrap:
return window_midpoint, adaptive_substitutions, adaptive_substitutions_per_codon, rate_of_adaptation, bootstrap_adaptive_substitutions, bootstrap_adaptive_substitutions_per_codon, bootstrap_rate_of_adaptation
else:
return window_midpoint, adaptive_substitutions, adaptive_substitutions_per_codon, rate_of_adaptation
def plot_adaptive_subs_per_codon(cov, genes, window, clade, min_seqs, midfreq_high, midfreq_low, bootstrap, year_max=None, year_min=None, filename=None):
"""
For a given virus and any number of genes, plot time on x and adaptive subsititions per codon on y.
Calculations will be saved as .json files in /bhatt_results directory so they do not have to be rerun
If new data is acquired for a virus, these will need to be deleted before rerunning this plotting function
"""
data_to_plot = []
# color_map = {'229e':'#2E86C1', 'oc43': '#CB4335', 'nl63': '#009888', 'hku1': '#7c5295'}
color_map = {'oc43A': '#208288', 'oc43B':'#76C7BE', '229e': '#0B194C',
'nl63A': '#87C735', 'nl63B': '#009888', 'nl63': '#87C735',
'hku1A': '#2E74B3', 'hku1B': '#92B2DE', 'hku1': '#255191'}
for gene in genes:
if bootstrap:
save_json_name = 'bhatt_results/'+str(cov)+str(clade)+'_'+str(gene)+'_bhatt_analysis_bootstrapped.json'
if path.exists(save_json_name):
with open(save_json_name) as json_handle:
json_dict = json.load(json_handle)
(window_midpoint, adaptive_substitutions,
adaptive_substitutions_per_codon,
rate_of_adaptation, bootstrap_adaptive_substitutions,
bootstrap_adaptive_substitutions_per_codon,
bootstrap_rate_of_adaptation) = (json_dict['window_midpoint'],
json_dict['adaptive_substitutions'],
json_dict['adaptive_substitutions_per_codon'],
json_dict['rate_of_adaptation'],
json_dict['bootstrap_adaptive_substitutions'],
json_dict['bootstrap_adaptive_substitutions_per_codon'],
json_dict['bootstrap_rate_of_adaptation'])
else:
(window_midpoint, adaptive_substitutions,
adaptive_substitutions_per_codon,
rate_of_adaptation, bootstrap_adaptive_substitutions,
bootstrap_adaptive_substitutions_per_codon,
bootstrap_rate_of_adaptation) = calc_bhatt_a(cov, gene, window,
clade, min_seqs, midfreq_high,
midfreq_low, bootstrap, year_max, year_min)
save_json = {'cov': cov, 'gene': gene, 'window':window, 'clade':clade, 'min_seqs': min_seqs,
'midfreq_high': midfreq_high, 'midfreq_low': midfreq_low,
'window_midpoint':window_midpoint, 'adaptive_substitutions':adaptive_substitutions,
'adaptive_substitutions_per_codon':adaptive_substitutions_per_codon, 'rate_of_adaptation': rate_of_adaptation,
'bootstrap_adaptive_substitutions': bootstrap_adaptive_substitutions,
'bootstrap_adaptive_substitutions_per_codon': bootstrap_adaptive_substitutions_per_codon,
'bootstrap_rate_of_adaptation':bootstrap_rate_of_adaptation}
with open(save_json_name, 'w') as outfile:
json.dump(save_json, outfile)
else:
save_json_name = 'bhatt_results/'+str(cov)+str(clade)+'_'+str(gene)+'_bhatt.json'
if path.exists(save_json_name):
with open(save_json_name) as json_handle:
json_dict = json.load(json_handle)
(window_midpoint, adaptive_substitutions,
adaptive_substitutions_per_codon,
rate_of_adaptation) = (json_dict['window_midpoint'],
json_dict['adaptive_substitutions'],
json_dict['adaptive_substitutions_per_codon'],
json_dict['rate_of_adaptation'])
else:
(window_midpoint, adaptive_substitutions,
adaptive_substitutions_per_codon,
rate_of_adaptation) = calc_bhatt_a(cov, gene, window, clade, min_seqs,
midfreq_high, midfreq_low,
bootstrap, year_max, year_min)
for year in range(len(window_midpoint)):
if clade!=None:
data_to_plot.append({'cov': cov, 'clade': clade, 'gene': gene, 'data_plotted': cov+clade,
'year': window_midpoint[year],
'adaptive_subs_per_codon': adaptive_substitutions_per_codon[year]})
elif clade==None:
data_to_plot.append({'cov': cov, 'clade': clade, 'gene': gene, 'data_plotted': cov,
'year': window_midpoint[year],
'adaptive_subs_per_codon': adaptive_substitutions_per_codon[year]})
if bootstrap:
num_bootstraps = range(len(bootstrap_adaptive_substitutions_per_codon))
for bootstrap_iteration in num_bootstraps:
bs_adaptive_subs = 'bs_subs_'+str(bootstrap_iteration)
color_map['bs'+str(bootstrap_iteration)] = '#E5E5E5'
data_to_plot.append({'cov': cov, 'gene': gene, 'data_plotted': 'bs'+str(bootstrap_iteration),
'year': window_midpoint[year],
bs_adaptive_subs: bootstrap_adaptive_substitutions_per_codon[bootstrap_iteration][year]})
df_to_plot = pd.DataFrame(data_to_plot)
sns.set(font_scale=2.0)
sns.set_style("white")
g = sns.FacetGrid(df_to_plot, col='gene', col_wrap=2, hue='data_plotted', height=6, aspect=1,
palette=color_map, sharey=True, sharex=False)
#plot distribution of bootstrap regression lines
if bootstrap:
for bootstrap_iteration in num_bootstraps:
g = g.map(sns.regplot, 'year', 'bs_subs_'+str(bootstrap_iteration), ci = None, scatter=False)
if bootstrap_iteration%100 == 0:
print(f'{bootstrap_iteration} plots done for {cov} {gene}')
backgroundartists = []
for ax in g.axes.flat:
for l in ax.lines + ax.collections:
l.set_zorder(1)
backgroundartists.append(l)
#plot emprical data with regression line
# g = g.map(sns.regplot, 'year', 'adaptive_subs_per_codon', ci=None)
g.map(sns.regplot, 'year', 'adaptive_subs_per_codon', ci=None)
if bootstrap:
for ax in g.axes.flat:
for l in ax.lines + ax.collections:
if l not in backgroundartists:
l.set_zorder(5)
# Iterate thorugh each axis
for ax in g.axes.flat:
ax.tick_params(axis='both', which='major', labelsize=16)
# Make x and y-axis labels slightly larger
ax.set_xlabel(ax.get_xlabel(), fontsize='medium')
if ax.get_ylabel():
ax.set_ylabel('Adaptive Subs per Codon', fontsize='medium')
# Make title more human-readable and larger
if ax.get_title():
ax.set_title(ax.get_title().split('=')[1],
fontsize='medium')
# Make right ylabel more human-readable and larger
# Only the 2nd and 4th axes have something in ax.texts
if ax.texts:
txt = ax.texts[0]
ax.text(txt.get_unitless_position()[0], txt.get_unitless_position()[1],
txt.get_text().split('=')[1],
transform=ax.transAxes,
va='center',
fontsize='medium')
# Remove the original text
ax.texts[0].remove()
#plot error bars
# g = g.map(sns.pointplot, 'year', 'bs_adaptive_subs_per_codon', ci = 95, color='grey', join=False, capsize=.2)
if filename:
g.savefig(filename, dpi=300)
plot_adaptive_subs_per_codon('229e', ['replicase1ab', 'rdrp', 'spike',
's1', 's2', 'envelope', 'membrane', 'nucleocapsid'],
3, None, 2, 0.75, 0.15, False)
plot_adaptive_subs_per_codon('oc43', ['replicase1ab', 'rdrp', 'he', 'spike',
's1', 's2', 'envelope', 'membrane', 'nucleocapsid'],
3, 'A', 2, 0.75, 0.15, True)
#100 bootstraps
plot_adaptive_subs_per_codon('oc43', ['spike', 's1', 's2', 'rdrp'],
3, 'A', 3, 0.75, 0.15, True, filename='figure4_new.png')
#calculated as slope of adaptive subs per codon
def plot_adaptive_subs_per_codon_per_year_slope(covs, genes, window, min_seqs, midfreq_high, midfreq_low, bootstrap, year_max=None, year_min=None, filename=None):
"""
For any number of viruses and any number of genes, plot gene on x and adaptive subsititions per
codon per year on y, colored by virus.
Calculations will be saved as .json files in /bhatt_results directory so they do not have to be rerun
If new data is acquired for a virus, these will need to be deleted before rerunning this plotting function
"""
data_to_plot = []
cov_clades = {'229e': [None], 'oc43': ['A', 'B'], 'nl63': [None], 'hku1': ['A', 'B']}
for cov in covs:
clades = cov_clades[cov]
for clade in clades:
for gene in genes:
#some covs have different genes, only run if cov has data for this gene
# if path.exists(f"../{cov}/results/metadata_{cov}_{gene}.tsv"):
if clade == None:
cov_clade = cov
elif clade != None:
cov_clade = str(cov)+str(clade)
if bootstrap:
if clade == None:
save_json_name = 'bhatt_results/'+str(cov)+'_'+str(gene)+'_bhatt_analysis_bootstrapped.json'
elif clade != None:
save_json_name = 'bhatt_results/'+str(cov)+str(clade)+'_'+str(gene)+'_bhatt_analysis_bootstrapped.json'
if path.exists(save_json_name):
with open(save_json_name) as json_handle:
json_dict = json.load(json_handle)
(window_midpoint, adaptive_substitutions,
adaptive_substitutions_per_codon,
rate_of_adaptation, bootstrap_adaptive_substitutions,
bootstrap_adaptive_substitutions_per_codon,
bootstrap_rate_of_adaptation) = (json_dict['window_midpoint'],
json_dict['adaptive_substitutions'],
json_dict['adaptive_substitutions_per_codon'],
json_dict['rate_of_adaptation'],
json_dict['bootstrap_adaptive_substitutions'],
json_dict['bootstrap_adaptive_substitutions_per_codon'],
json_dict['bootstrap_rate_of_adaptation'])
else:
(window_midpoint, adaptive_substitutions,
adaptive_substitutions_per_codon,
rate_of_adaptation, bootstrap_adaptive_substitutions,
bootstrap_adaptive_substitutions_per_codon,
bootstrap_rate_of_adaptation) = calc_bhatt_a(cov, gene, window,
clade, min_seqs, midfreq_high,
midfreq_low, bootstrap, year_max, year_min)
save_json = {'cov': cov, 'gene': gene, 'window':window, 'clade':clade, 'min_seqs': min_seqs,
'midfreq_high': midfreq_high, 'midfreq_low': midfreq_low,
'window_midpoint':window_midpoint, 'adaptive_substitutions':adaptive_substitutions,
'adaptive_substitutions_per_codon':adaptive_substitutions_per_codon, 'rate_of_adaptation': rate_of_adaptation,
'bootstrap_adaptive_substitutions': bootstrap_adaptive_substitutions,
'bootstrap_adaptive_substitutions_per_codon': bootstrap_adaptive_substitutions_per_codon,
'bootstrap_rate_of_adaptation':bootstrap_rate_of_adaptation}
with open(save_json_name, 'w') as outfile:
json.dump(save_json, outfile)
slope_sci = rate_of_adaptation * (10**3)
bs_slope_sci = [x * (10**3) for x in bootstrap_rate_of_adaptation]
lower_95ci = np.percentile(sorted(bs_slope_sci), 2.5)
upper_95ci = np.percentile(sorted(bs_slope_sci), 97.5)
data_to_plot.append({'cov': cov, 'gene': gene, 'cov_clade': cov_clade,
'adaptive_subs_per_codon_per_year': slope_sci,
'lower_95ci': lower_95ci, 'upper_95ci': upper_95ci,
'ci': [lower_95ci, upper_95ci]})
else:
if clade == None:
save_json_name = 'bhatt_results/'+str(cov)+'_'+str(gene)+'_bhatt_analysis.json'
elif clade != None:
save_json_name = 'bhatt_results/'+str(cov)+str(clade)+'_'+str(gene)+'_bhatt_analysis.json'
if path.exists(save_json_name):
with open(save_json_name) as json_handle:
json_dict = json.load(json_handle)
(window_midpoint, adaptive_substitutions,
adaptive_substitutions_per_codon,
rate_of_adaptation) = (json_dict['window_midpoint'],
json_dict['adaptive_substitutions'],
json_dict['adaptive_substitutions_per_codon'],
json_dict['rate_of_adaptation'])
else:
(window_midpoint, adaptive_substitutions,
adaptive_substitutions_per_codon,
rate_of_adaptation) = calc_bhatt_a(cov, gene, window, clade, min_seqs,
midfreq_high, midfreq_low,
bootstrap, year_max, year_min)
save_json = {'cov': cov, 'gene': gene, 'window':window, 'clade':clade, 'min_seqs': min_seqs,
'midfreq_high': midfreq_high, 'midfreq_low': midfreq_low,
'window_midpoint':window_midpoint, 'adaptive_substitutions':adaptive_substitutions,
'adaptive_substitutions_per_codon':adaptive_substitutions_per_codon,
'rate_of_adaptation': rate_of_adaptation}
with open(save_json_name, 'w') as outfile:
json.dump(save_json, outfile)
slope_sci = rate_of_adaptation * (10**3)
data_to_plot.append({'cov': cov, 'gene': gene, 'cov_clade': cov_clade,
'adaptive_subs_per_codon_per_year': slope_sci})
df_to_plot = pd.DataFrame(data_to_plot)
sns.set(font_scale=1.0)
sns.set_style("white")
color_map = {'oc43': '#208288', 'oc43A': '#208288', 'oc43B':'#76C7BE', '229e': '#0B194C',
'nl63A': '#87C735', 'nl63B': '#009888', 'nl63': '#87C735',
'hku1A': '#2E74B3', 'hku1B': '#92B2DE', 'hku1': '#255191'}
cov_clades = list(df_to_plot['cov_clade'].unique())
x_coords = {}
all_x_ticks = []
last_coord = 0.0
for gene in genes:
x_coords[gene] = {}
for cov_clade in cov_clades:
last_coord+=0.25
x_coords[gene][cov_clade] = last_coord
all_x_ticks.append(last_coord)
last_coord+=1.0
fig, ax = plt.subplots(figsize=(15,8))
x_labels = []
gene_ticks = []
for gene in genes:
gene_coords = list(x_coords[gene].values())
gene_ticks.append(sum(gene_coords)/len(gene_coords))
x_labels.append(gene)
for cov_clade in cov_clades:
x = x_coords[gene][cov_clade]
df_row = df_to_plot[(df_to_plot['gene']==gene)&(df_to_plot['cov_clade']==cov_clade)]
y = float(df_row['adaptive_subs_per_codon_per_year'])
if bootstrap:
err_lower = float(df_row['lower_95ci'])
err_upper = float(df_row['upper_95ci'])
ax.vlines( x, err_lower, err_upper)
ax.plot(x, y, 'o', ms=14, color=color_map[cov_clade])
plt.xticks(gene_ticks, x_labels)
legend_markers = []
for cov_clade in cov_clades:
legend_markers.append(mlines.Line2D([0], [0], color='w', markerfacecolor=color_map[cov_clade], marker='o',
markersize=12, label=cov_clade))
plt.legend(handles=legend_markers, loc='upper right')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_ylabel('adaptive subs per codon per year (x10^-3)', fontsize=16)
ax.set_xlabel("gene", fontsize=16)
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(14)
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(14)
if filename:
fig.savefig(filename, dpi=300)
#Run all genes for Nicola, OC43 no lineages
plot_adaptive_subs_per_codon_per_year_slope(['oc43'], ['replicase1ab', 'rdrp', 'nonstructural2a', 'he', 'spike',
's1', 's2', 'nonstructural2', 'envelope',
'membrane', 'nucleocapsid',
'n2protein'],
3, 2, 0.75, 0.15, True,
year_max=None, year_min=None,
filename = 'oc43_nolineages_allgenes_bhatt.png')
#Run all genes for Nicola, 229e
plot_adaptive_subs_per_codon_per_year_slope(['229e'], ['replicase1ab', 'rdrp', 'spike',
's1', 's2', 'protein4a', 'protein4b', 'envelope',
'membrane', 'nucleocapsid'],
3, 2, 0.75, 0.15, True,
year_max=None, year_min=None, filename = '229e_allgenes_bhatt.png')
#Run all genes for Nicola, nl63
plot_adaptive_subs_per_codon_per_year_slope(['nl63'], ['replicase1ab', 'rdrp', 'spike',
's1', 's2', 'protein3',
'envelope', 'membrane', 'nucleocapsid'],
3, 2, 0.75, 0.15, True,
year_max=None, year_min=None, filename = 'nl63_allgenes_bhatt.png')
plot_adaptive_subs_per_codon_per_year_slope(['oc43', '229e', 'nl63'], ['replicase1ab', 'rdrp', 'spike', 's1',
's2', 'nucleocapsid'],
3, 2, 0.75, 0.15, False,
year_max=None, year_min=None, filename='adaptation_rates_across_genes.png')
#calculated as slope of adaptive subs per codon
def plot_adaptive_subs_per_codon_per_year_slope_no_hku1_lineages(covs, genes, window, min_seqs, midfreq_high, midfreq_low, bootstrap, year_max=None, year_min=None, filename=None):
data_to_plot = []
cov_clades = {'229e': [None], 'oc43': ['A', 'B'], 'nl63': [None], 'hku1': [None]}
for cov in covs:
clades = cov_clades[cov]
for clade in clades:
for gene in genes:
if clade == None:
cov_clade = cov
elif clade != None:
cov_clade = str(cov)+str(clade)
if bootstrap:
if clade == None:
save_json_name = 'bhatt_results/'+str(cov)+'_'+str(gene)+'_bhatt_analysis_bootstrapped.json'
elif clade != None:
save_json_name = 'bhatt_results/'+str(cov)+str(clade)+'_'+str(gene)+'_bhatt_analysis_bootstrapped.json'
if path.exists(save_json_name):
with open(save_json_name) as json_handle:
json_dict = json.load(json_handle)
(window_midpoint, adaptive_substitutions,
adaptive_substitutions_per_codon,
rate_of_adaptation, bootstrap_adaptive_substitutions,
bootstrap_adaptive_substitutions_per_codon,
bootstrap_rate_of_adaptation) = (json_dict['window_midpoint'],
json_dict['adaptive_substitutions'],
json_dict['adaptive_substitutions_per_codon'],
json_dict['rate_of_adaptation'],
json_dict['bootstrap_adaptive_substitutions'],
json_dict['bootstrap_adaptive_substitutions_per_codon'],
json_dict['bootstrap_rate_of_adaptation'])
else:
(window_midpoint, adaptive_substitutions,
adaptive_substitutions_per_codon,
rate_of_adaptation, bootstrap_adaptive_substitutions,
bootstrap_adaptive_substitutions_per_codon,
bootstrap_rate_of_adaptation) = calc_bhatt_a(cov, gene, window,
clade, min_seqs, midfreq_high,
midfreq_low, bootstrap, year_max, year_min)
save_json = {'cov': cov, 'gene': gene, 'window':window, 'clade':clade, 'min_seqs': min_seqs,
'midfreq_high': midfreq_high, 'midfreq_low': midfreq_low,
'window_midpoint':window_midpoint, 'adaptive_substitutions':adaptive_substitutions,
'adaptive_substitutions_per_codon':adaptive_substitutions_per_codon, 'rate_of_adaptation': rate_of_adaptation,
'bootstrap_adaptive_substitutions': bootstrap_adaptive_substitutions,
'bootstrap_adaptive_substitutions_per_codon': bootstrap_adaptive_substitutions_per_codon,
'bootstrap_rate_of_adaptation':bootstrap_rate_of_adaptation}
with open(save_json_name, 'w') as outfile:
json.dump(save_json, outfile)
slope_sci = rate_of_adaptation * (10**3)
bs_slope_sci = [x * (10**3) for x in bootstrap_rate_of_adaptation]
lower_95ci = np.percentile(sorted(bs_slope_sci), 2.5)
upper_95ci = np.percentile(sorted(bs_slope_sci), 97.5)
data_to_plot.append({'cov': cov, 'gene': gene, 'cov_clade': cov_clade,
'adaptive_subs_per_codon_per_year': slope_sci,
'lower_95ci': lower_95ci, 'upper_95ci': upper_95ci,
'ci': [lower_95ci, upper_95ci]})
else:
(window_midpoint, adaptive_substitutions,
adaptive_substitutions_per_codon,
rate_of_adaptation) = calc_bhatt_a(cov, gene, window, clade, min_seqs,
midfreq_high, midfreq_low,
bootstrap, year_max, year_min)
slope_sci = rate_of_adaptation * (10**3)
data_to_plot.append({'cov': cov, 'gene': gene, 'cov_clade': cov_clade,
'adaptive_subs_per_codon_per_year': slope_sci})
df_to_plot = pd.DataFrame(data_to_plot)
sns.set(font_scale=1.0)
sns.set_style("white")
color_map = {'oc43A': '#208288', 'oc43B':'#76C7BE', '229e': '#0B194C',
'nl63A': '#87C735', 'nl63B': '#009888', 'nl63': '#87C735',
'hku1A': '#2E74B3', 'hku1B': '#92B2DE', 'hku1': '#255191'}
cov_clades = list(df_to_plot['cov_clade'].unique())
x_coords = {}
all_x_ticks = []
last_coord = 0.0
for gene in genes:
x_coords[gene] = {}
for cov_clade in cov_clades:
last_coord+=0.25
x_coords[gene][cov_clade] = last_coord
all_x_ticks.append(last_coord)
last_coord+=1.0
fig, ax = plt.subplots(figsize=(15,8))
x_labels = []
gene_ticks = []
for gene in genes:
gene_coords = list(x_coords[gene].values())
gene_ticks.append(sum(gene_coords)/len(gene_coords))
x_labels.append(gene)
for cov_clade in cov_clades:
x = x_coords[gene][cov_clade]
df_row = df_to_plot[(df_to_plot['gene']==gene)&(df_to_plot['cov_clade']==cov_clade)]
y = float(df_row['adaptive_subs_per_codon_per_year'])
if bootstrap:
err_lower = float(df_row['lower_95ci'])
err_upper = float(df_row['upper_95ci'])
ax.vlines( x, err_lower, err_upper)
ax.plot(x, y, 'o', ms=14, color=color_map[cov_clade])
plt.xticks(gene_ticks, x_labels)
legend_markers = []
for cov_clade in cov_clades:
legend_markers.append(mlines.Line2D([0], [0], color='w', markerfacecolor=color_map[cov_clade], marker='o',
markersize=12, label=cov_clade))
plt.legend(handles=legend_markers, loc='upper right')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_ylabel('adaptive subs per codon per year (x10^-3)', fontsize=16)
ax.set_xlabel("gene", fontsize=16)
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(14)
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(14)
if filename:
fig.savefig(filename, dpi=300)
#oct 9: no nl63 or hku1 lineages
plot_adaptive_subs_per_codon_per_year_slope_no_hku1_lineages(['oc43', '229e', 'nl63', 'hku1'],
['spike','s1', 's2', 'rdrp'], 3, 3, 0.75, 0.15, True,
filename='plots/adaptive_subs_per_year_100bootstraps_nolineages_dec18.png')
#oct 9: hku1 lineages, but no nl63
plot_adaptive_subs_per_codon_per_year_slope(['oc43', '229e', 'nl63', 'hku1'],
['spike','s1', 's2', 'rdrp'], 3, 3, 0.75, 0.15, True,
filename='plots/adaptive_subs_per_year_100bootstraps_hku1lineages_dec18.png')
#sept 30: oc43 and 229e only
plot_adaptive_subs_per_codon_per_year_slope(['oc43','229e'],
['spike','s1', 's2', 'rdrp'], 3, 3, 0.75, 0.15, True,
filename='fig5_dec18.png')
#Run with 1000 bootstraps
plot_adaptive_subs_per_codon_per_year_slope(['oc43','229e','nl63', 'hku1'],
['spike','s1', 's2', 'replicase1ab'],
3, 3, 0.75, 0.15, True, filename='adaptive_subs_per_year_1000bootstraps.png')
```
| github_jupyter |
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Σύνοψη Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/keras/overview"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" /> Άνοιγμα στο TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/e1/guide/keras/overview.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Εκτέλεση στο Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/e1/guide/keras/overview.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> Προβολή πηγαίου στο GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/e1/guide/keras/overview.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" /> Λήψη "σημειωματάριου"</a>
</td>
</table>
Note: Η κοινότητα του TensorFlow έχει μεταφράσει αυτά τα έγγραφα. Καθότι οι μεταφράσεις αυτές αποτελούν την καλύτερη δυνατή προσπάθεια , δεν υπάρχει εγγύηση ότι θα παραμείνουν ενημερωμένες σε σχέση με τα [επίσημα Αγγλικά έγγραφα](https://www.tensorflow.org/?hl=en).
Αν έχετε υποδείξεις για βελτίωση των αρχείων αυτών , δημιουργήστε ένα pull request στο [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository . Για να συμμετέχετε στη σύνταξη ή στην αναθεώρηση των μεταφράσεων της κοινότητας , επικοινωνήστε με το [docs@tensorflow.org list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs).
Ο οδήγός αυτός προσφέρει τα βασικά ώστε να ξεκινήσετε να χρησιμοποιείτε το Keras. Η διάρκεια ανάγνωσης του είναι περίπου 10 λεπτά.
# Εισαγωγή tf.keras
`tf.keras` . Πρόκειται για την υλοποίησης Διασύνδεσης Προγραμματισμού Εφαρμογών (API) του Keras. Είναι ένα API υψηλού επιπέδου που μπορεί να δημιουργήσει και να εκπαιδεύσει μοντέλα τα οποία θα υποστηρίζουν άμεσα τις υπόλοιπες λειτουργίες του TensorFlow , όπως η ["ενθουσιώδης" εκτέλεση](https://www.tensorflow.org/guide/eager) , `tf.data` αγωγοί , και [Εκτιμητές](https://www.tensorflow.org/guide/estimator)
Το `tf.keras` καθιστά το TensorFlow ευκολότερο στη χρήση , χωρίς να θυσιάζει σε προσαρμοστικότητα και απόδοση
Για να ξεκινήσετε , εισάγετε το `tf.keras` ως μέρος του προγράμματος TensorFlow
σας :
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow import keras
```
Το tf.keras μπορέι να τρέξει οποιοδήποτε κομμάτι συμβατού κώδικα Keras , ωστόσο να έχετε υπόψιν σας τα εξής :
* Η έκδοση `tf.keras` της τελευταίας έκδοσης του TensorFlow μπορεί να μην ταυτίζεται με την τελευταία έκδοση `keras` από το PyPI. Ελέγξτε το `tf.keras.__version__.`
* Όταν [αποθηκεύετε τα βάρη ενός μοντέλου](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/keras/save_and_serialize.ipynb) το `tf.keras` θα προεπιλέγει την [μορφοποίηση σημείων ελέγχου](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/checkpoint.ipynb). Θέτοντας `save_format='h5'` χρησιμοποιείται η μορφοποίηση HDF5 (εναλλακτικά, ορίστε την επέκταση του αρχείου σε `.h5`)
# Δημιουργία απλού μοντέλου
## Σειριακό μοντέλο
Στο Keras , συναθροίζοντας στρώματα(layers) δημιουργούνται τα *μοντελα (models)*. Ένα μοντέλο είναι (συνήθως) ένα γράφημα στρωμάτων. Ο πιο συνηθισμένος τύπος μοντέλου είναι η στοίβα (stack) στρωμάτων : το `tf.keras.Sequential` μοντέλο
Για την κατασκευή ενός απλού, πλήρως συνδεδεμένου (νευρωνικού) δικτύου (για παράδειγμα : πολυεπίπεδο perceptron):
```
from tensorflow.keras import layers
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
```
Μπορείτε να βρείτε ένα ολοκληρωμένο και σύντομο παράδειγμα για το πώς χρησιμοποιούνται τα σειριακά μοντέλα [εδώ](https://www.tensorflow.org/tutorials/quickstart/beginner)
Για να μάθετε να δημιουργείτε πιο προχωρημένα μοντέλα από τα σειριακά , δέιτε τα εξής :
* [Οδηγός για το Keras functional API link text](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/keras/functional.ipynb)
*[Οδηγός για τη σύνθεση επιπέδων και μοντέλων από το μηδέν μέσω της ενθυλάκωσης](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/keras/custom_layers_and_models.ipynb)
# Διαμόρφωση των επιπέδων
Υπάρχουν πολλά επίπεδα `tf.keras.layers` διαθέσιμα. Τα περισσότερα από αυτά έχουν συνήθη ορίσματα στον κατασκευαστή :
* `activation` : Θέτει τη συνάρτηση ενεργοποίησης το επίπεδο. Η παράμετρος αυτή διασαφηνίζεται από το όνομα μίας προεγκατεστημένης συνάρτησης , διαφορετικά ως ένα προς κλήση αντικείμενο. Ως προεπιλογή , δεν εφαρμόζεται activation
* `kernel_initializer` και `bias_initializer` : Τα σχέδια αρχικοποίησης που δημιουργούν τα βάρη του επιπέδου (kernel και bias). Η παράμετρος αυτή είναι είτε όνομα είτε αντικείμενο που μπορεί να κληθεί . Αρχικοποιείται , ως προεπιλογή , στον `"Glorot uniform"` αρχικοποιητή.
* `kernel_regularizer` και `bias_regularizer` : Τα σχέδια ρύθμισης που εφαρμόζουν τα βάρη του επιπέδου (kernel και bias), όπως για παράδειγμα οι L1 or
L2 regularization (L1 ή L2 ρυθμίσεις). Ως προεπιλογή , δεν εφαρμόζεται regularization
Ο ακόλουθος κώδικας δίνει υπόσταση στα `tf.keras.layers.Dense` επίπεδα με τη χρήση ορισμάτων σε κατασκευαστές :
```
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.keras.activations.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.Constant(2.0))
```
# Μάθηση και αξιολόγηση
## Στήνοντας την μάθηση
Αφότου έχει κατασκευαστεί το μοντέλο, ορίστε την διαδικασία μάθησης καλώντας την μέθοδο `compile`:
```
model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.keras.optimizers.Adam(0.01),
loss='categorical_crossentropy',
metrics=['accuracy'])
```
Η μέθοδος `tf.keras.Model.compile` δέχεται 3 σημαντικά ορίσματα :
* `optimizer` : Το αντικείμενο αυτό προσδιορίζει την διαδικασία μάθησης. Περάστε στιγμιότυπα βελτιστοποίησης (optimizer instances) από το άρθρωμα (module) `tf.keras.optimizers` , όπως `tf.keras.optimizers.Adam` ή `tf.keras.optimizers.SGD`. Αν θέλετε να χρησιμοποιήσετε τις προεπιλεγμένες παραμέτρους , μπορείτε να προσδιορίσετε βελτιστοποιητές μέσω συμβολοσειρών(strings) , όπως `adam` ή `sgd`.
* `loss` : Η συνάρτηση που πρέπει να ελαχιστοποιείται κατά τη βελτιστοποίηση. Συνήθεις επιλογές είναι η μέση τετραγωνική απόκλιση(mse) ,`categorical_crossentropy`, και `binary_crossentropy`. Oι loss functions προσδιορίζονται μέσω του ονόματος ή παιρνώντας ένα αντικείμενο από το άρθρωμα `tf.keras.losses`.
* `metrics`: Χρησιμοποιείται για την παρακολούθηση της διαδικασίας. Πρόκειται για συμβολοσειρές ή για προς κλήση αντικείμενα από το άρθρωμα `tf.keras.metrics`
* Επιπρόσθετα , για να βεβαιωθείτε ότι το μοντέλο μαθαίνει και αξιολογεί eagerly("ενθουσιωδώς"), μπορείτε να θέσετε `run_eagerly=True` ως παράμετρο κατά την μεταγλώττιση
Ο ακόλουθος κώδικας υλοποιεί μερικά παραδείγματα παραμετροποίησης μοντέλου για μάθηση :
```
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.keras.optimizers.Adam(0.01),
loss='mse', # mean squared error
metrics=['mae']) # mean absolute error
# Configure a model for categorical classification.
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.01),
loss=tf.keras.losses.CategoricalCrossentropy(),
metrics=[tf.keras.metrics.CategoricalAccuracy()])
```
# Μάθηση από NumPy δεδομένα
Σε μικρό όγκο δεδομένων,συνιστάται η χρήση των in-memory πινάκων [NumPy](https://numpy.org/) για την μάθηση και την αξιολόγηση ενός μοντέλου. Το μοντέλο "ταιριάζει" στα δεδομένα εκμάθησης μέσω της μεθόδου `fit` :
```
import numpy as np
data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
```
Η μέθοδος `tf.keras.Model.fit` δέχεται 3 ορίσματα :
* `epochs` : Η μάθηση οργανώνεται σε *εποχές* (epochs). *Εποχή* είναι μία επανάληψη σε όλο τον όγκο των δεδομένων εισόδου(αυτό γίνεται πρώτα σε μικρότερες "δεσμίδες" δεδομένων)
* `batch_size` : Όταν περνάνε τα δεδομένα NumPy , το μοντέλο τα τεμαχίζει σε (μικρότερες) "δεσμίδες" και εκτελεί επαναλήψεις πάνω σε αυτές κατά την εκμάθηση.
* `validation_data` : Όταν δημιουργείτε ένα πρωτότυπο ενός μοντέλους , θα ήταν επιθυμητό να παρακολουθείτε με ευκολία την απόδοσή του σε μερικά δεδομένα επαλήθευσης (validation data). Περνώντας αυτό το όρισμα -μία πλειάδα εισόδων και "ετικετών"- επιτρέπετε στο μοντέλο να δείχνει την απώλεια και τις μετρήσεις σε "συμπερασματική" λειτουργία (inference mode) για τα εισαγώμενα δεδομένα, στο τέλος κάθε *εποχής*
Ακολουθεί παράδειγμα με την χρήση της `validation_data` :
```
import numpy as np
data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))
val_data = np.random.random((100, 32))
val_labels = np.random.random((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
```
# Μάθηση από σύνολα δεδομένων(datasets) tf.data
Χρησιμοποιείστε τo [Datasets API](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/data.ipynb) για να κλιμακώσετε μεγάλα σύνολα δεδομένων ή μάθηση σε πολλές συσκευές. Περάστε ένα `tf.data.Dataset` στιγμιότυπο στην μέθοδο `fit` :
```
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
model.fit(dataset, epochs=10)
```
Από τη στιγμή που το `Dataset` παράγει "δεσμίδες" δεδομένων , αυτό το απόσπασμα δεν απαίτει το `batch_size`.
Τα σύνολα δεδομένων μπορούν να χρησιμοποιηθούν και για επιβεβαίωση :
```
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32)
model.fit(dataset, epochs=10,
validation_data=val_dataset)
```
#Αξιολόγηση και πρόβλεψη
Οι `tf.keras.Model.evaluate` και `tf.keras.Model.predict` μέθοδοι μπορούν να χρησιμοποιήσουν δεδομένα NumPy και ένα `tf.data.Dataset`.
Ακολουθεί ο τρόπος αξιολόγησης "συμπερασματικής" λειτουργίας των απωλειών και των μετρήσεων για τα παρεχόμενα δεδομένα :
```
# With Numpy arrays
data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))
model.evaluate(data, labels, batch_size=32)
# With a Dataset
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
model.evaluate(dataset)
```
Με αυτό το απόσπασμα μπορείτε να προβλέψετε την έξοδο του τελευταίου επιπέδου σε inference για τα παρεχόμενα δεδομένα , ως έναν πίνακα NumPy :
```
result = model.predict(data, batch_size=32)
print(result.shape)
```
Για έναν ολοκληρωμένο οδηγό στην εκμάθηση και την αξιολόγηση , ο οποίος περιλαμβάνει και οδηγίες για συγγραφή προσαρμοσμένων loops μαθήσεως από το μηδέν , δείτε τον οδηγό [αυτό](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/keras/train_and_evaluate.ipynb).
# Δημιουργία σύνθετων μοντέλων
## To Functional API
Το μοντέλο `tf.keras.Sequential` είναι μία απλή στοίβα επιπέδων η οποία δεν μπορεί να εκπροσωπήσει αυθαίρετα μοντέλα .
Με τη χρήση του [Keras functional API](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/keras/functional.ipynb#scrollTo=vCMYwDIE9dTT) μπορείτε να φτιάξετε σύνθετες τοπολογίες μοντέλων όπως :
* Μοντέλα πολλαπλών εισόδων,
* Μοντέλα πολλαπλών εξόδων,
* Μοντέλα με κοινόχρηστα επίπεδα (το ίδιο επίπεδο να καλείται πολλαπλές φορές),
* Μοντέλα με μη σειριακή ροή δεδομένων (π.χ residual connections)
H δημιουργία μοντέλων με το functional API ακολουθεί το εξής :
1. Ένα στιγμιότυπο ενός επιπέδου καλείται και επιστρέφει έναν τανυστή(tensor).
2. Τανυστές εισόδου και εξόδου χρησιμοποιούνται για να ορίσουν μία κατάσταση της `tf.keras.Model`.
3. Το μοντέλο μαθαίνει κατά τον ίδιο τρόπο με το `Σειριακό` μοντέλο.
Το ακόλουθο παράδειγμα χρησιμοποιεί το functional API για να φτιάξει ένα απλό, πλήρως συνδεδεμένο (νευρωνικό) δίκτυο :
```
inputs = tf.keras.Input(shape=(32,)) # Returns an input placeholder
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
```
Δώστε υπόσταση στο μοντέλο με είσοδο και έξοδο:
```
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
```
# Υποκατηγοριοποίηση στα μοντέλα
Φτιάξτε ένα πλήρως παραμετροποιήσιμο μοντέλο υποκατηγοριοποιώντας την `tf.keras.Model` και ορίζοντας το δικό σας "προς τα εμπρος πέρασμα"(forward pass,κοινώς διαδικασία υπολογισμού η οποία ξεκινάει από το 1ο προς το τελευταίο επίπεδο). Κατασκευάστε επίπεδα στην μέθοδο `__init__` και θέστε τα ως ιδιότητες της κλάσης . Ορίστε έπειτα το "προς τα εμπρος πέρασμα"(forward pass)στην μέθοδο `call`.
Η υποκατηγοριοποίηση στα μοντέλα είναι ιδιαίτερα χρήσιμη όταν η ["ενθουσιώδης" εκτέλεση](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/eager.ipynb) είναι ενεργοποιημένη ,καθώς επιτρέπει το "προς τα εμπρος πέρασμα"(forward pass) να εγγράφεται αναγκαστικά.
Σημείωση: αν το μοντέλο σας χρειάζεται να τρέχει *πάντα* αναγκαστικά, μπορείτε να θέσετε `dynamic=True` όταν καλείται ο `super` κατασκευαστής.
> Σημείο κλειδί : Χρησιμοποιήστε το σωστό , ανάλογα με την δουλειά , API . Παρόλο που η χρήση της υποκατηγοριοποίηση προσφέρει ευελιξία , κοστίζει σε πολυπλοκότητα και σε μεγαλύτερα περιθώρια για σφάλματα χρήστη. Αν είναι εφικ΄το , προτιμήστε το functional API.
Το ακόλουθο παράδειγμα δείχνει μια υποκλάση `tf.keras.Model` που χρησιμοποιεί
"προς τα εμπρός πέρασμα" το οποίο δεν χρειάζεται να εκτελείται αναγκαστικά:
```
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_model')
self.num_classes = num_classes
# Define your layers here.
self.dense_1 = layers.Dense(32, activation='relu')
self.dense_2 = layers.Dense(num_classes, activation='sigmoid')
def call(self, inputs):
# Define your forward pass here,
# using layers you previously defined (in `__init__`).
x = self.dense_1(inputs)
return self.dense_2(x)
```
Η νέα κλάση model λαμβάνει υπόσταση :
```
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
```
# Παραμετροποιήσιμα επίπεδα
Δημιουργήστε ένα παραμετροποιήσιμο επίπεδο υποκατηγοριοποιώντας την `tf.keras.layers.Layer` και υλοποιώντας τις παρακάτω μεθόδους :
* `__init__` : (Προαιρετικά) ορίστε τα υποεπίπεδα που θα χρησιμοποιηθούν από το επίπεδο
* `build` : Δημιουργεί τα βάρη του επιπέδου. Προσθέστε βάρη με την μέθοδο `add_weight`.
* `call` : Ορίστε το "προς τα εμπρός πέρασμα"
* Προαιρετικά, ένα επίπεδο μπορεί να σειριοποιηθεί με την υλοποίηση της μεθόδου `get_config` και της μεθόδου κλάσης `from_config`.
Ακολουθεί παράδειγμα ενός παραμετροποιήσιμου επιπέδου το οποίο υλοποιεί ένα `matmul` με ορίσματα μία είσοδο και έναν kernel (πυρήνα)
```
class MyLayer(layers.Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=(input_shape[1], self.output_dim),
initializer='uniform',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
def get_config(self):
base_config = super(MyLayer, self).get_config()
base_config['output_dim'] = self.output_dim
return base_config
@classmethod
def from_config(cls, config):
return cls(**config)
```
Δημιουργήστε ένα μοντέλο χρησιμοποιώντας το δικό σας επίπεδο :
```
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# The compile step specifies the training configuration
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
```
#Επανακλήσεις
Μία επανάκληση(callback) είναι ένα αντικείμενο το οποίο περνάει σε ένα μοντέλο για να τροποποιήσει και να επεκτείνει τη συμπεριφορά του κατά την διάρκεια της μάθησης. Μπορείτε να γράψετε τις δικές σας επανακλήσεις ή να χρησιμοποιήσετε την `tf.keras.callbacks` η οποία περιλαμβάνει :
* `tf.keras.callbacks.ModelCheckpoint`: Αποθηκεύστε το μοντέλο σας ανά τακτά διαστήματα.
* `tf.keras.callbacks.LearningRateScheduler`: Δυναμικά, αλλάξτε τον ρυθμό μάθησης.
* `tf.keras.callbacks.EarlyStopping`: Διακόψτε την εκμάθηση όταν η απόδοση επαλήθευσης (validation performance) έχει σταματήσει να βελτιώνεται.
* `tf.keras.callbacks.TensorBoard`: Παρακολουθήστε την συμπεριφορά του μοντέλου με τη χρήση του [TensorBoard](https://tensorflow.org/tensorboard).
Για να χρησιμοποιήσετε την `tf.keras.callbacks.Callback` , περάστε την στην μέθοδο `fit` του μοντέλου :
```
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
```
# Αποθήκευση και επαναφορά
## Αποθήκευση μόνο των τιμών των βαρών
Αποθηκεύστε και φορτώστε τα βάρη ενός μοντέλου με τη χρήση της `tf.keras.Model.save_weights`:
```
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
```
Ως προεπιλογή , το παραπάνω αποθηκεύει τα βάρη του μοντέλου σε μορφοποίηση [TensorFlow checkpoint](../checkpoint.ipynb). Εναλλακτικά , μπορούν να αποθηκευτούν χρησιμοποιώντας την μορφοποιήση Keras HDF5(η προεπιλογή για την υλοποίηση του συστήματος υποστήριξης του Keras):
```
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
```
## Αποθήκευση μόνο των ρυθμίσεων του μοντέλου
Οι ρυθμίσεις ενός μοντέλου μπορούν να αποθηκευτούν-αυτό σειριοποιεί την αρχιτεκτονική του μοντέλου χωρίς βάρη. Οι αποθηκευμένες ρυθμίσεις μπορόυν να αναπαράγουν και να αρχικοποιήσουν το ίδιο μοντέλο, ακόμη και χωρίς τον κώδικα που όρισε το πρότυπο μοντέλο. To Keras υποστηρίζει τα JSON και YAML ως μορφές σειριοποίησης:
```
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
```
Αναπαραγωγή του μοντέλου(νεότερα αρχικοποιήθεντος) από το JSON :
```
fresh_model = tf.keras.models.model_from_json(json_string)
```
Η σειριοποίηση μοντέλου σε μορφή YAML απαιτεί την εγκατάσταση της `pyyaml` πριν γίνει είσοδος της TensorFlow:
```
yaml_string = model.to_yaml()
print(yaml_string)
```
Αναπαραγωγή του μοντέλου από το YAML :
```
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
```
Προσοχή : Μοντέλα ως υποκλάσεις δεν σειριοποιούνται διότι η αρχιτεκτονική τους ορίζεται από τον κώδικα Python στο σώμα τις μεθόδου `call`.
## Αποθήκευση ολόκληρου του μοντέλου σε ένα αρχείο
Ολόκληρο το μοντέλο μπορεί να αποθηκευτεί σε ένα μόνο αρχείο το οποίο περιέχει τα βάρη , τις ρυθμίσεις του μοντέλου , ακόμη και τις ρυθμίσεις του βελτιστοποιητή. Έτσι , μπορείτε να θέσετε ένα σημείο ελέγχου σε ένα μοντέλο και να συνεχίσετε την εκμάθηση αργότερα - από την ακριβώς ίδια κατάσταση - χωρίς πρόσβαση στον πρωταρχικό κώδικα :
```
# Create a simple model
model = tf.keras.Sequential([
layers.Dense(10, activation='softmax', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
```
Για να μάθετε περισσότερα για την αποθήκευση και την σειριοποίηση , πατήστε [εδώ](./save_and_serialize.ipynb).
# "Ενθουσιώδης" εκτέλεση(Eager execution)
H ["Ενθουσιώδης" εκτέλεση](https://github.com/tensorflow/docs/blob/master/site/en/guide/eager.ipynb) είναι ένα (επιτακτικό) περιβάλλον προγραμματισμού το οποίο αξιολογεί λειτουργίες αμέσως. Αυτό δεν απαιτείται από το Keras , ωστόσο η `tf.keras` παρέχει υποστήριξη και χρησιμεύει για την επιθεώρηση και την εκσφαλμάτωση του προγράμματος.
Όλα τα API της `tf.keras` για την κατασκευή μοντέλων είναι συμβατά με την "ενθουσιώδη εκτέλεση". Και καθώς τα `Σειριακά` και τα λειτουργικά (functional) API μπορούν να χρησιμοποιηθούν, η "ενθουσιώδης" εκτέλεση επωφελεί ιδιαίτερα την υποκατηγοριοποίηση μοντέλων και την κατασκευή παραμετροποιήσιμων επιπέδων-τα ΑPI's που σας προτρέπουν να γράψετε το "προς τα εμπρος πέρασμα" ως κώδικα (σε αντίθεση με τα API's που δημιουργούν μοντέλα μέσω της σύνθεσης των ήδη υπάρχοντων επιπέδων)
Δείτε τον οδηγό [αυτό](https://github.com/tensorflow/docs/blob/master/site/en/guide/eager.ipynb) για παραδείγματα χρήσης μοντέλων Keras με custom training looops και την `tf.GradientTape`. Ακόμη , μπορείτε να δείτε ένα ολοκληρωμένο και σύντομο παράδειγμα [εδώ](https://www.tensorflow.org/tutorials/quickstart/advanced).
# Διανομή
Τα μοντέλα της `tf.keras` μπορούν να εκτελεστούν σε πολλαπλές GPUs ,με την χρήση της `tf.distribute.Strategy`. Το API αυτό κατανείμει την μάθηση σε πολλαπλές κάρτες γραφικών με σχεδόν καθόλου αλλαγές στον υπάρχον κώδικα.
Επί του παρόντος, η `tf.distribute.MirroredStrategy` αποτελεί την μόνη στρατηγική κατανομής που υποστηρίζεται. Η `MirroredStrategy` πραγματοποιεί in-graph replication μαζί με συγχρονισμένη μάθηση, χρησιμοποιώντας all-reduce σε ένα μηχάνημα. Για να χρησιμοποιήσετε τη `distribute.Strategy`, εμφωλεύστε την αρχικοποίηση του βελτιστοποιητή, τον κατασκευαστή του μοντέλου, και το `compile` σε ένα μπλοκ κώδικα `Strategy.scope()`. Έπειτα , προχωρήστε στην εκμάθηση του μοντέλου.
Το ακόλουθο παράδειγμα κατανείμει ένα `tf.keras.Model` σε πολλαπλές κάρτες γραφικών σε ένα μηχάνημα.
Πρώτα , ορίζεται ένα μοντέλο εντός της `distribute.Strategy`:
```
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.keras.optimizers.SGD(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
```
Στη συνέχεια , εξασκείστε το μοντέλο πάνω σε δεδομένα κατά τα γνωστά :
```
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.shuffle(buffer_size=1024).batch(32)
model.fit(dataset, epochs=1)
```
Για περισσότερες πληροφορίες , δείτε τον [πλήρη οδηγό για την Κατανεμημένη Μάθηση με TensorFlow](../distributed_training.ipynb).
| github_jupyter |
```
import numpy as np
import pandas as pd
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from matplotlib import pyplot as plt
%matplotlib inline
from scipy.stats import entropy
from google.colab import drive
drive.mount('/content/drive')
path="/content/drive/MyDrive/Research/alternate_minimisation/"
name="_50_50_10runs_entropy"
# mu1 = np.array([3,3,3,3,0])
# sigma1 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu2 = np.array([4,4,4,4,0])
# sigma2 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu3 = np.array([10,5,5,10,0])
# sigma3 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu4 = np.array([-10,-10,-10,-10,0])
# sigma4 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu5 = np.array([-21,4,4,-21,0])
# sigma5 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu6 = np.array([-10,18,18,-10,0])
# sigma6 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu7 = np.array([4,20,4,20,0])
# sigma7 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu8 = np.array([4,-20,-20,4,0])
# sigma8 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu9 = np.array([20,20,20,20,0])
# sigma9 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu10 = np.array([20,-10,-10,20,0])
# sigma10 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# sample1 = np.random.multivariate_normal(mean=mu1,cov= sigma1,size=500)
# sample2 = np.random.multivariate_normal(mean=mu2,cov= sigma2,size=500)
# sample3 = np.random.multivariate_normal(mean=mu3,cov= sigma3,size=500)
# sample4 = np.random.multivariate_normal(mean=mu4,cov= sigma4,size=500)
# sample5 = np.random.multivariate_normal(mean=mu5,cov= sigma5,size=500)
# sample6 = np.random.multivariate_normal(mean=mu6,cov= sigma6,size=500)
# sample7 = np.random.multivariate_normal(mean=mu7,cov= sigma7,size=500)
# sample8 = np.random.multivariate_normal(mean=mu8,cov= sigma8,size=500)
# sample9 = np.random.multivariate_normal(mean=mu9,cov= sigma9,size=500)
# sample10 = np.random.multivariate_normal(mean=mu10,cov= sigma10,size=500)
# X = np.concatenate((sample1,sample2,sample3,sample4,sample5,sample6,sample7,sample8,sample9,sample10),axis=0)
# Y = np.concatenate((np.zeros((500,1)),np.ones((500,1)),2*np.ones((500,1)),3*np.ones((500,1)),4*np.ones((500,1)),
# 5*np.ones((500,1)),6*np.ones((500,1)),7*np.ones((500,1)),8*np.ones((500,1)),9*np.ones((500,1))),axis=0).astype(int)
# print(X.shape,Y.shape)
# # plt.scatter(sample1[:,0],sample1[:,1],label="class_0")
# # plt.scatter(sample2[:,0],sample2[:,1],label="class_1")
# # plt.scatter(sample3[:,0],sample3[:,1],label="class_2")
# # plt.scatter(sample4[:,0],sample4[:,1],label="class_3")
# # plt.scatter(sample5[:,0],sample5[:,1],label="class_4")
# # plt.scatter(sample6[:,0],sample6[:,1],label="class_5")
# # plt.scatter(sample7[:,0],sample7[:,1],label="class_6")
# # plt.scatter(sample8[:,0],sample8[:,1],label="class_7")
# # plt.scatter(sample9[:,0],sample9[:,1],label="class_8")
# # plt.scatter(sample10[:,0],sample10[:,1],label="class_9")
# # plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
# class SyntheticDataset(Dataset):
# """MosaicDataset dataset."""
# def __init__(self, x, y):
# """
# Args:
# csv_file (string): Path to the csv file with annotations.
# root_dir (string): Directory with all the images.
# transform (callable, optional): Optional transform to be applied
# on a sample.
# """
# self.x = x
# self.y = y
# #self.fore_idx = fore_idx
# def __len__(self):
# return len(self.y)
# def __getitem__(self, idx):
# return self.x[idx] , self.y[idx] #, self.fore_idx[idx]
# trainset = SyntheticDataset(X,Y)
# # testset = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform)
# classes = ('zero','one','two','three','four','five','six','seven','eight','nine')
# foreground_classes = {'zero','one','two'}
# fg_used = '012'
# fg1, fg2, fg3 = 0,1,2
# all_classes = {'zero','one','two','three','four','five','six','seven','eight','nine'}
# background_classes = all_classes - foreground_classes
# background_classes
# trainloader = torch.utils.data.DataLoader(trainset, batch_size=100, shuffle=True)
# dataiter = iter(trainloader)
# background_data=[]
# background_label=[]
# foreground_data=[]
# foreground_label=[]
# batch_size=100
# for i in range(50):
# images, labels = dataiter.next()
# for j in range(batch_size):
# if(classes[labels[j]] in background_classes):
# img = images[j].tolist()
# background_data.append(img)
# background_label.append(labels[j])
# else:
# img = images[j].tolist()
# foreground_data.append(img)
# foreground_label.append(labels[j])
# foreground_data = torch.tensor(foreground_data)
# foreground_label = torch.tensor(foreground_label)
# background_data = torch.tensor(background_data)
# background_label = torch.tensor(background_label)
# def create_mosaic_img(bg_idx,fg_idx,fg):
# """
# bg_idx : list of indexes of background_data[] to be used as background images in mosaic
# fg_idx : index of image to be used as foreground image from foreground data
# fg : at what position/index foreground image has to be stored out of 0-8
# """
# image_list=[]
# j=0
# for i in range(9):
# if i != fg:
# image_list.append(background_data[bg_idx[j]])
# j+=1
# else:
# image_list.append(foreground_data[fg_idx])
# label = foreground_label[fg_idx] - fg1 # minus fg1 because our fore ground classes are fg1,fg2,fg3 but we have to store it as 0,1,2
# #image_list = np.concatenate(image_list ,axis=0)
# image_list = torch.stack(image_list)
# return image_list,label
# desired_num = 3000
# mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as list of 9 images
# fore_idx =[] # list of indexes at which foreground image is present in a mosaic image i.e from 0 to 9
# mosaic_label=[] # label of mosaic image = foreground class present in that mosaic
# list_set_labels = []
# for i in range(desired_num):
# set_idx = set()
# np.random.seed(i)
# bg_idx = np.random.randint(0,3500,8)
# set_idx = set(background_label[bg_idx].tolist())
# fg_idx = np.random.randint(0,1500)
# set_idx.add(foreground_label[fg_idx].item())
# fg = np.random.randint(0,9)
# fore_idx.append(fg)
# image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
# mosaic_list_of_images.append(image_list)
# mosaic_label.append(label)
# list_set_labels.append(set_idx)
# def create_avg_image_from_mosaic_dataset(mosaic_dataset,labels,foreground_index,dataset_number):
# """
# mosaic_dataset : mosaic_dataset contains 9 images 32 x 32 each as 1 data point
# labels : mosaic_dataset labels
# foreground_index : contains list of indexes where foreground image is present so that using this we can take weighted average
# dataset_number : will help us to tell what ratio of foreground image to be taken. for eg: if it is "j" then fg_image_ratio = j/9 , bg_image_ratio = (9-j)/8*9
# """
# avg_image_dataset = []
# for i in range(len(mosaic_dataset)):
# img = torch.zeros([5], dtype=torch.float64)
# for j in range(9):
# if j == foreground_index[i]:
# img = img + mosaic_dataset[i][j]*dataset_number/9
# else :
# img = img + mosaic_dataset[i][j]*(9-dataset_number)/(8*9)
# avg_image_dataset.append(img)
# return torch.stack(avg_image_dataset) , torch.stack(labels) , foreground_index
class MosaicDataset1(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list, mosaic_label,fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] , self.fore_idx[idx]
# data = [{"mosaic_list":mosaic_list_of_images, "mosaic_label": mosaic_label, "fore_idx":fore_idx}]
# np.save("mosaic_data.npy",data)
data = np.load(path+"mosaic_data.npy",allow_pickle=True)
mosaic_list_of_images = data[0]["mosaic_list"]
mosaic_label = data[0]["mosaic_label"]
fore_idx = data[0]["fore_idx"]
batch = 250
msd = MosaicDataset1(mosaic_list_of_images, mosaic_label, fore_idx)
train_loader = DataLoader( msd,batch_size= batch ,shuffle=True)
```
**Focus Net**
```
class Focus_deep(nn.Module):
'''
deep focus network averaged at zeroth layer
input : elemental data
'''
def __init__(self,inputs,output,K,d):
super(Focus_deep,self).__init__()
self.inputs = inputs
self.output = output
self.K = K
self.d = d
self.linear1 = nn.Linear(self.inputs,50) #,self.output)
self.linear2 = nn.Linear(50,self.output)
def forward(self,z):
batch = z.shape[0]
x = torch.zeros([batch,self.K],dtype=torch.float64)
y = torch.zeros([batch,self.d], dtype=torch.float64)
x,y = x.to("cuda"),y.to("cuda")
for i in range(self.K):
x[:,i] = self.helper(z[:,i] )[:,0] # self.d*i:self.d*i+self.d
log_x = F.log_softmax(x,dim=1) # log alpha to calculate entropy
x = F.softmax(x,dim=1) # alphas
x1 = x[:,0]
for i in range(self.K):
x1 = x[:,i]
y = y+torch.mul(x1[:,None],z[:,i]) # self.d*i:self.d*i+self.d
return y , x,log_x
def helper(self,x):
x = F.relu(self.linear1(x))
x = self.linear2(x)
return x
```
**Classification Net**
```
class Classification_deep(nn.Module):
'''
input : elemental data
deep classification module data averaged at zeroth layer
'''
def __init__(self,inputs,output):
super(Classification_deep,self).__init__()
self.inputs = inputs
self.output = output
self.linear1 = nn.Linear(self.inputs,50)
self.linear2 = nn.Linear(50,self.output)
def forward(self,x):
x = F.relu(self.linear1(x))
x = self.linear2(x)
return x
criterion = nn.CrossEntropyLoss()
def my_cross_entropy(x, y,alpha,log_alpha,k):
# log_prob = -1.0 * F.log_softmax(x, 1)
# loss = log_prob.gather(1, y.unsqueeze(1))
# loss = loss.mean()
loss = criterion(x,y)
#alpha = torch.clamp(alpha,min=1e-10)
b = -1.0* alpha * log_alpha
b = torch.mean(torch.sum(b,dim=1))
closs = loss
entropy = b
loss = (1-k)*loss + ((k)*b)
return loss,closs,entropy
```
```
def calculate_attn_loss(dataloader,what,where,criter,k):
what.eval()
where.eval()
r_loss = 0
cc_loss = 0
cc_entropy = 0
alphas = []
lbls = []
pred = []
fidices = []
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels,fidx = data
lbls.append(labels)
fidices.append(fidx)
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
avg,alpha,log_alpha = where(inputs)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
alphas.append(alpha.cpu().numpy())
#ent = np.sum(entropy(alpha.cpu().detach().numpy(), base=2, axis=1))/batch
# mx,_ = torch.max(alpha,1)
# entropy = np.mean(-np.log2(mx.cpu().detach().numpy()))
# print("entropy of batch", entropy)
#loss = (1-k)*criter(outputs, labels) + k*ent
loss,closs,entropy = my_cross_entropy(outputs,labels,alpha,log_alpha,k)
r_loss += loss.item()
cc_loss += closs.item()
cc_entropy += entropy.item()
alphas = np.concatenate(alphas,axis=0)
pred = np.concatenate(pred,axis=0)
lbls = np.concatenate(lbls,axis=0)
fidices = np.concatenate(fidices,axis=0)
#print(alphas.shape,pred.shape,lbls.shape,fidices.shape)
analysis = analyse_data(alphas,lbls,pred,fidices)
return r_loss/i,cc_loss/i,cc_entropy/i,analysis
def analyse_data(alphas,lbls,predicted,f_idx):
'''
analysis data is created here
'''
batch = len(predicted)
amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0
for j in range (batch):
focus = np.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
amth +=1
else:
alth +=1
if(focus == f_idx[j] and predicted[j] == lbls[j]):
ftpt += 1
elif(focus != f_idx[j] and predicted[j] == lbls[j]):
ffpt +=1
elif(focus == f_idx[j] and predicted[j] != lbls[j]):
ftpf +=1
elif(focus != f_idx[j] and predicted[j] != lbls[j]):
ffpf +=1
#print(sum(predicted==lbls),ftpt+ffpt)
return [ftpt,ffpt,ftpf,ffpf,amth,alth]
number_runs = 10
full_analysis =[]
FTPT_analysis = pd.DataFrame(columns = ["FTPT","FFPT", "FTPF","FFPF"])
k = 0.005
every_what_epoch = 5
for n in range(number_runs):
print("--"*40)
# instantiate focus and classification Model
torch.manual_seed(n)
where = Focus_deep(5,1,9,5).double()
torch.manual_seed(n)
what = Classification_deep(5,3).double()
where = where.to("cuda")
what = what.to("cuda")
# instantiate optimizer
optimizer_where = optim.Adam(where.parameters(),lr =0.01)
optimizer_what = optim.Adam(what.parameters(), lr=0.01)
#criterion = nn.CrossEntropyLoss()
acti = []
analysis_data = []
loss_curi = []
epochs = 2000
# calculate zeroth epoch loss and FTPT values
running_loss ,_,_,anlys_data= calculate_attn_loss(train_loader,what,where,criterion,k)
loss_curi.append(running_loss)
analysis_data.append(anlys_data)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
# training starts
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
what.train()
where.train()
if ((epoch) % (every_what_epoch*2) ) <= every_what_epoch-1 :
print(epoch+1,"updating where_net, what_net is freezed")
print("--"*40)
elif ((epoch) % (every_what_epoch*2)) > every_what_epoch-1 :
print(epoch+1,"updating what_net, where_net is freezed")
print("--"*40)
for i, data in enumerate(train_loader, 0):
# get the inputs
inputs, labels,_ = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_where.zero_grad()
optimizer_what.zero_grad()
# forward + backward + optimize
avg, alpha,log_alpha = where(inputs)
outputs = what(avg)
my_loss,_,_ = my_cross_entropy(outputs,labels,alpha,log_alpha,k)
# print statistics
running_loss += my_loss.item()
my_loss.backward()
if ((epoch) % (every_what_epoch*2) ) <= every_what_epoch-1 :
optimizer_where.step()
elif ( (epoch) % (every_what_epoch*2)) > every_what_epoch-1 :
optimizer_what.step()
# optimizer_where.step()
# optimizer_what.step()
#break
running_loss,ccloss,ccentropy,anls_data = calculate_attn_loss(train_loader,what,where,criterion,k)
analysis_data.append(anls_data)
print('epoch: [%d] loss: %.3f celoss: %.3f entropy: %.3f' %(epoch + 1,running_loss,ccloss,ccentropy))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.001:
break
print('Finished Training run ' +str(n))
#break
analysis_data = np.array(analysis_data)
FTPT_analysis.loc[n] = analysis_data[-1,:4]/30
full_analysis.append((epoch, analysis_data))
correct = 0
total = 0
with torch.no_grad():
for data in train_loader:
images, labels,_ = data
images = images.double()
images, labels = images.to("cuda"), labels.to("cuda")
avg, alpha,log_alpha = where(images)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 3000 train images: %d %%' % ( 100 * correct / total))
a,b= full_analysis[0]
print(a)
cnt=1
for epoch, analysis_data in full_analysis:
analysis_data = np.array(analysis_data)
# print("="*20+"run ",cnt,"="*20)
plt.figure(figsize=(6,6))
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,0],label="ftpt")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,1],label="ffpt")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,2],label="ftpf")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,3],label="ffpf")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.title("Training trends for run "+str(cnt))
plt.savefig(path+"50_50_10runs_entropy/every5/run"+str(cnt)+".png",bbox_inches="tight")
plt.savefig(path+"50_50_10runs_entropy/every5/run"+str(cnt)+".pdf",bbox_inches="tight")
cnt+=1
np.mean(np.array(FTPT_analysis),axis=0) #array([87.85333333, 5.92 , 0. , 6.22666667])
FTPT_analysis.to_csv(path+"50_50_10runs_entropy/FTPT_analysis_every5"+name+".csv",index=False)
FTPT_analysis
```
| github_jupyter |
# Coverage of MultiPLIER LV using _P. aeruginosa_ data
The goal of this notebook is to examine why genes were found to be generic. Specifically, this notebook is trying to answer the question: Are generic genes found in more multiplier latent variables compared to specific genes?
The PLIER model performs a matrix factorization of gene expression data to get two matrices: loadings (Z) and latent matrix (B). The loadings (Z) are constrained to aligned with curated pathways and gene sets specified by prior knowledge [Figure 1B of Taroni et. al.](https://www.cell.com/cell-systems/pdfExtended/S2405-4712(19)30119-X). This ensure that some but not all latent variables capture known biology. The way PLIER does this is by applying a penalty such that the individual latent variables represent a few gene sets in order to make the latent variables more interpretable. Ideally there would be one latent variable associated with one gene set unambiguously.
While the PLIER model was trained on specific datasets, MultiPLIER extended this approach to all of recount2, where the latent variables should correspond to specific pathways or gene sets of interest. Therefore, we will look at the coverage of generic genes versus other genes across these MultiPLIER latent variables, which represent biological patterns.
**Definitions:**
* Generic genes: Are genes that are consistently differentially expressed across multiple simulated experiments.
* Other genes: These are all other non-generic genes. These genes include those that are not consistently differentially expressed across simulated experiments - i.e. the genes are specifically changed in an experiment. It could also indicate genes that are consistently unchanged (i.e. housekeeping genes)
Note: This notebook is perfoming the same analysis found in [1_get_multiplier_LV_coverage.ipynb](1_get_multiplier_LV_coverage.ipynb), which used human data. Here we're using _P. aeruginosa_ data.
```
%load_ext autoreload
%autoreload 2
import os
import random
import textwrap
import scipy
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
import rpy2.robjects as ro
from rpy2.robjects import pandas2ri
from rpy2.robjects.conversion import localconverter
from ponyo import utils
from generic_expression_patterns_modules import lv
# Get data directory containing gene summary data
base_dir = os.path.abspath(os.path.join(os.getcwd(), "../"))
data_dir = os.path.join(base_dir, "pseudomonas_analysis")
# Read in config variables
config_filename = os.path.abspath(
os.path.join(base_dir, "configs", "config_pseudomonas_33245.tsv")
)
params = utils.read_config(config_filename)
local_dir = params["local_dir"]
project_id = params["project_id"]
quantile_threshold = 0.97
# Output file
nonzero_figure_filename = "nonzero_LV_coverage_multiPLIER_pa.svg"
highweight_figure_filename = "highweight_LV_coverage_multiPLIER_pa.svg"
```
## Load data
```
# Get gene summary file
summary_data_filename = os.path.join(
data_dir, f"generic_gene_summary_{project_id}_cbrB_v_WT.tsv"
)
# Load gene summary data
data = pd.read_csv(summary_data_filename, sep="\t", index_col=0, header=0)
# Check that genes are unique since we will be using them as dictionary keys below
assert data.shape[0] == len(data["Gene ID"].unique())
# Load multiplier models
# Converted formatted pickle files (loaded using phenoplier environment) from
# https://github.com/greenelab/phenoplier/blob/master/nbs/01_preprocessing/005-multiplier_recount2_models.ipynb
# into .tsv files
multiplier_model_z = pd.read_csv(
"multiplier_Pa_model_z.tsv", sep="\t", index_col=0, header=0
)
# Get a rough sense for how many genes contribute to a given LV
# (i.e. how many genes have a value != 0 for a given LV)
# Notice that multiPLIER is a sparse model
(multiplier_model_z != 0).sum().sort_values(ascending=True)
```
## Get gene data
Define generic genes based on simulated gene ranking. Refer to [figure](https://github.com/greenelab/generic-expression-patterns/blob/master/pseudomonas_analysis/gene_ranking_logFC.svg) as a guide.
**Definitions:**
* Generic genes: `Percentile (simulated) >= 80`
(Having a high rank indicates that these genes are consistently changed across simulated experiments.)
* Other genes: `Percentile (simulated) < 80`
(Having a lower rank indicates that these genes are not consistently changed across simulated experiments - i.e. the genes are specifically changed in an experiment. It could also indicate genes that are consistently unchanged.)
```
generic_threshold = 80
dict_genes = lv.get_generic_specific_genes(data, generic_threshold)
# Check overlap between multiplier genes and our genes
multiplier_genes = list(multiplier_model_z.index)
our_genes = list(data.index)
shared_genes = set(our_genes).intersection(multiplier_genes)
print(len(our_genes))
print(len(shared_genes))
# Drop gene ids not used in multiplier analysis
processed_dict_genes = lv.process_generic_specific_gene_lists(
dict_genes, multiplier_model_z
)
# Check numbers add up
assert len(shared_genes) == len(processed_dict_genes["generic"]) + len(
processed_dict_genes["other"]
)
```
## Get coverage of LVs
For each gene (generic or other) we want to find:
1. The number of LVs that gene is present
2. The number of LVs that the gene contributes a lot to (i.e. the gene is highly weighted within that LV)
### Nonzero LV coverage
```
dict_nonzero_coverage = lv.get_nonzero_LV_coverage(
processed_dict_genes, multiplier_model_z
)
# Check genes mapped correctly
assert processed_dict_genes["generic"][0] in dict_nonzero_coverage["generic"].index
assert len(dict_nonzero_coverage["generic"]) == len(processed_dict_genes["generic"])
assert len(dict_nonzero_coverage["other"]) == len(processed_dict_genes["other"])
```
### High weight LV coverage
```
# Quick look at the distribution of gene weights per LV
sns.distplot(multiplier_model_z["LV3"], kde=False)
plt.yscale("log")
dict_highweight_coverage = lv.get_highweight_LV_coverage(
processed_dict_genes, multiplier_model_z, quantile_threshold
)
# Check genes mapped correctly
assert processed_dict_genes["generic"][0] in dict_highweight_coverage["generic"].index
assert len(dict_highweight_coverage["generic"]) == len(processed_dict_genes["generic"])
assert len(dict_highweight_coverage["other"]) == len(processed_dict_genes["other"])
```
### Assemble LV coverage and plot
```
all_coverage = []
for gene_label in dict_genes.keys():
merged_df = pd.DataFrame(
dict_nonzero_coverage[gene_label], columns=["nonzero LV coverage"]
).merge(
pd.DataFrame(
dict_highweight_coverage[gene_label], columns=["highweight LV coverage"]
),
left_index=True,
right_index=True,
)
merged_df["gene type"] = gene_label
all_coverage.append(merged_df)
all_coverage_df = pd.concat(all_coverage)
all_coverage_df = lv.assemble_coverage_df(
processed_dict_genes, dict_nonzero_coverage, dict_highweight_coverage
)
all_coverage_df.head()
# Plot coverage distribution given list of generic coverage, specific coverage
nonzero_fig = sns.boxplot(
data=all_coverage_df,
x="gene type",
y="nonzero LV coverage",
notch=True,
palette=["#2c7fb8", "lightgrey"],
)
nonzero_fig.set_xlabel(None)
nonzero_fig.set_xticklabels(
["generic genes", "other genes"], fontsize=14, fontname="Verdana"
)
nonzero_fig.set_ylabel(
textwrap.fill("Number of LVs", width=30), fontsize=14, fontname="Verdana"
)
nonzero_fig.tick_params(labelsize=14)
nonzero_fig.set_title(
"Number of LVs genes are present in", fontsize=16, fontname="Verdana"
)
# Plot coverage distribution given list of generic coverage, specific coverage
highweight_fig = sns.boxplot(
data=all_coverage_df,
x="gene type",
y="highweight LV coverage",
notch=True,
palette=["#2c7fb8", "lightgrey"],
)
highweight_fig.set_xlabel(None)
highweight_fig.set_xticklabels(
["generic genes", "other genes"], fontsize=14, fontname="Verdana"
)
highweight_fig.set_ylabel(
textwrap.fill("Number of LVs", width=30), fontsize=14, fontname="Verdana"
)
highweight_fig.tick_params(labelsize=14)
highweight_fig.set_title(
"Number of LVs genes contribute highly to", fontsize=16, fontname="Verdana"
)
```
## Calculate statistics
* Is the reduction in generic coverage significant?
* Is the difference between generic versus other genes signficant?
```
# Test: mean number of LVs generic genes present in vs mean number of LVs that generic gene is high weight in
# (compare two blue boxes between plots)
generic_nonzero = all_coverage_df[all_coverage_df["gene type"] == "generic"][
"nonzero LV coverage"
].values
generic_highweight = all_coverage_df[all_coverage_df["gene type"] == "generic"][
"highweight LV coverage"
].values
(stats, pvalue) = scipy.stats.ttest_ind(generic_nonzero, generic_highweight)
print(pvalue)
# Test: mean number of LVs generic genes present in vs mean number of LVs other genes high weight in
# (compare blue and grey boxes in high weight plot)
other_highweight = all_coverage_df[all_coverage_df["gene type"] == "other"][
"highweight LV coverage"
].values
generic_highweight = all_coverage_df[all_coverage_df["gene type"] == "generic"][
"highweight LV coverage"
].values
(stats, pvalue) = scipy.stats.ttest_ind(other_highweight, generic_highweight)
print(pvalue)
# Check that coverage of other and generic genes across all LVs is NOT signficantly different
# (compare blue and grey boxes in nonzero weight plot)
other_nonzero = all_coverage_df[all_coverage_df["gene type"] == "other"][
"nonzero LV coverage"
].values
generic_nonzero = all_coverage_df[all_coverage_df["gene type"] == "generic"][
"nonzero LV coverage"
].values
(stats, pvalue) = scipy.stats.ttest_ind(other_nonzero, generic_nonzero)
print(pvalue)
```
## Get LVs that generic genes are highly weighted in
Since we are using quantiles to get high weight genes per LV, each LV has the same number of high weight genes. For each set of high weight genes, we will get the proportion of generic vs other genes. We will select the LVs that have a high proportion of generic genes to examine.
```
# Get proportion of generic genes per LV
prop_highweight_generic_dict = lv.get_prop_highweight_generic_genes(
processed_dict_genes, multiplier_model_z, quantile_threshold
)
# Return selected rows from summary matrix
multiplier_model_summary = pd.read_csv(
"multiplier_Pa_model_summary.tsv", sep="\t", index_col=0, header=0
)
lv.create_LV_df(
prop_highweight_generic_dict,
multiplier_model_summary,
0.5,
"Generic_LV_summary_table_Pa.tsv",
)
# Plot distribution of weights for these nodes
node = "LV30"
lv.plot_dist_weights(
node,
multiplier_model_z,
shared_genes,
20,
all_coverage_df,
f"weight_dist_{node}.svg",
)
```
## Save
```
# Save plot
nonzero_fig.figure.savefig(
nonzero_figure_filename,
format="svg",
bbox_inches="tight",
transparent=True,
pad_inches=0,
dpi=300,
)
# Save plot
highweight_fig.figure.savefig(
highweight_figure_filename,
format="svg",
bbox_inches="tight",
transparent=True,
pad_inches=0,
dpi=300,
)
```
**Takeaway:**
* In the first nonzero boxplot, generic and other genes are present in a similar number of LVs. This isn't surprising since the number of genes that contribute to each LV is <1000.
* In the second highweight boxplot, other genes and generic genes are highly weighted in a similar number of LVs, but overall generic genes contribute a lot to very few LVs. Despite the t-test returning a significant p-value for the difference, the distribution looks very similar.
* The only associated LV is related to type IV secretion system, which is a complex responsible for a broad range of functions: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3070162/
Compared to the trend found using [human data](1_get_multiplier_LV_coverage.ipynb), perhaps this indicates that generic genes have similar behavior/roles across organisms.
| github_jupyter |
**Exploratory Data Analysis**
```
import pandas as pd
import seaborn as sns
import numpy as np
from rdkit import Chem
from rdkit.Chem.Descriptors import MolLogP
from tqdm.auto import tqdm
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import TSNE
from umap import UMAP
```
Make Pandas use Seaborn for plots
```
sns.set()
```
Enable Pandas progress_apply
```
tqdm.pandas()
```
A few settings to make plots look better. Here's a link to [my gist](https://gist.github.com/PatWalters/1b7600dd6d195e2cb8dded8454e1777e) with a bunch of tricks for making Seaborn plots look better.
```
sns.set(rc={'figure.figsize': (10, 10)})
sns.set_style('whitegrid')
sns.set_context('talk')
```
Examine solubility data from https://www.nature.com/articles/s41597-019-0151-1
```
df = pd.read_csv("curated-solubility-dataset.csv")
df
```
- G1 - occurs once in the dataset
- G2 - occurs twice in the dataset, SD > 0.5
- G3 - occurs twice in the dataset, SD <= 0.5
- G4 - occurs three or more times in the dataset, SD > 0.5
- G5 - occurs three or more times in the dataset, SD <= 0.5
```
df.Group.value_counts()
df.Group.value_counts(normalize=True)
df.Group.value_counts().to_frame().plot(kind="bar")
df_ok = df.query("Group in ['G3','G5']").copy()
df_ok.shape
```
Plot a frequency distribution for the solubility data using Seaborn's [displot](https://seaborn.pydata.org/generated/seaborn.displot.html)
Experiment with
- kind = "kde"
- kind = "hist"
- kind = "ecdf"
```
sns.displot(x=df_ok.Solubility,kind="hist",kde=True, height=8)
```
Let's bin the data
- >200 uM (green)
- 30-200 uM (yellow)
- <30 uM (red)
```
bins = [np.log10(x*1e-6) for x in [30,200]]
bins = [-100] + bins + [100]
df_ok['bin'] = pd.cut(df.Solubility,bins=bins,labels=["Low","Medium","High"])
color_map_3 = {"Low":"red","Medium":"yellow","High":"green"}
g = sns.displot(x="Solubility",kind="hist",kde=True, height=8, hue="bin",data=df_ok,palette=color_map_3)
g.fig.legends[0].set_title("Solubility Bin")
ax = sns.boxplot(x="bin",y="Solubility",data=df_ok)
ax.set_xlabel("Solubility Bin")
df_ok['is_sol'] = [True if x == "High" else False for x in df_ok.bin]
color_map_2 = {False :"red", True: "green"}
g = sns.displot(x="Solubility",kind="hist",kde=True, height=8, hue="is_sol",data=df_ok,palette=color_map_2)
g.fig.legends[0].set_title("Solubility Bin")
desc_columns = df_ok.select_dtypes([int,float]).columns[3:]
scaler = StandardScaler()
scaled_descriptors = scaler.fit_transform(df_ok[desc_columns])
scaled_descriptors
```
Use Truncated Stochastic Neighbor Embedding ([TSNE](https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html)) to view the relationship between solubility and our descriptors.
```
tsne = TSNE()
tsne_crds = tsne.fit_transform(scaled_descriptors)
ax = sns.scatterplot(x=tsne_crds[:,0],y=tsne_crds[:,1],hue=df_ok.bin,palette=color_map_3)
ax.get_legend().set_title("Solubility Bin")
```
Some will argue that [Uniform Manifold Approximation](https://umap-learn.readthedocs.io/en/latest/) (UMAP) is a better way to do this. I'm not particularly partial to either, but here's how to do the same thing with UMAP. As you can see, the APIs are very similar.
```
umap = UMAP()
umap_crds = umap.fit_transform(scaled_descriptors)
ax = sns.scatterplot(x=umap_crds[:,0],y=umap_crds[:,1],hue=df_ok.bin,palette=color_map_3)
ax.get_legend().set_title("Solubility Bin")
```
Note that we are only using 17 descriptors here. In this case, we're ok running TSNE on our data. If we have more than 50 dimensions, it's usually a good idea to run [PCA](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) on the dataset before running TSNE.
```
df_ok.to_csv("solubility_data_ok.csv",index=False)
```
| github_jupyter |
# Multi-Class Single-Label classification
The natural extension of binary classification is a multi-class classification task.
We first approach multi-class single-label classification, which makes the assumption that each example is assigned to one and only one label.
We use the *Iris flower* data set, which consists of a classification into three mutually-exclusive classes; call these $A$, $B$ and $C$.
While one could train three unary predicates $A(x)$, $B(x)$ and $C(x)$, it turns out to be more effective if this problem is modelled by a single binary predicate $P(x,l)$, where $l$ is a variable denoting a multi-class label, in this case classes $A$, $B$ or $C$.
- This syntax allows one to write statements quantifying over the classes, e.g. $\forall x ( \exists l ( P(x,l)))$.
- Since the classes are mutually-exclusive in this case, the output layer of the $\mathtt{MLP}$ representing $P(x,l)$ will be a $\mathtt{softmax}$ layer, instead of a $\mathtt{sigmoid}$ function, to learn the probability of $A$, $B$ and $C$. This avoids writing additional constraints $\lnot (A(x) \land B(x))$, $\lnot (A(x) \land C(x))$, ...
```
import logging; logging.basicConfig(level=logging.INFO)
import tensorflow as tf
import pandas as pd
import logictensornetworks as ltn
```
# Data
Load the iris dataset: 50 samples from each of three species of iris flowers (setosa, virginica, versicolor), measured with four features.
```
df_train = pd.read_csv("iris_training.csv")
df_test = pd.read_csv("iris_test.csv")
print(df_train.head(5))
labels_train = df_train.pop("species")
labels_test = df_test.pop("species")
batch_size = 64
ds_train = tf.data.Dataset.from_tensor_slices((df_train,labels_train)).batch(batch_size)
ds_test = tf.data.Dataset.from_tensor_slices((df_test,labels_test)).batch(batch_size)
```
# LTN
Predicate with softmax `P(x,class)`
```
class MLP(tf.keras.Model):
"""Model that returns logits."""
def __init__(self, n_classes, hidden_layer_sizes=(16,16,8)):
super(MLP, self).__init__()
self.denses = [tf.keras.layers.Dense(s, activation="elu") for s in hidden_layer_sizes]
self.dense_class = tf.keras.layers.Dense(n_classes)
self.dropout = tf.keras.layers.Dropout(0.2)
def call(self, inputs, training=False):
x = inputs
for dense in self.denses:
x = dense(x)
x = self.dropout(x, training=training)
return self.dense_class(x)
logits_model = MLP(4)
p = ltn.Predicate(ltn.utils.LogitsToPredicateModel(logits_model,single_label=True))
```
Constants to index/iterate on the classes
```
class_A = ltn.Constant(0, trainable=False)
class_B = ltn.Constant(1, trainable=False)
class_C = ltn.Constant(2, trainable=False)
```
Operators and axioms
```
Not = ltn.Wrapper_Connective(ltn.fuzzy_ops.Not_Std())
And = ltn.Wrapper_Connective(ltn.fuzzy_ops.And_Prod())
Or = ltn.Wrapper_Connective(ltn.fuzzy_ops.Or_ProbSum())
Implies = ltn.Wrapper_Connective(ltn.fuzzy_ops.Implies_Reichenbach())
Forall = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMeanError(p=2),semantics="forall")
formula_aggregator = ltn.Wrapper_Formula_Aggregator(ltn.fuzzy_ops.Aggreg_pMeanError(p=2))
@tf.function
def axioms(features, labels, training=False):
x_A = ltn.Variable("x_A",features[labels==0])
x_B = ltn.Variable("x_B",features[labels==1])
x_C = ltn.Variable("x_C",features[labels==2])
axioms = [
Forall(x_A,p([x_A,class_A],training=training)),
Forall(x_B,p([x_B,class_B],training=training)),
Forall(x_C,p([x_C,class_C],training=training))
]
sat_level = formula_aggregator(axioms).tensor
return sat_level
```
Initialize all layers and the static graph
```
for features, labels in ds_test:
print("Initial sat level %.5f"%axioms(features,labels))
break
```
# Training
Define the metrics. While training, we measure:
1. The level of satisfiability of the Knowledge Base of the training data.
1. The level of satisfiability of the Knowledge Base of the test data.
3. The training accuracy.
4. The test accuracy.
```
metrics_dict = {
'train_sat_kb': tf.keras.metrics.Mean(name='train_sat_kb'),
'test_sat_kb': tf.keras.metrics.Mean(name='test_sat_kb'),
'train_accuracy': tf.keras.metrics.CategoricalAccuracy(name="train_accuracy"),
'test_accuracy': tf.keras.metrics.CategoricalAccuracy(name="test_accuracy")
}
```
Define the training and test step
```
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
@tf.function
def train_step(features, labels):
# sat and update
with tf.GradientTape() as tape:
sat = axioms(features, labels, training=True)
loss = 1.-sat
gradients = tape.gradient(loss, p.trainable_variables)
optimizer.apply_gradients(zip(gradients, p.trainable_variables))
sat = axioms(features, labels) # compute sat without dropout
metrics_dict['train_sat_kb'](sat)
# accuracy
predictions = logits_model(features)
metrics_dict['train_accuracy'](tf.one_hot(labels,3),predictions)
@tf.function
def test_step(features, labels):
# sat
sat = axioms(features, labels)
metrics_dict['test_sat_kb'](sat)
# accuracy
predictions = logits_model(features)
metrics_dict['test_accuracy'](tf.one_hot(labels,3),predictions)
```
Train
```
import commons
EPOCHS = 500
commons.train(
EPOCHS,
metrics_dict,
ds_train,
ds_test,
train_step,
test_step,
csv_path="iris_results.csv",
track_metrics=20
)
```
| github_jupyter |
```
import Bio.PDB as PDB
import numpy as np
import freesasa
import glob
from Bio.PDB.DSSP import DSSP
```
# Calculate parameters
```
surfaces = []
rsas = []
surface_seq = []
for file in glob.glob("data/training/crystal_structs/*.pdb"):
# parse the pdb file
p = PDB.PDBParser(QUIET=True)
s = p.get_structure(file, file)
# get the surface area
structure = freesasa.Structure(file)
result = freesasa.calc(structure)
area_classes = freesasa.classifyResults(result, structure)
# save this into numpy sheet result.totalArea()
surface = result.totalArea()
surfaces.append(surface)
# get the sequence length
seq = 0
for chain in s.get_chains():
seq += len([_ for _ in chain.get_residues() if PDB.is_aa(_)])
# save into numpy sheet rsa
rsas.append(seq)
# caculate the surface/sequence
surface_seq.append(surface/seq)
```
## Exracting secondary structure
We distinguished between the a, b and
c residues that are buried in the protein core (solvent accessibility
20%), moderately buried (between 20% and 50%), and solvent
exposed ( 50%).
```
p = PDB.PDBParser()
structure = p.get_structure(file, "data/training/crystal_structs/A0A140NA.pdb")
model = structure[0]
dssp = DSSP(model, "data/training/crystal_structs/A0A140NA.pdb")
# DSSP data is accessed by a tuple (chain_id, res_id)
a_key = list(dssp.keys())[2]
all_residues = list(dssp.keys())
dssp_info = [dssp[i] for i in all_residues]
asa = [dssp[i][3] for i in all_residues]
burried = [0 if i <= 0.2 else 2 if i >= 0.5 else 1 for i in asa]
secondary_q8 = [dssp[i][2] for i in all_residues]
# helix = H, G, I
# beta = B, E
# loop = rest
# 0 is alpha, 1 is beta, 2 is coil
secondary_q3 = [0 if i in ['H', 'G', 'I'] else 1 if i in ['B', 'E'] else 2 for i in secondary_q8]
count_helices = secondary_q3.count(0)
count_sheets = secondary_q3.count(1)
# calculate fraction of buried beta residues
# total amount residues
# list of moderatly and
# calculate fraction of moderately buried beta residues
mod_beta = 0
for i in range(len(burried)):
if burried[i] == 1 and secondary_q3[i] == 1:
mod_beta += 1
frac_mod_beta = mod_beta / count_sheets
print(frac_mod_beta)
# calc fraction of moderately buried alfa residues
mod_alfa = 0
for i in range(len(burried)):
if burried[i] == 1 and secondary_q3[i] == 0:
mod_alfa += 1
frac_mod_alfa = mod_alfa / count_helices
print(frac_mod_alfa)
# calc fraction of exposed a residues
exp_alfa = 0
for i in range(len(burried)):
if burried[i] == 2 and secondary_q3[i] == 0:
exp_alfa += 1
frac_exp_alfa = exp_alfa / count_helices
print(frac_exp_alfa)
# calc fraction of each of the 20 amino acid types
# calc fraction of K minus fraction of R
# fraction of negatively charged residues
# fraction of charged residues
# fraction of positively minus negatively charged residues
for file in glob.glob("data/training/crystal_structs/*.pdb"):
p = PDB.PDBParser()
structure = p.get_structure(file, file)
print(structure)
model = structure[0]
dssp = DSSP(model, file)
# DSSP data is accessed by a tuple (chain_id, res_id)
a_key = list(dssp.keys())[2]
print(dssp[a_key])
```
## Saving file
```
a = np.array(surfaces)
b = np.array(rsas)
c = np.array(surface_seq)
arr = np.column_stack((a, b, c))
np.savetxt("parameters.csv", arr, delimiter=",")
```
| github_jupyter |
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
print(tf.__version__)
import numpy as np
import matplotlib.pyplot as plt
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/Sunspots.csv \
-O /tmp/sunspots.csv
import csv
time_step = []
sunspots = []
with open('/tmp/sunspots.csv') as csvfile:
reader = csv.reader(csvfile, delimiter=',')
next(reader)
for row in reader:
sunspots.append(float(row[2]))
time_step.append(int(row[0]))
series = np.array(sunspots)
time = np.array(time_step)
plt.figure(figsize=(10, 6))
plot_series(time, series)
series = np.array(sunspots)
time = np.array(time_step)
plt.figure(figsize=(10, 6))
plot_series(time, series)
split_time = 3000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 30
batch_size = 32
shuffle_buffer_size = 1000
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
series = tf.expand_dims(series, axis=-1)
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size + 1))
ds = ds.shuffle(shuffle_buffer)
ds = ds.map(lambda w: (w[:-1], w[1:]))
return ds.batch(batch_size).prefetch(1)
def model_forecast(model, series, window_size):
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size))
ds = ds.batch(32).prefetch(1)
forecast = model.predict(ds)
return forecast
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
window_size = 64
batch_size = 256
train_set = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
print(train_set)
print(x_train.shape)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=32, kernel_size=5,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.LSTM(64, return_sequences=True),
tf.keras.layers.LSTM(64, return_sequences=True),
tf.keras.layers.Dense(30, activation="relu"),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 400)
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(lr=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set, epochs=100, callbacks=[lr_schedule])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 60])
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
train_set = windowed_dataset(x_train, window_size=60, batch_size=100, shuffle_buffer=shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=60, kernel_size=5,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.LSTM(60, return_sequences=True),
tf.keras.layers.LSTM(60, return_sequences=True),
tf.keras.layers.Dense(30, activation="relu"),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 400)
])
optimizer = tf.keras.optimizers.SGD(lr=1e-5, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set,epochs=500)
rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size)
rnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, rnn_forecast)
tf.keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
loss=history.history['loss']
epochs=range(len(loss)) # Get number of epochs
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(epochs, loss, 'r')
plt.title('Training loss')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["Loss"])
plt.figure()
zoomed_loss = loss[200:]
zoomed_epochs = range(200,500)
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(zoomed_epochs, zoomed_loss, 'r')
plt.title('Training loss')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["Loss"])
plt.figure()
print(rnn_forecast)
```
| github_jupyter |
# Tensor Creation
```
from __future__ import print_function
import torch
import numpy as np
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
from datetime import date
date.today()
author = "kyubyong. https://github.com/Kyubyong/pytorch_exercises"
torch.__version__
np.__version__
```
NOTE on notation
_x, _y, _z, ...: NumPy 0-d or 1-d arrays
_X, _Y, _Z, ...: NumPy 2-d or higer dimensional arrays
x, y, z, ...: 0-d or 1-d tensors
X, Y, Z, ...: 2-d or higher dimensional tensors
## From Python list
Q1. Convert a python list `a` into an int32 tensor.
```
a = [[1, 2, 3], [4, 5, 6]]
X = torch.IntTensor(a)
print(X)
```
Q2. Create a float32 tensor of shape (3, 2), filled with 10.
```
X = torch.FloatTensor(3, 2).fill_(10)
print(X)
```
## From Numpy Array
Q3. Convert a NumPy array _x into a tensor.
```
_x = np.array([1, 2, 3])
x = torch.from_numpy(_x)
print(x)
```
## Ones and zeros
Q4. Create a 3-by-3 2-D tensor with ones on the diagonal and zeros elsewhere.
```
X = torch.eye(3)
print(X)
assert np.array_equal(X.numpy(), np.eye(3))
```
Q5. Create a tensor with shape of (3, 2) filled with 1's.
```
X = torch.ones(3, 2)
print(X)
assert np.array_equal(X.numpy(), np.ones([3, 2]))
```
Q6. Create a tensor with shape of (3, 2) filled with 0's.
```
X = torch.zeros(3, 2)
print(X)
assert np.array_equal(X.numpy(), np.zeros([3, 2]))
```
## Numerical ranges
Q7. Create a 1D tensor which looks like 2, 4, 6, 8, ..., 100.
```
x = torch.arange(2, 101, 2) # Unlike numpy api, torch arange function requires the start argument.
print(x)
assert np.array_equal(x.numpy(), np.arange(2, 101, 2))
```
Q8. Create a 1D tensor of 50 evenly spaced elements between 3. and 10., inclusive.
```
x = torch.linspace(3, 10, 50)
print(x)
assert np.allclose(x.numpy(), np.linspace(3., 10, 50))
```
Q9. Create a 1-D tensor of 50 element spaced evenly on a log scale between 3. and 10.
```
x = torch.logspace(3, 10, 50)
assert np.allclose(x.numpy(), np.logspace(3., 10., 50))
plt.figure()
plt.scatter(range(len(_x)), _x)
plt.show()
```
## Matrix
Q10. Get the diagonal of X.
```
X = torch.Tensor([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]])
y = X.diag()
print(y)
assert np.array_equal(y.numpy(), np.diag(X.numpy()))
```
Q11. Get the 1th diagonal of X.
```
X = torch.Tensor([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]])
y = X.diag(1)
print(y)
assert np.array_equal(y.numpy(), np.diag(X.numpy(), 1))
```
Q12. Get the sum of the elements of the diagonal of X.
```
X = torch.Tensor([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]])
y = X.trace()
print(y)
assert np.array_equal(y, np.trace(X.numpy()))
```
Q13. Return the lower triangular part of X, the other elements are set to 0.
```
X = torch.Tensor([[1,2,3], [4,5,6], [7,8,9]])
Y = X.tril()
print(Y)
assert np.array_equal(Y.numpy(), np.tril(X.numpy()))
```
Q14. Return the upper triangular part of X, the other elements are set to 0.
```
X = torch.Tensor([[1,2,3], [4,5,6], [7,8,9]])
Y = X.triu()
print(Y)
assert np.array_equal(Y.numpy(), np.triu(X.numpy()))
```
## Save and Load
Q15. Save X to `temp.pt`.
```
X = torch.randn(1, 10)
torch.save(X, 'temp.pt')
```
Q16. Load the `temp.pt` you just saved.
```
X2 = torch.load('temp.pt')
print(X2)
```
Q17. Print X2 such that all elements are displayed with precision=1 (without actually changing the values of X2).
```
torch.set_printoptions(precision=1)
print(X2)
```
| github_jupyter |
```
%reload_ext nb_black
import json
import pandas as pd
with open("../secrets.json", "r") as f:
secrets = json.load(f)
import spotipy
import spotipy.util as util
from spotipy.oauth2 import SpotifyClientCredentials
import spotipy.oauth2 as oauth2
CLIENT_ID = secrets["spotify_client_id"]
CLIENT_SECRET = secrets["spotify_client_secret"]
credentials = oauth2.SpotifyClientCredentials(
client_id=CLIENT_ID, client_secret=CLIENT_SECRET
)
token = credentials.get_access_token()
sp = spotipy.Spotify(auth=token)
# track = "coldplay yellow"
# res = spotify.search(track, type="track", market="US", limit=1)
# print(res)
res = sp.categories(limit = 50)
cat_ids = []
for item in res['categories']['items']:
cat_ids.append(item['id'])
res = sp.category_playlists(category_id = cat_ids[0], limit = 10)
playlists_by_cat = {}
total
for cat in cat_ids:
res = sp.category_playlists(category_id = cat, limit = 10)
playlists = []
num_playlists_in_cat = 0
for item in res['playlists']['items']:
playlist_info = {}
i_name = item['name']
i_id = item['id']
num_tracks = item['tracks']['total']
i_uri = item['uri']
playlist_info['name'] = i_name
playlist_info['id'] = i_id
playlist_info['size'] = num_tracks
playlist_info['uri'] = i_uri
if num_tracks <= 100:
playlists.append(playlist_info)
num_playlists_in_cat+=1
print(num_playlists_in_cat)
playlists_by_cat[cat] = playlists
with open('../data/playlists_by_cat.json', 'w') as fp:
json.dump(playlists_by_cat, fp)
#quick count to see how many songs we're working with
total_tracks = 0
for item in playlists_by_cat:
for p in playlists_by_cat[item]:
#print(p['size'])
total_tracks+=p['size']
total_tracks
all_track_ids = []
for item in playlists_by_cat:
for p in playlists_by_cat[item]:
pl_id = p['id']
offset = 0
track_ids = []
while True:
response = sp.playlist_tracks(pl_id,
offset=offset,
fields='items.track.id,total',
additional_types=['track'])
#store the track ids from the playlist
for item in response['items']:
try:
track_ids.append( item['track']['id'])
except:
pass
offset = offset + len(response['items'])
print(offset, "/", response['total'])
if len(response['items']) == 0:
break
#Add the per_playlist ids to the master list
all_track_ids.extend(track_ids)
len(all_track_ids)
pd.DataFrame(all_track_ids).to_csv("gen_playlists_track_ids.csv")
track_ids_df = pd.DataFrame(all_track_ids)
track_ids_df.nunique() # nearly 3000 duplicates
track_ids_df = track_ids_df.drop_duplicates()
all_track_ids = list(track_ids_df[0])
all_track_ids
#this will take a minute or two. maybe change sleep to be a little shorter
track_info = []
import time
st = 0
end = len(all_track_ids)
# end = 200
step = 50
list(range(st, end, step))
for i in range(st, end, step):
print(i)
if len(all_track_ids) - i >= step - 1:
response = sp.tracks(all_track_ids[i : i + step])
else:
response = sp.tracks(all_track_ids[i:])
time.sleep(1)
track_info.append(response)
info_dict = {}
for batch in track_info:
type(batch)
for ind in batch["tracks"]:
track_artist = ind["artists"][0]["name"]
track_name = ind["name"]
track_album = ind["album"]["name"]
track_popularity = ind["popularity"]
track_id = ind["id"]
info_dict[track_id] = [track_artist, track_name, track_album, track_popularity]
len(info_dict)
tdf = pd.DataFrame(info_dict).T
tdf = tdf.reset_index()
# tdf.columns
tdf = tdf.rename(
columns={"index": "id", 0: "artist", 1: "title", 2: "album", 3: "popularity"}
)
tdf.head()
tdf.to_csv("../data/gen_track_info.csv")
track_features = []
import time
st = 0
end = len(all_track_ids)
# end = 200
step = 100
list(range(st, end, step))
for i in range(st, end, step):
print(i)
if len(all_track_ids) - i >= step - 1:
response = sp.audio_features(all_track_ids[i : i + step])
else:
response = sp.audio_features(all_track_ids[i:])
time.sleep(0.3)
track_features.append(response)
dfs = []
for item in track_features:
df = pd.DataFrame.from_dict(item)
dfs.append(df)
study_track_features = pd.concat(dfs)
study_track_features.head()
study_track_features.to_csv("../data/gen_track_features.csv")
gen_track_info = tdf.copy()
gen_track_features = study_track_features.copy()
gen_track_features.drop(columns=["track_href", "analysis_url", "uri", "type"], inplace=True)
full_df = gen_track_info.merge(gen_track_features, on="id", how="outer")
full_df.shape
full_df.to_csv("../data/gen_playlist_tracks_full.csv")
full_df.hist(figsize=(15, 10), bins = 30)
print(json.dumps(res, indent=4, sort_keys=True))
```
| github_jupyter |
```
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Introduction
In this notebook, we will demonstrate how Google Sheets can be used as a simple medium for managing, updating, and evaluating Intents and Training Phrases in Dialogflow CX.
Specifically, we will show how to `update` **_Existing Intents_** and Training Phrases in Dialogflow CX using Google Sheets as a Source
## Prerequisites
- Ensure you have a GCP Service Account key with the Dialogflow API Admin privileges assigned to it
```
#If you haven't already, make sure you install the `dfcx-scrapi` library
!pip install dfcx-scrapi
```
# Imports
```
import pandas as pd
from dfcx_scrapi.tools.copy_util import CopyUtil
from dfcx_scrapi.tools.dataframe_functions import DataframeFunctions
```
# User Inputs
In the next section, we will collect runtime variables needed to execute this notebook.
This should be the only cell of the notebook you need to edit in order for this notebook to run.
For this notebook, we'll need the following inputs:
- `creds_path`: Your local path to your GCP Service Account Credentials
- `agent_id`: Your Dialogflow CX Agent ID in String format
- `google_sheet_name`: The name of your Google Sheet
- `google_sheet_tab_read`: The name of the tab in your Google Sheet to read the data from
```
creds_path = '<YOUR_CREDS_PATH_HERE>'
agent_id = '<YOUR_AGENT_ID_HERE>'
flow = '<YOUR_FLOW_DISPLAY_NAME>'
google_sheet_name = 'My Google Sheet Name'
google_sheet_tab_write = 'Write To My Tab Name'
```
# CX to Sheets - Filtered by Intents in Scope of a Flow
Here, we will demonstrate how to extract all of the Intents and Training Phrases associated with a specific `Flow` inside of a Dialogflow CX Agent.
In our previous notebook example, we extracted _ALL_ of the Intents and Training Phrases associated with the Agent.
But in some cases, you may only be interested in Intents that are _currently in use_ with `Flow A` or `Flow B`.
The following code allows you to easily extract that information and move it to a Google Sheet for review.
## Prerequisites
- In order for the `DataframeFunctions` class to interact with Google Sheets, you *must* share your Google Sheet with your Service Account email address.
```
cu = CopyUtil(creds_path=creds_path, agent_id=agent_id)
dffx = DataframeFunctions(creds_path)
flow_map = cu.flows.get_flows_map(reverse=True)
pages = cu.pages.list_pages(flow_map[flow])
resources = cu.get_page_dependencies(pages)
for key in resources.keys():
if key == 'intents':
intent_list = list(resources[key])
all_intents = cu.intents.list_intents()
final_intents = []
for intent in all_intents:
if intent.name in intent_list:
final_intents.append(intent)
df = pd.DataFrame()
for intent in final_intents:
df = df.append(cu.intents.intent_proto_to_dataframe(intent))
# Push DataFrame to Google Sheets
dffx.dataframe_to_sheets(google_sheet_name, google_sheet_tab_write, df)
print('Total # of Intents = {}'.format(df.intent.nunique()))
print('Total # of Training Phrases = {}'.format(df.tp.nunique()))
```
# Final Thoughts and Wrap-Up
You should see your Google Sheet is now updated with the Intents and Training Phrases from your Dialogflow CX Agent that are in scope of the `Flow` that you specified.
If you want to create _additional_ filters before pushing the data to Google Sheets, you can manipulate the `df` variable to do things like:
- Exclude 1 or more Intents
- Push Intents that contain > X # of Training Phrases to Tab A
- Push Intents that contain < Y # of Training Phrases to Tab B
| github_jupyter |
```
# import required dependencies
import sys
sys.path.insert(0, '../../../BERT-FAQ/')
from shared.utils import load_from_json
from shared.utils import dump_to_json
from shared.utils import make_dirs
from reranker import ReRanker
```
**1. Generating reranked results from Answer (BERT-Q-a)"**
```
# define output path
output_path="../../../BERT-FAQ/data/CovidFAQ/rank_results"
# define rank_field, w_t parameters
rank_field="BERT-Q-a"; w_t=10;
```
**query_type="user_query"; neg_type="hard"; loss_type='triplet'**
```
# define variables
query_type="user_query"; neg_type="hard"; loss_type='triplet'
# create instance of ReRanker class
r = ReRanker(rank_field=rank_field, w_t=w_t)
reranked_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
pred_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
# generate reranked results
bert_query_by_question = load_from_json(pred_output_path + '/bert_query_by_question.json')
reranked_query_by_question = r.get_reranked_results(bert_query_by_question)
dump_to_json(reranked_query_by_question, reranked_output_path + '/reranked_query_by_question.json')
bert_query_by_answer = load_from_json(pred_output_path + '/bert_query_by_answer.json')
reranked_query_by_answer = r.get_reranked_results(bert_query_by_answer)
dump_to_json(reranked_query_by_answer, reranked_output_path + '/reranked_query_by_answer.json')
bert_query_by_question_answer = load_from_json(pred_output_path + '/bert_query_by_question_answer.json')
reranked_query_by_question_answer = r.get_reranked_results(bert_query_by_question_answer)
dump_to_json(reranked_query_by_question_answer, reranked_output_path + '/reranked_query_by_question_answer.json')
bert_query_by_question_answer_concat = load_from_json(pred_output_path + '/bert_query_by_question_answer_concat.json')
reranked_query_by_question_answer_concat = r.get_reranked_results(bert_query_by_question_answer_concat)
dump_to_json(reranked_query_by_question_answer_concat, reranked_output_path + '/reranked_query_by_question_answer_concat.json')
```
**query_type="user_query"; neg_type="simple"; loss_type='triplet'**
```
# define variables
query_type="user_query"; neg_type="simple"; loss_type='triplet'
# create instance of ReRanker class
r = ReRanker(rank_field=rank_field, w_t=w_t)
reranked_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
pred_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
# generate reranked results
bert_query_by_question = load_from_json(pred_output_path + '/bert_query_by_question.json')
reranked_query_by_question = r.get_reranked_results(bert_query_by_question)
dump_to_json(reranked_query_by_question, reranked_output_path + '/reranked_query_by_question.json')
bert_query_by_answer = load_from_json(pred_output_path + '/bert_query_by_answer.json')
reranked_query_by_answer = r.get_reranked_results(bert_query_by_answer)
dump_to_json(reranked_query_by_answer, reranked_output_path + '/reranked_query_by_answer.json')
bert_query_by_question_answer = load_from_json(pred_output_path + '/bert_query_by_question_answer.json')
reranked_query_by_question_answer = r.get_reranked_results(bert_query_by_question_answer)
dump_to_json(reranked_query_by_question_answer, reranked_output_path + '/reranked_query_by_question_answer.json')
bert_query_by_question_answer_concat = load_from_json(pred_output_path + '/bert_query_by_question_answer_concat.json')
reranked_query_by_question_answer_concat = r.get_reranked_results(bert_query_by_question_answer_concat)
dump_to_json(reranked_query_by_question_answer_concat, reranked_output_path + '/reranked_query_by_question_answer_concat.json')
```
**query_type="faq"; neg_type="hard"; loss_type='triplet'**
```
# define variables
query_type="faq"; neg_type="hard"; loss_type='triplet'
# create instance of ReRanker class
r = ReRanker(rank_field=rank_field, w_t=w_t)
reranked_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
pred_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
# generate reranked results
bert_query_by_question = load_from_json(pred_output_path + '/bert_query_by_question.json')
reranked_query_by_question = r.get_reranked_results(bert_query_by_question)
dump_to_json(reranked_query_by_question, reranked_output_path + '/reranked_query_by_question.json')
bert_query_by_answer = load_from_json(pred_output_path + '/bert_query_by_answer.json')
reranked_query_by_answer = r.get_reranked_results(bert_query_by_answer)
dump_to_json(reranked_query_by_answer, reranked_output_path + '/reranked_query_by_answer.json')
bert_query_by_question_answer = load_from_json(pred_output_path + '/bert_query_by_question_answer.json')
reranked_query_by_question_answer = r.get_reranked_results(bert_query_by_question_answer)
dump_to_json(reranked_query_by_question_answer, reranked_output_path + '/reranked_query_by_question_answer.json')
bert_query_by_question_answer_concat = load_from_json(pred_output_path + '/bert_query_by_question_answer_concat.json')
reranked_query_by_question_answer_concat = r.get_reranked_results(bert_query_by_question_answer_concat)
dump_to_json(reranked_query_by_question_answer_concat, reranked_output_path + '/reranked_query_by_question_answer_concat.json')
```
**query_type="faq"; neg_type="simple"; loss_type='triplet'**
```
# define variables
query_type="faq"; neg_type="simple"; loss_type='triplet'
# create instance of ReRanker class
r = ReRanker(rank_field=rank_field, w_t=w_t)
reranked_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
pred_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
# generate reranked results
bert_query_by_question = load_from_json(pred_output_path + '/bert_query_by_question.json')
reranked_query_by_question = r.get_reranked_results(bert_query_by_question)
dump_to_json(reranked_query_by_question, reranked_output_path + '/reranked_query_by_question.json')
bert_query_by_answer = load_from_json(pred_output_path + '/bert_query_by_answer.json')
reranked_query_by_answer = r.get_reranked_results(bert_query_by_answer)
dump_to_json(reranked_query_by_answer, reranked_output_path + '/reranked_query_by_answer.json')
bert_query_by_question_answer = load_from_json(pred_output_path + '/bert_query_by_question_answer.json')
reranked_query_by_question_answer = r.get_reranked_results(bert_query_by_question_answer)
dump_to_json(reranked_query_by_question_answer, reranked_output_path + '/reranked_query_by_question_answer.json')
bert_query_by_question_answer_concat = load_from_json(pred_output_path + '/bert_query_by_question_answer_concat.json')
reranked_query_by_question_answer_concat = r.get_reranked_results(bert_query_by_question_answer_concat)
dump_to_json(reranked_query_by_question_answer_concat, reranked_output_path + '/reranked_query_by_question_answer_concat.json')
```
**query_type="user_query"; neg_type="hard"; loss_type='softmax'**
```
# define variables
query_type="user_query"; neg_type="hard"; loss_type='softmax'
# create instance of ReRanker class
r = ReRanker(rank_field=rank_field, w_t=w_t)
reranked_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
pred_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
# generate reranked results
bert_query_by_question = load_from_json(pred_output_path + '/bert_query_by_question.json')
reranked_query_by_question = r.get_reranked_results(bert_query_by_question)
dump_to_json(reranked_query_by_question, reranked_output_path + '/reranked_query_by_question.json')
bert_query_by_answer = load_from_json(pred_output_path + '/bert_query_by_answer.json')
reranked_query_by_answer = r.get_reranked_results(bert_query_by_answer)
dump_to_json(reranked_query_by_answer, reranked_output_path + '/reranked_query_by_answer.json')
bert_query_by_question_answer = load_from_json(pred_output_path + '/bert_query_by_question_answer.json')
reranked_query_by_question_answer = r.get_reranked_results(bert_query_by_question_answer)
dump_to_json(reranked_query_by_question_answer, reranked_output_path + '/reranked_query_by_question_answer.json')
bert_query_by_question_answer_concat = load_from_json(pred_output_path + '/bert_query_by_question_answer_concat.json')
reranked_query_by_question_answer_concat = r.get_reranked_results(bert_query_by_question_answer_concat)
dump_to_json(reranked_query_by_question_answer_concat, reranked_output_path + '/reranked_query_by_question_answer_concat.json')
```
**query_type="user_query"; neg_type="simple"; loss_type='softmax'**
```
# define variables
query_type="user_query"; neg_type="simple"; loss_type='softmax'
# create instance of ReRanker class
r = ReRanker(rank_field=rank_field, w_t=w_t)
reranked_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
pred_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
# generate reranked results
bert_query_by_question = load_from_json(pred_output_path + '/bert_query_by_question.json')
reranked_query_by_question = r.get_reranked_results(bert_query_by_question)
dump_to_json(reranked_query_by_question, reranked_output_path + '/reranked_query_by_question.json')
bert_query_by_answer = load_from_json(pred_output_path + '/bert_query_by_answer.json')
reranked_query_by_answer = r.get_reranked_results(bert_query_by_answer)
dump_to_json(reranked_query_by_answer, reranked_output_path + '/reranked_query_by_answer.json')
bert_query_by_question_answer = load_from_json(pred_output_path + '/bert_query_by_question_answer.json')
reranked_query_by_question_answer = r.get_reranked_results(bert_query_by_question_answer)
dump_to_json(reranked_query_by_question_answer, reranked_output_path + '/reranked_query_by_question_answer.json')
bert_query_by_question_answer_concat = load_from_json(pred_output_path + '/bert_query_by_question_answer_concat.json')
reranked_query_by_question_answer_concat = r.get_reranked_results(bert_query_by_question_answer_concat)
dump_to_json(reranked_query_by_question_answer_concat, reranked_output_path + '/reranked_query_by_question_answer_concat.json')
```
**query_type="faq"; neg_type="hard"; loss_type='softmax'**
```
# define variables
query_type="faq"; neg_type="hard"; loss_type='softmax'
# create instance of ReRanker class
r = ReRanker(rank_field=rank_field, w_t=w_t)
reranked_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
pred_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
# generate reranked results
bert_query_by_question = load_from_json(pred_output_path + '/bert_query_by_question.json')
reranked_query_by_question = r.get_reranked_results(bert_query_by_question)
dump_to_json(reranked_query_by_question, reranked_output_path + '/reranked_query_by_question.json')
bert_query_by_answer = load_from_json(pred_output_path + '/bert_query_by_answer.json')
reranked_query_by_answer = r.get_reranked_results(bert_query_by_answer)
dump_to_json(reranked_query_by_answer, reranked_output_path + '/reranked_query_by_answer.json')
bert_query_by_question_answer = load_from_json(pred_output_path + '/bert_query_by_question_answer.json')
reranked_query_by_question_answer = r.get_reranked_results(bert_query_by_question_answer)
dump_to_json(reranked_query_by_question_answer, reranked_output_path + '/reranked_query_by_question_answer.json')
bert_query_by_question_answer_concat = load_from_json(pred_output_path + '/bert_query_by_question_answer_concat.json')
reranked_query_by_question_answer_concat = r.get_reranked_results(bert_query_by_question_answer_concat)
dump_to_json(reranked_query_by_question_answer_concat, reranked_output_path + '/reranked_query_by_question_answer_concat.json')
```
**query_type="faq"; neg_type="simple"; loss_type='softmax'**
```
# define variables
query_type="faq"; neg_type="simple"; loss_type='softmax'
# create instance of ReRanker class
r = ReRanker(rank_field=rank_field, w_t=w_t)
reranked_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
pred_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
# generate reranked results
bert_query_by_question = load_from_json(pred_output_path + '/bert_query_by_question.json')
reranked_query_by_question = r.get_reranked_results(bert_query_by_question)
dump_to_json(reranked_query_by_question, reranked_output_path + '/reranked_query_by_question.json')
bert_query_by_answer = load_from_json(pred_output_path + '/bert_query_by_answer.json')
reranked_query_by_answer = r.get_reranked_results(bert_query_by_answer)
dump_to_json(reranked_query_by_answer, reranked_output_path + '/reranked_query_by_answer.json')
bert_query_by_question_answer = load_from_json(pred_output_path + '/bert_query_by_question_answer.json')
reranked_query_by_question_answer = r.get_reranked_results(bert_query_by_question_answer)
dump_to_json(reranked_query_by_question_answer, reranked_output_path + '/reranked_query_by_question_answer.json')
bert_query_by_question_answer_concat = load_from_json(pred_output_path + '/bert_query_by_question_answer_concat.json')
reranked_query_by_question_answer_concat = r.get_reranked_results(bert_query_by_question_answer_concat)
dump_to_json(reranked_query_by_question_answer_concat, reranked_output_path + '/reranked_query_by_question_answer_concat.json')
```
**2. Generating reranked results from Question (BERT-Q-q)"**
```
# define rank_field, w_t parameters
rank_field="BERT-Q-q"; w_t=10;
```
**query_type="user_query"; neg_type="hard"; loss_type='triplet'**
```
# define variables
query_type="user_query"; neg_type="hard"; loss_type='triplet'
# create instance of ReRanker class
r = ReRanker(rank_field=rank_field, w_t=w_t)
reranked_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
pred_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
# generate reranked results
bert_query_by_question = load_from_json(pred_output_path + '/bert_query_by_question.json')
reranked_query_by_question = r.get_reranked_results(bert_query_by_question)
dump_to_json(reranked_query_by_question, reranked_output_path + '/reranked_query_by_question.json')
bert_query_by_answer = load_from_json(pred_output_path + '/bert_query_by_answer.json')
reranked_query_by_answer = r.get_reranked_results(bert_query_by_answer)
dump_to_json(reranked_query_by_answer, reranked_output_path + '/reranked_query_by_answer.json')
bert_query_by_question_answer = load_from_json(pred_output_path + '/bert_query_by_question_answer.json')
reranked_query_by_question_answer = r.get_reranked_results(bert_query_by_question_answer)
dump_to_json(reranked_query_by_question_answer, reranked_output_path + '/reranked_query_by_question_answer.json')
bert_query_by_question_answer_concat = load_from_json(pred_output_path + '/bert_query_by_question_answer_concat.json')
reranked_query_by_question_answer_concat = r.get_reranked_results(bert_query_by_question_answer_concat)
dump_to_json(reranked_query_by_question_answer_concat, reranked_output_path + '/reranked_query_by_question_answer_concat.json')
```
**query_type="user_query"; neg_type="simple"; loss_type='triplet'**
```
# define variables
query_type="user_query"; neg_type="simple"; loss_type='triplet'
# create instance of ReRanker class
r = ReRanker(rank_field=rank_field, w_t=w_t)
reranked_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
pred_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
# generate reranked results
bert_query_by_question = load_from_json(pred_output_path + '/bert_query_by_question.json')
reranked_query_by_question = r.get_reranked_results(bert_query_by_question)
dump_to_json(reranked_query_by_question, reranked_output_path + '/reranked_query_by_question.json')
bert_query_by_answer = load_from_json(pred_output_path + '/bert_query_by_answer.json')
reranked_query_by_answer = r.get_reranked_results(bert_query_by_answer)
dump_to_json(reranked_query_by_answer, reranked_output_path + '/reranked_query_by_answer.json')
bert_query_by_question_answer = load_from_json(pred_output_path + '/bert_query_by_question_answer.json')
reranked_query_by_question_answer = r.get_reranked_results(bert_query_by_question_answer)
dump_to_json(reranked_query_by_question_answer, reranked_output_path + '/reranked_query_by_question_answer.json')
bert_query_by_question_answer_concat = load_from_json(pred_output_path + '/bert_query_by_question_answer_concat.json')
reranked_query_by_question_answer_concat = r.get_reranked_results(bert_query_by_question_answer_concat)
dump_to_json(reranked_query_by_question_answer_concat, reranked_output_path + '/reranked_query_by_question_answer_concat.json')
```
**query_type="faq"; neg_type="hard"; loss_type='triplet'**
```
# define variables
query_type="faq"; neg_type="hard"; loss_type='triplet'
# create instance of ReRanker class
r = ReRanker(rank_field=rank_field, w_t=w_t)
reranked_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
pred_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
# generate reranked results
bert_query_by_question = load_from_json(pred_output_path + '/bert_query_by_question.json')
reranked_query_by_question = r.get_reranked_results(bert_query_by_question)
dump_to_json(reranked_query_by_question, reranked_output_path + '/reranked_query_by_question.json')
bert_query_by_answer = load_from_json(pred_output_path + '/bert_query_by_answer.json')
reranked_query_by_answer = r.get_reranked_results(bert_query_by_answer)
dump_to_json(reranked_query_by_answer, reranked_output_path + '/reranked_query_by_answer.json')
bert_query_by_question_answer = load_from_json(pred_output_path + '/bert_query_by_question_answer.json')
reranked_query_by_question_answer = r.get_reranked_results(bert_query_by_question_answer)
dump_to_json(reranked_query_by_question_answer, reranked_output_path + '/reranked_query_by_question_answer.json')
bert_query_by_question_answer_concat = load_from_json(pred_output_path + '/bert_query_by_question_answer_concat.json')
reranked_query_by_question_answer_concat = r.get_reranked_results(bert_query_by_question_answer_concat)
dump_to_json(reranked_query_by_question_answer_concat, reranked_output_path + '/reranked_query_by_question_answer_concat.json')
```
**query_type="faq"; neg_type="simple"; loss_type='triplet'**
```
# define variables
query_type="faq"; neg_type="simple"; loss_type='triplet'
# create instance of ReRanker class
r = ReRanker(rank_field=rank_field, w_t=w_t)
reranked_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
pred_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
# generate reranked results
bert_query_by_question = load_from_json(pred_output_path + '/bert_query_by_question.json')
reranked_query_by_question = r.get_reranked_results(bert_query_by_question)
dump_to_json(reranked_query_by_question, reranked_output_path + '/reranked_query_by_question.json')
bert_query_by_answer = load_from_json(pred_output_path + '/bert_query_by_answer.json')
reranked_query_by_answer = r.get_reranked_results(bert_query_by_answer)
dump_to_json(reranked_query_by_answer, reranked_output_path + '/reranked_query_by_answer.json')
bert_query_by_question_answer = load_from_json(pred_output_path + '/bert_query_by_question_answer.json')
reranked_query_by_question_answer = r.get_reranked_results(bert_query_by_question_answer)
dump_to_json(reranked_query_by_question_answer, reranked_output_path + '/reranked_query_by_question_answer.json')
bert_query_by_question_answer_concat = load_from_json(pred_output_path + '/bert_query_by_question_answer_concat.json')
reranked_query_by_question_answer_concat = r.get_reranked_results(bert_query_by_question_answer_concat)
dump_to_json(reranked_query_by_question_answer_concat, reranked_output_path + '/reranked_query_by_question_answer_concat.json')
```
**query_type="user_query"; neg_type="hard"; loss_type='softmax'**
```
# define variables
query_type="user_query"; neg_type="hard"; loss_type='softmax'
# create instance of ReRanker class
r = ReRanker(rank_field=rank_field, w_t=w_t)
reranked_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
pred_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
# generate reranked results
bert_query_by_question = load_from_json(pred_output_path + '/bert_query_by_question.json')
reranked_query_by_question = r.get_reranked_results(bert_query_by_question)
dump_to_json(reranked_query_by_question, reranked_output_path + '/reranked_query_by_question.json')
bert_query_by_answer = load_from_json(pred_output_path + '/bert_query_by_answer.json')
reranked_query_by_answer = r.get_reranked_results(bert_query_by_answer)
dump_to_json(reranked_query_by_answer, reranked_output_path + '/reranked_query_by_answer.json')
bert_query_by_question_answer = load_from_json(pred_output_path + '/bert_query_by_question_answer.json')
reranked_query_by_question_answer = r.get_reranked_results(bert_query_by_question_answer)
dump_to_json(reranked_query_by_question_answer, reranked_output_path + '/reranked_query_by_question_answer.json')
bert_query_by_question_answer_concat = load_from_json(pred_output_path + '/bert_query_by_question_answer_concat.json')
reranked_query_by_question_answer_concat = r.get_reranked_results(bert_query_by_question_answer_concat)
dump_to_json(reranked_query_by_question_answer_concat, reranked_output_path + '/reranked_query_by_question_answer_concat.json')
```
**query_type="user_query"; neg_type="simple"; loss_type='softmax'**
```
# define variables
query_type="user_query"; neg_type="simple"; loss_type='softmax'
# create instance of ReRanker class
r = ReRanker(rank_field=rank_field, w_t=w_t)
reranked_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
pred_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
# generate reranked results
bert_query_by_question = load_from_json(pred_output_path + '/bert_query_by_question.json')
reranked_query_by_question = r.get_reranked_results(bert_query_by_question)
dump_to_json(reranked_query_by_question, reranked_output_path + '/reranked_query_by_question.json')
bert_query_by_answer = load_from_json(pred_output_path + '/bert_query_by_answer.json')
reranked_query_by_answer = r.get_reranked_results(bert_query_by_answer)
dump_to_json(reranked_query_by_answer, reranked_output_path + '/reranked_query_by_answer.json')
bert_query_by_question_answer = load_from_json(pred_output_path + '/bert_query_by_question_answer.json')
reranked_query_by_question_answer = r.get_reranked_results(bert_query_by_question_answer)
dump_to_json(reranked_query_by_question_answer, reranked_output_path + '/reranked_query_by_question_answer.json')
bert_query_by_question_answer_concat = load_from_json(pred_output_path + '/bert_query_by_question_answer_concat.json')
reranked_query_by_question_answer_concat = r.get_reranked_results(bert_query_by_question_answer_concat)
dump_to_json(reranked_query_by_question_answer_concat, reranked_output_path + '/reranked_query_by_question_answer_concat.json')
```
**query_type="faq"; neg_type="hard"; loss_type='softmax'**
```
# define variables
query_type="faq"; neg_type="hard"; loss_type='softmax'
# create instance of ReRanker class
r = ReRanker(rank_field=rank_field, w_t=w_t)
reranked_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
pred_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
# generate reranked results
bert_query_by_question = load_from_json(pred_output_path + '/bert_query_by_question.json')
reranked_query_by_question = r.get_reranked_results(bert_query_by_question)
dump_to_json(reranked_query_by_question, reranked_output_path + '/reranked_query_by_question.json')
bert_query_by_answer = load_from_json(pred_output_path + '/bert_query_by_answer.json')
reranked_query_by_answer = r.get_reranked_results(bert_query_by_answer)
dump_to_json(reranked_query_by_answer, reranked_output_path + '/reranked_query_by_answer.json')
bert_query_by_question_answer = load_from_json(pred_output_path + '/bert_query_by_question_answer.json')
reranked_query_by_question_answer = r.get_reranked_results(bert_query_by_question_answer)
dump_to_json(reranked_query_by_question_answer, reranked_output_path + '/reranked_query_by_question_answer.json')
bert_query_by_question_answer_concat = load_from_json(pred_output_path + '/bert_query_by_question_answer_concat.json')
reranked_query_by_question_answer_concat = r.get_reranked_results(bert_query_by_question_answer_concat)
dump_to_json(reranked_query_by_question_answer_concat, reranked_output_path + '/reranked_query_by_question_answer_concat.json')
```
**query_type="faq"; neg_type="simple"; loss_type='softmax'**
```
# define variables
query_type="faq"; neg_type="simple"; loss_type='softmax'
# create instance of ReRanker class
r = ReRanker(rank_field=rank_field, w_t=w_t)
reranked_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
pred_output_path = output_path + "/supervised/" + rank_field + "/" + loss_type + "/" + query_type + "/" + neg_type
# generate reranked results
bert_query_by_question = load_from_json(pred_output_path + '/bert_query_by_question.json')
reranked_query_by_question = r.get_reranked_results(bert_query_by_question)
dump_to_json(reranked_query_by_question, reranked_output_path + '/reranked_query_by_question.json')
bert_query_by_answer = load_from_json(pred_output_path + '/bert_query_by_answer.json')
reranked_query_by_answer = r.get_reranked_results(bert_query_by_answer)
dump_to_json(reranked_query_by_answer, reranked_output_path + '/reranked_query_by_answer.json')
bert_query_by_question_answer = load_from_json(pred_output_path + '/bert_query_by_question_answer.json')
reranked_query_by_question_answer = r.get_reranked_results(bert_query_by_question_answer)
dump_to_json(reranked_query_by_question_answer, reranked_output_path + '/reranked_query_by_question_answer.json')
bert_query_by_question_answer_concat = load_from_json(pred_output_path + '/bert_query_by_question_answer_concat.json')
reranked_query_by_question_answer_concat = r.get_reranked_results(bert_query_by_question_answer_concat)
dump_to_json(reranked_query_by_question_answer_concat, reranked_output_path + '/reranked_query_by_question_answer_concat.json')
```
| github_jupyter |
```
"""Copyright 2020-2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
!pip install pyphen
import pyphen
!pip install mosestokenizer
from mosestokenizer import *
!pip install scipy
german_sentence_splitter = MosesSentenceSplitter('de')
german_tokenizer = MosesTokenizer('de')
german_dictionary = pyphen.Pyphen(lang='de_DE')
# FKRE
from typing import Sequence
import string
def fkre(sentences: Sequence[str]):
fkre = 0.0
for sentence in sentences:
sentence = setence.strip()
if not sentence:
sentence = "."
number_of_sentences = max(1,len(german_sentence_splitter([sentence]))) # Can't be less than one sentence.
tokens = german_tokenizer(sentence)
number_of_words = 0
number_of_syllables = 0
for token in tokens:
if token in string.punctuation: # We don't count punc towards syllables or word count.
continue
number_of_words+=1
number_of_syllables+=len(german_dictionary.inserted(token).split("-"))
number_of_words = max(1,number_of_words) # We assume there is at least one word.
fkre+=180 - (number_of_words/number_of_sentences) - (58.5 *(number_of_syllables/number_of_words))
return fkre/len(sentences)
fkre(["Ingrid Persdotter ist der Name einer schwedischen Nonne, die 1498 im Kloster Vadstena (Bild) einen stil- und 1.","Ingrid Persdotter ist der Name einer schwedischen Nonne, die 1498 im Kloster Vadstena (Bild) einen stil- und 1."])
import subprocess
multi_bleu = "multi-bleu-detok.perl" # Path to multibleu. You can get it from https://github.com/EdinburghNLP/nematus
def calculate_bleu(output_file,reference_file):
command = multi_bleu + " " + reference_file +" < " + output_file +" | cut -f 3 -d ' ' | cut -f 1 -d ','"
ps = subprocess.Popen(cmd,shell=True,stdout=subprocess.PIPE)
output = ps.communicate()[0]
return float(output.strip())
# All files should be de-tokenized.
from scipy.special import expit
def score(source_file,output_file,reference_file):
bleu = calculate_bleu(output_file,reference_file)
ibleu = (bleu *0.9) - (calculate_bleu(output_file,source_file) * 0.1)
source = open(source_file)
fkre_source = fkre(source.readlines().strip().split())
fkre_output = fkre(source.readlines().strip().split())
fk_bleu = expit(((fkre_source- fkre_output) **0.5 )) * ( (ibleu/100.0)**0.5)
return {"bleu":bleu,"ibleu":ibleu,"fk-bleu":fk_bleu}
# For SARI we tokenized using MosesTokenizer("de") the source, output, and ref
# https://github.com/apache/joshua can then be used to calculate corpus level SARI.
# Alternatively you can use the following sentence based sari.
# =======================================================
# SARI -- Text Simplification Tunable Evaluation Metric
# =======================================================
#
# Author: Wei Xu (UPenn xwe@cis.upenn.edu)
#
# A Python implementation of the SARI metric for text simplification
# evaluation in the following paper
#
# "Optimizing Statistical Machine Translation for Text Simplification"
# Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen and Chris Callison-Burch
# In Transactions of the Association for Computational Linguistics (TACL) 2015
#
# There is also a Java implementation of the SARI metric
# that is integrated into the Joshua MT Decoder. It can
# be used for tuning Joshua models for a real end-to-end
# text simplification model.
#
from __future__ import division
from collections import Counter
import sys
def ReadInFile (filename):
with open(filename) as f:
lines = f.readlines()
lines = [x.strip() for x in lines]
return lines
def SARIngram(sgrams, cgrams, rgramslist, numref):
rgramsall = [rgram for rgrams in rgramslist for rgram in rgrams]
rgramcounter = Counter(rgramsall)
sgramcounter = Counter(sgrams)
sgramcounter_rep = Counter()
for sgram, scount in sgramcounter.items():
sgramcounter_rep[sgram] = scount * numref
cgramcounter = Counter(cgrams)
cgramcounter_rep = Counter()
for cgram, ccount in cgramcounter.items():
cgramcounter_rep[cgram] = ccount * numref
# KEEP
keepgramcounter_rep = sgramcounter_rep & cgramcounter_rep
keepgramcountergood_rep = keepgramcounter_rep & rgramcounter
keepgramcounterall_rep = sgramcounter_rep & rgramcounter
keeptmpscore1 = 0
keeptmpscore2 = 0
for keepgram in keepgramcountergood_rep:
keeptmpscore1 += keepgramcountergood_rep[keepgram] / keepgramcounter_rep[keepgram]
keeptmpscore2 += keepgramcountergood_rep[keepgram] / keepgramcounterall_rep[keepgram]
#print "KEEP", keepgram, keepscore, cgramcounter[keepgram], sgramcounter[keepgram], rgramcounter[keepgram]
keepscore_precision = 0
if len(keepgramcounter_rep) > 0:
keepscore_precision = keeptmpscore1 / len(keepgramcounter_rep)
keepscore_recall = 0
if len(keepgramcounterall_rep) > 0:
keepscore_recall = keeptmpscore2 / len(keepgramcounterall_rep)
keepscore = 0
if keepscore_precision > 0 or keepscore_recall > 0:
keepscore = 2 * keepscore_precision * keepscore_recall / (keepscore_precision + keepscore_recall)
# DELETION
delgramcounter_rep = sgramcounter_rep - cgramcounter_rep
delgramcountergood_rep = delgramcounter_rep - rgramcounter
delgramcounterall_rep = sgramcounter_rep - rgramcounter
deltmpscore1 = 0
deltmpscore2 = 0
for delgram in delgramcountergood_rep:
deltmpscore1 += delgramcountergood_rep[delgram] / delgramcounter_rep[delgram]
deltmpscore2 += delgramcountergood_rep[delgram] / delgramcounterall_rep[delgram]
delscore_precision = 0
if len(delgramcounter_rep) > 0:
delscore_precision = deltmpscore1 / len(delgramcounter_rep)
delscore_recall = 0
if len(delgramcounterall_rep) > 0:
delscore_recall = deltmpscore1 / len(delgramcounterall_rep)
delscore = 0
if delscore_precision > 0 or delscore_recall > 0:
delscore = 2 * delscore_precision * delscore_recall / (delscore_precision + delscore_recall)
# ADDITION
addgramcounter = set(cgramcounter) - set(sgramcounter)
addgramcountergood = set(addgramcounter) & set(rgramcounter)
addgramcounterall = set(rgramcounter) - set(sgramcounter)
addtmpscore = 0
for addgram in addgramcountergood:
addtmpscore += 1
addscore_precision = 0
addscore_recall = 0
if len(addgramcounter) > 0:
addscore_precision = addtmpscore / len(addgramcounter)
if len(addgramcounterall) > 0:
addscore_recall = addtmpscore / len(addgramcounterall)
addscore = 0
if addscore_precision > 0 or addscore_recall > 0:
addscore = 2 * addscore_precision * addscore_recall / (addscore_precision + addscore_recall)
return (keepscore, delscore_precision, addscore)
def SARIsent (ssent, csent, rsents) :
ssent = " ".join(german_tokenizer(ssent))
csent = " ".join(german_tokenizer(csent))
rsents = [" ".join(german_tokenizer(rsent)) for rsent in rsents]
numref = len(rsents)
s1grams = ssent.lower().split(" ")
c1grams = csent.lower().split(" ")
s2grams = []
c2grams = []
s3grams = []
c3grams = []
s4grams = []
c4grams = []
r1gramslist = []
r2gramslist = []
r3gramslist = []
r4gramslist = []
for rsent in rsents:
r1grams = rsent.lower().split(" ")
r2grams = []
r3grams = []
r4grams = []
r1gramslist.append(r1grams)
for i in range(0, len(r1grams)-1) :
if i < len(r1grams) - 1:
r2gram = r1grams[i] + " " + r1grams[i+1]
r2grams.append(r2gram)
if i < len(r1grams)-2:
r3gram = r1grams[i] + " " + r1grams[i+1] + " " + r1grams[i+2]
r3grams.append(r3gram)
if i < len(r1grams)-3:
r4gram = r1grams[i] + " " + r1grams[i+1] + " " + r1grams[i+2] + " " + r1grams[i+3]
r4grams.append(r4gram)
r2gramslist.append(r2grams)
r3gramslist.append(r3grams)
r4gramslist.append(r4grams)
for i in range(0, len(s1grams)-1) :
if i < len(s1grams) - 1:
s2gram = s1grams[i] + " " + s1grams[i+1]
s2grams.append(s2gram)
if i < len(s1grams)-2:
s3gram = s1grams[i] + " " + s1grams[i+1] + " " + s1grams[i+2]
s3grams.append(s3gram)
if i < len(s1grams)-3:
s4gram = s1grams[i] + " " + s1grams[i+1] + " " + s1grams[i+2] + " " + s1grams[i+3]
s4grams.append(s4gram)
for i in range(0, len(c1grams)-1) :
if i < len(c1grams) - 1:
c2gram = c1grams[i] + " " + c1grams[i+1]
c2grams.append(c2gram)
if i < len(c1grams)-2:
c3gram = c1grams[i] + " " + c1grams[i+1] + " " + c1grams[i+2]
c3grams.append(c3gram)
if i < len(c1grams)-3:
c4gram = c1grams[i] + " " + c1grams[i+1] + " " + c1grams[i+2] + " " + c1grams[i+3]
c4grams.append(c4gram)
(keep1score, del1score, add1score) = SARIngram(s1grams, c1grams, r1gramslist, numref)
(keep2score, del2score, add2score) = SARIngram(s2grams, c2grams, r2gramslist, numref)
(keep3score, del3score, add3score) = SARIngram(s3grams, c3grams, r3gramslist, numref)
(keep4score, del4score, add4score) = SARIngram(s4grams, c4grams, r4gramslist, numref)
avgkeepscore = sum([keep1score,keep2score,keep3score,keep4score])/4
avgdelscore = sum([del1score,del2score,del3score,del4score])/4
avgaddscore = sum([add1score,add2score,add3score,add4score])/4
finalscore = (avgkeepscore + avgdelscore + avgaddscore ) / 3
return finalscore
def main():
ssent = "About 95 species are currently accepted ."
csent1 = "About 95 you now get in ."
csent2 = "About 95 species are now agreed ."
csent3 = "About 95 species are currently agreed ."
rsents = ["About 95 species are currently known .", "About 95 species are now accepted .", "95 species are now accepted ."]
print(SARIsent(ssent, csent1, rsents))
print(SARIsent(ssent, csent2, rsents))
print(SARIsent(ssent, csent3, rsents))
if __name__ == '__main__':
main()
```
| github_jupyter |
**Chapter 13 – Loading and Preprocessing Data with TensorFlow**
_This notebook contains all the sample code and solutions to the exercises in chapter 13._
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/13_loading_and_preprocessing_data.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
# Setup
First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
```
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
!pip install -q -U tfx==0.21.2
print("You can safely ignore the package incompatibility errors.")
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "data"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
```
## Datasets
```
X = tf.range(10)
dataset = tf.data.Dataset.from_tensor_slices(X)
dataset
```
Equivalently:
```
dataset = tf.data.Dataset.range(10)
for item in dataset:
print(item)
dataset = dataset.repeat(3).batch(7)
for item in dataset:
print(item)
dataset = dataset.map(lambda x: x * 2)
for item in dataset:
print(item)
#dataset = dataset.apply(tf.data.experimental.unbatch()) # Now deprecated
dataset = dataset.unbatch()
dataset = dataset.filter(lambda x: x < 10) # keep only items < 10
for item in dataset.take(3):
print(item)
tf.random.set_seed(42)
dataset = tf.data.Dataset.range(10).repeat(3)
dataset = dataset.shuffle(buffer_size=3, seed=42).batch(7)
for item in dataset:
print(item)
```
## Split the California dataset to multiple CSV files
Let's start by loading and preparing the California housing dataset. We first load it, then split it into a training set, a validation set and a test set, and finally we scale it:
```
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
housing = fetch_california_housing()
X_train_full, X_test, y_train_full, y_test = train_test_split(
housing.data, housing.target.reshape(-1, 1), random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(
X_train_full, y_train_full, random_state=42)
scaler = StandardScaler()
scaler.fit(X_train)
X_mean = scaler.mean_
X_std = scaler.scale_
```
For a very large dataset that does not fit in memory, you will typically want to split it into many files first, then have TensorFlow read these files in parallel. To demonstrate this, let's start by splitting the housing dataset and save it to 20 CSV files:
```
def save_to_multiple_csv_files(data, name_prefix, header=None, n_parts=10):
housing_dir = os.path.join("datasets", "housing")
os.makedirs(housing_dir, exist_ok=True)
path_format = os.path.join(housing_dir, "my_{}_{:02d}.csv")
filepaths = []
m = len(data)
for file_idx, row_indices in enumerate(np.array_split(np.arange(m), n_parts)):
part_csv = path_format.format(name_prefix, file_idx)
filepaths.append(part_csv)
with open(part_csv, "wt", encoding="utf-8") as f:
if header is not None:
f.write(header)
f.write("\n")
for row_idx in row_indices:
f.write(",".join([repr(col) for col in data[row_idx]]))
f.write("\n")
return filepaths
train_data = np.c_[X_train, y_train]
valid_data = np.c_[X_valid, y_valid]
test_data = np.c_[X_test, y_test]
header_cols = housing.feature_names + ["MedianHouseValue"]
header = ",".join(header_cols)
train_filepaths = save_to_multiple_csv_files(train_data, "train", header, n_parts=20)
valid_filepaths = save_to_multiple_csv_files(valid_data, "valid", header, n_parts=10)
test_filepaths = save_to_multiple_csv_files(test_data, "test", header, n_parts=10)
```
Okay, now let's take a peek at the first few lines of one of these CSV files:
```
import pandas as pd
pd.read_csv(train_filepaths[0]).head()
```
Or in text mode:
```
with open(train_filepaths[0]) as f:
for i in range(5):
print(f.readline(), end="")
train_filepaths
```
## Building an Input Pipeline
```
filepath_dataset = tf.data.Dataset.list_files(train_filepaths, seed=42)
for filepath in filepath_dataset:
print(filepath)
n_readers = 5
dataset = filepath_dataset.interleave(
lambda filepath: tf.data.TextLineDataset(filepath).skip(1),
cycle_length=n_readers)
for line in dataset.take(5):
print(line.numpy())
```
Notice that field 4 is interpreted as a string.
```
record_defaults=[0, np.nan, tf.constant(np.nan, dtype=tf.float64), "Hello", tf.constant([])]
parsed_fields = tf.io.decode_csv('1,2,3,4,5', record_defaults)
parsed_fields
```
Notice that all missing fields are replaced with their default value, when provided:
```
parsed_fields = tf.io.decode_csv(',,,,5', record_defaults)
parsed_fields
```
The 5th field is compulsory (since we provided `tf.constant([])` as the "default value"), so we get an exception if we do not provide it:
```
try:
parsed_fields = tf.io.decode_csv(',,,,', record_defaults)
except tf.errors.InvalidArgumentError as ex:
print(ex)
```
The number of fields should match exactly the number of fields in the `record_defaults`:
```
try:
parsed_fields = tf.io.decode_csv('1,2,3,4,5,6,7', record_defaults)
except tf.errors.InvalidArgumentError as ex:
print(ex)
n_inputs = 8 # X_train.shape[-1]
@tf.function
def preprocess(line):
defs = [0.] * n_inputs + [tf.constant([], dtype=tf.float32)]
fields = tf.io.decode_csv(line, record_defaults=defs)
x = tf.stack(fields[:-1])
y = tf.stack(fields[-1:])
return (x - X_mean) / X_std, y
preprocess(b'4.2083,44.0,5.3232,0.9171,846.0,2.3370,37.47,-122.2,2.782')
def csv_reader_dataset(filepaths, repeat=1, n_readers=5,
n_read_threads=None, shuffle_buffer_size=10000,
n_parse_threads=5, batch_size=32):
dataset = tf.data.Dataset.list_files(filepaths).repeat(repeat)
dataset = dataset.interleave(
lambda filepath: tf.data.TextLineDataset(filepath).skip(1),
cycle_length=n_readers, num_parallel_calls=n_read_threads)
dataset = dataset.shuffle(shuffle_buffer_size)
dataset = dataset.map(preprocess, num_parallel_calls=n_parse_threads)
dataset = dataset.batch(batch_size)
return dataset.prefetch(1)
tf.random.set_seed(42)
train_set = csv_reader_dataset(train_filepaths, batch_size=3)
for X_batch, y_batch in train_set.take(2):
print("X =", X_batch)
print("y =", y_batch)
print()
train_set = csv_reader_dataset(train_filepaths, repeat=None)
valid_set = csv_reader_dataset(valid_filepaths)
test_set = csv_reader_dataset(test_filepaths)
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
keras.layers.Dense(30, activation="relu", input_shape=X_train.shape[1:]),
keras.layers.Dense(1),
])
model.compile(loss="mse", optimizer=keras.optimizers.SGD(lr=1e-3))
batch_size = 32
model.fit(train_set, steps_per_epoch=len(X_train) // batch_size, epochs=10,
validation_data=valid_set)
model.evaluate(test_set, steps=len(X_test) // batch_size)
new_set = test_set.map(lambda X, y: X) # we could instead just pass test_set, Keras would ignore the labels
X_new = X_test
model.predict(new_set, steps=len(X_new) // batch_size)
optimizer = keras.optimizers.Nadam(lr=0.01)
loss_fn = keras.losses.mean_squared_error
n_epochs = 5
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
total_steps = n_epochs * n_steps_per_epoch
global_step = 0
for X_batch, y_batch in train_set.take(total_steps):
global_step += 1
print("\rGlobal step {}/{}".format(global_step, total_steps), end="")
with tf.GradientTape() as tape:
y_pred = model(X_batch)
main_loss = tf.reduce_mean(loss_fn(y_batch, y_pred))
loss = tf.add_n([main_loss] + model.losses)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
optimizer = keras.optimizers.Nadam(lr=0.01)
loss_fn = keras.losses.mean_squared_error
@tf.function
def train(model, n_epochs, batch_size=32,
n_readers=5, n_read_threads=5, shuffle_buffer_size=10000, n_parse_threads=5):
train_set = csv_reader_dataset(train_filepaths, repeat=n_epochs, n_readers=n_readers,
n_read_threads=n_read_threads, shuffle_buffer_size=shuffle_buffer_size,
n_parse_threads=n_parse_threads, batch_size=batch_size)
for X_batch, y_batch in train_set:
with tf.GradientTape() as tape:
y_pred = model(X_batch)
main_loss = tf.reduce_mean(loss_fn(y_batch, y_pred))
loss = tf.add_n([main_loss] + model.losses)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train(model, 5)
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
optimizer = keras.optimizers.Nadam(lr=0.01)
loss_fn = keras.losses.mean_squared_error
@tf.function
def train(model, n_epochs, batch_size=32,
n_readers=5, n_read_threads=5, shuffle_buffer_size=10000, n_parse_threads=5):
train_set = csv_reader_dataset(train_filepaths, repeat=n_epochs, n_readers=n_readers,
n_read_threads=n_read_threads, shuffle_buffer_size=shuffle_buffer_size,
n_parse_threads=n_parse_threads, batch_size=batch_size)
n_steps_per_epoch = len(X_train) // batch_size
total_steps = n_epochs * n_steps_per_epoch
global_step = 0
for X_batch, y_batch in train_set.take(total_steps):
global_step += 1
if tf.equal(global_step % 100, 0):
tf.print("\rGlobal step", global_step, "/", total_steps)
with tf.GradientTape() as tape:
y_pred = model(X_batch)
main_loss = tf.reduce_mean(loss_fn(y_batch, y_pred))
loss = tf.add_n([main_loss] + model.losses)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train(model, 5)
```
Here is a short description of each method in the `Dataset` class:
```
for m in dir(tf.data.Dataset):
if not (m.startswith("_") or m.endswith("_")):
func = getattr(tf.data.Dataset, m)
if hasattr(func, "__doc__"):
print("● {:21s}{}".format(m + "()", func.__doc__.split("\n")[0]))
```
## The `TFRecord` binary format
A TFRecord file is just a list of binary records. You can create one using a `tf.io.TFRecordWriter`:
```
with tf.io.TFRecordWriter("my_data.tfrecord") as f:
f.write(b"This is the first record")
f.write(b"And this is the second record")
```
And you can read it using a `tf.data.TFRecordDataset`:
```
filepaths = ["my_data.tfrecord"]
dataset = tf.data.TFRecordDataset(filepaths)
for item in dataset:
print(item)
```
You can read multiple TFRecord files with just one `TFRecordDataset`. By default it will read them one at a time, but if you set `num_parallel_reads=3`, it will read 3 at a time in parallel and interleave their records:
```
filepaths = ["my_test_{}.tfrecord".format(i) for i in range(5)]
for i, filepath in enumerate(filepaths):
with tf.io.TFRecordWriter(filepath) as f:
for j in range(3):
f.write("File {} record {}".format(i, j).encode("utf-8"))
dataset = tf.data.TFRecordDataset(filepaths, num_parallel_reads=3)
for item in dataset:
print(item)
options = tf.io.TFRecordOptions(compression_type="GZIP")
with tf.io.TFRecordWriter("my_compressed.tfrecord", options) as f:
f.write(b"This is the first record")
f.write(b"And this is the second record")
dataset = tf.data.TFRecordDataset(["my_compressed.tfrecord"],
compression_type="GZIP")
for item in dataset:
print(item)
```
### A Brief Intro to Protocol Buffers
For this section you need to [install protobuf](https://developers.google.com/protocol-buffers/docs/downloads). In general you will not have to do so when using TensorFlow, as it comes with functions to create and parse protocol buffers of type `tf.train.Example`, which are generally sufficient. However, in this section we will learn about protocol buffers by creating our own simple protobuf definition, so we need the protobuf compiler (`protoc`): we will use it to compile the protobuf definition to a Python module that we can then use in our code.
First let's write a simple protobuf definition:
```
%%writefile person.proto
syntax = "proto3";
message Person {
string name = 1;
int32 id = 2;
repeated string email = 3;
}
```
And let's compile it (the `--descriptor_set_out` and `--include_imports` options are only required for the `tf.io.decode_proto()` example below):
```
!protoc person.proto --python_out=. --descriptor_set_out=person.desc --include_imports
!ls person*
from person_pb2 import Person
person = Person(name="Al", id=123, email=["a@b.com"]) # create a Person
print(person) # display the Person
person.name # read a field
person.name = "Alice" # modify a field
person.email[0] # repeated fields can be accessed like arrays
person.email.append("c@d.com") # add an email address
s = person.SerializeToString() # serialize to a byte string
s
person2 = Person() # create a new Person
person2.ParseFromString(s) # parse the byte string (27 bytes)
person == person2 # now they are equal
```
#### Custom protobuf
In rare cases, you may want to parse a custom protobuf (like the one we just created) in TensorFlow. For this you can use the `tf.io.decode_proto()` function:
```
person_tf = tf.io.decode_proto(
bytes=s,
message_type="Person",
field_names=["name", "id", "email"],
output_types=[tf.string, tf.int32, tf.string],
descriptor_source="person.desc")
person_tf.values
```
For more details, see the [`tf.io.decode_proto()`](https://www.tensorflow.org/api_docs/python/tf/io/decode_proto) documentation.
### TensorFlow Protobufs
Here is the definition of the tf.train.Example protobuf:
```proto
syntax = "proto3";
message BytesList { repeated bytes value = 1; }
message FloatList { repeated float value = 1 [packed = true]; }
message Int64List { repeated int64 value = 1 [packed = true]; }
message Feature {
oneof kind {
BytesList bytes_list = 1;
FloatList float_list = 2;
Int64List int64_list = 3;
}
};
message Features { map<string, Feature> feature = 1; };
message Example { Features features = 1; };
```
**Warning**: there's currently a bug preventing `from tensorflow.train import X` so we work around it by writing `X = tf.train.X`. See https://github.com/tensorflow/tensorflow/issues/33289 for more details.
```
#from tensorflow.train import BytesList, FloatList, Int64List
#from tensorflow.train import Feature, Features, Example
BytesList = tf.train.BytesList
FloatList = tf.train.FloatList
Int64List = tf.train.Int64List
Feature = tf.train.Feature
Features = tf.train.Features
Example = tf.train.Example
person_example = Example(
features=Features(
feature={
"name": Feature(bytes_list=BytesList(value=[b"Alice"])),
"id": Feature(int64_list=Int64List(value=[123])),
"emails": Feature(bytes_list=BytesList(value=[b"a@b.com", b"c@d.com"]))
}))
with tf.io.TFRecordWriter("my_contacts.tfrecord") as f:
f.write(person_example.SerializeToString())
feature_description = {
"name": tf.io.FixedLenFeature([], tf.string, default_value=""),
"id": tf.io.FixedLenFeature([], tf.int64, default_value=0),
"emails": tf.io.VarLenFeature(tf.string),
}
for serialized_example in tf.data.TFRecordDataset(["my_contacts.tfrecord"]):
parsed_example = tf.io.parse_single_example(serialized_example,
feature_description)
parsed_example
parsed_example
parsed_example["emails"].values[0]
tf.sparse.to_dense(parsed_example["emails"], default_value=b"")
parsed_example["emails"].values
```
### Putting Images in TFRecords
```
from sklearn.datasets import load_sample_images
img = load_sample_images()["images"][0]
plt.imshow(img)
plt.axis("off")
plt.title("Original Image")
plt.show()
data = tf.io.encode_jpeg(img)
example_with_image = Example(features=Features(feature={
"image": Feature(bytes_list=BytesList(value=[data.numpy()]))}))
serialized_example = example_with_image.SerializeToString()
# then save to TFRecord
feature_description = { "image": tf.io.VarLenFeature(tf.string) }
example_with_image = tf.io.parse_single_example(serialized_example, feature_description)
decoded_img = tf.io.decode_jpeg(example_with_image["image"].values[0])
```
Or use `decode_image()` which supports BMP, GIF, JPEG and PNG formats:
```
decoded_img = tf.io.decode_image(example_with_image["image"].values[0])
plt.imshow(decoded_img)
plt.title("Decoded Image")
plt.axis("off")
plt.show()
```
### Putting Tensors and Sparse Tensors in TFRecords
Tensors can be serialized and parsed easily using `tf.io.serialize_tensor()` and `tf.io.parse_tensor()`:
```
t = tf.constant([[0., 1.], [2., 3.], [4., 5.]])
s = tf.io.serialize_tensor(t)
s
tf.io.parse_tensor(s, out_type=tf.float32)
serialized_sparse = tf.io.serialize_sparse(parsed_example["emails"])
serialized_sparse
BytesList(value=serialized_sparse.numpy())
dataset = tf.data.TFRecordDataset(["my_contacts.tfrecord"]).batch(10)
for serialized_examples in dataset:
parsed_examples = tf.io.parse_example(serialized_examples,
feature_description)
parsed_examples
```
## Handling Sequential Data Using `SequenceExample`
```proto
syntax = "proto3";
message FeatureList { repeated Feature feature = 1; };
message FeatureLists { map<string, FeatureList> feature_list = 1; };
message SequenceExample {
Features context = 1;
FeatureLists feature_lists = 2;
};
```
**Warning**: there's currently a bug preventing `from tensorflow.train import X` so we work around it by writing `X = tf.train.X`. See https://github.com/tensorflow/tensorflow/issues/33289 for more details.
```
#from tensorflow.train import FeatureList, FeatureLists, SequenceExample
FeatureList = tf.train.FeatureList
FeatureLists = tf.train.FeatureLists
SequenceExample = tf.train.SequenceExample
context = Features(feature={
"author_id": Feature(int64_list=Int64List(value=[123])),
"title": Feature(bytes_list=BytesList(value=[b"A", b"desert", b"place", b"."])),
"pub_date": Feature(int64_list=Int64List(value=[1623, 12, 25]))
})
content = [["When", "shall", "we", "three", "meet", "again", "?"],
["In", "thunder", ",", "lightning", ",", "or", "in", "rain", "?"]]
comments = [["When", "the", "hurlyburly", "'s", "done", "."],
["When", "the", "battle", "'s", "lost", "and", "won", "."]]
def words_to_feature(words):
return Feature(bytes_list=BytesList(value=[word.encode("utf-8")
for word in words]))
content_features = [words_to_feature(sentence) for sentence in content]
comments_features = [words_to_feature(comment) for comment in comments]
sequence_example = SequenceExample(
context=context,
feature_lists=FeatureLists(feature_list={
"content": FeatureList(feature=content_features),
"comments": FeatureList(feature=comments_features)
}))
sequence_example
serialized_sequence_example = sequence_example.SerializeToString()
context_feature_descriptions = {
"author_id": tf.io.FixedLenFeature([], tf.int64, default_value=0),
"title": tf.io.VarLenFeature(tf.string),
"pub_date": tf.io.FixedLenFeature([3], tf.int64, default_value=[0, 0, 0]),
}
sequence_feature_descriptions = {
"content": tf.io.VarLenFeature(tf.string),
"comments": tf.io.VarLenFeature(tf.string),
}
parsed_context, parsed_feature_lists = tf.io.parse_single_sequence_example(
serialized_sequence_example, context_feature_descriptions,
sequence_feature_descriptions)
parsed_context
parsed_context["title"].values
parsed_feature_lists
print(tf.RaggedTensor.from_sparse(parsed_feature_lists["content"]))
```
# The Features API
Let's use the variant of the California housing dataset that we used in Chapter 2, since it contains categorical features and missing values:
```
import os
import tarfile
import urllib
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml2/master/"
HOUSING_PATH = os.path.join("datasets", "housing")
HOUSING_URL = DOWNLOAD_ROOT + "datasets/housing/housing.tgz"
def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
os.makedirs(housing_path, exist_ok=True)
tgz_path = os.path.join(housing_path, "housing.tgz")
urllib.request.urlretrieve(housing_url, tgz_path)
housing_tgz = tarfile.open(tgz_path)
housing_tgz.extractall(path=housing_path)
housing_tgz.close()
fetch_housing_data()
import pandas as pd
def load_housing_data(housing_path=HOUSING_PATH):
csv_path = os.path.join(housing_path, "housing.csv")
return pd.read_csv(csv_path)
housing = load_housing_data()
housing.head()
housing_median_age = tf.feature_column.numeric_column("housing_median_age")
age_mean, age_std = X_mean[1], X_std[1] # The median age is column in 1
housing_median_age = tf.feature_column.numeric_column(
"housing_median_age", normalizer_fn=lambda x: (x - age_mean) / age_std)
median_income = tf.feature_column.numeric_column("median_income")
bucketized_income = tf.feature_column.bucketized_column(
median_income, boundaries=[1.5, 3., 4.5, 6.])
bucketized_income
ocean_prox_vocab = ['<1H OCEAN', 'INLAND', 'ISLAND', 'NEAR BAY', 'NEAR OCEAN']
ocean_proximity = tf.feature_column.categorical_column_with_vocabulary_list(
"ocean_proximity", ocean_prox_vocab)
ocean_proximity
# Just an example, it's not used later on
city_hash = tf.feature_column.categorical_column_with_hash_bucket(
"city", hash_bucket_size=1000)
city_hash
bucketized_age = tf.feature_column.bucketized_column(
housing_median_age, boundaries=[-1., -0.5, 0., 0.5, 1.]) # age was scaled
age_and_ocean_proximity = tf.feature_column.crossed_column(
[bucketized_age, ocean_proximity], hash_bucket_size=100)
latitude = tf.feature_column.numeric_column("latitude")
longitude = tf.feature_column.numeric_column("longitude")
bucketized_latitude = tf.feature_column.bucketized_column(
latitude, boundaries=list(np.linspace(32., 42., 20 - 1)))
bucketized_longitude = tf.feature_column.bucketized_column(
longitude, boundaries=list(np.linspace(-125., -114., 20 - 1)))
location = tf.feature_column.crossed_column(
[bucketized_latitude, bucketized_longitude], hash_bucket_size=1000)
ocean_proximity_one_hot = tf.feature_column.indicator_column(ocean_proximity)
ocean_proximity_embed = tf.feature_column.embedding_column(ocean_proximity,
dimension=2)
```
### Using Feature Columns for Parsing
```
median_house_value = tf.feature_column.numeric_column("median_house_value")
columns = [housing_median_age, median_house_value]
feature_descriptions = tf.feature_column.make_parse_example_spec(columns)
feature_descriptions
with tf.io.TFRecordWriter("my_data_with_features.tfrecords") as f:
for x, y in zip(X_train[:, 1:2], y_train):
example = Example(features=Features(feature={
"housing_median_age": Feature(float_list=FloatList(value=[x])),
"median_house_value": Feature(float_list=FloatList(value=[y]))
}))
f.write(example.SerializeToString())
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
def parse_examples(serialized_examples):
examples = tf.io.parse_example(serialized_examples, feature_descriptions)
targets = examples.pop("median_house_value") # separate the targets
return examples, targets
batch_size = 32
dataset = tf.data.TFRecordDataset(["my_data_with_features.tfrecords"])
dataset = dataset.repeat().shuffle(10000).batch(batch_size).map(parse_examples)
```
**Warning**: the `DenseFeatures` layer currently does not work with the Functional API, see [TF issue #27416](https://github.com/tensorflow/tensorflow/issues/27416). Hopefully this will be resolved before the final release of TF 2.0.
```
columns_without_target = columns[:-1]
model = keras.models.Sequential([
keras.layers.DenseFeatures(feature_columns=columns_without_target),
keras.layers.Dense(1)
])
model.compile(loss="mse",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
model.fit(dataset, steps_per_epoch=len(X_train) // batch_size, epochs=5)
some_columns = [ocean_proximity_embed, bucketized_income]
dense_features = keras.layers.DenseFeatures(some_columns)
dense_features({
"ocean_proximity": [["NEAR OCEAN"], ["INLAND"], ["INLAND"]],
"median_income": [[3.], [7.2], [1.]]
})
```
# TF Transform
```
try:
import tensorflow_transform as tft
def preprocess(inputs): # inputs is a batch of input features
median_age = inputs["housing_median_age"]
ocean_proximity = inputs["ocean_proximity"]
standardized_age = tft.scale_to_z_score(median_age - tft.mean(median_age))
ocean_proximity_id = tft.compute_and_apply_vocabulary(ocean_proximity)
return {
"standardized_median_age": standardized_age,
"ocean_proximity_id": ocean_proximity_id
}
except ImportError:
print("TF Transform is not installed. Try running: pip3 install -U tensorflow-transform")
```
# TensorFlow Datasets
```
import tensorflow_datasets as tfds
datasets = tfds.load(name="mnist")
mnist_train, mnist_test = datasets["train"], datasets["test"]
print(tfds.list_builders())
plt.figure(figsize=(6,3))
mnist_train = mnist_train.repeat(5).batch(32).prefetch(1)
for item in mnist_train:
images = item["image"]
labels = item["label"]
for index in range(5):
plt.subplot(1, 5, index + 1)
image = images[index, ..., 0]
label = labels[index].numpy()
plt.imshow(image, cmap="binary")
plt.title(label)
plt.axis("off")
break # just showing part of the first batch
datasets = tfds.load(name="mnist")
mnist_train, mnist_test = datasets["train"], datasets["test"]
mnist_train = mnist_train.repeat(5).batch(32)
mnist_train = mnist_train.map(lambda items: (items["image"], items["label"]))
mnist_train = mnist_train.prefetch(1)
for images, labels in mnist_train.take(1):
print(images.shape)
print(labels.numpy())
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
datasets = tfds.load(name="mnist", batch_size=32, as_supervised=True)
mnist_train = datasets["train"].repeat().prefetch(1)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28, 1]),
keras.layers.Lambda(lambda images: tf.cast(images, tf.float32)),
keras.layers.Dense(10, activation="softmax")])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(lr=1e-3),
metrics=["accuracy"])
model.fit(mnist_train, steps_per_epoch=60000 // 32, epochs=5)
```
# TensorFlow Hub
```
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
import tensorflow_hub as hub
hub_layer = hub.KerasLayer("https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1",
output_shape=[50], input_shape=[], dtype=tf.string)
model = keras.Sequential()
model.add(hub_layer)
model.add(keras.layers.Dense(16, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.summary()
sentences = tf.constant(["It was a great movie", "The actors were amazing"])
embeddings = hub_layer(sentences)
embeddings
```
# Exercises
## 1. to 8.
See Appendix A
## 9.
### a.
_Exercise: Load the Fashion MNIST dataset (introduced in Chapter 10); split it into a training set, a validation set, and a test set; shuffle the training set; and save each dataset to multiple TFRecord files. Each record should be a serialized `Example` protobuf with two features: the serialized image (use `tf.io.serialize_tensor()` to serialize each image), and the label. Note: for large images, you could use `tf.io.encode_jpeg()` instead. This would save a lot of space, but it would lose a bit of image quality._
```
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
train_set = tf.data.Dataset.from_tensor_slices((X_train, y_train)).shuffle(len(X_train))
valid_set = tf.data.Dataset.from_tensor_slices((X_valid, y_valid))
test_set = tf.data.Dataset.from_tensor_slices((X_test, y_test))
def create_example(image, label):
image_data = tf.io.serialize_tensor(image)
#image_data = tf.io.encode_jpeg(image[..., np.newaxis])
return Example(
features=Features(
feature={
"image": Feature(bytes_list=BytesList(value=[image_data.numpy()])),
"label": Feature(int64_list=Int64List(value=[label])),
}))
for image, label in valid_set.take(1):
print(create_example(image, label))
```
The following function saves a given dataset to a set of TFRecord files. The examples are written to the files in a round-robin fashion. To do this, we enumerate all the examples using the `dataset.enumerate()` method, and we compute `index % n_shards` to decide which file to write to. We use the standard `contextlib.ExitStack` class to make sure that all writers are properly closed whether or not an I/O error occurs while writing.
```
from contextlib import ExitStack
def write_tfrecords(name, dataset, n_shards=10):
paths = ["{}.tfrecord-{:05d}-of-{:05d}".format(name, index, n_shards)
for index in range(n_shards)]
with ExitStack() as stack:
writers = [stack.enter_context(tf.io.TFRecordWriter(path))
for path in paths]
for index, (image, label) in dataset.enumerate():
shard = index % n_shards
example = create_example(image, label)
writers[shard].write(example.SerializeToString())
return paths
train_filepaths = write_tfrecords("my_fashion_mnist.train", train_set)
valid_filepaths = write_tfrecords("my_fashion_mnist.valid", valid_set)
test_filepaths = write_tfrecords("my_fashion_mnist.test", test_set)
```
### b.
_Exercise: Then use tf.data to create an efficient dataset for each set. Finally, use a Keras model to train these datasets, including a preprocessing layer to standardize each input feature. Try to make the input pipeline as efficient as possible, using TensorBoard to visualize profiling data._
```
def preprocess(tfrecord):
feature_descriptions = {
"image": tf.io.FixedLenFeature([], tf.string, default_value=""),
"label": tf.io.FixedLenFeature([], tf.int64, default_value=-1)
}
example = tf.io.parse_single_example(tfrecord, feature_descriptions)
image = tf.io.parse_tensor(example["image"], out_type=tf.uint8)
#image = tf.io.decode_jpeg(example["image"])
image = tf.reshape(image, shape=[28, 28])
return image, example["label"]
def mnist_dataset(filepaths, n_read_threads=5, shuffle_buffer_size=None,
n_parse_threads=5, batch_size=32, cache=True):
dataset = tf.data.TFRecordDataset(filepaths,
num_parallel_reads=n_read_threads)
if cache:
dataset = dataset.cache()
if shuffle_buffer_size:
dataset = dataset.shuffle(shuffle_buffer_size)
dataset = dataset.map(preprocess, num_parallel_calls=n_parse_threads)
dataset = dataset.batch(batch_size)
return dataset.prefetch(1)
train_set = mnist_dataset(train_filepaths, shuffle_buffer_size=60000)
valid_set = mnist_dataset(train_filepaths)
test_set = mnist_dataset(train_filepaths)
for X, y in train_set.take(1):
for i in range(5):
plt.subplot(1, 5, i + 1)
plt.imshow(X[i].numpy(), cmap="binary")
plt.axis("off")
plt.title(str(y[i].numpy()))
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
class Standardization(keras.layers.Layer):
def adapt(self, data_sample):
self.means_ = np.mean(data_sample, axis=0, keepdims=True)
self.stds_ = np.std(data_sample, axis=0, keepdims=True)
def call(self, inputs):
return (inputs - self.means_) / (self.stds_ + keras.backend.epsilon())
standardization = Standardization(input_shape=[28, 28])
# or perhaps soon:
#standardization = keras.layers.Normalization()
sample_image_batches = train_set.take(100).map(lambda image, label: image)
sample_images = np.concatenate(list(sample_image_batches.as_numpy_iterator()),
axis=0).astype(np.float32)
standardization.adapt(sample_images)
model = keras.models.Sequential([
standardization,
keras.layers.Flatten(),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer="nadam", metrics=["accuracy"])
from datetime import datetime
logs = os.path.join(os.curdir, "my_logs",
"run_" + datetime.now().strftime("%Y%m%d_%H%M%S"))
tensorboard_cb = tf.keras.callbacks.TensorBoard(
log_dir=logs, histogram_freq=1, profile_batch=10)
model.fit(train_set, epochs=5, validation_data=valid_set,
callbacks=[tensorboard_cb])
```
**Warning:** The profiling tab in TensorBoard works if you use TensorFlow 2.2+. You also need to make sure `tensorboard_plugin_profile` is installed (and restart Jupyter if necessary).
```
%load_ext tensorboard
%tensorboard --logdir=./my_logs --port=6006
```
## 10.
_Exercise: In this exercise you will download a dataset, split it, create a `tf.data.Dataset` to load it and preprocess it efficiently, then build and train a binary classification model containing an `Embedding` layer._
### a.
_Exercise: Download the [Large Movie Review Dataset](https://homl.info/imdb), which contains 50,000 movies reviews from the [Internet Movie Database](https://imdb.com/). The data is organized in two directories, `train` and `test`, each containing a `pos` subdirectory with 12,500 positive reviews and a `neg` subdirectory with 12,500 negative reviews. Each review is stored in a separate text file. There are other files and folders (including preprocessed bag-of-words), but we will ignore them in this exercise._
```
from pathlib import Path
DOWNLOAD_ROOT = "http://ai.stanford.edu/~amaas/data/sentiment/"
FILENAME = "aclImdb_v1.tar.gz"
filepath = keras.utils.get_file(FILENAME, DOWNLOAD_ROOT + FILENAME, extract=True)
path = Path(filepath).parent / "aclImdb"
path
for name, subdirs, files in os.walk(path):
indent = len(Path(name).parts) - len(path.parts)
print(" " * indent + Path(name).parts[-1] + os.sep)
for index, filename in enumerate(sorted(files)):
if index == 3:
print(" " * (indent + 1) + "...")
break
print(" " * (indent + 1) + filename)
def review_paths(dirpath):
return [str(path) for path in dirpath.glob("*.txt")]
train_pos = review_paths(path / "train" / "pos")
train_neg = review_paths(path / "train" / "neg")
test_valid_pos = review_paths(path / "test" / "pos")
test_valid_neg = review_paths(path / "test" / "neg")
len(train_pos), len(train_neg), len(test_valid_pos), len(test_valid_neg)
```
### b.
_Exercise: Split the test set into a validation set (15,000) and a test set (10,000)._
```
np.random.shuffle(test_valid_pos)
test_pos = test_valid_pos[:5000]
test_neg = test_valid_neg[:5000]
valid_pos = test_valid_pos[5000:]
valid_neg = test_valid_neg[5000:]
```
### c.
_Exercise: Use tf.data to create an efficient dataset for each set._
Since the dataset fits in memory, we can just load all the data using pure Python code and use `tf.data.Dataset.from_tensor_slices()`:
```
def imdb_dataset(filepaths_positive, filepaths_negative):
reviews = []
labels = []
for filepaths, label in ((filepaths_negative, 0), (filepaths_positive, 1)):
for filepath in filepaths:
with open(filepath) as review_file:
reviews.append(review_file.read())
labels.append(label)
return tf.data.Dataset.from_tensor_slices(
(tf.constant(reviews), tf.constant(labels)))
for X, y in imdb_dataset(train_pos, train_neg).take(3):
print(X)
print(y)
print()
%timeit -r1 for X, y in imdb_dataset(train_pos, train_neg).repeat(10): pass
```
It takes about 20 seconds to load the dataset and go through it 10 times.
But let's pretend the dataset does not fit in memory, just to make things more interesting. Luckily, each review fits on just one line (they use `<br />` to indicate line breaks), so we can read the reviews using a `TextLineDataset`. If they didn't we would have to preprocess the input files (e.g., converting them to TFRecords). For very large datasets, it would make sense a tool like Apache Beam for that.
```
def imdb_dataset(filepaths_positive, filepaths_negative, n_read_threads=5):
dataset_neg = tf.data.TextLineDataset(filepaths_negative,
num_parallel_reads=n_read_threads)
dataset_neg = dataset_neg.map(lambda review: (review, 0))
dataset_pos = tf.data.TextLineDataset(filepaths_positive,
num_parallel_reads=n_read_threads)
dataset_pos = dataset_pos.map(lambda review: (review, 1))
return tf.data.Dataset.concatenate(dataset_pos, dataset_neg)
%timeit -r1 for X, y in imdb_dataset(train_pos, train_neg).repeat(10): pass
```
Now it takes about 34 seconds to go through the dataset 10 times. That's much slower, essentially because the dataset is not cached in RAM, so it must be reloaded at each epoch. If you add `.cache()` just before `.repeat(10)`, you will see that this implementation will be about as fast as the previous one.
```
%timeit -r1 for X, y in imdb_dataset(train_pos, train_neg).cache().repeat(10): pass
batch_size = 32
train_set = imdb_dataset(train_pos, train_neg).shuffle(25000).batch(batch_size).prefetch(1)
valid_set = imdb_dataset(valid_pos, valid_neg).batch(batch_size).prefetch(1)
test_set = imdb_dataset(test_pos, test_neg).batch(batch_size).prefetch(1)
```
### d.
_Exercise: Create a binary classification model, using a `TextVectorization` layer to preprocess each review. If the `TextVectorization` layer is not yet available (or if you like a challenge), try to create your own custom preprocessing layer: you can use the functions in the `tf.strings` package, for example `lower()` to make everything lowercase, `regex_replace()` to replace punctuation with spaces, and `split()` to split words on spaces. You should use a lookup table to output word indices, which must be prepared in the `adapt()` method._
Let's first write a function to preprocess the reviews, cropping them to 300 characters, converting them to lower case, then replacing `<br />` and all non-letter characters to spaces, splitting the reviews into words, and finally padding or cropping each review so it ends up with exactly `n_words` tokens:
```
def preprocess(X_batch, n_words=50):
shape = tf.shape(X_batch) * tf.constant([1, 0]) + tf.constant([0, n_words])
Z = tf.strings.substr(X_batch, 0, 300)
Z = tf.strings.lower(Z)
Z = tf.strings.regex_replace(Z, b"<br\\s*/?>", b" ")
Z = tf.strings.regex_replace(Z, b"[^a-z]", b" ")
Z = tf.strings.split(Z)
return Z.to_tensor(shape=shape, default_value=b"<pad>")
X_example = tf.constant(["It's a great, great movie! I loved it.", "It was terrible, run away!!!"])
preprocess(X_example)
```
Now let's write a second utility function that will take a data sample with the same format as the output of the `preprocess()` function, and will output the list of the top `max_size` most frequent words, ensuring that the padding token is first:
```
from collections import Counter
def get_vocabulary(data_sample, max_size=1000):
preprocessed_reviews = preprocess(data_sample).numpy()
counter = Counter()
for words in preprocessed_reviews:
for word in words:
if word != b"<pad>":
counter[word] += 1
return [b"<pad>"] + [word for word, count in counter.most_common(max_size)]
get_vocabulary(X_example)
```
Now we are ready to create the `TextVectorization` layer. Its constructor just saves the hyperparameters (`max_vocabulary_size` and `n_oov_buckets`). The `adapt()` method computes the vocabulary using the `get_vocabulary()` function, then it builds a `StaticVocabularyTable` (see Chapter 16 for more details). The `call()` method preprocesses the reviews to get a padded list of words for each review, then it uses the `StaticVocabularyTable` to lookup the index of each word in the vocabulary:
```
class TextVectorization(keras.layers.Layer):
def __init__(self, max_vocabulary_size=1000, n_oov_buckets=100, dtype=tf.string, **kwargs):
super().__init__(dtype=dtype, **kwargs)
self.max_vocabulary_size = max_vocabulary_size
self.n_oov_buckets = n_oov_buckets
def adapt(self, data_sample):
self.vocab = get_vocabulary(data_sample, self.max_vocabulary_size)
words = tf.constant(self.vocab)
word_ids = tf.range(len(self.vocab), dtype=tf.int64)
vocab_init = tf.lookup.KeyValueTensorInitializer(words, word_ids)
self.table = tf.lookup.StaticVocabularyTable(vocab_init, self.n_oov_buckets)
def call(self, inputs):
preprocessed_inputs = preprocess(inputs)
return self.table.lookup(preprocessed_inputs)
```
Let's try it on our small `X_example` we defined earlier:
```
text_vectorization = TextVectorization()
text_vectorization.adapt(X_example)
text_vectorization(X_example)
```
Looks good! As you can see, each review was cleaned up and tokenized, then each word was encoded as its index in the vocabulary (all the 0s correspond to the `<pad>` tokens).
Now let's create another `TextVectorization` layer and let's adapt it to the full IMDB training set (if the training set did not fit in RAM, we could just use a smaller sample of the training set by calling `train_set.take(500)`):
```
max_vocabulary_size = 1000
n_oov_buckets = 100
sample_review_batches = train_set.map(lambda review, label: review)
sample_reviews = np.concatenate(list(sample_review_batches.as_numpy_iterator()),
axis=0)
text_vectorization = TextVectorization(max_vocabulary_size, n_oov_buckets,
input_shape=[])
text_vectorization.adapt(sample_reviews)
```
Let's run it on the same `X_example`, just to make sure the word IDs are larger now, since the vocabulary bigger:
```
text_vectorization(X_example)
```
Good! Now let's take a look at the first 10 words in the vocabulary:
```
text_vectorization.vocab[:10]
```
These are the most common words in the reviews.
Now to build our model we will need to encode all these word IDs somehow. One approach is to create bags of words: for each review, and for each word in the vocabulary, we count the number of occurences of that word in the review. For example:
```
simple_example = tf.constant([[1, 3, 1, 0, 0], [2, 2, 0, 0, 0]])
tf.reduce_sum(tf.one_hot(simple_example, 4), axis=1)
```
The first review has 2 times the word 0, 2 times the word 1, 0 times the word 2, and 1 time the word 3, so its bag-of-words representation is `[2, 2, 0, 1]`. Similarly, the second review has 3 times the word 0, 0 times the word 1, and so on. Let's wrap this logic in a small custom layer, and let's test it. We'll drop the counts for the word 0, since this corresponds to the `<pad>` token, which we don't care about.
```
class BagOfWords(keras.layers.Layer):
def __init__(self, n_tokens, dtype=tf.int32, **kwargs):
super().__init__(dtype=tf.int32, **kwargs)
self.n_tokens = n_tokens
def call(self, inputs):
one_hot = tf.one_hot(inputs, self.n_tokens)
return tf.reduce_sum(one_hot, axis=1)[:, 1:]
```
Let's test it:
```
bag_of_words = BagOfWords(n_tokens=4)
bag_of_words(simple_example)
```
It works fine! Now let's create another `BagOfWord` with the right vocabulary size for our training set:
```
n_tokens = max_vocabulary_size + n_oov_buckets + 1 # add 1 for <pad>
bag_of_words = BagOfWords(n_tokens)
```
We're ready to train the model!
```
model = keras.models.Sequential([
text_vectorization,
bag_of_words,
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="nadam",
metrics=["accuracy"])
model.fit(train_set, epochs=5, validation_data=valid_set)
```
We get about 75% accuracy on the validation set after just the first epoch, but after that the model makes no progress. We will do better in Chapter 16. For now the point is just to perform efficient preprocessing using `tf.data` and Keras preprocessing layers.
### e.
_Exercise: Add an `Embedding` layer and compute the mean embedding for each review, multiplied by the square root of the number of words (see Chapter 16). This rescaled mean embedding can then be passed to the rest of your model._
To compute the mean embedding for each review, and multiply it by the square root of the number of words in that review, we will need a little function:
```
def compute_mean_embedding(inputs):
not_pad = tf.math.count_nonzero(inputs, axis=-1)
n_words = tf.math.count_nonzero(not_pad, axis=-1, keepdims=True)
sqrt_n_words = tf.math.sqrt(tf.cast(n_words, tf.float32))
return tf.reduce_mean(inputs, axis=1) * sqrt_n_words
another_example = tf.constant([[[1., 2., 3.], [4., 5., 0.], [0., 0., 0.]],
[[6., 0., 0.], [0., 0., 0.], [0., 0., 0.]]])
compute_mean_embedding(another_example)
```
Let's check that this is correct. The first review contains 2 words (the last token is a zero vector, which represents the `<pad>` token). The second review contains 1 word. So we need to compute the mean embedding for each review, and multiply the first one by the square root of 2, and the second one by the square root of 1:
```
tf.reduce_mean(another_example, axis=1) * tf.sqrt([[2.], [1.]])
```
Perfect. Now we're ready to train our final model. It's the same as before, except we replaced the `BagOfWords` layer with an `Embedding` layer followed by a `Lambda` layer that calls the `compute_mean_embedding` layer:
```
embedding_size = 20
model = keras.models.Sequential([
text_vectorization,
keras.layers.Embedding(input_dim=n_tokens,
output_dim=embedding_size,
mask_zero=True), # <pad> tokens => zero vectors
keras.layers.Lambda(compute_mean_embedding),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(1, activation="sigmoid"),
])
```
### f.
_Exercise: Train the model and see what accuracy you get. Try to optimize your pipelines to make training as fast as possible._
```
model.compile(loss="binary_crossentropy", optimizer="nadam", metrics=["accuracy"])
model.fit(train_set, epochs=5, validation_data=valid_set)
```
The model is not better using embeddings (but we will do better in Chapter 16). The pipeline looks fast enough (we optimized it earlier).
### g.
_Exercise: Use TFDS to load the same dataset more easily: `tfds.load("imdb_reviews")`._
```
import tensorflow_datasets as tfds
datasets = tfds.load(name="imdb_reviews")
train_set, test_set = datasets["train"], datasets["test"]
for example in train_set.take(1):
print(example["text"])
print(example["label"])
```
| github_jupyter |
# Census Notebook
**Authorship**<br />
Original Author: Taurean Dyer<br />
Last Edit: Taurean Dyer, 9/26/2019<br />
**Test System Specs**<br />
Test System Hardware: GV100<br />
Test System Software: Ubuntu 18.04<br />
RAPIDS Version: 0.10.0a - Docker Install<br />
Driver: 410.79<br />
CUDA: 10.0<br />
**Known Working Systems**<br />
RAPIDS Versions:0.8, 0.9, 0.10
# Intro
Held every 10 years, the US census gives a detailed snapshot in time about the makeup of the country. The last census in 2010 surveyed nearly 309 million people. IPUMS.org provides researchers an open source data set with 1% to 10% of the census data set. In this notebook, we want to see how education affects total income earned in the US based on data from each census from the 1970 to 2010 and see if we can predict some results if the census was held today, according to the national average. We will go through the ETL, training the model, and then testing the prediction. We'll make every effort to get as balanced of a dataset as we can. We'll also pull some extra variables to allow for further self-exploration of gender based education and income breakdowns. On a single Titan RTX, you can run the whole notebook workflow on the 4GB dataset of 14 million rows by 44 columns in less than 3 minutes.
**Let's begin!**
## Imports
```
import pandas as pd
import numpy as np
import cuml
import cudf
import dask_cudf
import sys
import os
from pprint import pprint
import warnings
warnings.filterwarnings('ignore')
```
## Get your data!
```
import urllib.request
import time
from dask.distributed import Client, wait
from dask_cuda import LocalCUDACluster
cluster = LocalCUDACluster()
client = Client(cluster)
client
```
The ipums dataset is in our S3 bucket and zipped.
1. We'll need to create a folder for our data in the `/data` folder
1. Download the zipped data into that folder from S3
1. Load the zipped data quickly into cudf using it's read_csv() parameters
```
data_dir = '../data/census/'
if not os.path.exists(data_dir):
print('creating census data directory')
os.system('mkdir ../data/census/')
# download the IPUMS dataset
base_url = 'https://rapidsai-data.s3.us-east-2.amazonaws.com/datasets/'
fn = 'ipums_education2income_1970-2010.csv.gz'
if not os.path.isfile(data_dir+fn):
print(f'Downloading {base_url+fn} to {data_dir+fn}')
urllib.request.urlretrieve(base_url+fn, data_dir+fn)
def load_data(cached = data_dir+fn):
if os.path.exists(cached):
print('use ipums data')
X = cudf.read_csv(cached, compression='infer')
else:
print("No data found! Please check your that your data directory is ../../data/census/ and that you downloaded the data. If you did, please delete the `../../../data/census/` directory and try the above 2 cells again")
X = null
return X
df = load_data(data_dir+fn)
# limit
df = df[0:100]
print('data',df.shape)
print(df.head(5).to_pandas())
df.dtypes
original_counts = df.YEAR.value_counts()
print(original_counts) ### Remember these numbers!
```
## ETL
### Cleaning Income data
First, let's focus on cleaning out the bad values for Total Income `INCTOT`. First, let's see if there are an `N/A` values, as when we did `head()`, we saw some in other columns, like CBSERIAL
```
df['INCTOT_NA'] = df['INCTOT'].isna()
print(df.INCTOT_NA.value_counts())
```
Okay, great, there are no `N/A`s...or are there? Let's drop `INCTOT_NA` and see what our value counts look like
```
df=df.drop('INCTOT_NA', axis=1)
print(df.INCTOT.value_counts().to_pandas()) ### Wow, look how many people in America make $10,000,000! Wait a minutes...
```
Not that many people make $10M a year. Checking https://usa.ipums.org/usa-action/variables/INCTOT#codes_section, `9999999`is INCTOT's code for `N/A`. That was why when we ran `isna`, RAPIDS won't find any. Let's first create a new dataframe that is only NA values, then let's pull those encoded `N/A`s out of our working dataframe!
```
print('data',df.shape)
tdf = df.query('INCTOT == 9999999')
df = df.query('INCTOT != 9999999')
print('working data',df.shape)
print('junk count data',tdf.shape)
```
We're down by nearly 1/4 of our original dataset size. For the curious, now we should be able to get accurate Total Income data, by year, not taking into account inflation
```
print(df.groupby('YEAR')['INCTOT'].mean()) # without that cleanup, the average would have bene in the millions....
```
#### Normalize Income for inflation
Now that we have reduced our dataframe to a baseline clean data to answer our question, we should normalize the amounts for inflation. `CPI99`is the value that IPUMS uses to contian the inflation factor. All we have to do is multipy by year. Let's see how that changes the Total Income values from just above!
```
print(df.groupby('YEAR')['CPI99'].mean()) ## it just returns the CPI99
df['INCTOT'] = df['INCTOT'] * df['CPI99']
print(df.groupby('YEAR')['INCTOT'].mean()) ## let's see what we got!
```
### Cleaning Education Data
Okay, great! Now we have income cleaned up, it should also have cleaned much of our next sets of values of interes, namely Education and Education Detailed. However, there are still some `N/A`s in key variables to worry about, which can cause problmes later. Let's create a list of them...
```
suspect = ['CBSERIAL','EDUC', 'EDUCD', 'EDUC_HEAD', 'EDUC_POP', 'EDUC_MOM','EDUCD_MOM2','EDUCD_POP2', 'INCTOT_MOM','INCTOT_POP','INCTOT_MOM2','INCTOT_POP2', 'INCTOT_HEAD']
for i in range(0, len(suspect)):
df[suspect[i]] = df[suspect[i]].fillna(-1)
print(suspect[i], df[suspect[i]].value_counts())
```
Let's get drop any rows of any `-1`s in Education and Education Detailed.
```
totincome = ['EDUC','EDUCD']
for i in range(0, len(totincome)):
query = totincome[i] + ' != -1'
df = df.query(query)
print(totincome[i])
print(df.shape)
df.head().to_pandas().head()
```
Well, the good news is that we lost no further rows, start to normalize the data so when we do our OLS, one year doesn't unfairly dominate the data
## Normalize the Data
The in the last step, need to keep our data at about the same ratio as we when started (1% of the population), with the exception of 1980, which was a 5% and needs to be reduced. This is why we kept the temp dataframe `tdf` - to get the counts per year. we will find out just how many have to realize
```
print('Working data: \n', df.YEAR.value_counts())
print('junk count data: \n', tdf.YEAR.value_counts())
```
And now, so that we can do MSE, let's make all the dtypes the same.
```
df.dtypes
keep_cols = ['YEAR', 'DATANUM', 'SERIAL', 'CBSERIAL', 'HHWT', 'GQ', 'PERNUM', 'SEX', 'AGE', 'INCTOT', 'EDUC', 'EDUCD', 'EDUC_HEAD', 'EDUC_POP', 'EDUC_MOM','EDUCD_MOM2','EDUCD_POP2', 'INCTOT_MOM','INCTOT_POP','INCTOT_MOM2','INCTOT_POP2', 'INCTOT_HEAD', 'SEX_HEAD']
df = df.loc[:, keep_cols]
#df = df.drop(col for col in df.columns if col not in keep_cols)
for i in range(0, len(keep_cols)):
df[keep_cols[i]] = df[keep_cols[i]].fillna(-1)
print(keep_cols[i], df[keep_cols[i]].value_counts())
df[keep_cols[i]]= df[keep_cols[i]].astype('float64')
## I WANTED TO REDUCE THE 1980 SAMPLE HERE, BUT .SAMPLE() IS NEEDED AND NOT WORKING, UNLESS THERE IS A WORK AROUND...
```
With the important data now clean and normalized, let's start doing the regression
## Ridge Regression
We have 44 variables. The other variables may provide important predictive information. The Ridge Regression technique with cross validation to identify the best hyperparamters may be the best way to get the most accurate model. We'll have to
* define our performance metrics
* split our data into train and test sets
* train and test our model
Let's begin and see what we get!
```
# As our performance metrics we'll use a basic mean squared error and coefficient of determination implementation
def mse(y_test, y_pred):
return ((y_test.reset_index(drop=True) - y_pred.reset_index(drop=True)) ** 2).mean()
def cod(y_test, y_pred):
y_bar = y_test.mean()
total = ((y_test - y_bar) ** 2).sum()
residuals = ((y_test.reset_index(drop=True) - y_pred.reset_index(drop=True)) ** 2).sum()
return 1 - (residuals / total)
from cuml.preprocessing.model_selection import train_test_split
trainsize = .9
yCol = "EDUC"
from cuml.preprocessing.model_selection import train_test_split
from cuml.linear_model.ridge import Ridge
def train_and_score(data, clf, train_frac=0.8, n_runs=20):
mse_scores, cod_scores = [], []
for _ in range(n_runs):
X_train, X_test, y_train, y_test = cuml.preprocessing.model_selection.train_test_split(df, yCol, train_size=.9)
y_pred = clf.fit(X_train, y_train).predict(X_test)
mse_scores.append(mse(y_test, y_pred))
cod_scores.append(cod(y_test, y_pred))
return mse_scores, cod_scores
```
## Results
**Moment of truth! Let's see how our regression training does!**
```
import numpy as np
n_runs = 20
clf = Ridge()
mse_scores, cod_scores = train_and_score(df, clf, n_runs=n_runs)
print(f"median MSE ({n_runs} runs): {np.median(mse_scores)}")
print(f"median COD ({n_runs} runs): {np.median(cod_scores)}")
```
**Fun fact:** if you made INCTOT the y axis, your prediction results would not be so pretty! It just shows that your education level can be an indicator for your income, but your income is NOT a great predictor for your education level. You have better odds flipping a coin!
* median MSE (50 runs): 518189521.07548225
* median COD (50 runs): 0.425769113846303
## Next Steps/Self Study
* You can pickle the model and use it in another workflow
* You can redo the workflow with based on head of household using `EDUC`, `SEX`, and `INCTOT` for X in `X`_HEAD
* You can see the growing role of education with women in their changing role in the workforce and income with "EDUC_MOM" and "EDUC_POP
| github_jupyter |
# 采用机器翻译实现Seq2Seq
```
import sys
sys.path.append('../')
import collections
import d2l
import zipfile
from d2l.data.base import Vocab
import time
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils import data
from torch import optim
```
## Seq2Seq的结构
# Sequence to Sequence模型
### 模型:
训练

预测

### 具体结构:

### Seq2SeqEncoder实现
```
class Seq2SeqEncoder(d2l.Encoder):
def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
dropout=0, **kwargs):
super(Seq2SeqEncoder, self).__init__(**kwargs)
self.num_hiddens=num_hiddens
self.num_layers=num_layers
self.embedding = nn.Embedding(vocab_size, embed_size)
self.rnn = nn.LSTM(embed_size,num_hiddens, num_layers, dropout=dropout)
def begin_state(self, batch_size, device):
return [torch.zeros(size=(self.num_layers, batch_size, self.num_hiddens), device=device),
torch.zeros(size=(self.num_layers, batch_size, self.num_hiddens), device=device)]
def forward(self, X, *args):
X = self.embedding(X) # X shape: (batch_size, seq_len, embed_size)
X = X.transpose(0, 1) # RNN needs first axes to be time
# state = self.begin_state(X.shape[1], device=X.device)
out, state = self.rnn(X)
# The shape of out is (seq_len, batch_size, num_hiddens).
# state contains the hidden state and the memory cell
# of the last time step, the shape is (num_layers, batch_size, num_hiddens)
return out, state
encoder = Seq2SeqEncoder(vocab_size=10, embed_size=8,num_hiddens=16, num_layers=2)
X = torch.zeros((4, 7),dtype=torch.long)
output, state = encoder(X)
output.shape, len(state), state[0].shape, state[1].shape
```
### Seq2SeqDecoder实现
```
class Seq2SeqDecoder(d2l.Decoder):
def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
dropout=0, **kwargs):
super(Seq2SeqDecoder, self).__init__(**kwargs)
self.embedding = nn.Embedding(vocab_size, embed_size)
self.rnn = nn.LSTM(embed_size,num_hiddens, num_layers, dropout=dropout)
self.dense = nn.Linear(num_hiddens,vocab_size)
def init_state(self, enc_outputs, *args):
return enc_outputs[1]
def forward(self, X, state):
X = self.embedding(X).transpose(0, 1)
out, state = self.rnn(X, state)
# Make the batch to be the first dimension to simplify loss computation.
out = self.dense(out).transpose(0, 1)
return out, state
decoder = Seq2SeqDecoder(vocab_size=10, embed_size=8,num_hiddens=16, num_layers=2)
state = decoder.init_state(encoder(X))
out, state = decoder(X, state)
out.shape, len(state), state[0].shape, state[1].shape
```
### 训练
```
with open('../data/fra.txt', 'r', encoding='utf-8') as f:
raw_text = f.read()
print(raw_text[0:1000])
embed_size, num_hiddens, num_layers, dropout = 32, 32, 2, 0.0
batch_size, num_examples, max_len = 64, 1e3, 10
lr, num_epochs, ctx = 0.005, 300, d2l.try_gpu()
src_vocab, tgt_vocab, train_iter = d2l.load_data_nmt(batch_size, max_len,num_examples)
encoder = Seq2SeqEncoder(
len(src_vocab), embed_size, num_hiddens, num_layers, dropout)
decoder = Seq2SeqDecoder(
len(tgt_vocab), embed_size, num_hiddens, num_layers, dropout)
model = d2l.EncoderDecoder(encoder, decoder)
d2l.train_ch7(model, train_iter, lr, num_epochs, ctx)
```
## 测试
```
for sentence in ['Go .', 'Wow !', "I'm OK .", 'I won !']:
print(sentence + ' => ' + d2l.translate_ch7(
model, sentence, src_vocab, tgt_vocab, max_len, ctx))
```
| github_jupyter |
# Estimate car price - Introduction to Python wrapper for SAP HANA
This notebook is part of a Machine Learning project that is described and available to download on
<BR><a href="https://blogs.sap.com/2019/11/05/hands-on-tutorial-machine-learning-push-down-to-sap-hana-with-python/">https://blogs.sap.com/2019/11/05/hands-on-tutorial-machine-learning-push-down-to-sap-hana-with-python/</a>
<BR><BR>The purpose of this notebook is for you to become familiar with the most important steps to train a Machine Learning model in SAP HANA through Python. The following notebooks contain a more realistic example.
### Steps in this notebook
- Connect to SAP HANA
- Create a SAP HANA DataFrame which points to the data
- Take a brief look at the data
- Deal with missing values by ignoring all rows that are not complete
- Train a Descision Tree in SAP HANA to estimate the price of a vehicle
- Calculate the model's quality on the training data
### Documentation
- SAP HANA Python Client API for Machine Learning Algorithms:
https://help.sap.com/doc/0172e3957b5946da85d3fde85ee8f33d/latest/en-US/html/hana_ml.html
- SAP HANA Predictive Analysis Library (PAL):
https://help.sap.com/viewer/2cfbc5cf2bc14f028cfbe2a2bba60a50/latest/en-US/f652a8186a144e929a1ade7a3cb7abe8.html
- Dataset: https://www.kaggle.com/bozungu/ebay-used-car-sales-data
### Create a SAP HANA DataFrame, which points to the training data
Instantiate a connecton object to SAP HANA.
- For simplicity, to help you get started, these values are hardcoded here.
- We recommend keeping these credentials in the Secure User Store of the SAP HANA Client. Retrieving the credentials from the Secure User Store prevents having to specify these credentials in clear text. See the blog on the SAP Commmunity to which these notebooks belong, for steps on how to use that Secure User Store.
```
import hana_ml.dataframe as dataframe
conn = dataframe.ConnectionContext(userkey = 'hana_hxe', encrypt = 'true', sslValidateCertificate = 'false')
```
Create the SAP HANA DataFrame, which points to the table with historic sales. No data is extracted.
```
# Create the HANA dataframe in the structure of the specified table
df_remote = conn.table(table = 'USEDCARPRICES')
```
### Peek at the data and retrieve a small number of rows
Notice how no data is displayed, when calling the HANA DataFrame. You will only see the object type: hana_ml.dataframe.DataFrame. At the top of this page you find a link to the SAP HANA Python Client API documentation, where you find all details about the hana_ml package.
```
df_remote
```
To retrieve data into Python, you need to call the collect() function on the DataFrame object. In order to reduce the number of rows that are retrieved, use the head() function beforehand.
```
df_remote.head(3).collect()
```
### Descriptive statistics
Display most important data column statistics. All values were calculated within SAP HANA. Notice how some columns have null values. These are rows with missing values.
```
df_remote.describe().collect()
```
### Plot number of vehicles by model
The hana_ml package can also create a number of plots, whose underlying data was calculated within SAP HANA. For more specific requirements you can also push down further calculations to SAP HANA and retrieve the result with the collect() function as pandas data frame to create your own plot. Now display the number of vehicles by model.
```
%matplotlib inline
from hana_ml.visualizers.eda import EDAVisualizer
import matplotlib.pyplot as plt
f = plt.figure()
ax1 = f.add_subplot(111) # 111 refers to 1x1 grid, 1st subplot
eda = EDAVisualizer(ax1)
ax, bar_data = eda.bar_plot(data = df_remote,
column = 'MODEL',
aggregation = {'MODEL':'count'},
title = 'Number of vehicles by model')
```
### Drop rows with missing values
Many algorithms require the data to be complete without missing values. Descriptive statistics above showed that various columns miss data. There are various options to deal with such missing values, ie to impute or remove the row or column. In the following notebook we will impute. In this introductory example we remove the rows with missing values from the SAP HANA Data Frame.
<BR><BR>The rows are not removed from the physical table. They are dropped from the logical construct of the SAP HANA Data Frame. Hence any process or application that might be using the underlying data is not affected.
```
df_remote = df_remote.dropna()
```
The SAP HANA Data Frame's SELECT statement shows how the rows with missing values were filtered out.
```
df_remote.select_statement
```
### Train decision tree regression
We will train a decision tree to estimate the price. The algorithm does not support the column type INT for the target. Hence convert the PRICE column to type DOUBLE in the SAP HANA Data Frame. The data type is not changed in the physical table.
```
df_remote = df_remote.cast('PRICE', 'DOUBLE')
```
Train the decision tree with some hardcoded parameters. In the following notebooks we will search for parameters that lead to a stronger models. This notebook is just introducing the basic concept of training Machine Learning models within SAP HANA.
```
from hana_ml.algorithms.pal import trees
tree_reg = trees.DecisionTreeRegressor(algorithm = 'cart',
min_records_of_parent = 10,
min_records_of_leaf = 2,
thread_ratio = 0.4,
split_threshold = 1e-5,
model_format = 'json',
output_rules = True)
# Specify the tree's predictors
features = ['GEARBOX', 'VEHICLETYPE', 'YEAR', 'MODEL', 'HP', 'FUELTYPE', 'KILOMETER']
# Train the tree
tree_reg.fit(data = df_remote,
key = 'CAR_ID',
label = 'PRICE',
features = features)
```
Once the above cell has been executed, a model has been trained. To see the DecisionTreeRegressor function's signature move the cursor into the round brackets of the function, ie place it after conn_context=conn and press SHIFT+TAB. The signature will be shown as tooltip.
## Quality metric
Calculate the model's performance on the training data. In the following notebooks, the data will be split for training and testing, leading to a more meaninful quality indicators. We calculate R^2, the coefficient of determination.
https://en.wikipedia.org/wiki/Coefficient_of_determination
```
print('R^2 on training data: ' + str(round(tree_reg.score(data = df_remote,
key = 'CAR_ID'), 3)))
```
### Next
This was just a brief introduction to the concept of the Python wrapper for SAP HANA. In the following notebooks we will create a much stronger model!
| github_jupyter |
```
#from scipy.io import loadmat
#import h5py
import xarray as xr
import numpy as np
#PLOTTING
import cartopy
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from matplotlib.colorbar import Colorbar
import matplotlib.ticker as mticker
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
#resampling to grid
#from pyresample.geometry import AreaDefinition
#from pyresample.geometry import GridDefinition
#from pyresample import image, geometry, load_area, save_quicklook, SwathDefinition, area_def2basemap
#from pyresample.kd_tree import resample_nearest
#from pyresample.utils import check_and_wrap
#from scipy import spatial
#import xmitgcm.llcreader as llcreader
#%matplotlib inline
#import holoviews as hv
#from holoviews.operation.datashader import regrid
#hv.extension('bokeh')
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (15,10)
#%matplotlib inline
#import holoviews as hv
#from holoviews.operation.datashader import regrid
#hv.extension('bokeh')
import glob
#where to find the data
adir_data= 'f:/data/project_data/fluxsat/orbit/'
adir_figs= 'f:/data/project_data/fluxsat/figures/'
# add land mask
#get bathymetry from ETOPO1
#fname_topo = './../../data/topo/ETOPO1_Ice_g_gmt4.grd'
fname_topo = 'f:/data/topo/ETOPO1_Ice_g_gmt4.grd'
ds = xr.open_dataset(fname_topo)
ds_topo = ds.rename_dims({'x':'lon','y':'lat'}).rename({'x':'lon','y':'lat'})
tem = ds_topo.attrs
ds_topo = ds_topo.rename({'z':'etopo_depth'})
ds_topo.etopo_depth.attrs=tem
_, index = np.unique(ds_topo['lon'], return_index=True)
ds_topo = ds_topo.isel(lon=index)
_, index = np.unique(ds_topo['lat'], return_index=True)
ds_topo = ds_topo.isel(lat=index)
#import sys
#sys.path.append('C:/Users/gentemann/Desktop/git_python/ECCOv4-py')
#import ecco_v4_py as ecco
ds_topo2 = ds_topo.interp(lat=np.arange(-90,90,0.1),lon=np.arange(-180,180,0.1))
#ds_topo2.etopo_depth.plot()
#get all filenames
filenames = glob.glob(adir_data+'3dys*.nc')
ds = xr.open_mfdataset(filenames)
da2 = ds.sec
da2
da2[:,:,0].plot()
# now lets add in some real data to the orbits
fname = adir_data + 'clayson_fluxes.nc'
ds_sst = xr.open_dataset(fname)
#interpolate onto sat data
ds_sst = ds_sst.interp(lat=np.arange(-90,90,0.1),lon=np.arange(-180,180,0.1))
grid_def_lons, grid_def_lats = np.arange(-180,180,0.1), np.arange(-90,90,0.1)
da_sst=[]
for i in range(48):
tem = np.where(da2[:,:,i]==1,ds_sst.hi[2,:,:].data,np.nan)
tem = np.expand_dims(tem,2)
tem = xr.DataArray(tem,name='sst',
coords={'lat':grid_def_lats,'lon':grid_def_lons,'orbit':[i]},
dims=('lat','lon','orbit'))
da_sst.append(tem)
da_sst2 = xr.concat(da_sst, dim='orbit')
ds_sst.sel(lon=slice(120,180),lat=slice(10,60)).hi[0,:,:].plot()
da_sst2[:,:,6:7].mean('orbit').plot()
#can put in any mask from here: https://neo.sci.gsfc.nasa.gov/view.php?datasetId=MOD_NDVI_M&date=2020-11-01
img = plt.imread('F:/data/sat_data/background_images/bluemarble_10km_august.png')
img_extent = (-180, 180, -90, 90)
#plt.imshow(img, origin='upper', extent=img_extent)
img2 = np.flip(img,0)
for i in range(3):
img2[:,:,i] = np.where(ds_topo2.etopo_depth.data>0,img2[:,:,i],np.nan)
img2 = np.flip(img2,0)
img_white = np.where(np.isfinite(img2),img2,1.0)
plt.imshow(img_white, origin='upper', extent=img_extent)
dy = da_sst2[:,:,6:12].mean(dim=['orbit'])
#okay this is super kludgy, but the low res doesn't interpolate across -180/180 right, it doesn't wrap
#so here i move everything to 0-360, then interpolate, then move back to -180/180
tem = ds_sst.lo.rename({'latlo':'lat','lonlo':'lon'})
tem.coords['lon'] = np.mod(tem['lon'], 360)
tem = tem.sortby(tem.lon)
dytem = dy.copy(deep=True)
dytem.coords['lon'] = np.mod(dytem['lon'], 360)
dytem = dytem.sortby(dytem.lon)
tem2 = tem.interp(lat=dytem.lat,lon=dytem.lon)
tem2.coords['lon'] = (tem2.coords['lon'] + 180) % 360 - 180
tem2 = tem2.sortby(tem2.lon)
tem = tem2
dy2 = dy.where(np.isfinite(dy),tem.data)
#dy2.sel(lon=slice(100,180),lat=slice(0,80)).plot()
#ig = plt.figure(figsize=(10, 8))
dy = dy2
dy = dy.where(ds_topo2.etopo_depth<0) #mask land regions
dy = dy.where(dy>0,np.nan)
ax = plt.subplot(111,projection=ccrs.Orthographic(180, 0))
ax.imshow(img2, origin='upper', extent=img_extent, transform=ccrs.PlateCarree())
dy.plot(ax=ax, transform=ccrs.PlateCarree(),cmap='rainbow_r',vmin=0,vmax=400,add_colorbar=False)
ax.set_global()
global_extent = ax.get_extent(crs=ccrs.PlateCarree())
ax.set_extent(global_extent[:2] + (0, 90), crs=ccrs.PlateCarree())
#ax.set_extent([0,90,-90,90], crs=ccrs.PlateCarree())
#ax.set_ylim(0,90)
plt.savefig(adir_figs+'clayson_so1.png', dpi=300)
dy = da_sst2[:,:,23:30].mean(dim=['orbit'])
#okay this is super kludgy, but the low res doesn't interpolate across -180/180 right, it doesn't wrap
#so here i move everything to 0-360, then interpolate, then move back to -180/180
tem = ds_sst.lo.rename({'latlo':'lat','lonlo':'lon'})
tem.coords['lon'] = np.mod(tem['lon'], 360)
tem = tem.sortby(tem.lon)
dytem = dy.copy(deep=True)
dytem.coords['lon'] = np.mod(dytem['lon'], 360)
dytem = dytem.sortby(dytem.lon)
tem2 = tem.interp(lat=dytem.lat,lon=dytem.lon)
tem2.coords['lon'] = (tem2.coords['lon'] + 180) % 360 - 180
tem2 = tem2.sortby(tem2.lon)
tem = tem2
dy2 = dy.where(np.isfinite(dy),tem.data)
#dy2.sel(lon=slice(100,180),lat=slice(0,80)).plot()
#ig = plt.figure(figsize=(10, 8))
dy = dy2
dy = dy.where(ds_topo2.etopo_depth<0) #mask land regions
dy = dy.where(dy>0,np.nan)
ax = plt.subplot(111,projection=ccrs.Orthographic(180, 0))
ax.imshow(img2, origin='upper', extent=img_extent, transform=ccrs.PlateCarree())
dy.plot(ax=ax, transform=ccrs.PlateCarree(),cmap='rainbow_r',vmin=0,vmax=400,add_colorbar=False)
ax.set_global()
global_extent = ax.get_extent(crs=ccrs.PlateCarree())
ax.set_extent(global_extent[:2] + (0, 90), crs=ccrs.PlateCarree())
#ax.set_extent([0,90,-90,90], crs=ccrs.PlateCarree())
#ax.set_ylim(0,90)
plt.savefig(adir_figs+'clayson_so1a.png', dpi=300)
dy = da_sst2[:,:,23:30].mean(dim=['orbit'])
#okay this is super kludgy, but the low res doesn't interpolate across -180/180 right, it doesn't wrap
#so here i move everything to 0-360, then interpolate, then move back to -180/180
tem = ds_sst.lo.rename({'latlo':'lat','lonlo':'lon'})
tem.coords['lon'] = np.mod(tem['lon'], 360)
tem = tem.sortby(tem.lon)
dytem = dy.copy(deep=True)
dytem.coords['lon'] = np.mod(dytem['lon'], 360)
dytem = dytem.sortby(dytem.lon)
tem2 = tem.interp(lat=dytem.lat,lon=dytem.lon,method='nearest')
tem2.coords['lon'] = (tem2.coords['lon'] + 180) % 360 - 180
tem2 = tem2.sortby(tem2.lon)
tem = tem2
dy2 = dy.where(np.isfinite(dy),tem.data)
#dy2.sel(lon=slice(100,180),lat=slice(0,80)).plot()
#ig = plt.figure(figsize=(10, 8))
dy = dy2
dy = dy.where(ds_topo2.etopo_depth<0) #mask land regions
dy = dy.where(dy>0,np.nan)
ax = plt.subplot(111,projection=ccrs.Orthographic(180, 0))
ax.imshow(img2, origin='upper', extent=img_extent, transform=ccrs.PlateCarree())
dy.plot(ax=ax, transform=ccrs.PlateCarree(),cmap='jet',vmin=0,vmax=400,add_colorbar=False)
ax.set_global()
global_extent = ax.get_extent(crs=ccrs.PlateCarree())
ax.set_extent(global_extent[:2] + (0, 90), crs=ccrs.PlateCarree())
#ax.set_extent([0,90,-90,90], crs=ccrs.PlateCarree())
#ax.set_ylim(0,90)
plt.savefig(adir_figs+'clayson_so1b_300dpi.png', dpi=300)
ds_sst.sel(lon=slice(100,200),lat=slice(20,70)).hi.plot()
dy = da_sst2[:,:,23:30].mean(dim=['orbit'])
#okay this is super kludgy, but the low res doesn't interpolate across -180/180 right, it doesn't wrap
#so here i move everything to 0-360, then interpolate, then move back to -180/180
tem = ds_sst.lo.rename({'latlo':'lat','lonlo':'lon'})
tem.coords['lon'] = np.mod(tem['lon'], 360)
tem = tem.sortby(tem.lon)
dytem = dy.copy(deep=True)
dytem.coords['lon'] = np.mod(dytem['lon'], 360)
dytem = dytem.sortby(dytem.lon)
tem2 = tem.interp(lat=dytem.lat,lon=dytem.lon,method='nearest')
tem2.coords['lon'] = (tem2.coords['lon'] + 180) % 360 - 180
tem2 = tem2.sortby(tem2.lon)
tem = tem2
dylo = tem
dylo = dylo.where(ds_topo2.etopo_depth<0) #mask land regions
dylo = dylo.where(dylo>0,np.nan)
#dy2 = dy.where(np.isfinite(dy),tem.data)
#dy2.sel(lon=slice(100,180),lat=slice(0,80)).plot()
#ig = plt.figure(figsize=(10, 8))
#dy = dy2
dy = dy.where(ds_topo2.etopo_depth<0) #mask land regions
dy = dy.where(dy>0,np.nan)
ax = plt.subplot(111,projection=ccrs.Orthographic(180, 0))
ax.imshow(img2, origin='upper', extent=img_extent, transform=ccrs.PlateCarree())
dylo.plot(ax=ax, transform=ccrs.PlateCarree(),cmap='gray',vmin=-400,vmax=400,add_colorbar=False)
dy.plot(ax=ax, transform=ccrs.PlateCarree(),cmap='coolwarm',vmin=-400,vmax=400,add_colorbar=False)
ax.set_global()
global_extent = ax.get_extent(crs=ccrs.PlateCarree())
ax.set_extent(global_extent[:2] + (0, 90), crs=ccrs.PlateCarree())
#ax.set_extent([0,90,-90,90], crs=ccrs.PlateCarree())
#ax.set_ylim(0,90)
plt.savefig(adir_figs+'clayson_so1c.png', dpi=300)
import cartopy.feature as cfeature
land_50m = cfeature.NaturalEarthFeature('physical', 'land', '50m',
edgecolor='face',
facecolor=cfeature.COLORS['land'])
dytem.lat
```
# NEW FIGURES with degraded resolution below here
```
#NEW FIGURE
lolon,lolat = np.arange(0,360), np.arange(-90,90)
lolon2,lolat = np.arange(-180,180), np.arange(-90,90)
dy = da_sst2[:,:,23:30].mean(dim=['orbit'])
#okay this is super kludgy, but the low res doesn't interpolate across -180/180 right, it doesn't wrap
#so here i move everything to 0-360, then interpolate, then move back to -180/180
tem = ds_sst.lo.rename({'latlo':'lat','lonlo':'lon'})
tem.coords['lon'] = np.mod(tem['lon'], 360)
tem = tem.sortby(tem.lon)
dytem = dy.copy(deep=True)
dytem.coords['lon'] = np.mod(dytem['lon'], 360)
dytem = dytem.sortby(dytem.lon)
tem2 = tem.interp(lat=lolat,lon=lolon,method='nearest')
#tem2 = tem.interp(lat=dytem.lat,lon=dytem.lon,method='nearest')
tem2.coords['lon'] = (tem2.coords['lon'] + 180) % 360 - 180
tem2 = tem2.sortby(tem2.lon)
tem = tem2
dylo = tem
lotopo = ds_topo2.interp(lat=lolat,lon=lolon2,method='nearest')
dylo = dylo.where(lotopo.etopo_depth<0) #mask land regions
dylo = dylo.where(dylo>0,np.nan)
dy = dy.where(ds_topo2.etopo_depth<0) #mask land regions
dy = dy.where(dy>0,np.nan)
ax = plt.subplot(111,projection=ccrs.Orthographic(180, 0))
dylo.plot(ax=ax, transform=ccrs.PlateCarree(),cmap='gray_r',vmin=-400,vmax=400,add_colorbar=False)
dy.plot(ax=ax, transform=ccrs.PlateCarree(),cmap='jet',vmin=-400,vmax=400,add_colorbar=False)
ax.set_global()
ax.add_feature(cfeature.LAND, color='silver')
global_extent = ax.get_extent(crs=ccrs.PlateCarree())
ax.set_extent(global_extent[:2] + (0, 90), crs=ccrs.PlateCarree())
plt.savefig(adir_figs+'clayson_so1c_300dpi.png', dpi=300)
plt.savefig(adir_figs+'clayson_so1c_300dpi.pdf', dpi=300)
lolon,lolat = np.arange(0,360), np.arange(-90,90)
lolon2,lolat = np.arange(-180,180), np.arange(-90,90)
dy = da_sst2[:,:,6:12].mean(dim=['orbit'])
#okay this is super kludgy, but the low res doesn't interpolate across -180/180 right, it doesn't wrap
#so here i move everything to 0-360, then interpolate, then move back to -180/180
tem = ds_sst.lo.rename({'latlo':'lat','lonlo':'lon'})
tem.coords['lon'] = np.mod(tem['lon'], 360)
tem = tem.sortby(tem.lon)
dytem = dy.copy(deep=True)
dytem.coords['lon'] = np.mod(dytem['lon'], 360)
dytem = dytem.sortby(dytem.lon)
tem2 = tem.interp(lat=lolat,lon=lolon,method='nearest')
#tem2 = tem.interp(lat=dytem.lat,lon=dytem.lon,method='nearest')
tem2.coords['lon'] = (tem2.coords['lon'] + 180) % 360 - 180
tem2 = tem2.sortby(tem2.lon)
tem = tem2
dylo = tem
lotopo = ds_topo2.interp(lat=lolat,lon=lolon2,method='nearest')
dylo = dylo.where(lotopo.etopo_depth<0) #mask land regions
dylo = dylo.where(dylo>0,np.nan)
dy = dy.where(ds_topo2.etopo_depth<0) #mask land regions
dy = dy.where(dy>0,np.nan)
ax = plt.subplot(111,projection=ccrs.Orthographic(180, 0))
dylo.plot(ax=ax, transform=ccrs.PlateCarree(),cmap='gray_r',vmin=-400,vmax=400,add_colorbar=False)
dy.plot(ax=ax, transform=ccrs.PlateCarree(),cmap='jet',vmin=-400,vmax=400,add_colorbar=False)
ax.set_global()
ax.add_feature(cfeature.LAND, color='silver')
global_extent = ax.get_extent(crs=ccrs.PlateCarree())
ax.set_extent(global_extent[:2] + (0, 90), crs=ccrs.PlateCarree())
plt.savefig(adir_figs+'clayson_so1a_300dpi.png', dpi=300)
plt.savefig(adir_figs+'clayson_so1a_300dpi.pdf', dpi=300)
lolon,lolat = np.arange(0,360), np.arange(-90,90)
lolon2,lolat = np.arange(-180,180), np.arange(-90,90)
dy = da_sst2[:,:,40:48].mean(dim=['orbit'])
#okay this is super kludgy, but the low res doesn't interpolate across -180/180 right, it doesn't wrap
#so here i move everything to 0-360, then interpolate, then move back to -180/180
tem = ds_sst.lo.rename({'latlo':'lat','lonlo':'lon'})
tem.coords['lon'] = np.mod(tem['lon'], 360)
tem = tem.sortby(tem.lon)
dytem = dy.copy(deep=True)
dytem.coords['lon'] = np.mod(dytem['lon'], 360)
dytem = dytem.sortby(dytem.lon)
tem2 = tem.interp(lat=lolat,lon=lolon,method='nearest')
#tem2 = tem.interp(lat=dytem.lat,lon=dytem.lon,method='nearest')
tem2.coords['lon'] = (tem2.coords['lon'] + 180) % 360 - 180
tem2 = tem2.sortby(tem2.lon)
tem = tem2
dylo = tem
lotopo = ds_topo2.interp(lat=lolat,lon=lolon2,method='nearest')
dylo = dylo.where(lotopo.etopo_depth<0) #mask land regions
dylo = dylo.where(dylo>0,np.nan)
dy = dy.where(ds_topo2.etopo_depth<0) #mask land regions
dy = dy.where(dy>0,np.nan)
ax = plt.subplot(111,projection=ccrs.Orthographic(180, 0))
dylo.plot(ax=ax, transform=ccrs.PlateCarree(),cmap='gray_r',vmin=-400,vmax=400,add_colorbar=False)
dy.plot(ax=ax, transform=ccrs.PlateCarree(),cmap='jet',vmin=-400,vmax=400,add_colorbar=False)
ax.set_global()
ax.add_feature(cfeature.LAND, color='silver')
global_extent = ax.get_extent(crs=ccrs.PlateCarree())
ax.set_extent(global_extent[:2] + (0, 90), crs=ccrs.PlateCarree())
plt.savefig(adir_figs+'clayson_so1b_300dpi.png', dpi=300)
plt.savefig(adir_figs+'clayson_so1b_300dpi.pdf', dpi=300)
ax = plt.subplot(111,projection=ccrs.Orthographic(180, 0))
ax.imshow(img2, origin='upper', extent=img_extent, transform=ccrs.PlateCarree())
dylo.plot(ax=ax, transform=ccrs.PlateCarree(),cmap='gray_r',vmin=-400,vmax=400,add_colorbar=False)
dy.plot(ax=ax, transform=ccrs.PlateCarree(),cmap='jet',vmin=-400,vmax=400,add_colorbar=False)
ax.set_global()
global_extent = ax.get_extent(crs=ccrs.PlateCarree())
ax.set_extent(global_extent[:2] + (0, 90), crs=ccrs.PlateCarree())
#ax.set_extent([0,90,-90,90], crs=ccrs.PlateCarree())
#ax.set_ylim(0,90)
plt.savefig(adir_figs+'clayson_so1c_300dpi.pdf', dpi=300)
```
# shannon data in figure here
```
# now lets add in some real data to the orbits
fname = adir_data + 'clayson_fluxes_hilo.nc'
ds_sst = xr.open_dataset(fname)
#interpolate onto sat data
ds_sst = ds_sst.interp(lat=np.arange(-90,90,0.1),lon=np.arange(-180,180,0.1))
grid_def_lons, grid_def_lats = np.arange(-180,180,0.1), np.arange(-90,90,0.1)
da_sst=[]
for i in range(48):
tem = np.where(da2[:,:,i]==1,ds_sst.hi[2,:,:].data,np.nan)
tem = np.expand_dims(tem,2)
tem = xr.DataArray(tem,name='sst',
coords={'lat':grid_def_lats,'lon':grid_def_lons,'orbit':[i]},
dims=('lat','lon','orbit'))
da_sst.append(tem)
da_sst2 = xr.concat(da_sst, dim='orbit')
#NEW FIGURE
lolon,lolat = np.arange(0,360), np.arange(-90,90)
lolon2,lolat = np.arange(-180,180), np.arange(-90,90)
dy = da_sst2[:,:,23:30].mean(dim=['orbit'])
dylo = ds_sst.lo[3,:,:]#.rename({'lat':'lat','lon':'lon'})
dylo = dylo.where(ds_topo2.etopo_depth<0) #mask land regions
dylo = dylo.where(dylo>0,np.nan)
dy = dy.where(ds_topo2.etopo_depth<0) #mask land regions
dy = dy.where(dy>0,np.nan)
ax = plt.subplot(111,projection=ccrs.Orthographic(180, 0))
dylo.plot(ax=ax, transform=ccrs.PlateCarree(),cmap='gray_r',vmin=-400,vmax=400,add_colorbar=False)
dy.plot(ax=ax, transform=ccrs.PlateCarree(),cmap='jet',vmin=-400,vmax=400,add_colorbar=False)
ax.set_global()
ax.add_feature(cfeature.LAND, color='silver')
global_extent = ax.get_extent(crs=ccrs.PlateCarree())
ax.set_extent(global_extent[:2] + (0, 90), crs=ccrs.PlateCarree())
plt.savefig(adir_figs+'clayson_so1c2_300dpi.png', dpi=300)
#plt.savefig(adir_figs+'clayson_so1c2_300dpi.pdf', dpi=300)
```
| github_jupyter |
# Black Scholes Model
In this notebook we illustrate the basic properties of the Black Scholes model.
The notebook is structured as follows:
1. Black-Scholes model code
2. Analysis of value function
3. Analysis of Greeks, i.e. sensitivities to model parameters
## Black-Scholes Model Code
We use a couple of standard Python modules.
```
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq
import plotly.express as px
import plotly.graph_objects as go
```
As a basic building block we implement the Black formula.
$$
\begin{aligned}
\text{Black}\left(F,K,\nu,\phi\right) &=\phi\,\left[F\,\Phi\left(\phi d_{1}\right)-K\,\Phi\left(\phi d_{2}\right)\right],\\
d_{1,2}&=\frac{\log\left(F/K\right)}{\nu}\pm\frac{\nu}{2}.
\end{aligned}
$$
```
def BlackOverK(moneyness, nu, callOrPut):
d1 = np.log(moneyness) / nu + nu / 2.0
d2 = d1 - nu
return callOrPut * (moneyness*norm.cdf(callOrPut*d1)-norm.cdf(callOrPut*d2))
def Black(forward, strike, nu, callOrPut):
if nu<1.0e-12: # assume zero
return np.maximum(callOrPut*(forward-strike),0.0) # intrinsic value
return strike * BlackOverK(forward/strike,nu,callOrPut)
def BlackImpliedVol(price, strike, forward, T, callOrPut):
def objective(nu):
return Black(forward, strike, nu, callOrPut) - price
return brentq(objective,0.01*np.sqrt(T), 1.00*np.sqrt(T), xtol=1.0e-8) / np.sqrt(T)
def BlackVega(strike, forward, sigma, T):
stdDev = sigma*np.sqrt(T)
d1 = np.log(forward/strike) / stdDev + stdDev / 2.0
return forward * norm.pdf(d1) * np.sqrt(T)
```
## Analysis of Value Function
$$
v(s,T) = e^{-rT}\,\text{Black}\left(s\,e^{rT},K,\sigma\sqrt{T},\phi\right),
$$
```
def BlackScholesPrice(underlying, strike, rate, sigma, T, callOrPut):
df = np.exp(-rate*T)
nu = sigma*np.sqrt(T)
return df * Black(underlying/df, strike, nu, callOrPut)
```
We need to specify some sensible model and product parameters.
```
r = 0.01 # 1% risk-free rate is a sensible choice in current low-interest rate market environment
sigma = 0.15 # typical values for annualised equity volatility is between 10% - 25%
K = 1.0 # the strike should be in the order of the underlying asset; we will assume S~O(1)
phi = 1.0 # call or put
```
We want to see the value function for a grid of maturities $[0,T_{end}]$ and underlying risky asset prices $(0, S_{max}]$.
```
T = np.linspace(0.0, 2.0, 201)
S = np.linspace(0.01, 2.0, 200)
```
Now, we can calculate the call option prices.
```
v = lambda s, t : BlackScholesPrice(s, K, r, sigma, t, phi)
v_sT = np.array([ v(S,t) for t in T ]).transpose()
print(v_sT.shape)
fig = go.Figure(data=[go.Surface(x=T, y=S, z=v_sT)])
fig.update_layout(
title='Black-Scholes Value Function',
scene = dict(
xaxis = dict(
title = 'T',
),
yaxis = dict(
title = 's',
),
zaxis = dict(
title = 'v',
),
),
width=1200, height=800, autosize=False,
margin=dict(l=65, r=50, b=65, t=90),
)
fig.show()
```
## Analysis of Greeks
Greeks represent sensitivities of the value function with respect to changes in the model parameters.
### Delta
$$
\Delta_{BS}(s,T)=\frac{d}{ds}v(s,T) = \phi\,\Phi\left(\phi d_{1}\right).
$$
```
def BlackScholesDelta(underlying, strike, rate, sigma, T, callOrPut):
moneyness = np.exp(rate*T) * underlying / strike
nu = sigma * np.sqrt(T)
d1 = np.log(moneyness) / nu + nu / 2.0
return callOrPut * norm.cdf(callOrPut * d1)
```
We calculate the Delta for a range of underlyings and times.
```
T = np.linspace(0.01, 2.0, 200)
S = np.linspace(0.01, 2.0, 200)
Delta = lambda s, t : BlackScholesDelta(s, K, r, sigma, t, phi)
dv_ds = np.array([ Delta(S,t) for t in T ]).transpose()
print(dv_ds.shape)
# Check Delta via finite differences
eps = 1.0e-4
Delta_FD = lambda s, t : (BlackScholesPrice(s+eps, K, r, sigma, t, phi) - BlackScholesPrice(s-eps, K, r, sigma, t, phi))/2/eps
dv_ds_FD = np.array([ Delta_FD(S,t) for t in T ]).transpose()
print(np.max(np.abs(dv_ds-dv_ds_FD)))
```
And we plot the resulting sensitivity.
```
fig = go.Figure(data=[go.Surface(x=T, y=S, z=dv_ds)])
fig.update_layout(
title='Black-Scholes Delta',
scene = dict(
xaxis = dict(
title = 'T',
),
yaxis = dict(
title = 's',
),
zaxis = dict(
title = 'Delta',
),
),
width=1200, height=800, autosize=False,
margin=dict(l=65, r=50, b=65, t=90),
)
fig.show()
```
### Gamma
$$
\Gamma_{BS} = \frac{d}{ds}\Delta_{BS}(s,T)=\frac{d^{2}}{ds^{2}}v(s,T) = \frac{\Phi'\left(d_{1}\right)}{s\,\sigma\sqrt{T}}.
$$
```
def BlackScholesGamma(underlying, strike, rate, sigma, T, callOrPut):
moneyness = np.exp(rate*T) * underlying / strike
nu = sigma * np.sqrt(T)
d1 = np.log(moneyness) / nu + nu / 2.0
return norm.pdf(d1) / underlying / nu
```
We calculate the Gamma for a range of underlyings and times.
```
T = np.linspace(0.1, 2.0, 200)
S = np.linspace(0.01, 2.0, 200)
Gamma = lambda s, t : BlackScholesGamma(s, K, r, sigma, t, phi)
d2v_ds2 = np.array([ Gamma(S,t) for t in T ]).transpose()
print(d2v_ds2.shape)
# Check Gamma via finite differences
eps = 1.0e-4
Gamma_FD = lambda s, t : (BlackScholesPrice(s+eps, K, r, sigma, t, phi) - 2 * BlackScholesPrice(s, K, r, sigma, t, phi) + BlackScholesPrice(s-eps, K, r, sigma, t, phi))/eps**2
d2v_ds2_FD = np.array([ Gamma_FD(S,t) for t in T ]).transpose()
print(np.max(np.abs(d2v_ds2 - d2v_ds2_FD)))
fig = go.Figure(data=[go.Surface(x=T, y=S, z=d2v_ds2)])
fig.update_layout(
title='Black-Scholes Gamma',
scene = dict(
xaxis = dict(
title = 'T',
),
yaxis = dict(
title = 's',
),
zaxis = dict(
title = 'Gamma',
),
),
width=1200, height=800, autosize=False,
margin=dict(l=65, r=50, b=65, t=90),
)
fig.show()
```
### Theta
$$
\Theta_{BS}(s,T)=\frac{d}{dT}v(s,T) = \frac{s\,\Phi'\left(d_{1}\right)\,\sigma}{2\,\sqrt{T}}+\phi\,r\,K\,e^{-rT}\,\Phi\left(\phi d_{2}\right)
$$
```
def BlackScholesTheta(underlying, strike, rate, sigma, T, callOrPut):
moneyness = np.exp(rate*T) * underlying / strike
nu = sigma * np.sqrt(T)
d1 = np.log(moneyness) / nu + nu / 2.0
d2 = d1 - nu
return underlying * norm.pdf(d1) * sigma / 2 / np.sqrt(T) + \
callOrPut * rate * strike * np.exp(-rate*T) * norm.cdf(callOrPut * d2)
```
We calculate the Theta for a range of underlyings and times.
```
T = np.linspace(0.1, 2.0, 200)
S = np.linspace(0.01, 2.0, 200)
Theta = lambda s, t : BlackScholesTheta(s, K, r, sigma, t, phi)
dv_dT = np.array([ Theta(S,t) for t in T ]).transpose()
print(dv_dT.shape)
# Check Theta via finite differences
eps = 1.0e-4
Theta_FD = lambda s, t : (BlackScholesPrice(s, K, r, sigma, t+eps, phi) - BlackScholesPrice(s, K, r, sigma, t-eps, phi))/2/eps
dv_dT_FD = np.array([ Theta_FD(S,t) for t in T ]).transpose()
print(np.max(np.abs(dv_dT - dv_dT_FD)))
fig = go.Figure(data=[go.Surface(x=T, y=S, z=dv_dT)])
fig.update_layout(
title='Black-Scholes Theta',
scene = dict(
xaxis = dict(
title = 'T',
),
yaxis = dict(
title = 's',
),
zaxis = dict(
title = 'Theta',
),
),
width=1200, height=800, autosize=False,
margin=dict(l=65, r=50, b=65, t=90),
)
fig.show()
```
### Black-Scholes PDE
We calculate the linear operator
$$
{\cal L}\left[v\right]=-\frac{dv}{dT}+r\,s\,\frac{dv}{ds}+\frac{1}{2}\,\sigma^{2}\,s^{2}\,\frac{d^{2}v}{ds^{2}}-r\,v.
$$
And verify that ${\cal L}\left[v\right]=0$.
```
T = np.linspace(0.1, 2.0, 200)
S = np.linspace(0.01, 2.0, 200)
L_v = lambda s, T : -Theta(s,T) + r * s * Delta(s,T) + 0.5 * sigma**2 * s**2 * Gamma(s,T) - r * v(s,T)
L_v_sT = np.array([ L_v(S,t) for t in T ]).transpose()
print(L_v_sT.shape)
fig = go.Figure(data=[go.Surface(x=T, y=S, z=L_v_sT)])
fig.update_layout(
title='Black-Scholes Operator',
scene = dict(
xaxis = dict(
title = 'T',
),
yaxis = dict(
title = 's',
),
zaxis = dict(
title = 'L[v]',
),
),
width=1200, height=800, autosize=False,
margin=dict(l=65, r=50, b=65, t=90),
)
fig.show()
```
### Rho
$$
\varrho_{BS}(s,T)=\frac{d}{dr}v(s,T) = \phi\,K\,T\,e^{-rT}\,\Phi\left(\phi d_{2}\right).
$$
```
def BlackScholesRho(underlying, strike, rate, sigma, T, callOrPut):
moneyness = np.exp(rate*T) * underlying / strike
nu = sigma * np.sqrt(T)
d1 = np.log(moneyness) / nu + nu / 2.0
d2 = d1 - nu
return callOrPut * strike * T * np.exp(-rate*T) * norm.cdf(callOrPut * d2)
```
We calculate the Theta for a range of underlyings and times.
```
T = np.linspace(0.01, 2.0, 200)
S = np.linspace(0.01, 2.0, 200)
Rho = lambda s, t : BlackScholesRho(s, K, r, sigma, t, phi)
dv_dr = np.array([ Rho(S,t) for t in T ]).transpose()
print(dv_dr.shape)
# Check Rho via finite differences
eps = 1.0e-6
Rho_FD = lambda s, t : (BlackScholesPrice(s, K, r+eps, sigma, t, phi) - BlackScholesPrice(s, K, r-eps, sigma, t, phi))/2/eps
dv_dr_FD = np.array([ Rho_FD(S,t) for t in T ]).transpose()
print(np.max(np.abs(dv_dr - dv_dr_FD)))
fig = go.Figure(data=[go.Surface(x=T, y=S, z=dv_dr)])
fig.update_layout(
title='Black-Scholes Rho',
scene = dict(
xaxis = dict(
title = 'T',
),
yaxis = dict(
title = 's',
),
zaxis = dict(
title = 'Rho',
),
),
width=1200, height=800, autosize=False,
margin=dict(l=65, r=50, b=65, t=90),
)
fig.show()
```
### Vega
$$
\text{Vega}_{BS}(s,T)=\frac{d}{d\sigma}v(s,T) = s\,\Phi'\left(d_{1}\right)\sqrt{T}
$$
```
def BlackScholesVega(underlying, strike, rate, sigma, T, callOrPut):
moneyness = np.exp(rate*T) * underlying / strike
nu = sigma * np.sqrt(T)
d1 = np.log(moneyness) / nu + nu / 2.0
return underlying * norm.pdf(d1) * np.sqrt(T)
```
We calculate the Theta for a range of underlyings and times.
```
T = np.linspace(0.01, 2.0, 200)
S = np.linspace(0.01, 2.0, 200)
Vega = lambda s, t : BlackScholesVega(s, K, r, sigma, t, phi)
dv_dsigma = np.array([ Vega(S,t) for t in T ]).transpose()
print(dv_dr.shape)
# Check Vega via finite differences
eps = 1.0e-6
Vega_FD = lambda s, t : (BlackScholesPrice(s, K, r, sigma+eps, t, phi) - BlackScholesPrice(s, K, r, sigma-eps, t, phi))/2/eps
dv_dsigma_FD = np.array([ Vega_FD(S,t) for t in T ]).transpose()
print(np.max(np.abs(dv_dsigma - dv_dsigma_FD)))
fig = go.Figure(data=[go.Surface(x=T, y=S, z=dv_dsigma)])
fig.update_layout(
title='Black-Scholes Vega',
scene = dict(
xaxis = dict(
title = 'T',
),
yaxis = dict(
title = 's',
),
zaxis = dict(
title = 'Vega',
),
),
width=1200, height=800, autosize=False,
margin=dict(l=65, r=50, b=65, t=90),
)
fig.show()
```
# Implied Volatility Analysis
We add an analysis of market-implied volatilities.
```
S0 = 1.0 # initial asset price
T = 1.4
putStrikes = [ 0.60, 0.70, 0.80, 0.90, 1.00 ]
putPrices = [ 0.0642, 0.0943, 0.1310, 0.1761, 0.2286 ]
callStrikes = [ 1.00, 1.10, 1.20, 1.30, 1.40 ]
callPrices = [ 0.2204, 0.1788, 0.1444, 0.1157, 0.0929 ]
```
We can use strike $K=1$ and put-call parity to calculate the implied risk-free rate $r$,
$$
r = -\frac{\log\left(1+\pi_{BS}\left(C^{put}\right)-\pi_{BS}\left(C^{call}\right)\right)}{T}
$$
```
r = - np.log(1 + putPrices[-1] - callPrices[0])/T
r
```
Next, we can calculate implied volatilities for puts and calls.
```
F = np.exp(r*T) * S0
putFwdPrices = [ np.exp(r*T)*p for p in putPrices ]
callFwdPrices = [ np.exp(r*T)*p for p in callPrices ]
putVols = [ BlackImpliedVol(p,K,F,T,-1) for p, K in zip(putFwdPrices, putStrikes) ]
callVols = [ BlackImpliedVol(p,K,F,T,+1) for p, K in zip(callFwdPrices,callStrikes) ]
print(putVols[-1])
print(callVols[0])
sigma = 0.5 * (putVols[-1] + callVols[0])
```
We calculate the corresponding Black-Scholes model prices.
```
bsPut = [ BlackScholesPrice(S0,K,r,sigma,T,-1) for K in putStrikes ]
bsCall = [ BlackScholesPrice(S0,K,r,sigma,T,+1) for K in callStrikes ]
print('Puts:')
for K, P in zip(putStrikes,bsPut):
print(' %4.2f %6.4f' % (K,P))
print('Calls:')
for K, P in zip(callStrikes,bsCall):
print(' %4.2f %6.4f' % (K,P))
```
Also, we plot the resulting impled volatility smile
```
fig = go.Figure()
fig.add_trace(go.Scatter(x=putStrikes, y=putVols, name='put' ))
fig.add_trace(go.Scatter(x=callStrikes, y=callVols, name='call'))
fig.update_layout(
title='Implied Black-Scholes Volatility, T=%.2f' % T,
xaxis_title="Strike K",
yaxis_title="Implied Volatility",
width=1200, height=800, autosize=False,
margin=dict(l=65, r=50, b=65, t=90),
)
fig.show()
```
| github_jupyter |
```
from argparse import Namespace
import contextlib
import copy
import math
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from dataclasses import dataclass, field
from omegaconf import MISSING, II, open_dict
from typing import Any, Optional
from fairseq import checkpoint_utils, tasks, utils
from fairseq.dataclass import FairseqDataclass
from fairseq.dataclass.utils import convert_namespace_to_omegaconf
from fairseq.tasks import FairseqTask
from fairseq.models import (
BaseFairseqModel,
FairseqEncoder,
FairseqEncoderDecoderModel,
FairseqIncrementalDecoder,
register_model,
)
from fairseq.models.wav2vec.wav2vec2 import MASKING_DISTRIBUTION_CHOICES
from fairseq.modules import (
LayerNorm,
PositionalEmbedding,
TransformerDecoderLayer,
)
from Wav2Vec2Model import Wav2Vec2Config, Wav2Vec2Model
def convert_to_custom_config(cfg):
# Input : cfg; Config for wav2vec2 model
config = Wav2Vec2Config()
conv_layer_config = config.conv_layer_setting
encoder_config = config.encoder_setting
encoder_layer_config = encoder_config.layer_setting
# Feature Extractor Config
conv_layer_config.extractor_mode = cfg.extractor_mode
conv_layer_config.conv_feature_layers = cfg.conv_feature_layers
conv_layer_config.conv_bias = cfg.conv_bias
conv_layer_config.conv_dropout = 0.0 # by default
# Encoder Layer each Config
encoder_layer_config.encoder_embed_dim = cfg.encoder_embed_dim
encoder_layer_config.encoder_ffn_embed_dim = cfg.encoder_ffn_embed_dim
encoder_layer_config.encoder_attention_heads = cfg.encoder_attention_heads
encoder_layer_config.dropout = cfg.dropout
encoder_layer_config.attention_dropout = cfg.attention_dropout
encoder_layer_config.activation_dropout = cfg.activation_dropout
encoder_layer_config.activation_fn = cfg.activation_fn
encoder_layer_config.layer_norm_first = cfg.layer_norm_first
# Encoder Config
encoder_config.layer_setting = encoder_layer_config
encoder_config.encoder_layers = cfg.encoder_layers
encoder_config.conv_pos = cfg.conv_pos
encoder_config.conv_pos_groups = cfg.conv_pos_groups
encoder_config.encoder_layerdrop = cfg.encoder_layerdrop
# Wav2vec2 Model Config
config.conv_layer_setting = conv_layer_config
config.encoder_setting = encoder_config
config.dropout_input = cfg.dropout_input
config.dropout_features = cfg.dropout_features
config.final_dim = cfg.final_dim
config.logit_temp = cfg.logit_temp
config.quantize_targets = cfg.quantize_targets
config.quantize_input = cfg.quantize_input
config.same_quantizer = cfg.same_quantizer
config.target_glu = cfg.target_glu
config.feature_grad_mult = cfg.feature_grad_mult
config.quantizer_depth = cfg.quantizer_depth
config.quantizer_factor = cfg.quantizer_factor
config.latent_vars = cfg.latent_vars
config.latent_groups = cfg.latent_groups
config.latent_dim = cfg.latent_dim
config.mask_length = cfg.mask_length
config.mask_prob = cfg.mask_prob
config.mask_selection = cfg.mask_selection
config.mask_other = cfg.mask_other
config.no_mask_overlap = cfg.no_mask_overlap
config.mask_channel_length = cfg.mask_channel_length
config.mask_min_space = cfg.mask_min_space
config.mask_channel_prob = cfg.mask_channel_prob
config.mask_channel_before = cfg.mask_channel_before
config.mask_channel_selection = cfg.mask_channel_selection
config.mask_channel_other = cfg.mask_channel_other
config.no_mask_channel_overlap = cfg.no_mask_channel_overlap
config.mask_channel_min_space = cfg.mask_channel_min_space
config.num_negatives = cfg.num_negatives
config.negatives_from_everywhere = cfg.negatives_from_everywhere
config.cross_sample_negatives = cfg.cross_sample_negatives
config.codebook_negatives = cfg.codebook_negatives
config.latent_temp = cfg.latent_temp
return config
def Embedding(num_embeddings, embedding_dim, padding_idx):
m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx)
nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5)
nn.init.constant_(m.weight[padding_idx], 0)
return m
def Linear(in_features, out_features, bias=True):
m = nn.Linear(in_features, out_features, bias)
nn.init.xavier_uniform_(m.weight)
if bias:
nn.init.constant_(m.bias, 0.0)
return m
@dataclass
class Wav2Vec2AsrConfig(FairseqDataclass):
# Parameter settings for fine-tuning
w2v_path: str = field(
default=MISSING, metadata={"help": "path to wav2vec 2.0 model"}
)
no_pretrained_weights: bool = field(
default=False, metadata={"help": "if true, does not load pretrained weights"}
)
dropout_input: float = field(
default=0.0,
metadata={"help": "dropout to apply to the input (after feat extr)"},
)
final_dropout: float = field(
default=0.0,
metadata={"help": "dropout after transformer and before final projection"},
)
dropout: float = field(
default=0.0, metadata={"help": "dropout probability inside wav2vec 2.0 model"}
)
attention_dropout: float = field(
default=0.0,
metadata={
"help": "dropout probability for attention weights inside wav2vec 2.0 model"
},
)
activation_dropout: float = field(
default=0.0,
metadata={
"help": "dropout probability after activation in FFN inside wav2vec 2.0 model"
},
)
conv_feature_layers: Optional[str] = field(
default="[(512, 10, 5)] + [(512, 3, 2)] * 4 + [(512,2,2)] + [(512,2,2)]",
metadata={
"help": (
"string describing convolutional feature extraction "
"layers in form of a python list that contains "
"[(dim, kernel_size, stride), ...]"
),
},
)
encoder_embed_dim: Optional[int] = field(
default=768, metadata={"help": "encoder embedding dimension"}
)
# masking
apply_mask: bool = field(
default=False, metadata={"help": "apply masking during fine-tuning"}
)
mask_length: int = field(
default=10, metadata={"help": "repeat the mask indices multiple times"}
)
mask_prob: float = field(
default=0.5,
metadata={
"help": "probability of replacing a token with mask (normalized by length)"
},
)
mask_selection: MASKING_DISTRIBUTION_CHOICES = field(
default="static", metadata={"help": "how to choose masks"}
)
mask_other: float = field(
default=0,
metadata={
"help": "secondary mask argument (used for more complex distributions), "
"see help in compute_mask_indices"
},
)
no_mask_overlap: bool = field(
default=False, metadata={"help": "whether to allow masks to overlap"}
)
mask_min_space: Optional[int] = field(
default=1,
metadata={"help": "min space between spans (if no overlap is enabled)"},
)
# channel masking
mask_channel_length: int = field(
default=10, metadata={"help": "length of the mask for features (channels)"}
)
mask_channel_prob: float = field(
default=0.0, metadata={"help": "probability of replacing a feature with 0"}
)
mask_channel_selection: MASKING_DISTRIBUTION_CHOICES = field(
default="static",
metadata={"help": "how to choose mask length for channel masking"},
)
mask_channel_other: float = field(
default=0,
metadata={
"help": "secondary mask argument (used for more complex distributions), "
"see help in compute_mask_indicesh"
},
)
no_mask_channel_overlap: bool = field(
default=False, metadata={"help": "whether to allow channel masks to overlap"}
)
freeze_finetune_updates: int = field(
default=0, metadata={"help": "dont finetune wav2vec for this many updates"}
)
feature_grad_mult: float = field(
default=0.0, metadata={"help": "reset feature grad mult in wav2vec 2.0 to this"}
)
layerdrop: float = field(
default=0.0, metadata={"help": "probability of dropping a layer in wav2vec 2.0"}
)
mask_channel_min_space: Optional[int] = field(
default=1,
metadata={"help": "min space between spans (if no overlap is enabled)"},
)
mask_channel_before: bool = False
normalize: bool = II("task.normalize")
data: str = II("task.data")
# this holds the loaded wav2vec args
w2v_args: Any = None
class Wav2VecEncoder(FairseqEncoder):
def __init__(self, cfg: Wav2Vec2AsrConfig, output_size=None):
self.apply_mask = cfg.apply_mask
arg_overrides = {
"dropout": cfg.dropout,
"activation_dropout": cfg.activation_dropout,
"dropout_input": cfg.dropout_input,
"attention_dropout": cfg.attention_dropout,
"mask_length": cfg.mask_length,
"mask_prob": cfg.mask_prob,
"mask_selection": cfg.mask_selection,
"mask_other": cfg.mask_other,
"no_mask_overlap": cfg.no_mask_overlap,
"mask_channel_length": cfg.mask_channel_length,
"mask_channel_prob": cfg.mask_channel_prob,
"mask_channel_before": cfg.mask_channel_before,
"mask_channel_selection": cfg.mask_channel_selection,
"mask_channel_other": cfg.mask_channel_other,
"no_mask_channel_overlap": cfg.no_mask_channel_overlap,
"encoder_layerdrop": cfg.layerdrop,
"feature_grad_mult": cfg.feature_grad_mult,
}
if cfg.w2v_args is None:
state = checkpoint_utils.load_checkpoint_to_cpu(cfg.w2v_path, arg_overrides)
# Get the config of loaed w2v model
w2v_args = state.get("cfg", None)
if w2v_args is None:
w2v_args = convert_namespace_to_omegaconf(state["args"])
w2v_args.criterion = None
w2v_args.lr_scheduler = None
cfg.w2v_args = w2v_args
else:
state = None
w2v_args = cfg.w2v_args
if isinstance(w2v_args, Namespace):
cfg.w2v_args = w2v_args = convert_namespace_to_omegaconf(w2v_args)
# w2v_args.task -> Config for pre-training
# cfg -> Config for fine-tuning
assert cfg.normalize == w2v_args.task.normalize, (
"Fine-tuning works best when data normalization is the same. "
"Please check that --normalize is set or unset for both pre-training and here"
)
# Here, data for fine-tuning maybe...
w2v_args.task.data = cfg.data
# Does not support for loading fine-tuned parameters yet
if w2v_args.model._name == 'wav2vec_ctc':
w2v_config = w2v_args.model.w2v_args.model
elif w2v_args.model._name == 'wav2vec2':
w2v_config = w2v_args.model
else:
w2v_config = None
w2v_config = convert_to_custom_config(w2v_config)
task = tasks.setup_task(w2v_args.task)
#model = task.build_model(w2v_args.model)
model = Wav2Vec2Model(w2v_config)
if state is not None and not cfg.no_pretrained_weights:
model.load_state_dict(state["model"], strict=True)
model.remove_pretraining_modules()
super().__init__(task.source_dictionary)
d = w2v_args.model.encoder_embed_dim
self.w2v_model = model
self.final_dropout = nn.Dropout(cfg.final_dropout)
self.freeze_finetune_updates = cfg.freeze_finetune_updates
self.num_updates = 0
targ_d = None
self.proj = None
if output_size is not None:
targ_d = output_size
elif getattr(cfg, "decoder_embed_dim", d) != d:
targ_d = cfg.decoder_embed_dim
if targ_d is not None:
self.proj = Linear(d, targ_d)
def set_num_updates(self, num_updates):
"""Set the number of parameters updates."""
super().set_num_updates(num_updates)
self.num_updates = num_updates
def forward(self, source, padding_mask, **kwargs):
w2v_args = {
"source": source,
"padding_mask": padding_mask,
"mask": self.apply_mask and self.training,
}
ft = self.freeze_finetune_updates <= self.num_updates
with torch.no_grad() if not ft else contextlib.ExitStack():
res = self.w2v_model.extract_features(**w2v_args)
x = res["x"]
padding_mask = res["padding_mask"]
# B x T x C -> T x B x C
x = x.transpose(0, 1)
x = self.final_dropout(x)
if self.proj:
x = self.proj(x)
return {
"encoder_out": x, # T x B x C
"padding_mask": padding_mask, # B x T,
"layer_results": res["layer_results"],
}
def forward_torchscript(self, net_input):
if torch.jit.is_scripting():
return self.forward(net_input["source"], net_input["padding_mask"])
else:
return self.forward_non_torchscript(net_input)
def reorder_encoder_out(self, encoder_out, new_order):
if encoder_out["encoder_out"] is not None:
encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select(
1, new_order
)
if encoder_out["padding_mask"] is not None:
encoder_out["padding_mask"] = encoder_out[
"padding_mask"
].index_select(0, new_order)
return encoder_out
config = Wav2Vec2AsrConfig()
# Must get parameters without fine-tuning
# or you need your own dictionary
config.w2v_path = "/home/kangwook/fairseq/jkw/parameters/libri960_big.pt"
config.normalize = False
config
# 2nd argument is vocabulary size
model = Wav2VecEncoder(config, 30)
model
from torchinfo import summary
summary(model.w2v_model, (1, 900))
```
| github_jupyter |
```
%run -i ../python/common.py
UC_SKIPTERMS=True
%run -i ../python/ln_preamble.py
```
# SLS Lecture 8 : Writing some simple assembly programs
Spend some time writing some very simple assembly programs and learn to use the debugger so that we have enough skills to explore how things work. We will be repeat various things in more detail in future lectures.
- Write `popcnt` in assemble code
- use gdb to play with the popcnt program
- Write a simple `add` in assembly code
- use gdb to play with the add program
- using the cpu as a glorified calculator
- first pass at CPU support for "numbers"
- What happens if we let our programs continue
- how do we successfully "halt/end" our execution
- `int3` trap
- tells OS to return control to debugger
- more generally how can we make a Kernel/System Call
- revisit `add` programs adding exits
- `int3`
- `exit` syscall
- Implicitly use our shell, editor, Make and Git knowledge to do the above
## Writing a `popcnt` assembly program
- Write a one instruction assembly program
1. first using .byte
2. using intel assembly instruction
- Use gdb to explore how this instruction works
- learn to use gdb to set register values
- and how to execute and re-execute an instruction
```
display(Markdown(FileCodeBox(
file="../src/popcnt_bb.S",
lang="gas",
title="<b>CODE: asm - The 'popcnt' assembly program",
h="100%",
w="107em"
)))
display(Markdown('''
Here is a fully commented version of the same code.
'''))
display(Markdown(FileCodeBox(
file="../src/popcnt.S",
lang="gas",
title="<b>CODE: asm - The commented 'popcnt' assembly program",
h="100%",
w="107em"
)))
display(showET("Editor"))
```
We can use the `.byte` directive to set the values in memory to anything we like
eg.
``` gas
.byte 0xF3, 0x48, 0x0F, 0xB8, 0xD8
```
But of course the real value is that we could have also simply written
``` gas
popcnt rax, rbx
```
```
display(showBT())
display(Markdown(FileCodeBox(
file="popcnt_build.sh",
lang="shell",
# title="<b>NOTES: on building popcnt",
h="100%",
w="100%")))
display(showDT("Debugger"))
display(Markdown(FileCodeBox(
file="popcnt_gdb.txt",
lang="shell",
title="",
h="100%",
w="100%")))
```
## Writing an `add` assembly program
- re-enforce the steps to creating and debugging an assembly program
- begin to explore CPU support for working with "numbers"
- cpu as a calculator
- Lets work with the `add` instruction in a similar way that we did with `popcnt`
- explore the results of adding with binary, hex, unsigned and signed values
- explore overflow
- then make the program a little more complex:
``` gas
movabs rbx, 0xdeadbeefdeadbeef
mov rax, 1
add rax, rbx
```
- lets use some more cool features of the intel instruction set
``` gas
rdrand rbx
mov rax, 1
add rax, rbx
popcnt rbx, rax
```
- lets get a brief glimpse at how to use memory locations for the value
``` gas
.intel_syntax noprefix
.data
x: .quad 142
y: .quad 4200
sum: .quad
.text
.global _start
_start:
mov rax, QWORD PTR x
add rax, QWORD PTR y
mov QWORD PTR sum, rax
int3
```
- try replacing add with `imul`, `and`, `or`, `xor`
```
display(showET("Editor"))
display(Markdown(FileCodeBox(
file="../src/add.S",
lang="gas",
title="",
h="100%",
w="100%"
)))
display(showBT())
display(Markdown(FileCodeBox(
file="add_build.sh",
lang="shell",
# title="<b>NOTES: on building add",
h="100%",
w="100%")))
display(showDT())
display(Markdown(FileCodeBox(
file="add_gdb.txt",
lang="shell",
title="",
h="100%",
w="100%")))
```
## Ending / Exiting our Program/Process
- What happens if we run our programs outside of the debugger?
- why does this happen?
```
#display(showET())
#display(showBT())
display(showDT())
```
### How can we avoid this
1. TRAP: Use an instruction that tells the OS to
- stop the process and give control back to the debuggger
- if no debugger is running just kill process and signal shell
- Instruction: `int3`:
- Opcode: `0xCC`
- Description: `Interrupt 3 — trap to debugger`
2. Call OS Kernel Exit Process call
- This is an example of calling an OS Kernel call to have the kernel do something for your process
- We will look at this more but for the moment here is what is necessary to call `exit`
- pass return value to Kernel
- exit/terminate process
### Interrupt 3 `int3` -- trap to debugger
<img src="../images/int3mp.png">
```
display(Markdown(FileCodeBox(
file="../src/int3.S",
lang="gas",
title="",
h="100%",
w="100%"
)))
```
### Exit -- An OS service to terminate a process
To exit your process and return an exit value
- requires a call to the OS!
On Intel the instruction is `syscall`
<img src="../images/syscallmp.png">
### The OS System Calls
Each OS Kernel provides a set of calls that an a process can invoke using the `syscall` instruction on an Intel based computer
The Linux Kernel supports a very large number of system calls each is identified by a unique number that must be placed in `RAX` prior to executing the `syscall` instruction. Additional arguments are passed in by setting other registers.
With each version of the Kernel the table of calls changes. Here is one site that provides a list
```
display(IFrame("https://filippo.io/linux-syscall-table/", height=600, width="100%"))
```
- From the above we can see that the `exit` system call number is `60`
- reading some man pages `man syscall` and `man syscalls` we find that
- we must place `60` in `rax`
- and that the value we want to return in `rdi`
```
display(Markdown(FileCodeBox(
file="../src/exit_bb_bb.S",
lang="gas",
title="",
h="100%",
w="100%"
)))
```
Operating system code usually provides files that you can include in your code so that you don't have to hardcode magic numbers like `60` for exit. In Linux you can add the following file `#include <asm/unistd_64.h>` to get all the system call numbers. You can then use `__NR_exit` to mean the number for the exit system call.
eg.
``` gas
mov rax,__NR_exit # exit system call number
mov rdi,0 # UNIX success value is 0
syscall # call OS. This will not return
```
#### Here is a fully documented fancy version
```
display(Markdown(
'''
A commented version that avoids "magic" numbers.
''' +
FileCodeBox(
file="../src/exit.S",
lang="gas",
title="",
h="100%",
w="200%"
)))
```
| github_jupyter |
# DSCI 572 Lab 4
```
import numpy as np
import pandas as pd
import os
from sklearn.model_selection import train_test_split
from scipy.signal import convolve2d
import matplotlib.pyplot as plt
%matplotlib inline
```
To install scikit-image, use
```
conda install -c conda-forge scikit-image
```
or
```
pip install scikit-image
```
```
from skimage.color import rgb2gray
from skimage.transform import resize
plt.rcParams['font.size'] = 16
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Dense, Dropout, Flatten, Activation, Conv2D, MaxPooling2D, GlobalAveragePooling2D
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.preprocessing.image import img_to_array, load_img
from tensorflow.keras import utils
from tensorflow.keras.applications.inception_v3 import InceptionV3
```
## Instructions
rubric={mechanics:20}
Follow the [general lab instructions](https://ubc-mds.github.io/resources_pages/general_lab_instructions/).
## Exercise 1: convolutions
For each of the filters given below, convolve the given image (or a different image of your choice) with the given filter and discuss why the results look the way they do.
You can perform 2D convolutions using [`scipy.signal.convolve2d`](https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.signal.convolve2d.html).
The suggested image size is around 100x100 pixels; if the image is too big, it will be hard to see the changes by eye using the very small filters given below. If you want to make an image smaller, try [scipy.misc.imresize](https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.misc.imresize.html). This will be a lot faster than seam carving :)
Note: depending on your versions of various packages, you might get warnings when you run the code. It's OK.
```
def preprocess_image(filename):
img = plt.imread(filename) # read in the image
img = resize(img, (100,100), mode='reflect') # resize it if you want
return rgb2gray(img) # make it grayscale
def show_conv(img, filt):
plt.figure(figsize=(8,16))
plt.subplot(1,2,1)
plt.imshow(img, cmap='gray')
plt.xticks(())
plt.yticks(())
plt.title("original")
I_filt = convolve2d(img,filt, boundary='symm', mode='same')
I_filt = np.maximum(0, I_filt) # set negative values to 0, for visualization purposes
I_filt = np.minimum(1, I_filt) # set values greater than 1 to 1, for visualization purposes
plt.subplot(1,2,2)
if np.sum(filt) == 0: # a trick to make the images easier to see, not part of the "math"
plt.imshow(I_filt/np.max(I_filt), cmap='gray')
else:
plt.imshow(I_filt, cmap='gray')
plt.xticks(())
plt.yticks(())
plt.title("filtered")
return I_filt
img = preprocess_image("milad_cropped.png")
```
**Example** (you don't need to do this one)
```
ft = 0.1*np.ones(10)[None]
print(ft.shape)
print(ft)
res = show_conv(img, ft)
```
**Example answer:** The filter is a horizontal bar all containing all $0.1$s. Therefore I would expect a blurring in the horizontal direction, meaning the _vertical_ edges get blurred (because these are the ones that change rapidly in the horizontal direction). This seems to be happening in the result.
#### 1(a)
rubric={reasoning:5}
```
ft = 0.1*np.ones(10)[:,None]
print(ft.shape)
print(ft)
res = show_conv(img, ft)
```
#### 1(b)
rubric={reasoning:5}
```
ft = np.zeros((5,5))
ft[2,2] = 1
print(ft.shape)
print(ft)
res = show_conv(img, ft)
```
#### 1(c)
rubric={reasoning:5}
```
ft = 0.01*np.ones((10,10))
print(ft.shape)
res = show_conv(img, ft)
```
#### 1(d)
rubric={reasoning:5}
```
ft = -np.ones((3,3))/8
ft[1,1] = 1
print(ft.shape)
print(ft)
res6 = show_conv(img, ft)
```
#### (optional) 1(e)
rubric={reasoning:1}
Earlier in this course we talked about gradients and numerical differentiation. Think about part (d) above: does this have anything to do with the topics from earlier on? Can you relate these edge detection operations to "derivatives" or "gradients"?
Also, by the way, back in the seam carving lab of DSCI 512 we gave you a function that calculated the "energy" of an image, and we then looked for low energy seams. Here's the code we gave you:
```
from scipy.ndimage.filters import convolve
def energy(image):
dy = np.array([-1, 0, 1])[:,None,None]
dx = np.array([-1, 0, 1])[None,:,None]
energy_img = convolve(image, dx)**2 + convolve(image, dy)**2
return np.sum(energy_img, axis=2)
```
(There's no particular reason I switched from [`scipy.ndimage.filters.convolve`](https://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.ndimage.filters.convolve.html) to [`scipy.signal.convolve2d`](https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.signal.convolve2d.html); they perform the same function for our purposes.) I thought you might enjoy looking back at this formerly mysterious code with your newfound knowledge. And it's also a bit of a hint: the seam carving energy function looked for "edges" or "changes" or ... derivatives! The above actually calculates the magnitude squared of the "gradient" at every point. The whole thing should make sense now as well -- when seam carving we wanted to remove pixels for which there wasn't much going on in the immediate vicinity.
## Exercise 2. Convolutional networks for MNIST
Sorry to continue with MNIST so long. It's just _THE_ classic data set for this stuff.
Below is some code that trains a convnet on MNIST. The code is adapted from the book [Deep Learning with Python](https://machinelearningmastery.com/deep-learning-with-python/) with permission from the author.
```
# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# reshape to be [samples][channels][width][height]
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32')
# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255
# one hot encode outputs
y_train = utils.to_categorical(y_train)
y_test = utils.to_categorical(y_test)
num_classes = y_test.shape[1]
# take a subset of the data for speed
subset_size = 10000
X_train = X_train[:subset_size]
y_train = y_train[:subset_size]
# define a simple CNN model
def build_mnist_CNN():
mnist_model = Sequential()
mnist_model.add(Conv2D(32, (5, 5), input_shape=(28, 28, 1), activation='relu'))
mnist_model.add(MaxPooling2D(pool_size=(2, 2)))
mnist_model.add(Dropout(0.2))
mnist_model.add(Flatten())
mnist_model.add(Dense(128, activation='relu'))
mnist_model.add(Dense(num_classes, activation='softmax'))
# Compile model
mnist_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return mnist_model
mnist_model = build_mnist_cnn()
# Fit the model
mnist_model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=256)
# Final evaluation of the model
scores = mnist_model.evaluate(X_test, y_test, verbose=0)
print("CNN Error: %.2f%%" % (100-scores[1]*100))
```
#### 2(a)
rubric={reasoning:15}
Run the code above. How does it compare to your fully-connected ("Dense") neural net from lab 3? Discuss in 2-3 sentences. (Keep in mind that here we're only using a subset of the training data for speed.)
#### (optional) 2(b)
rubric={reasoning:1}
Let's assess what happens if we permute the rows of the images (both both training and testing). Below we permute the images, retrain the network, and re-evaluate the network. The accuracy is now lower. But we used the same data, just shuffled - can you explain why this operation hurt the accuracy?
```
perm = np.random.permutation(X_train.shape[1])
perm
n_plots = 3
for i in range(n_plots):
ind = np.random.randint(X_train.shape[0])
plt.subplot(2,2,1)
plt.imshow(X_train[ind,...,0], cmap='gray');
plt.title("Original");
plt.subplot(2,2,2)
plt.imshow(X_train[ind,perm,:,0], cmap='gray');
plt.title("Permuted");
plt.show()
```
Above: this is what a permuted training example looks like, with its rows shuffled.
```
mnist_model_perm = build_mnist_CNN()
# Fit the model
mnist_model_perm.fit(X_train[:,perm], y_train, validation_data=(X_test[:,perm], y_test), epochs=10, batch_size=256)
# Final evaluation of the model
scores = mnist_model_perm.evaluate(X_test[:,perm], y_test, verbose=0)
print("CNN Error: %.2f%%" % (100-scores[1]*100))
```
#### 2(c)
rubric={reasoning:30}
You will now deploy Keras/TensorFlow on the cloud using [Kaggle Kernels](https://www.kaggle.com/kernels). This will allow you to train on a GPU and assess the benefits of training neural networks on GPUs. Kaggle Kernels offers 30 hours of free GPU usage per account. This should be much more than adequate for this lab.
Note: last year we used [Google Colab](https://colab.research.google.com/) instead of Kaggle Kernels. That would have been fine for this exercise - they are roughly equivalent. But later in the lab, when we want to access a Kaggle dataset, Kaggle Kernels are way more convenient! (Furthermore... two years ago we used Amazon AWS and that was truly a huge hassle because they wouldn't recognize your @alumni.ubc.ca email addresses as "student email addresses".)
Follow these steps:
1. Save this Jupyter notebook so that it contains your latest work. Also push it to GitHub to be safe.
2. Go to https://www.kaggle.com/kernels
3. Make an account if you don't have one
4. Select New Notebook
7. Create
8. File->Upload notebook
9. Upload this notebook itself, lab4.ipynb, which you just saved.
5. On the right-hand side, go to Settings.
1. Make sure Internet is enabled.
1. Make sure GPU is enabled.
**SUGGESTION:** once you're done all your work on Kaggle (which means this exercise and the next one), you can download the notebook from Kaggle and overwrite your local version. That way any work you did on Kaggle won't be lost. (They allow working directly off a notebook on GitHub, but that feature won't work for us since we're using github.ubc.ca.)
Now, run the same MNIST experiment as above but on a Kaggle Kernel with the GPU active.
1. How much faster is it (as a ratio) to run the exact same code on the GPU vs. your laptop?
2. Notice the code above takes a subset of 10,000 training examples for speed. With the speed of the GPU, you should now use the full 60,000 training examples on AWS. Report your performance after 10 epochs when training on the full data set. How does it compare to the validation error you were able to get on your local machine (which presumably required using the smaller training set to run in reasonable time)?
3. Again, compare to the fully connected network from lab 3.
## Exercise 3: Transfer learning
In this exercise we will work on the concept of _transfer learning_, in which we'll use a model trained on one task as a starting point for learning to perform another task.
A natural question is, why is transfer learning helpful to us? Why can't we just train a model with the second task's objectives from the beginning?
A key motivation is the difficulty in obtaining labeled data: ususally we need a whole lot of data in order to solve complex problems, and it can be hard to collect the data. (Another motivation is the time and effort -- both computational time and human time -- needed to train the original model. Someone did the work already, so we don't have to.)
In this exercise we'll apply transfer learning to fine-grained image classification. Here, the goal is to recognize different subclasses within a higher-level class. In our case, the higher level question could be, "Is this a dog?" and the fine-grained question is, "What breed of dog is this?"
We will use the [Kaggle dog breed identification](https://www.kaggle.com/c/dog-breed-identification/data) dataset.
In the dataset, each dog image is labeled according to one of 120 breeds. We'll start with a pre-trained model that was already trained on a more high-level image classification task, namely the famous [ImageNet dataset](http://www.image-net.org/). You can see some sample ImageNet images [here](http://image-net.org/explore?wnid=n04516672).
We'll consider three approaches to the dog breed classification problem:
1. No transfer learning, just end-to-end training of a CNN directly for the dog breed classification task.
2. Use the pre-trained model as a feature extractor; then, add new layers in order to train it with the dog-breed dataset.
3. Some fine tuning of the weights of the pre-trained model (instead of freezing them all and only adding new layers, is in approach 2).
Attribution: In designing this exercise, we took some inspiration from [here](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html). But I think our version is more interesting because the classes in our new task are not part of the original task.
### Preliminaries
I am assuming you already have your Kaggle Kernel set up as in the previous exercise, with the GPU and Internet enabled. Next, you will need to add the dataset to your Kaggle Kernel. (FYI: this is the part that is so much easier with Kaggle Kernels than Google Colab, where we had to install the Kaggle API on the Colab instance, set up key-based authentication, and then upload many GB worth of data from one cloud to the other, which turned out to work fine on ubcsecure wifi but not on eduroam wifi... lessons learned!)
- Go to https://www.kaggle.com/c/dog-breed-identification/rules, make sure you're signed in to Kaggle, and accept the competition rules.
- In your Kaggle Kernel, press "+ Add Data" at the upper-right.
- From the tabs at the top, select "Competition Data" (do not skip this step!)
- Search for "dog breed identification" in the search box. It should be the first result.
- Press "Add". Note: this will cause your kernel to restart.
- When asked if you want code to load the data, you can select "No" - I already have the code for you in this notebook, below.
### What you should do
As with the previous exercise, you should do this on the GPU on Kaggle. Your task for now is to read along and, **whenever there are code cells below, you should run them as you go along.** There will be some questions interspersed in the document, **which you should answer**.
Next, we take only the first 2000 samples from the original dataset. We want to simulate having only a small labeled dataset available to us, and see the effect of transfer learning.
```
data = pd.read_csv('../input/dog-breed-identification/labels.csv')
data = data[:2000]
data['image_path'] = data.apply( lambda row: (os.path.join("../input/dog-breed-identification/train", row["id"] + ".jpg") ), axis=1)
data.head()
```
Above: you can see some of the breeds that we're predicting.
```
target_labels = data['breed']
total_classes = len(set(target_labels))
print("number of dog breeds:", total_classes)
# read images from the image directory.
images = np.array([img_to_array(
load_img(img, target_size=(256,256))
) for img in data['image_path'].values.tolist()])
images.shape
```
Above: we have 2000 images, each of size $256 \times 256$ and with 3 colour channels.
```
images = images.astype('float32')/255.0
```
Above: it's very important to scale the images!
```
plt.imshow(images[0]);
plt.grid(True);
plt.xticks([]);
plt.yticks([]);
plt.title("Breed = " + target_labels[0]);
```
Above: this is a sample image from the dog breed data set.
```
X_train, X_valid, y_train, y_valid = train_test_split(images, target_labels,
stratify=np.array(target_labels),
random_state=42)
print(X_train.shape)
print(X_valid.shape)
```
#### 3(a)
rubric={reasoning:10}
Before we start, do some EDA to assess whether there is serious class imbalance in the training data. What training accuracy would you get with `DummyClassifier`? Briefly discuss your results.
#### 3(b)
rubric={reasoning:5}
How many training examples do we have per breed of dog, roughly? In the context of other classification tasks we've done in MDS, do you consider this to be a lot or a little?
Next, we do the one-hot encoding.
```
Y_train = pd.get_dummies(y_train.reset_index(drop=True)).values
Y_valid = pd.get_dummies(y_valid.reset_index(drop=True)).values
print(Y_train.shape)
print(Y_valid.shape)
# Note: it would be better to use keras.utils.to_categorical, or something else like that,
# just in case one of the classes is absent in one of the two sets.
# But this works for now.
```
### Approach 1
Now, we try Approach 1, which is training an end-to-end CNN on the dog breed classification task.
```
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(256, 256, 3)))
model.add(Activation('relu')) # this is just different syntax for specifying the activation function
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(total_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
history = model.fit(X_train, Y_train, epochs=10, validation_data=(X_valid, Y_valid))
# FYI: it's often a good idea to save your weights after training or during training.
# But you don't have to here.
# model.save_weights('my_conv_net.h5')
```
#### 3(c)
rubric={reasoning:1}
What do you think of the results? Are you impressed?
### Approach 2
Here we load a pre-trained model and add some layers on top. The syntax is not what you're used to - that's OK, don't worry about it.
```
# Get the InceptionV3 model trained on the ImageNet data set
base_inception = InceptionV3(weights='imagenet', include_top=False, input_shape=(256, 256, 3))
```
Note the `include_top=False`. This throws away the last layer. It wasn't useful to us anyway. ImageNet has 1000 classes, but we're not interested in those classes. Another way to think of it is that the original model is a crazy feature extractor plus logistic regression for the 1000 ImageNet classes. We are using the feature extractor and discarding the logistic regression part.
```
top_block = base_inception.output
top_block = GlobalAveragePooling2D()(top_block) # pool over height/width to reduce number of parameters
top_block = Dense(256, activation='relu')(top_block) # add a Dense layer
predictions = Dense(total_classes, activation='softmax')(top_block) # add another Dense layer
model_transfer = Model(inputs=base_inception.input, outputs=predictions)
```
Above: the syntax is not what you're used to - that's OK, don't worry about it. If you want to know more, see [this documentation](https://keras.io/models/model/). However, at a high level we're grabbing the base model, doing some pooling, and then adding two new dense layers at the top.
```
for layer in base_inception.layers:
layer.trainable = False
```
Above: this is a key step. We "freeze" the layers of the base model, so that only our two new Dense layers at the top are trainable. That means we only update the weights in the new top layers - all the other weights (the ones from the base model) are fixed ("frozen") during training.
```
model_transfer.compile(Adam(lr=.001), loss='categorical_crossentropy', metrics=['accuracy'])
model_transfer.summary() # run me if you dare
```
Above: that's a lot of layers!
```
history = model_transfer.fit(X_train, Y_train, validation_data=(X_valid, Y_valid), epochs=10)
```
#### 3(d)
rubric={reasoning:1}
How does this result compare to the "from scratch" CNN?
### Approach 3
Below, we un-freeze the last "15" layers, which is really only the last one or two layers, since the list of Keras layer objects doesn't really correspond to our idea of a layer (see `model.summary()`).
```
for i, layer in enumerate(reversed(model_transfer.layers)):
layer.trainable = True
# print(layer)
if i > 15:
break
# compile the model with a SGD/momentum optimizer and a very slow learning rate.
model_transfer.compile(loss='categorical_crossentropy',
optimizer=SGD(lr=1e-4, momentum=0.9),
metrics=['accuracy'])
# fine-tune the unfrozen layers
history = model_transfer.fit(X_train, Y_train, validation_data=(X_valid, Y_valid), epochs=10)
```
#### (optional) 3(e)
rubric={reasoning:1}
Un-freezing some of the layers seems to have a small effect here. Was it actually useful at all, or could we have achieved the same results by just training our top layers for more epochs?
#### 3(f)
rubric={reasoning:5}
In Lab 3 we noticed that unlike scikit-learn's `fit`, Keras's `fit` doesn't re-initialize the weights, but rather continues on from where you were. In the above code, we benefitted from this. Briefly describe how/why this behaviour was useful to us.
#### 3(g)
rubric={reasoning:10}
Brainstorm 3 other applications of this type of transfer learning, where you use a pre-trained network plus some modifications. In each case, what is the original task and what is the new task? (It's OK if you don't actually have access to a pre-trained network to do the original task; we're just brainstorming here.)
#### (optional) 3(h)
rubric={reasoning:3}
There are two perspectives on what we did in Approach 2: one is that we froze most of the layers and just fine-tuned the last layers. The other perspective is that we used a pre-trained feature extractor and then just used a simple model on top. In the above we added 2 layers on top, but if we added just one layer on top then it would just be a softmax logistic regression. Following this second perspective, can you get reasonable results by chaining together the feature extractor and a multi-class scikit-learn `LogisticRegression`? Perhaps this would be a good use case for a scikit-learn `Pipeline`?
WARNING: I have not tried this myself, so there is a chance things will go wrong. If you get something to work, please let me know - I'm curious!
(You are now done with your Kaggle Kernel. If you were editing the file there, you should download it to your local machine before closing the Kaggle Kernel!)
## Exercise 4: Pondering
#### 4(a)
rubric={reasoning:10}
When we previously worked on the handwritten digits dataset, we did something quite silly: we "flattened" images into vectors; for example $28\times 28$ MNIST images became vectors of length $28\times 28 = 784$. This is arguably insane! One reason it's insane is that we were completely discarding the "spatial information" contained in the image and just pretended we had 784 different features, whereas convnets preserve the 2D structure and take 2D convolutions. But there is another, related reason it's a bad idea to just flatten the images... what would go wrong if we tried to use fully connected nets on larger images, like $1000 \times 1000$ pixels?
#### 4(b)
rubric={reasoning:10}
For each of the following, would increasing this quantity typically increase, decrease, or have no effect on the number of parameters of the model?
1. Dropout probability, e.g. `0.2`
2. Filter size, e.g. `(5,5)`
3. Number of filters, e.g. `32`
#### 4(c)
rubric={reasoning:10}
For each of the following, would increasing this quantity typically increase, decrease, or have no effect on the training error? No need to explain.
1. Dropout probability, e.g. `0.2`
2. Filter size, e.g. `(5,5)`
3. Number of filter, e.g. `32`
#### 4(d)
rubric={reasoning:15}
What are the pros/cons of neural nets, vs. approaches previously learned (for both regression and classification)? Choose one method from a previous course (561, 571, 573) and compare it with what you've done in deep learning. Write a paragraph summarizing the results.
-----------------
All the rest are optional; if you want to be done, you're done!
#### (optional) 4(e)
rubric={reasoning:1}
The code below shows that the MNIST model from Exercise 2 has 592,074 parameters. Explain where this number comes from by going layer by layer and accounting for all the parameters.
```
mnist_model.summary()
```
#### (optional) 4(f)
rubric={reasoning:1}
Consider this CNN architecture:
```
model = Sequential()
model.add(Conv2D(32, (5, 5), input_shape=(28, 28, 1), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))
model.summary()
```
Now, we remove (comment out) pooling from the _first_ convolutional layer:
```
model = Sequential()
model.add(Conv2D(32, (5, 5), input_shape=(28, 28, 1), activation='relu'))
# model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))
model.summary()
```
Why does this change increase the number of parameters in the 3rd (Dense) layer, but not in the 2nd (Conv2D) layer?
#### (optional) 4(g)
rubric={reasoning:1}
In the code above, the data is transformed to `float32` type. In lecture we discussed floating point representations. The main advantage of using 32-bit floating point numbers (versus 64-bit) is computational speed, but there's a disadvantage in terms of accuracy. Think about dropout and what it does/accomplishes. Did thinking about dropout alleviate your concerns about the potential pitfalls of using a smaller floating point represntation? Briefly discuss.
#### (optional) 4(h)
rubric={reasoning:1}
If you had access to 1000 GPUs, do you think you could get 1000x performance? If not, why? What are the limitations/bottlenecks?
## (optional) Exercise 5: setting priorities and time-management skills
rubric={reasoning:100}
_Rationale: admission into the MDS program is very competitive. This is great for me, because I enjoy working with such motivated and talented individuals. However, I conjecture that this same admissions process may also select for people with unrealistically high expectations of themselves. For example, I have noticed some students feel they must do all the optional questions even if they aren't particularly interested in the topic or don't really have the time to do them. If that sounds like you, this optional exercise was created for you! My hope is that none of you attempts, let alone completes, this silly exercise. In skipping it, you will need to forego the 100 bonus points.$^*$ I hope that by doing so, you will feel that it's perfectly fine to triage and skip certain lab questions: nothing bad happens! I believe this skill --- or perhaps you could call it a mindset --- will be extremely important for your success and wellbeing in the long-term. Doing everything perfectly is simply not possible forever, and when that time comes, it is important that you can set priorities. I have seen many people be ineffective at their jobs because they cannot skip the unimportant things. And now, the task..._
Do the following before the lab deadline:
- 400 push-ups
- 300 sit-ups
- recite the alphabet backwards 200 times
- memorize the names of 100 countries
- ask someone the following question: "What is $\int_1^\text{cabin} \frac{1}{x}\text{d}x$?"
You must include, with your submission, incontrovertible evidence that you did each of these things. Your word of honour will not be sufficient.
$^*$_Also, please note that all lab grades in MDS are capped at 100%. That is, you can increase your grade up to 100% with optional questions, but not beyond 100%._
#### (optional) 5(b)
rubric={reasoning:1}
If you completed the first part of this exercise, write a brief reflection in 2-3 sentences. What did you learn about setting priorities and time-management skills?
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams.update({
"text.usetex": True,
"font.family": "sans-serif",
"font.sans-serif": ["Helvetica Neue"],
"font.size": 28,
# "contour.negative_linestyle": 'solid',
})
# Define function
x_min = -24
x_max = 24
y_min = -13.5
y_max = 13.5
x = np.linspace([x_min, y_min], [x_max, y_max], 100)
gamma = 20
A = np.array([[1, 0], [0, gamma]])
def f_plot(*args):
x = np.array([x_i for x_i in args])
return f(x)
def f(x):
return .5 * x.T.dot(A.dot(x))
def df(x):
return A.dot(x)
f_vec = np.vectorize(f_plot)
X1, X2 = np.meshgrid(x[:, 0], x[:, 1])
x_star = np.array([0, 0])
# Fixed step size
for t in [0.01, 0.05, 0.1, 0.15, 2 / (1 + 20)]:
# Gradient descent
x = np.array([gamma, 1])
x_hist = [x]
for k in range(1000):
if np.linalg.norm(df(x)) < 1e-02:
print("t = %.2e, converged in %d iterations" % (t, k))
break
dx = -df(x)
# x = line_search(x, dx)
x = x + t * dx
x_hist.append(x)
# Plot
fig, ax = plt.subplots(figsize=(16, 9))
# Contour
cs = plt.contour(X1, X2, f_vec(X1, X2), colors='k')
# ax.clabel(cs, fontsize=18, inline=True)
# Gradient descent
ax.plot(*zip(*x_hist), linestyle='--', marker='o',
markersize=10, markerfacecolor='none', color='k')
# Optimal solution
ax.scatter(*zip(x_star), marker='*', s=600, color='k')
ax.set_xlim([x_min, x_max])
ax.set_ylim([y_min, y_max])
ax.set_xticks([])
ax.set_yticks([])
ax.set_aspect('equal', adjustable='box')
plt.tight_layout()
plt.savefig("gradient_descent_%.4f.pdf" % t)
# Line search
def line_search(x, dx, alpha=0.5, beta=0.9):
t = 1
for k in range(200):
f_next = f(x + t * dx)
f_extrap = f(x) + alpha * t * df(x).T.dot(dx)
if f_next <= f_extrap:
return x + t * dx
t *= beta
x = np.array([gamma, 1])
x_hist = [x]
for k in range(1000):
if np.linalg.norm(df(x)) < 1e-02:
print("Line search converged in %d iterations" % (k))
break
dx = -df(x)
x = line_search(x, dx)
x_hist.append(x)
# Plot
fig, ax = plt.subplots(figsize=(16, 9))
# Contour
cs = plt.contour(X1, X2, f_vec(X1, X2), colors='k')
# ax.clabel(cs, fontsize=18, inline=True)
# Gradient descent
ax.plot(*zip(*x_hist), linestyle='--', marker='o',
markersize=10, markerfacecolor='none', color='k')
# Optimal solution
ax.scatter(*zip(x_star), marker='*', s=600, color='k')
ax.set_xlim([x_min, x_max])
ax.set_ylim([y_min, y_max])
ax.set_xticks([])
ax.set_yticks([])
ax.set_aspect('equal', adjustable='box')
plt.tight_layout()
plt.savefig("gradient_descent_line_search.pdf")
```
| github_jupyter |
## 3DCORE with THUX
```
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from mpl_toolkits.mplot3d import axes3d
from matplotlib.colors import LightSource
from matplotlib import cm
import heliopy
import astropy
import datetime
from datetime import timedelta
import astropy.constants as const
from sunpy.time import parse_time
import heliopy.spice as spice
import heliopy.data.spice as spicedata
import seaborn as sns
import glob
from scipy.io import readsav
import os
import copy
#these are our own packages
import py3dcore #'1.1.3'
import heliosat
!pwd
#load THUX example wind
wsa_thux=np.loadtxt('data/thux/wsa-vmap-19apr2020-v2.txt')
#wsa_apr_thux=np.loadtxt('data/thux/wsa-planets-missions-19apr2020.txt',skiprows=1)
#convert matlab time to matplotlib time
#wsa_apr_thux_datetime=[None]*len(wsa_apr_thux)
#for p in np.arange(len(wsa_apr_thux)):
# wsa_apr_thux_datetime[p]= datetime.datetime.fromordinal(wsa_apr_thux[p,0].astype(int) ) + \
# datetime.timedelta(days=wsa_apr_thux[p,0]%1) - datetime.timedelta(days = 366)
#vEarth vMercury vVenus vMars vBepi vPSP vSOLO vSTEREOA vSTEREOB
#wsa_apr_thux_earth=wsa_apr_thux[:,1]
#wsa_apr_thux_solo=wsa_apr_thux[:,7]
#wsa_apr_thux_sta=wsa_apr_thux[:,8]
def plot_bgsw(ax,cbarax):
#for rotation
#k=0
#CR 2229 2020 Mar 28 0853 start
#k=14
#rotSun = 27.2753
#rotAngle = (2 * np.pi / rotSun) * k #+np.pi/2
#rotAngle = (2 * np.pi / rotSun) * k
thetaLen=180
rLen=425
r_sun = 695700.
au = 149597870.
startBGSW = 5
#grid
angle = np.deg2rad(np.arange(0, 362, 362 / thetaLen)) #+ np.deg2rad(-90) #+ rotAngle
radius = np.arange(startBGSW, rLen + startBGSW) / au * r_sun
#thetaBGSW, rBGSW = np.meshgrid(angle, radius)
t1, r1 = np.meshgrid(angle, radius)
#convert theta r to x y, so cartesian grid
X = (r1 * np.cos(t1)).T
Y = (r1 * np.sin(t1)).T
#Z = np.zeros(X.shape)
Z = np.flip(wsa_thux,axis=1).T
##### THUX
levels = np.arange(np.round(np.min(wsa_thux)), np.max(wsa_thux), 5)
#ax.plot_surface(X, Y, Z, rstride=8, cstride=8, alpha=0.3)
#cf = ax.contourf(X.T, Y.T, np.flip(wsa_thux,axis=1).T,levels,
# cmap=plt.cm.get_cmap('coolwarm'),vmin=np.min(wsa_thux),
# vmax=np.max(wsa_thux),alpha=0.6,antialiased=True, zdir='z',offset=-300)
cf = ax.contourf(X, Y, Z, levels, offset=0, cmap=cm.coolwarm,vmin=np.min(wsa_thux),vmax=np.max(wsa_thux),alpha=0.6,antialiased=True,zdir='z')
ax.set_xlim(-1.5, 1.5)
ax.set_ylim(-1.5, 1.5)
ax.set_zlim(-1.5, 1.5)
#cf = ax.contour(thetaBGSW.T, rBGSW.T, np.zeros(rBGSW.T.shape),zdir='z',np.flip(wsa_thux,axis=1).T,levels,
# cmap=plt.cm.get_cmap('coolwarm'),vmin=np.min(wsa_thux),
# vmax=np.max(wsa_thux),alpha=0.6,antialiased=True)
# This is the fix for the white lines between contour levels
#for c in cf.collections:
# c.set_edgecolor('face')
# c.set_linewidth( 0.1 )
cax = plt.axes(cbarax)
cbar = plt.colorbar(cf, cax=cax, ticks=np.arange(200, 600, 50),orientation="horizontal")
ticklabs = cbar.ax.get_yticklabels()
cbar.ax.set_yticklabels(ticklabs, fontsize=15)
cbar.set_label('Solar wind speed [km/s]', fontsize=15)
ax.grid(False)
plt.figure(1,figsize=(10, 10),dpi=100)
ax1 = plt.subplot2grid((1, 1), (0, 0),projection='3d')
cbarax1=[0.2, 0.05, 0.6, 0.02]
plot_bgsw(ax1,cbarax1)
#plot_bgsw(ax1,cbarax1)
```
## 3DCORE functions
```
def plot_configure(ax, **kwargs):
view_azim = kwargs.pop("view_azim", -25)
view_elev = kwargs.pop("view_elev", 25)
view_radius = kwargs.pop("view_radius", .5)
ax.view_init(azim=view_azim, elev=view_elev)
ax.set_xlim([-view_radius, view_radius])
ax.set_ylim([-view_radius, view_radius])
ax.set_zlim([-view_radius, view_radius])
ax.set_axis_off()
def plot_3dcore(ax, obj, t_snap, **kwargs):
kwargs["alpha"] = kwargs.pop("alpha", .05)
kwargs["color"] = kwargs.pop("color", "k")
kwargs["lw"] = kwargs.pop("lw", 1)
#ax.scatter(0, 0, 0, color="y", s=500)
model_obj.propagate(t_snap)
wf_model = model_obj.visualize_wireframe(index=0)
ax.plot_wireframe(*wf_model.T, **kwargs)
def plot_3dcore_field(ax, obj, step_size=0.005, q0=[1, .1, np.pi/2],**kwargs):
#initial point is q0
q0i =np.array(q0, dtype=np.float32).astype(np.float32)
fl = model_obj.visualize_fieldline_dpsi(q0i, dpsi=2*np.pi-0.01, step_size=step_size)
ax.plot(*fl.T, **kwargs)
def plot_traj(ax, sat, t_snap, frame="HEEQ", traj_pos=True, traj_major=1, traj_minor=None, **kwargs):
kwargs["alpha"] = kwargs.pop("alpha", 1)
kwargs["color"] = kwargs.pop("color", "k")
kwargs["lw"] = kwargs.pop("lw", 1)
kwargs["s"] = kwargs.pop("s", 5)
inst = getattr(heliosat, sat)()
_s = kwargs.pop("s")
if traj_pos:
pos = inst.trajectory(t_snap, frame)
ax.scatter(*pos.T, s=_s, **kwargs)
if traj_major and traj_major > 0:
traj = inst.trajectory([t_snap + datetime.timedelta(hours=i) for i in range(-traj_major, traj_major)], frame)
ax.plot(*traj.T, **kwargs)
if traj_minor and traj_minor > 0:
traj = inst.trajectory([t_snap + datetime.timedelta(hours=i) for i in range(-traj_minor, traj_minor)], frame)
if "ls" in kwargs:
kwargs.pop("ls")
_ls = "--"
_lw = kwargs.pop("lw") / 2
ax.plot(*traj.T, ls=_ls, lw=_lw, **kwargs)
def plot_circle(ax,dist,**kwargs):
thetac = np.linspace(0, 2 * np.pi, 100)
xc=dist*np.sin(thetac)
yc=dist*np.cos(thetac)
zc=0
ax.plot(xc,yc,zc,ls='--',color='black',lw=0.3,**kwargs)
def plot_satellite(ax,satpos1,**kwargs):
xc=satpos1[0]*np.cos(np.radians(satpos1[1]))
yc=satpos1[0]*np.sin(np.radians(satpos1[1]))
zc=0
#print(xc,yc,zc)
ax.scatter3D(xc,yc,zc,**kwargs)
def measure(obj, satpos1, t0, t1, frame="HEEQ", bframe="HEE", satparams=None):
#print(obj)
print('input')
print(t0,' / ', t1, frame, bframe)
#if satparams:
# inst = getattr(heliosat, sat)(satparams)
#else:
# inst = getattr(heliosat, sat)()
#print(inst)
#time resolution in seconds
#t_s = [datetime.datetime.fromtimestamp(_) for _ in np.array(list(range(int(t0.timestamp()), int(t1.timestamp()))))]
#position of spacecraft
#o_s = inst.trajectory(t_s, frame=frame)
#time resolution in hours
res_in_days=1/24.
t_s = []
while t0 < t1:
t_s.append(t0)
t0 += timedelta(days=res_in_days)
print('data points',len(t_s))
#generate position from satpos - always constant
o_s=np.zeros([len(t_s),3])
o_s[:,0]=satpos1[0] #R in AU
o_s[:,1]=np.radians(satpos1[1]) #longitude
o_s[:,2]=np.radians(satpos1[2]) #latitude
#print(t_s)
#print(o_s)
if satparams:
b = heliosat.spice.transform_frame([satparams] * len(t_s), np.array(obj.sim_fields(t_s, o_s))[:, 0, :], frame, bframe)
else:
b = heliosat.spice.transform_frame(t_s, np.array(obj.sim_fields(t_s, o_s))[:, 0, :], frame, bframe)
b[b == 0] = np.nan
return t_s, np.sqrt(np.sum(b**2, axis=1)), b, o_s
earth_color='blue'
solo_color='orange'
venus_color='mediumseagreen'
mercury_color='grey'
psp_color='black'
sta_color='red'
bepi_color='coral'
```
## set 3DCORE model
```
t_launch = datetime.datetime(2021, 9, 7, 18,0,0)
#2020 Dec 7: COR2 15 solar radii 18 UT
iparams_arr = np.array([[
0, # time offset
-10, # l_1 (logitude) HEEQ
0,#-20, # l_2 (latitude)
20, # o (inclination, orientation)
0.2, # d_1au (frontal width at 1AU)
3, # delta (cross-section aspect ratio)
15, # r_0 (initialization distance in solar radii)
1670, # v_0 (initial velocty in)
4, # tau (magnetic field twist)
1.0, # b_s (magnetic field scaling parameter)
25, # b_1au (magnetic field strength at 1au)
0.5, # Gamma (solar wind drag coefficient)
400, # v_sw (solar wind speed)
0 # sigma (measurement noise)
]], dtype=np.float32)
model_obj = py3dcore.models.ThinTorusGH3DCOREModel(t_launch, runs=1, use_gpu=False)
model_obj.update_iparams(iparams_arr, seed=42)
#measurement times
tm0 = t_launch + datetime.timedelta(days=1)
tm1 = t_launch + datetime.timedelta(days=2.5)
tm2 = t_launch + datetime.timedelta(days=5.0)
#colors for 3dplots
c0 = 'mediumseagreen'
c1 = "xkcd:red"
c2 = "xkcd:blue"
#colors for components in plots
cbt = "xkcd:black"
cbx = "xkcd:magenta"
cby = "xkcd:orange"
cbz = "xkcd:azure"
############# define synthetic satellite positions - semi-circle at 1 AU, from -90 to +90 longitude
lonstart=-90
lonstep=5
lonend=90
lonend=lonend+lonstep
satpos=np.zeros(len(np.arange(lonstart,lonend,lonstep)),dtype=[('r',float),('lon', float),('lat', float)])
#convert to recarray
satpos = satpos.view(np.recarray)
##### set position
satpos.r=1.0
satpos.lon=np.arange(lonstart,lonend,lonstep)
satpos.lat=0.0
print(satpos.r, satpos.lon)
#another satpos definition for a semi circle at 0.5 AU
satpos2=copy.deepcopy(satpos)
satpos2.r=0.5
```
## make plot
```
#use either
#%matplotlib
#%matplotlib inline
#matplotlib.use('Qt5Agg')
#matplotlib.use('Agg')
#%matplotlib inline
sns.set_context("talk")
#sns.set_style('whitegrid',{'grid.linestyle': '--'})
sns.set_style("ticks",{'grid.linestyle': '--'})
fsize=15
fig=plt.figure(1,figsize=(15,12),dpi=100)
ax = fig.add_subplot(111, projection='3d')
#plot_configure(ax, view_azim=0, view_elev=0, view_radius=0.8)
#in other planes
plot_configure(ax, view_azim=-60, view_elev=40, view_radius=0.9)
#plot_configure(ax, view_azim=0, view_elev=0, view_radius=0.6)
##plot sun
#define sun here so it does not need to be recalculated every time
scale=695510/149597870.700*1 #Rs in km, AU in km
# sphere with radius Rs in AU
u, v = np.mgrid[0:2*np.pi:40j, 0:np.pi:30j]
xsun = np.cos(u)*np.sin(v)*scale
ysun = np.sin(u)*np.sin(v)*scale
zsun = np.cos(v)*scale
#draw Sun
ls = LightSource(azdeg=140, altdeg=40)
ax.plot_surface(x, y, z, rstride=1, cstride=1, color='yellow',lightsource=ls, linewidth=0, antialiased=True,zorder=5)
#####plot solar wind
plot_bgsw(ax,cbarax1)
########## 3dcore plots
#plot_3dcore(ax, model_obj, tm0, color=c1)
#plot_3dcore_field(ax, model_obj, color=c1, step_size=0.005, lw=1.1, ls="-",q0=np.array([1, .1, np.pi/2]))
plot_3dcore(ax, model_obj, tm1, color='k',alpha=0.1)
plot_3dcore_field(ax, model_obj, color='b', step_size=0.01, lw=2.5, ls="-",q0=np.array([1, .1, np.pi/2]))
plot_3dcore_field(ax, model_obj, color='r', step_size=0.01, lw=2.5, ls="-",q0=np.array([0.2, .3, np.pi/2]))
plot_3dcore_field(ax, model_obj, color='g', step_size=0.01, lw=2.5, ls="-",q0=np.array([0.5, .5, np.pi/2]))
plot_3dcore_field(ax, model_obj, color='y', step_size=0.01, lw=2.5, ls="-",q0=np.array([0.8, .7, np.pi/2]))
#plot_3dcore_field(ax, model_obj, color='g', step_size=0.01, lw=1.1, ls="-")
############# satellite plots
#plot_traj(ax, "Earth", tm1, frame="HEEQ", color=c1)
#for i in np.arange(0,len(satpos)):
# plot_satellite(ax,satpos[i],color='black',alpha=0.9)
# plot_satellite(ax,satpos2[i],color='red',alpha=0.9)
plot_satellite(ax,satpos[18],color='blue',alpha=0.9)
plot_traj(ax, "Venus", tm1, frame="HEEQ", color=venus_color,alpha=1.0,s=5,zorder=5)
plot_traj(ax, "Mercury", tm1, frame="HEEQ", color=mercury_color,s=5)
plot_traj(ax, "SOLO",tm1, frame="HEEQ", color=solo_color,s=30)
plot_traj(ax, "PSP", tm1, frame="HEEQ", color=psp_color,s=30)
plot_traj(ax, "STA", tm1, frame="HEEQ", color=sta_color,s=30)
plot_traj(ax, "BEPI", tm1, frame="HEEQ", color=bepi_color,s=30)
##########cosmetics
#approximate Sun Earth line
ax.plot([0,1],[0,0],[0,0],ls='-',color='black',lw=0.3)
plot_circle(ax,0.5)
plot_circle(ax,1.0)
plot_circle(ax,1.5)
limits=0.7
ax.set_xlim(-limits,limits)
ax.set_ylim(-limits, limits)
ax.set_zlim(-limits, limits)
plt.tight_layout()
plt.savefig('results/test_thux_3dcore/test.pdf')
plt.savefig('results/test_thux_3dcore/test.png',dpi=200)
############################### measure magnetic field
print()
#18 is middle
satposindex=18
print('current satpos measured is ', satposindex)
print(satpos[satposindex])
t1, btot1, bxyz1, os1 = measure(model_obj, satpos[satposindex], tm1 - datetime.timedelta(days=3), tm1 + datetime.timedelta(days=15))
################################################
sns.set_context('talk')
sns.set_style('whitegrid')
fig = plt.figure(figsize=(15, 10),dpi=50)
ax1 = fig.add_subplot(111)
ax1.set_title('Satellite position R= 1.0 AU, longitude '+str(satpos.lon[satposindex])+' GSE')
ax1.plot(t1, btot1, color=cbt, label="$|B|$")
ax1.plot(t1, -bxyz1[:, 0], color=cbx, label="$B_x$")
ax1.plot(t1, -bxyz1[:, 1], color=cby, label="$B_y$")
ax1.plot(t1, bxyz1[:, 2], color=cbz, label="$B_z$")
ax1.legend(loc="lower left", fontsize=20,ncol=4)
ax1.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%b %d %H'))
ax1.set_ylabel('B [nT] GSE')
plt.ylim(-20,30)
plt.xlim(datetime.datetime(2020,12,10,0,0,0),datetime.datetime(2020,12,11,12,0,0))
#ax1.plot(noaa.time,noaa.bt,color=cbt)
#ax1.plot(noaa.time,noaa.bx,color=cbx)
#ax1.plot(noaa.time,noaa.by,color=cby)
#ax1.plot(noaa.time,noaa.bz,color=cbz)
plt.tight_layout()
plt.savefig('test_measure_1.png', dpi=50)
plt.savefig('test_measure_1.pdf', dpi=50)
```
| github_jupyter |
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import kenlm
from tqdm import tqdm
import fastText
import pandas as pd
from bleu import *
import torch, os
#bert classifier
from tqdm import trange
from pytorch_pretrained_bert.file_utils import PYTORCH_PRETRAINED_BERT_CACHE
from pytorch_pretrained_bert.modeling import BertForSequenceClassification, BertConfig, WEIGHTS_NAME, CONFIG_NAME
from pytorch_pretrained_bert.tokenization import BertTokenizer
model_cls = BertForSequenceClassification.from_pretrained("./bert_classifier/amazon", num_labels=2)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
model_cls.to('cuda')
model_cls.eval()
max_seq_len=70
sm = torch.nn.Softmax(dim=-1)
def evaluate_dev_set(input_sentences, labels, bs=32):
"""
To evaluate whole dataset and return accuracy
"""
ids = []
segment_ids = []
input_masks = []
pred_lt = []
for sen in input_sentences:
text_tokens = tokenizer.tokenize(sen)
if len(text_tokens) >= max_seq_len - 2:
text_tokens = text_tokens[:max_seq_len - 3]
tokens = ["[CLS]"] + text_tokens + ["[SEP]"]
temp_ids = tokenizer.convert_tokens_to_ids(tokens)
input_mask = [1] * len(temp_ids)
segment_id = [0] * len(temp_ids)
padding = [0] * (max_seq_len - len(temp_ids))
temp_ids += padding
input_mask += padding
segment_id += padding
ids.append(temp_ids)
input_masks.append(input_mask)
segment_ids.append(segment_id)
ids = torch.tensor(ids).to('cuda')
segment_ids = torch.tensor(segment_ids).to('cuda')
input_masks = torch.tensor(input_masks).to('cuda')
steps = len(ids) // bs
for i in trange(steps+1):
if i == steps:
temp_ids = ids[i * bs : len(ids)]
temp_segment_ids = segment_ids[i * bs: len(ids)]
temp_input_masks = input_masks[i * bs: len(ids)]
else:
temp_ids = ids[i * bs : i * bs + bs]
temp_segment_ids = segment_ids[i * bs: i * bs + bs]
temp_input_masks = input_masks[i * bs: i * bs + bs]
with torch.no_grad():
preds = sm(model_cls(temp_ids, temp_segment_ids, temp_input_masks))
#preds = preds.view(-1,bs)
try:
args = torch.argmax(preds, dim=-1)
pred_lt.extend(args.tolist())
except RuntimeError:
pass
accuracy = sum(np.array(pred_lt) == np.array(labels)) / len(labels)
return accuracy, pred_lt
from pytorch_pretrained_bert import GPT2Tokenizer, GPT2Model, GPT2LMHeadModel
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
import logging
logging.basicConfig(level=logging.INFO)
lm_tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
lm_model = GPT2LMHeadModel.from_pretrained('gpt2')
path = os.path.join(os.getcwd(), "GPT2/amazon_language_model_1.bin")
lm_model_state_dict = torch.load(path)
lm_model.load_state_dict(lm_model_state_dict)
lm_model.to(device)
lm_model.eval()
lm_loss = torch.nn.CrossEntropyLoss(ignore_index=-1, reduction='none')
def calculate_ppl_gpt2(sentence_batch, bs=16):
# tokenize the sentences
tokenized_ids = [None for i in range(len(sentence_batch))]
ppl = [None for i in range(len(sentence_batch))]
for i in range(len(sentence_batch)):
tokenized_ids[i] = lm_tokenizer.encode(sentence_batch[i])
sen_lengths = [len(x) for x in tokenized_ids]
max_sen_length = max(sen_lengths)
n_batch = len(sentence_batch)
input_ids = np.zeros( shape=(n_batch, max_sen_length), dtype=np.int64)
lm_labels = np.full(shape=(n_batch, max_sen_length), fill_value=-1)
for i, tokens in enumerate(tokenized_ids):
input_ids[i, :len(tokens)] = tokens
lm_labels[i, :len(tokens)-1] = tokens[1:]
input_ids = torch.tensor(input_ids)#.to(device)
lm_labels = torch.tensor(lm_labels)#.to(device)
steps = n_batch // bs
for i in range(steps+1):
if i == steps:
temp_input_ids = input_ids[i * bs : n_batch]
temp_lm_labels = lm_labels[i * bs : n_batch]
temp_sen_lengths = sen_lengths[i * bs : n_batch]
else:
temp_input_ids = input_ids[i * bs : i * bs + bs]
temp_lm_labels = lm_labels[i * bs : i * bs + bs]
temp_sen_lengths = sen_lengths[i * bs : i * bs + bs]
temp_input_ids = temp_input_ids.to('cuda')
temp_lm_labels = temp_lm_labels.to('cuda')
with torch.no_grad():
lm_pred = lm_model(temp_input_ids)
loss_val = lm_loss(lm_pred[0].view(-1, lm_pred[0].size(-1)), temp_lm_labels.view(-1))
normalized_loss = loss_val.view(len(temp_input_ids),-1).sum(dim= -1) / torch.tensor(temp_sen_lengths, dtype=torch.float32).to(device)
tmp_ppl = torch.exp(normalized_loss)
ppl[i * bs: i * bs + len(temp_input_ids)] = tmp_ppl.tolist()
return ppl
#fasttext classifier
classifier_model = fastText.load_model('fasttextmodel/amazon_model.bin')
#kenlm lm
kenlm_lm = kenlm.Model('kenlmmodel/amazon.arpa')
df = pd.read_csv('amazon_all_model_prediction_1.csv', header = None)
label = 0
label_str = '__label__0'
list_sentences = df[1:len(df)].values.tolist()
list_sentences_source = []
list_sentences_human = []
for list_sentance in list_sentences:
if(pd.isnull(list_sentance[0])):
list_sentences_source.append(" ")
else:
list_sentences_source.append(list_sentance[0])
if(pd.isnull(list_sentance[-1])):
list_sentences_human.append(" ")
else:
list_sentences_human.append(list_sentance[-1])
matrics1 = []
for i in tqdm(range(0, len(list_sentences[0]))):
bleu_s = 0
bleu_r = 0
fasttext_c = 0
kenlm_ppl = 0
gpt2_ppl = 0
sentences = []
for j in range(0, len(list_sentences)):
if(pd.isnull(list_sentences[j][i])):
sentences.append(" ")
continue
sentences.append(list_sentences[j][i])
fasttext_labels = classifier_model.predict(sentences)
total_sentences = len(sentences)
bleu_s = get_bleu(list_sentences_source, sentences)
bleu_r = get_bleu(list_sentences_human, sentences)
for _, sentence in enumerate(sentences):
if(fasttext_labels[0][_][0]==label_str):
fasttext_c += 1
kenlm_ppl += kenlm_lm.perplexity(sentence)
labels_list = [label] * len(sentences)
bert_accuracy, pred_label_list = evaluate_dev_set(sentences, labels_list)
ppl_list_gpt2 = calculate_ppl_gpt2(sentences)
for j in range(0, len(ppl_list_gpt2)):
gpt2_ppl += ppl_list_gpt2[j]
matrics1.append([bleu_s , bleu_r , fasttext_c/total_sentences , kenlm_ppl/total_sentences, bert_accuracy, gpt2_ppl/len(ppl_list_gpt2)])
df = pd.read_csv('amazon_all_model_prediction_0.csv', header = None)
label = 1
label_str = '__label__1'
list_sentences = df[1:len(df)].values.tolist()
list_sentences_source = []
list_sentences_human = []
for list_sentance in list_sentences:
if(pd.isnull(list_sentance[0])):
list_sentences_source.append(" ")
else:
list_sentences_source.append(list_sentance[0])
if(pd.isnull(list_sentance[-1])):
list_sentences_human.append(" ")
else:
list_sentences_human.append(list_sentance[-1])
matrics0 = []
for i in tqdm(range(0, len(list_sentences[0]))):
bleu_s = 0
bleu_r = 0
fasttext_c = 0
kenlm_ppl = 0
gpt2_ppl = 0
sentences = []
for j in range(0, len(list_sentences)):
if(pd.isnull(list_sentences[j][i])):
sentences.append(" ")
continue
sentences.append(list_sentences[j][i])
fasttext_labels = classifier_model.predict(sentences)
total_sentences = len(sentences)
bleu_s = get_bleu(list_sentences_source, sentences)
bleu_r = get_bleu(list_sentences_human, sentences)
for _, sentence in enumerate(sentences):
if(fasttext_labels[0][_][0]==label_str):
fasttext_c += 1
kenlm_ppl += kenlm_lm.perplexity(sentence)
labels_list = [label] * len(sentences)
bert_accuracy, pred_label_list = evaluate_dev_set(sentences, labels_list)
ppl_list_gpt2 = calculate_ppl_gpt2(sentences)
for j in range(0, len(ppl_list_gpt2)):
gpt2_ppl += ppl_list_gpt2[j]
matrics0.append([bleu_s , bleu_r , fasttext_c/total_sentences , kenlm_ppl/total_sentences, bert_accuracy, gpt2_ppl/len(ppl_list_gpt2)])
[print(i) for i in matrics0]
[print(i) for i in matrics1]
matricsavg = (np.array(matrics0)+np.array(matrics1))/2
df_res0 = pd.DataFrame(matrics0, columns=['BLEU_source','BLEU_human','fasttext_classifier','klm_ppl', 'BERT_classifier', 'gpt2_ppl'])
df_res1 = pd.DataFrame(matrics1, columns=['BLEU_source','BLEU_human','fasttext_classifier','klm_ppl', 'BERT_classifier', 'gpt2_ppl'])
df_resavg = pd.DataFrame(matricsavg, columns=['BLEU_source','BLEU_human','fasttext_classifier','klm_ppl', 'BERT_classifier', 'gpt2_ppl'])
models_list = df[0:1].values.tolist()
#df_res.insert(loc=0, column='GLEU_score', value=gleu_list)
df_res0.insert(loc=0, column='model', value=models_list[0])
df_res1.insert(loc=0, column='model', value=models_list[0])
df_resavg.insert(loc=0, column='model', value=models_list[0])
df_res0
df_res0.to_csv('matrics/amazon/matrics_amazon_all_model_prediction_0.csv')
df_res1.to_csv('matrics/amazon/matrics_amazon_all_model_prediction_1.csv')
df_resavg.to_csv('matrics/amazon/matrics_amazon_all_model_prediction_avg.csv')
```
| github_jupyter |
## Collaborative filtering using Python
Alright, so let's do it! We have some Python code that will use Pandas, and all the various other tools at our disposal, to create movie recommendations with a surprisingly little amount of code.
The first thing we're going to do is show you item-based collaborative filtering in practice. So, we'll build up *people who watched also watched* basically, you know, *people who rated things highly also rated this thing highly*, so building up these movie to movie relationships. So, we're going to base it on real data that we got from the MovieLens project. So, if you go to MovieLens.org, there's actually an open movie recommender system there, where people can rate movies and get recommendations for new movies.
And, they make all the underlying data publicly available for researchers like us. So, we're going to use some real movie ratings data-it is a little bit dated, it's like 10 years old, so keep that in mind, but it is real behavior data that we're going to be working with finally here. And, we will use that to compute similarities between movies. And, that data in and of itself is useful. You can use that data to say *people who liked also liked*. So, let's say I'm looking at a web page for a movie. the system can then say: *if you liked this movie, and given that you're looking at it you're probably interested in it, then you might also like these movies*. And that's a form of a recommender system right there, even though we don't even know who you are.
Now, it is real-world data, so we're going to encounter some real-world problems with it. Our initial set of results aren't going to look good, so we're going to spend a little bit of extra time trying to figure out why, which is a lot of what you spend your time doing as a data scientist-correct those problems, and go back and run it again until we get results that makes sense.
And finally, we'll actually do item-based collaborative filtering in its entirety, where we actually recommend movies to individuals based on their own behavior. So, let's do this, let's get started!
## Finding movie similarities
Let's apply the concept of item-based collaborative filtering. To start with, movie similarities-figure out what movies are similar to other movies. In particular, we'll try to figure out what movies are similar to Star Wars, based on user rating data, and we'll see what we get out of it. Let's dive in!
Okay so, let's go ahead and compute the first half of item-based collaborative filtering, which is finding similarities between items.
```
import pandas as pd
r_cols = ['user_id','movie_id','rating']
ratings = pd.read_csv('u.data.csv',names=r_cols, usecols=range(3))
m_cols = ['movie_id','title']
movies = pd.read_csv('u.item.csv',names=m_cols, sep='|',usecols=range(2))
#print(ratings.head())
ratings = pd.merge(movies,ratings)
ratings.to_csv('ratings.csv')
movies.head()
```
In this case, we're going to be looking at similarities between movies, based on user behavior. And, we're going to be using some real movie rating data from the GroupLens project. GroupLens.org provides real movie ratings data, by real people who are using the MovieLens.org website to rate movies and get recommendations back for new movies that they want to watch.
We have included the data files that you need from the GroupLens dataset with the course materials, and the first thing we need to do is import those into a Pandas DataFrame, and we're really going to see the full power of Pandas in this example. It's pretty cool stuff!
Let's add a `ratings.head()` command and then run those cells. What we end up with is something like the following table. That was pretty quick!
```
ratings.head()
```
We end up with a new DataFrame that contains the `user_id` and rating for each movie that a user rated, and we have both the `movie_id` and the `title` that we can read and see what it really is. So, the way to read this is `user_id` number `308` rated the `Toy Story (1995)` movie 4 stars, `user_id` number `287` rated the `Toy Story (1995)` movie 5 stars, and so on and so forth. And, if we were to keep looking at more and more of this DataFrame, we'd see different ratings for different movies as we go through it.
Now the real magic of Pandas comes in. So, what we really want is to look at relationships between movies based on all the users that watched each pair of movies, so we need, at the end, a matrix of every movie, and every user, and all the ratings that every user gave to every movie. The `pivot_table` command in Pandas can do that for us. It can basically construct a new table from a given DataFrame, pretty much any way that you want it. For this, we can use the following code:
```
movieRatings = ratings.pivot_table(index=['user_id'],columns=['title'],values=['rating'])
movieRatings.head()
```
It's kind of amazing how that just put it all together for us. Now, you'll see some NaN values, which stands for **Not a Number**, and its just how Pandas indicates a missing value. So, the way to interpret this is, `user_id` number 1, for example, did not watch the movie `1-900 (1994)`, but `user_id` number 1 did watch `101 Dalmatians (1996)` and rated it 2 stars. The `user_id` number 1 also watched `12 Angry Men (1957)` and rated it 5 stars, but did not watch the movie 2 Days in `the Valley (1996)`, for example, okay? So, what we end up with here is a sparse matrix basically, that contains every user, and every movie, and at every intersection where a user rated a movie there's a rating value.
So, you can see now, we can very easily extract vectors of every movie that our user watched, and we can also extract vectors of every user that rated a given movie, which is what we want. So, that's useful for both user-based and item-based collaborative filtering, right? If I wanted to find relationships between users, I could look at correlations between these user rows, but if I want to find correlations between movies, for item-based collaborative filtering, I can look at correlations between columns based on the user behavior. So, this is where the real *flipping things on its head for user versus item-based similarities* comes into play.
Now, we're going with item-based collaborative filtering, so we want to extract columns, to do this let's run the following code:
```
starWarsRatings = movieRatings['rating','Star Wars (1977)']
starWarsRatings.head()
```
Now, with the help of that, let's go ahead and extract all the users who rated Star Wars (1977):
And, we can see most people have, in fact, watched and rated `Star Wars (1977)` and everyone liked it, at least in this little sample that we took from the head of the DataFrame. So, we end up with a resulting set of user IDs and their ratings for `Star Wars (1977)`. The user ID 3 did not rate `Star Wars (1977)` so we have a `NaN` value, indicating a missing value there, but that's okay. We want to make sure that we preserve those missing values so we can directly compare columns from different movies. So, how do we do that?
## The corrwith function
```
corrMatrix = movieRatings.corr(method='pearson',min_periods=100) #pearson is the corr method
corrMatrix.head()
```
Well, Pandas keeps making it easy for us, and has a corrwith function that you can see in the following code that we can use:
```
movieRatings1 = movieRatings['rating']
movieRatings1
similarMovies = movieRatings1.corrwith(starWarsRatings)
# print(similarMovies.shape)
similarMovies = similarMovies.dropna()
df = pd.DataFrame(similarMovies)
similarMovies.sort_values(ascending=False)
# print(similarMovies.shape)
```
That code will go ahead and correlate a given column with every other column in the DataFrame, and compute the correlation scores and give that back to us. So, what we're doing here is using corrwith on the entire movieRatings DataFrame, that's that entire matrix of user movie ratings, correlating it with just the starWarsRatings column, and then dropping all of the missing results with dropna. So, that just leaves us with items that had a correlation, where there was more than one person that viewed it, and we create a new DataFrame based on those results and then display the top 10 results. So again, just to recap:
1. We're going to build the correlation score between Star Wars and every other movie.
2. Drop all the NaN values, so that we only have movie similarities that actually exist, where more than one person rated it.
3. And, we're going to construct a new DataFrame from the results and look at the top 10 results.
We ended up with this result of correlation scores between each individual movie for Star Wars and we can see, for example, a surprisingly high correlation score with the movie `Til There Was You (1997)`, a negative correlation with the movie `1-900 (1994)`, and a very weak correlation with `101 Dalmatians (1996)`.
Now, all we should have to do is sort this by similarity score, and we should have the top movie similarities for Star Wars, right? Let's go ahead and do that.
Just call sort_values on the resulting DataFrame, again Pandas makes it really easy, and we can say `ascending=False`, to actually get it sorted in reverse order by correlation score. So, let's do that:
Okay, so `Star Wars (1977)` came out pretty close to top, because it is similar to itself, but what's all this other stuff? What the heck? We can see in the preceding output, some movies such as: `Full Speed (1996)`, `Man of the Year (1995)`, `The Outlaw (1943)`. These are all, you know, fairly obscure movies, that most of them I've never even heard of, and yet they have perfect correlations with Star Wars. That's kinda weird! So, obviously we're doing something wrong here. What could it be?
Well, it turns out there's a perfectly reasonable explanation, and this is a good lesson in why you always need to examine your results when you're done with any sort of data science task-question the results, because often there's something you missed, there might be something you need to clean in your data, there might be something you did wrong. But you should also always look skeptically at your results, don't just take them on faith, okay? If you do so, you're going to get in trouble, because if I were to actually present these as recommendations to people who liked Star Wars, I would get fired. Don't get fired! Pay attention to your results! So, let's dive into what went wrong in our next section.
## Improving the results of movie similarities
Let's figure out what went wrong with our movie similarities there. We went through all this exciting work to compute correlation scores between movies based on their user ratings vectors, and the result we got kind of sucked. so just to remind you, we looked for movies that are similar to Star Wars using that technique, and we ended up with a bunch of weird recommendation at the top that had a perfect correlation.
And, most of them were very obscure movies. So, What do you think might be going on there? Well, one think that might make sense is, Let's say we have a lot of people wathch Star Wars and some other obscure film. We'd end up with a good correlation between these two movies because they're tied together by Star Wars, but at the end of the day, do we really want to base our recommendations on the behaviour of one or two people that watch some obscure movie?
Probably not! I mean yes, the two people in the world, or whatever it is, that watch the movie Full Speed, and both liked it in addition to Star Wars, maybe that is a good recommendation for them, but it's probably not a good recommendation to the rest of the world. We need to have some sort of confidence level in our similarities by enforcing a minimum boundary of how many people watched a given movie. We can't make a judgement that a given movie is good just based on the behaviour of one or two people.
So, let's try to put that insight into action using the following code.
```
import numpy as np
movieStats = ratings.groupby('title').agg({'rating':[np.size,np.mean]})
movieStats.head()
```
What we're going to do is try to identify the movies that weren't actually rated by many people and we'll just throw them out and see what we get. So, to do that we're going to take our original ratings DataFrame and we're going to say `groupby('title')`, again Pandas has all sorts of magic in it. And, this will basically construct a new DataFrame that aggregates together all the rows for a given title into one row.
We can say that we want to aggregate specifically on the rating, and we want to show both the size, the number of ratings for each movie, and the mean average score, the mean rating for that movie. So, when we do that, we end up with something like the above.
This is telling us, for example, for the movie `101 Dalmatians (1996)`, `109` people rated that movie and their average rating was 2.9 stars, so not that great of a score really! So, if we just eyeball this data, we can say okay well, movies that I consider obscure, like `187 (1997)`, had `41` ratings, but `101 Dalmatians (1996)`, I've heard of that, you know `12 Angry Men (1957)`, I've heard of that. It seems like there's sort of a natural cutoff value at around `100` ratings, where maybe that's the magic value where things start to make sense.
Let's go ahead and get rid of movies rated by fewer than 100 people, and yes, you know I'm kind of doing this intuitively at this point. As we'll talk about later, there are more principled ways of doing this, where you could actually experiment and do train/test experiments on different threshold values, to find the one that actually performs the best. But initially, let's just use our common sense and filter out movies that were rated by fewer than 100 people. Again, Pandas makes that really easy to do.
Let's figure it out with the following example:
```
#movieStats.loc['Star Wars (1977)']
popularMovies = movieStats['rating']['size']>=100
movieStats = movieStats[popularMovies].sort_values([('rating','mean')],ascending=False)
# movieStats = movieStats[movieStats.index != 'Star Wars (1977)'] #drop row having Star Wars(1977)
movieStats
```
What we have here is a list of movies that were rated by more than 100 people, sorted by their average rating score, and this in itself is a recommender system. These are highly-rated popular movies. A `Close Shave (1995)`, apparently, was a really good movie and a lot of people watched it and they really liked it.
So again, this is a very old dataset, from the late 90s, so even though you might not be familiar with the film A `Close Shave (1995)`, it might be worth going back and rediscovering it; add it to your Netflix! `Schindler's List (1993)` not a big surprise there, that comes up on the top of most top movies lists. The `Wrong Trousers (1993)`, another example of an obscure film that apparently was really good and was also pretty popular. So, some interesting discoveries there already, just by doing that.
Things look a little bit better now, so let's go ahead and basically make our new DataFrame of Star Wars recommendations, movies similar to Star Wars, where we only base it on movies that appear in this new DataFrame. So, we're going to use the `join` operation, to go ahead and join our original `similarMovies` DataFrame to this new DataFrame of only movies that have greater than 100 ratings, okay?
```
df = movieStats.join(pd.DataFrame(similarMovies,columns=['similarity']))
df.head()
```
In this code, we create a new DataFrame based on similarMovies where we extract the similarity column, join that with our movieStats DataFrame, which is our popularMovies DataFrame, and we look at the combined results. And, there we go with that output!
Now we have, restricted only to movies that are rated by more than 100 people, the similarity score to Star Wars. So, now all we need to do is sort that using the following code:
```
df.sort_values(['similarity'],ascending=False)[:15]
```
This is starting to look a little bit better! So, `Star Wars (1977)` comes out on top because it's similar to itself, The `Empire Strikes Back (1980)` is number 2, `Return of the Jedi (1983)` is number 3, `Raiders of the Lost Ark (1981)`, number 4. You know, it's still not perfect, but these make a lot more sense, right? So, you would expect the three Star Wars films from the original trilogy to be similar to each other, this data goes back to before the next three films, and `Raiders of the Lost Ark (1981)` is also a very similar movie to Star Wars in style, and it comes out as number 4. So, I'm starting to feel a little bit better about these results. There's still room for improvement, but hey! We got some results that make sense, whoo-hoo!
Now, ideally, we'd also filter out Star Wars, you don't want to be looking at similarities to the movie itself that you started from, but we'll worry about that later! So, if you want to play with this a little bit more, like I said 100 was sort of an arbitrary cutoff for the minimum number of ratings. If you do want to experiment with different cutoff values, I encourage you to go back and do so. See what that does to the results. You know, you can see in the preceding table that the results that we really like actually had much more than 100 ratings in common. So, we end up with `Austin Powers: International Man of Mystery (1997)` coming in there pretty high with only 130 ratings so maybe 100 isn't high enough! `Pinocchio (1940)` snuck in at 101, not very similar to Star Wars, so, you might want to consider an even higher threshold there and see what it does.
**Note:** Please keep in mind too, this is a very small, limited dataset that we used for experimentation purposes, and it's based on very old data, so you're only going to see older movies. So, interpreting these results intuitively might be a little bit challenging as a result, but not bad results
```
k = movieStats.index is not 'Star Wars (1977)'
k
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_rows', 500)
movieStats.loc['Miracle on 34th Street (1994)']
```
## Understanding movie recommendation with an example
So, what we do with this data? Well, what we want to do is recommend movies for people. The way we do that is we look at all the ratings for a given person, find movies similar to the stuff that they rated, and those are candidates for recommendations to that person.
Let's start by creating a fake person to create recommendations for. I've actually already added a fake user by hand, ID number 0, to the MovieLens dataset that we're processing. You can see that user with the following code:
```
myRatings = movieRatings.loc[0].dropna()
myRatings
```
That kind of represents someone like me, who loved Star Wars and The Empire Strikes Back, but hated the movie Gone with the Wind. So, this represents someone who really loves Star Wars, but does not like old style, romantic dramas, okay? So, I gave a rating of 5 star to `The Empire Strikes Back (1980)` and `Star Wars (1977)`, and a rating of 1 star to `Gone with the Wind (1939)`. So, I'm going to try to find recommendations for this fictitious user. So, how do I do that? Well, let's start by creating a series called simCandidates and I'm going to go through every movie that I rated.
```
simCandidates = pd.Series()
for i in range(0,len(myRatings.index)):
print("Adding sims for ",myRatings.index[i],"...")
sims = corrMatrix[myRatings.index[i]].dropna()
sims = sims.map(lambda x: x*myRatings[i])
print(sims)
simCandidates = simCandidates.append(sims)
print('\nsorting..\n')
simCandidates.sort_values(inplace=True,ascending=False)
print(simCandidates.head(10))
```
For i in range 0 through the number of ratings that I have in `myRatings`, I am going to add up similar movies to the ones that I rated. So, I'm going to take that `corrMatrix` DataFrame, that magical one that has all of the movie similarities, and I am going to create a correlation matrix with `myRatings`, drop any missing values, and then I am going to scale that resulting correlation score by how well I rated that movie.
So, the idea here is I'm going to go through all the similarities for The Empire Strikes Back, for example, and I will scale it all by 5, because I really liked The Empire Strikes Back. But, when I go through and get the similarities for Gone with the Wind, I'm only going to scale those by 1, because I did not like Gone with the Wind. So, this will give more strength to movies that are similar to movies that I liked, and less strength to movies that are similar to movies that I did not like, okay? So, I just go through and build up this list of similarity candidates, recommendation candidates if you will, sort the results and print them out. Let's see what we get above
Hey, those don't look too bad, right? So, obviously The `Empire Strikes Back (1980)` and `Star Wars (1977)` come out on top, because I like those movies explicitly, I already watched them and rated them. But, bubbling up to the top of the list is `Return of the Jedi (1983)`, which we would expect and `Raiders of the Lost Ark (1981)`.
Let's start to refine these results a little bit more. We're seeing that we're getting duplicate values back. If we have a movie that was similar to more than one movie that I rated, it will come back more than once in the results, so we want to combine those together. If I do in fact have the same movie, maybe that should get added up together into a combined, stronger recommendation score. Return of the Jedi, for example, was similar to both Star Wars and The Empire Strikes Back. How would we do that?
## Using the groupby command to combine rows
We'll go ahead and explore that. We're going to use the groupby command again to group together all of the rows that are for the same movie. Next, we will sum up their correlation score and look at the results:
```
simCandidates = simCandidates.groupby(simCandidates.index).sum()
simCandidates.sort_values(inplace=True,ascending=False)
simCandidates.head(10)
```
Hey, this is looking really good!
So `Return of the Jedi (1983)` comes out way on top, as it should, with a score of 7, `Raiders of the Lost Ark (1981)` a close second at 5, and then we start to get to `Indiana Jones and the Last Crusade (1989)`, and some more movies, `The Bridge on the River Kwai (1957)`, `Back to the Future (1985)`,`The Sting (1973)`. These are all movies that I would actually enjoy watching! You know, I actually do like old-school Disney movies too, so `Cinderella (1950)` isn't as crazy as it might seem.
The last thing we need to do is filter out the movies that I've already rated, because it doesn't make sense to recommend movies you've already seen.
## Removing entries with the drop command
So, I can quickly drop any rows that happen to be in my original ratings series using the following code:
```
filteredSims = simCandidates.drop(myRatings.index)
filteredSims.head(10)
```
And there we have it! `Return of the Jedi (1983)`, `Raiders of the Lost Ark (1981)`, `Indiana Jones` and `the Last Crusade (1989)`, all the top results for my fictitious user, and they all make sense. I'm seeing a few family-friendly films, you know, `Cinderella (1950)`, `The Wizard of Oz (1939)`, `Dumbo (1941)`, creeping in, probably based on the presence of Gone with the Wind in there, even though it was weighted downward it's still in there, and still being counted. And, there we have our results, so. There you have it! Pretty cool!
We have actually generated recommendations for a given user and we could do that for any user in our entire DataFrame. So, go ahead and play with that if you want to. I also want to talk about how you can actually get your hands dirty a little bit more, and play with these results; try to improve upon them.
There's a bit of an art to this, you know, you need to keep iterating and trying different ideas and different techniques until you get better and better results, and you can do this pretty much forever. I mean, I made a whole career out of it. So, I don't expect you to spend the next, you know, 10 years trying to refine this like I did, but there are some simple things you can do, so let's talk about that.
| github_jupyter |
<a id='top'></a>
# Log completion by ML regression
- Typical and useful Pandas
- Data exploration using Matplotlib
- Basic steps for data cleaning
- **Exercise: Find problem in specific well log data.**
- Feature engineering
- Setup scikit-learn workflow
- Making X and y
- Choosing a model
- Classification vs Regression
- Evaluating model performance
- Parameter selection and tuning
- GridSearch
- Add more data / remove data
## More Pandas
---
Load Numpy, Pandas and Matplotlib
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
% matplotlib inline
```
Define the name of the file to be loaded and use Pandas to read it. Note that the name can be a PATH pointing at the file.
```
datafile = '../data/training_DataFrame.csv'
```
Pandas expects by default a column on the file to be an index for each row of values. For this example, column 1 (index = 0) is that column.
```
wells = pd.read_csv(datafile, index_col=0)
```
# Data Exploration and cleaning
Before feeding our machines with data to learn from, it's important to make sure that we feed them the best possible data. Pandas has a few methods to explore the contents of the data. The `head()` method shows the top rows of the DataFrame.
```
wells.head()
```
Another useful Pandas method is `describe()`, which compile useful statistics of each numeric column in the `DataFrame`.
```
wells.describe()
```
Note how the `count` row is not the same for all columns? This means that there are some values that Pandas doesn't think they are numbers! (Could be missing values or `NaN`s). There are many strategies to deal with missing data but for this excercise we're just going to ignore the rows that contain these bad values.
```
wells = wells.dropna()
wells.describe()
```
Now every column in the `DataFrame` should contain the same number of elements and now we can focus on the statistics themselves. Look at each log property, do those `mean`, `min` and `max` look OK? `ILD` shouldn't have negative values. Let's take them out of our set:
```
wells = wells[wells.ILD > 0]
wells.describe()
```
Another typical first approach to explore the data is to study the distribution of values in the dataset...
```
ax = wells.hist(column="RHOB", figsize=(8,6), bins=20)
```
<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>
That distribution doesn't seem right. Can you exclude the `DataFrame` values for which `RHOB` is higher than `1800`?
</li>
<p>
</ul>
</div>
```
# Put your code here
#!--
wells = wells[wells.RHOB > 1800]
#--!
```
<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>
Explore the rest of the `DataFrame`. Do all distributions look OK?
</li>
<p>
</ul>
</div>
Seaborn has a few tricks to display histograms better
```
import seaborn as sns
wells.ILD.values
sns.distplot(wells['ILD'])
```
<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>
Calculate the `log` of ILD and store it in the `DataFrame`
</li>
<p>
</ul>
</div>
```
# Put your code here
#!--
wells['log_ILD'] = np.log10(wells['ILD'])
axs = wells['log_ILD'].hist(bins=20)
#--!
wells = wells[wells.DPHI > 0]
sns.distplot(wells.DPHI)
```
# Load testing data
```
w_train = wells.copy()
w_test = pd.read_csv('../data/testing_DataFrame.csv', index_col=0)
w_test_complete = pd.read_csv('../data/testing_DataFrame_complete.csv', index_col=0)
w_test.head()
w_test.describe()
w_test = w_test[w_test.DPHI > 0]
w_test_complete = w_test_complete[w_test_complete.DPHI > 0]
w_test.describe()
```
Let's start testing our training pipeline with a subset of wells. We can come back to this and change the number of wells we include, to see how it affects the result.
```
w_train = w_train[w_train.well_ID < 25]
# Make X and y
X = w_train[['Depth','GR','ILD','NPHI']].as_matrix()
y = w_train['RHOB'].values
X.shape
```
Set up the testing matrix of features we want to use to predict the missing `RHOB`
```
X_test = w_test[['Depth','GR','ILD','NPHI']].as_matrix()
```
We will display the predicted vs. true results for a test well
```
well_id = 81
```
# Available scikit-learn models to choose from:
http://scikit-learn.org/stable/supervised_learning.html
# Linear Regression
A first simple approach is to apply a linear model
```
from sklearn import linear_model
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(X,y)
# Make predictions using the testing set
y_test_LR = regr.predict(X_test)
# add a new column to data frame that already exists
w_test_complete['RHOB_pred_LinReg'] = y_test_LR
my_well = w_test_complete[w_test_complete.well_ID==well_id]
plt.figure(figsize=(3,10))
plt.plot(my_well.RHOB, my_well.Depth, 'k')
plt.plot(my_well.RHOB_pred_LinReg, my_well.Depth,'r')
```
<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>
Complete the following code to test the different classifiers similar to the Linear Regression case
</li>
<p>
</ul>
</div>
# Decision Tree Regressor
```
from sklearn import tree
clf = tree.DecisionTreeRegressor()
#--!
clf = clf.fit(X, y)
y_test_DTR = clf.predict(X_test)
#--!
# add a new column to data frame that already exists and plot the results
#!--
w_test_complete['RHOB_pred_DTR'] = y_test_DTR
w_test_complete.head()
my_well = w_test_complete[w_test_complete.well_ID==well_id]
plt.figure(figsize=(3,10))
plt.plot(my_well.RHOB, my_well.Depth, 'k')
plt.plot(my_well.RHOB_pred_DTR, my_well.Depth,'r')
#--!
```
# Nearest Neighbours
```
from sklearn.neighbors import KNeighborsRegressor
nbrs = KNeighborsRegressor()
#!--
nbrs.fit(X, y)
y_test_KNN = nbrs.predict(X_test)
#--!
# add a new column to data frame that already exists and plot the results
#!--
w_test_complete['RHOB_pred_KNN'] = y_test_KNN
my_well = w_test_complete[w_test_complete.well_ID==well_id]
plt.figure(figsize=(3,10))
plt.plot(my_well.RHOB, my_well.Depth, 'k')
plt.plot(my_well.RHOB_pred_KNN, my_well.Depth,'r')
#--!
```
# Gradient Boosting Ensemble Regressor
```
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import GradientBoostingRegressor
#!--
est = GradientBoostingRegressor(n_estimators=100, learning_rate=0.05,
max_depth=5, random_state=0, loss='ls')
est.fit(X, y)
y_test_GBT = est.predict(X_test)
w_test_complete['RHOB_pred_GBT'] = y_test_GBT
my_well = w_test_complete[w_test_complete.well_ID==well_id]
plt.figure(figsize=(3,10))
plt.plot(my_well.RHOB, my_well.Depth, 'k')
plt.plot(my_well.RHOB_pred_GBT, my_well.Depth,'r')
#--!
```
# Evaluation Metrics
Although it's good to see how the plots look, a more generalized way to determine how good a model is at predicting data
http://scikit-learn.org/stable/model_selection.html#model-selection
"Learning the parameters of a prediction function and testing it on the same data is a methodological mistake: a model that would just repeat the labels of the samples that it has just seen would have a perfect score but would fail to predict anything useful on yet-unseen data. This situation is called overfitting. To avoid it, it is common practice when performing a (supervised) machine learning experiment to hold out part of the available data as a test set X_test, y_test. Note that the word “experiment” is not intended to denote academic use only, because even in commercial settings machine learning usually starts out experimentally."
```
from sklearn.model_selection import cross_val_score
scores = cross_val_score(est, X_test, w_test_complete.RHOB, cv=5, scoring='neg_mean_squared_error')
scores
```
## Regression metrics
[TOP](#top)
http://scikit-learn.org/stable/modules/model_evaluation.html#regression-metrics
```
from sklearn.metrics import explained_variance_score
print(explained_variance_score(my_well.RHOB, my_well.RHOB_pred_LinReg))
print(explained_variance_score(my_well.RHOB, my_well.RHOB_pred_DTR))
print(explained_variance_score(my_well.RHOB, my_well.RHOB_pred_KNN))
from sklearn.metrics import mean_squared_error
print(mean_squared_error(my_well.RHOB, my_well.RHOB_pred_LinReg))
print(mean_squared_error(my_well.RHOB, my_well.RHOB_pred_DTR))
print(mean_squared_error(my_well.RHOB, my_well.RHOB_pred_KNN))
```
# Feature Engineering
What can we do to help our classifier?
<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>
Create a function using `np.convolve` to smooth a log curve and return the smoothed version to add to the `DataFrame`
</li>
<p>
</ul>
</div>
```
#!--
def smooth(y, box_len=10):
box = np.ones(box_len)/box_len
y_smooth = np.convolve(y, box, mode='same')
return y_smooth
#--!
w_train.columns
w_train["s_NPHI"] = smooth(w_train["NPHI"].values, box_len=50)
w_train["well_ID"].unique()
idx_test_well = 0
plt.plot(w_train[w_train.well_ID == idx_test_well]["NPHI"])
plt.plot(w_train[w_train.well_ID == idx_test_well]["s_NPHI"])
w_test["s_NPHI"] = smooth(w_test["NPHI"].values, box_len=50)
X_test = w_test[['Depth','GR','ILD','NPHI','s_NPHI']].as_matrix()
# s_NPHI will be the smoothed array!
X = w_train[['Depth','GR','ILD','NPHI','s_NPHI']].as_matrix()
#!--
est = GradientBoostingRegressor(n_estimators=100, learning_rate=0.05,
max_depth=5, random_state=0, loss='ls')
est.fit(X, y)
y_test_GBT = est.predict(X_test)
w_test_complete['RHOB_pred_GBT'] = y_test_GBT
my_well = w_test_complete[w_test_complete.well_ID==well_id]
plt.figure(figsize=(3,10))
plt.plot(my_well.RHOB, my_well.Depth, 'k')
plt.plot(my_well.RHOB_pred_GBT, my_well.Depth,'r')
#--!
print(mean_squared_error(my_well.RHOB, my_well.RHOB_pred_GBT))
from sklearn.metrics import mean_squared_error
print(mean_squared_error(my_well.RHOB, my_well.RHOB_pred_GBT))
```
<hr />
<p style="color:gray">©2017 Agile Geoscience. Licensed CC-BY.</p>
| github_jupyter |
# FairWorkflows execution demo
## Define the steps of your workflow
Each step should be its own function. Mark the function as such with the @fairstep decorator.
```
%cd ..
from fairworkflows import is_fairworkflow, is_fairstep, FairStep, FairWorkflow
@is_fairstep(label='Addition')
def add(a:float, b:float) -> float:
"""Adding up numbers!"""
return a + b
@is_fairstep(label='Verify', is_manual_task=True)
def verify(a: float) -> bool:
"""Confirm that you like this result"""
pass
@is_fairstep(label='square')
def square(a: float, confirmed:bool) -> float:
"""Only square a if the result has been confirmed."""
if confirmed:
return a * a
return a
```
## Define your workflow using @fairworkflow
Now write a function which describes your workflow. Mark this function with the @fairworkflow decorator.
```
@is_fairworkflow(label='My Workflow')
def my_workflow(in1, in2):
"""
Add two numbers together and confirm the result
"""
t1 = add(in1, in2)
t2 = verify(t1)
t3 = square(t1, t2)
return t3
```
## Create an instance of your workflow and display it
```
fw = FairWorkflow.from_function(my_workflow)
type(fw)
```
## Execute your workflow using .execute()
Set num_threads greater than 1 if you wish to exploit parallelisation in your workflow. The retrospective provenance is also returned as a (nano) Publication object, that can optionally be published.
```
result, prov = fw.execute(1, 4)
result
```
### Retrospective prov
A WorkflowRetroProv object is returned along with the result of the execution.
```
type(prov)
print(prov)
```
### Retrospective prov for each step
You can iterate through a WorkflowRetroProv object to get the StepRetroProv objects for each step. Print these to see the RDF they contain (input/output variable values, start and end datetime of the step's execution etc.)
```
for sp in prov:
print(sp)
```
### Publish the retrospective provenance
You can use the .publish_as_nanopub() method as with FairStep and FairWorkflow objects. This publishes a nanopub per step and one for the whole workflow, mirroring the prospective RDF.
```
prov.publish_as_nanopub(use_test_server=True)
```
The last nanopub (whose URI ends in #fairworkflowprov) contains the links to all of the individual step retrospective provenances.
## Provide semantic annotations for input and output variables
If you wish to specify semantic types for the inputs/outputs to a step, you can do so in the arguments to the decorator.
For example, if you have an input parameter 'a', you can write a='http://www.example.org/distance' to assign that (semantic) type to a. As output of functions is not named in python, you can specify the same but with 'out1', 'out2' etc. See the following example:
```
@is_fairstep(label='Addition', a='http://www.example.org/distance', returns='http://www.example.org/mass')
def add(a:float, b:float) -> float:
return a + b
```
If we now look at the RDF generated for the step, we will see that input parameter 'a' and the step output ('out1') both have the (additional) semantic types specified.
```
# ACTIONS:
# Add language and version to nanopubs (i.e. what the description is written in)
print(add._fairstep)
```
### Specify more than one semantic type for a parameter
You can provide a list of URIs if you want to specify several semantic types for e.g. parameter 'a':
```
@is_fairstep(label='Addition', a=['http://www.example.org/distance', 'http://www.example.org/number'])
def another_step(a:float, b:float) -> float:
"""Add two numbers together"""
return a + b
print(another_step._fairstep)
```
You can check the programming language that was used for writing the step:
```
print(another_step._fairstep.language)
```
## Semantic types for function producing multiple outputs
Provide 'out' with a tuple of the same length as the number of function outputs. You can use None for any you do not wish to assign a particular semantic type to.
```
from typing import Tuple
@is_fairstep(label='Addition and subtraction', returns=('http://www.example.org/distance', 'http://www.example.org/number'))
def another_step(a:float, b:float) -> Tuple[float, float]:
return a + b, a - b
print(another_step._fairstep)
```
As before, you may provide a list of URIs for each output. If you do not want to provide semantic types for a particular output, simply pass None:
```
from typing import Tuple
@is_fairstep(label='Addition and subtraction', returns=(['http://www.example.org/distance', 'http://www.example.org/number'], None))
def another_step(a:float, b:float) -> Tuple[float, float]:
"""This step returns an addition and a subtraction of its inputs"""
return a + b, a - b
print(another_step._fairstep)
```
| github_jupyter |
#### Fancy indexing and index tricks
NumPy offers more indexing facilities than regular Python sequences. In addition to indexing by integers and slices, as we saw before, arrays can be indexed by arrays of integers and arrays of booleans.
##### Indexing with Arrays of Indices¶
```
import numpy as np
a = np.arange(12)**2 # the first 12 square numbers
i = np.array( [ 1,1,3,8,5,6 ] ) # an array of indices
print(a[i] ,"# the elements of a at the positions i")
j = np.array( [ [ 3, 4], [ 9, 7 ] ] ) # a bidimensional array of indices
a[j] # the same shape as j
```
When the indexed array a is multidimensional, a single array of indices refers to the first dimension of a. The following example shows this behavior by converting an image of labels into a color image using a palette.
```
palette = np.array( [ [0,0,0], # black
[255,0,0], # red
[0,255,0], # green
[0,0,255], # blue
[255,255,255] ] ) # white
palette
import matplotlib.pyplot as plt
plt.imshow(palette)
image = np.array( [ [ 0, 6, 2, 8 ], # each value corresponds to a color in the palette
... [ 5, 3, 4, 0 ] ] )
print(image)
import matplotlib.pyplot as plt
plt.imshow(image)
palette[image] # the (2,4,3) color image
```
We can also give indexes for more than one dimension. The arrays of indices for each dimension must have the same shape.
```
a = np.arange(12).reshape(3,4);a
i = np.array( [ [0,1], # indices for the first dim of a
... [1,2] ] )
i
j = np.array( [ [2,1], # indices for the second dim
... [3,3] ] )
j
a[i,j] # i and j must have equal shape
a[i,2]
a[:,j] # i.e., a[ : , j]
```
Naturally, we can put i and j in a sequence (say a list) and then do the indexing with the list.
```
l = [i,j]
l
a[l]
time = np.linspace(20, 145, 5) # time scale
data = np.sin(np.arange(20)).reshape(5,4) # 4 time-dependent series
ind = data.argmax(axis=0) # index of the maxima for each series
ind
time_max = time[ind] # times corresponding to the maxima
time_max
a = np.arange(5)
print(a)
a[[1,3,4]] = 0
print(a)
```
### Indexing with Boolean Arrays
When we index arrays with arrays of (integer) indices we are providing the list of indices to pick. With boolean indices the approach is different; we explicitly choose which items in the array we want and which ones we don’t.
The most natural way one can think of for boolean indexing is to use boolean arrays that have the same shape as the original array:
```
a = np.arange(12).reshape(3,4)
b = a > 4
b
a[b] # 1d array with the selected elements
```
#### The ix_() function
The ix_ function can be used to combine different vectors so as to obtain the result for each n-uplet. For example, if you want to compute all the a+b*c for all the triplets taken from each of the vectors a, b and c:
```
a = np.array([2,3,4,5])
b = np.array([8,5,4])
c = np.array([5,4,6,8,3])
ax,bx,cx = np.ix_(a,b,c)
print(ax)
cx
bx
ax.shape, bx.shape, cx.shape
result = ax+bx*cx
result
result[3,2,4]
a[3]+b[2]*c[4]
```
You could also implement the reduce as follows:
```
def ufunc_reduce(ufct, *vectors):
... vs = np.ix_(*vectors)
... r = ufct.identity
... for v in vs:
... r = ufct(r,v)
... return r
ufunc_reduce(np.add,a,b,c)
```
| github_jupyter |
```
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
from matplotlib.ticker import ScalarFormatter
import math
```
This notebook assumes you have completed the notebook [Introduction of sine waves](TDS_Introduction-sine_waves.ipynb). This notebook follows the same pattern of time domain waveform generation: instantaneous frequency -> angle step -> total angle -> time domain waveform.
Our goal is to track features of different acoustic impedance in material using a low power time domain waveform. Time delay spectrometry (TDS) is one implementation of this goal. To understand TDS we need to understand the waveform which is used by TDS called a chirp. A chirp is a sinusoid that is constantly varying in frequency. The chirp is generated by integrating a varying angle step which is derived from an instantaneous frequency profile. We will generate a chirp in this notebook. An overview of this technique is given [here](https://www.youtube.com/watch?v=RQplkt0bw_c).
The angle of the chirp can be found by integrating the instantaneous frequency:
\begin{equation}
f(t)=\frac{f_{end}-f_{start}}{T_c}t + f_{start}
\end{equation}
\begin{equation}
\Delta\phi(t) = 2\pi f(t)\Delta t
\end{equation}
\begin{equation}
\phi (t)=\int_{}^{} \Delta\phi(t) = \int_{}^{} 2\pi f(t) dt = \int_{}^{}\frac{f_{end}-f_{start}}{T_c}tdt + \int_{}^{}f_{start}dt
\end{equation}
\begin{equation}
\phi (t)= \frac{f_{end}-f_{start}}{T_c}\int_{}^{}tdt + f_{start}\int_{}^{}dt
\end{equation}
\begin{equation}
\phi (t)= \frac{f_{end}-f_{start}}{T_c}\frac{t^2}{2} + f_{start}t
\end{equation}
This gives the time series value of
\begin{equation}
x(t) = e^{j\phi (t)} = e^{j(\frac{f_{end}-f_{start}}{T_c}\frac{t^2}{2} + f_{start}t)}
\end{equation}
But the formula for angle requires squaring time which will cause numeric errors as the time increases. Another approach is to implement the formula for angle as a cummulative summation.
\begin{equation}
\phi_{sum} (N)=\sum_{k=1}^{N} \Delta\phi(k) = \sum_{k=1}^{N} 2\pi f(k) t_s = \sum_{k=1}^{N}(\frac{f_{end}-f_{start}}{T_c}k + f_{start})t_s
\end{equation}
This allow for the angle always stay between 0 and two pi by subtracting two phi whenever the angle exceeds the value. We will work with the cummlative sum of angle, but then compare it to the integral to determine how accurate the cummulative sum is.
```
#max free 8 points per sample
#Tc is the max depth we are interested in
Tc_sec=0.00003
f_start_Hz=3e5
#talk about difference and similarity of sine wave example, answer why not 32 samples
f_stop_Hz=16e5
#We choose 8 samples per cycle at the maximum frequency to not require steep pulse shaping filter profiles on the output of the
#digital to analog converter
samplesPerCycle=8
fs=f_stop_Hz*samplesPerCycle
ts=1/fs
total_samples= math.ceil(fs*Tc_sec)
n = np.arange(0,total_samples, step=1, dtype=np.float64)
t_sec=n*ts
t_usec = t_sec *1e6
#This is the frequency of the chirp over time. We assume linear change in frequency
chirp_freq_slope_HzPerSec=(f_stop_Hz-f_start_Hz)/Tc_sec
#Compute the instantaneous frequency which is a linear function
chirp_instantaneous_freq_Hz=chirp_freq_slope_HzPerSec*t_sec+f_start_Hz
chirp_instantaneous_angular_freq_radPerSec=2*np.pi*chirp_instantaneous_freq_Hz
#Since frequency is a change in phase the we can plot it as a phase step
chirp_phase_step_rad=chirp_instantaneous_angular_freq_radPerSec*ts
#The phase step can be summed (or integrated) to produce the total phase which is the phase value
#for each point in time for the chirp function
chirp_phase_rad=np.cumsum(chirp_phase_step_rad)
#The time domain chirp function
chirp = np.exp(1j*chirp_phase_rad)
#We can see, unlike the complex exponential, the chirp's instantaneous frequency is linearly increasing.
#This corresponds with the linearly increasing phase step.
fig, ax = plt.subplots(2, 1, sharex=True,figsize = [8, 8])
lns1=ax[0].plot(t_usec,chirp_instantaneous_freq_Hz,linewidth=4, label='instantanous frequency');
ax[0].set_title('Comparing the instantaneous frequency and phase step')
ax[0].set_ylabel('instantaneous frequency (Hz)')
axt = ax[0].twinx()
lns2=axt.plot(t_usec,chirp_phase_step_rad,linewidth=2,color='black', linestyle=':', label='phase step');
axt.set_ylabel('phase step (rad)')
#ref: https://stackoverflow.com/questions/5484922/secondary-axis-with-twinx-how-to-add-to-legend
lns = lns1+lns2
labs = [l.get_label() for l in lns]
ax[0].legend(lns, labs, loc=0)
#We see that summing or integrating the linearly increasing phase step gives a quadratic function of total phase.
ax[1].plot(t_usec,chirp_phase_rad,linewidth=4,label='chirp');
ax[1].plot([t_usec[0], t_usec[-1]],[chirp_phase_rad[0], chirp_phase_rad[-1]],linewidth=1, linestyle=':',label='linear (x=y)');
ax[1].set_title('Cumulative quandratic phase function of chirp')
ax[1].set_xlabel('time ($\mu$sec)')
ax[1].set_ylabel('total phase (rad)')
ax[1].legend();
#The complex exponential of each phase value gives us the time domain chirp signal.
#We have highlighted the beginning and end of the chirp where it starts at a low frequency and linearly increases to a high frequency
samplesToShowSlow=np.arange(5*samplesPerCycle,dtype=np.int32)
samplesToShowFast=np.flip(np.ceil(t_sec.shape[0]).astype(np.int32) - np.arange(5*samplesPerCycle,dtype=np.int32))-1
fig2 = plt.figure(constrained_layout=True,figsize = [8, 6])
gs = fig2.add_gridspec(2, 3)
f2_ax1 = fig2.add_subplot(gs[0, :])
f2_ax2 = fig2.add_subplot(gs[1, :])
f2_ax1.plot(t_usec,chirp_phase_rad, color='#27A4A3', label='chirp');
f2_ax1.plot(t_usec[samplesToShowSlow],chirp_phase_rad[samplesToShowSlow],color=(1,0,0),linewidth=4, label='slow');
f2_ax1.plot(t_usec[samplesToShowFast],chirp_phase_rad[samplesToShowFast],color=(0,0,1),linewidth=4, label='fast');
f2_ax1.set_title('Cumulative quandratic phase function of chirp')
f2_ax1.set_xlabel('time ($\mu$sec)')
f2_ax1.set_ylabel('total phase (rad)')
f2_ax1.legend();
f2_ax2.plot(t_usec,np.real(chirp),color='#27A4A3', label='real');
f2_ax2.plot(t_usec,np.imag(chirp),color='#27A4A3', linestyle=':', label='imag');
f2_ax2.plot(t_usec[samplesToShowSlow],np.real(chirp[samplesToShowSlow]),color=(1,0,0));
f2_ax2.plot(t_usec[samplesToShowSlow],np.imag(chirp[samplesToShowSlow]),color=(1,0,0), linestyle=':');
f2_ax2.plot(t_usec[samplesToShowFast],np.real(chirp[samplesToShowFast]),color=(0,0,1));
f2_ax2.plot(t_usec[samplesToShowFast],np.imag(chirp[samplesToShowFast]),color=(0,0,1), linestyle=':');
f2_ax2.set_title('Time domain chirp')
f2_ax2.set_xlabel('time ($\mu$sec)')
f2_ax2.set_ylabel('amplitude')
f2_ax2.get_xaxis().get_major_formatter().set_useOffset(False)
f2_ax2.legend();
#With perfect integration we have
#This is the frequency of the chirp over time. We assume linear change in frequency
chirp_freq_slope_HzPerSec=(f_stop_Hz-f_start_Hz)/Tc_sec
#Compute the instantaneous frequency which is a linear function
chirp_phase_continous_time_rad=2*np.pi*(chirp_freq_slope_HzPerSec/2*np.power(t_sec,2)+f_start_Hz*t_sec)
chirp = np.exp(1j*chirp_phase_continous_time_rad)
#The complex exponential of each phase value gives us the time domain chirp signal.
#We have highlighted the beginning and end of the chirp where it starts at a low frequency and linearly increases to a high frequency
fig2 = plt.figure(constrained_layout=True,figsize = [8, 6])
gs = fig2.add_gridspec(2, 3)
f2_ax1 = fig2.add_subplot(gs[0, :])
f2_ax2 = fig2.add_subplot(gs[1, :])
f2_ax1.plot(t_usec,chirp_phase_rad, color='#27A4A3', label='chirp');
f2_ax1.plot(t_usec,chirp_phase_continous_time_rad,color=(1,0,0),linewidth=4, linestyle=':', label='chirp continuous');
f2_ax1.set_title('Cumulative quandratic phase function of chirp')
f2_ax1.set_xlabel('time ($\mu$sec)')
f2_ax1.set_ylabel('total phase (rad)')
f2_ax1.legend();
f2_ax2.plot(t_usec,chirp_phase_rad-chirp_phase_continous_time_rad, color='#27A4A3', label='chirp');
f2_ax2.set_title('Cumulative quandratic phase function of chirp')
f2_ax2.set_xlabel('time ($\mu$sec)')
f2_ax2.set_ylabel('total phase (rad)')
f2_ax2.legend();
```
We examine the error
\begin{equation}
\phi_{sum} (N)=\sum_{k=1}^{N} \Delta\phi(k) = \sum_{k=1}^{N} 2\pi f(k) t_s = \sum_{k=1}^{N}\left(\frac{f_{end}-f_{start}}{T_c}k + f_{start}\right)t_s
\end{equation}
To analyze the error we collect the phase terms into A and
\begin{equation}
A = \left(\frac{f_{end}-f_{start}}{T_c}\right) t_s
\end{equation}
\begin{equation}
B = f_{start} t_s
\end{equation}
This gives a summation of
\begin{equation}
\phi_{sum} (N)= \sum_{k=1}^{N} 2\pi f(k) t_s = \sum_{k=1}^{N}\left(Ak + B\right)
\end{equation}
Which allows us to write
\begin{equation}
\phi_{sum} (N)= \sum_{k=1}^{N}\left(Ak\right) + \sum_{k=1}^{N}\left(B\right) = A\sum_{k=1}^{N}k + BN
\end{equation}
We solve the below summation by recognizing it is half the area of a rectangle with sides N and N+1 so
\begin{equation}
\sum_{k=1}^{N}k = \frac{(N+1)N}{2}
\end{equation}
This formula can be visually illustrated by the graphic
<img src="img/sum_proof.png" width="260" height="260" />
So collecting the terms we eliminate the sum with
\begin{equation}
\phi_{sum} (N)= A\frac{(N+1)N}{2} + BN =\frac{A}{2}N^2 + \frac{A+2B}{2}N
\end{equation}
Using the same A and B we can write the integral of instantaneous frequency as
\begin{equation}
\phi (t)= \frac{f_{end}-f_{start}}{T_c}\frac{t^2}{2} + f_{start}t =\frac{A}{2t_s}t^2 + \frac{B}{t_s}t
\end{equation}
We can also relate N and t by t = Nt_s which lets us rewrite $$ \phi (t) $$ as
\begin{equation}
\phi (N)= \frac{A}{2t_s}\left(Nt_s\right)^2 + \frac{B}{t_s}(Nt_s)= \frac{At_s}{2}N^2 + BN
\end{equation}
Now we can compute the error which is:
\begin{equation}
\phi (N) - \phi_{sum} (N)= \left(\frac{At_s}{2}N^2 + BN\right) - \left(\frac{A}{2}N^2 + \frac{A+2B}{2}N\right)
\end{equation}
This simplifies to
\begin{equation}
\phi (N) - \phi_{sum} (N)= \left(\frac{At_s}{2}N^2 + BN\right) - \left(\frac{A}{2}N^2 + \frac{A+2B}{2}N\right)
\end{equation}
| github_jupyter |
# ResNet-50 Inference with FINN on Alveo
This notebook demonstrates the functionality of a FINN-based, full dataflow ResNet-50 implemented in Alveo U250. The characteristics of the network are the following:
- residual blocks at 1-bit weights, 2/4-bit activations
- first convolution and last (fully connected) layer use 8-bit weights
- all parameters stored on-chip in BRAM/LUTRAM/URAM
- single DDR controller (DDR0) utilized for input and output
We validate the network against ImageNet. We use the PYNQ APIs for retrieving and recording power information which is then displayed in real-time.
## Set up Accelerator with PYNQ
We load the Alveo accelerator and print its memory-mapped registers:
```
import pynq
ol=pynq.Overlay("resnet50.xclbin")
accelerator=ol.resnet50_1
print(accelerator.register_map)
```
Next we create a data buffer in the Alveo PLRAM memory to hold the weights of the Fully Connected Layer:
```
import numpy as np
#allocate a buffer for FC weights, targeting the Alveo PLRAM
fcbuf = pynq.allocate((1000,2048), dtype=np.int8, target=ol.PLRAM0)
```
Load the weight from a CSV file and push them to the accelerator buffer:
```
#load Weights from file into the PYNQ buffer
fcweights = np.genfromtxt("fcweights.csv", delimiter=',', dtype=np.int8)
#csv reader erroneously adds one extra element to the end, so remove, then reshape
fcweights = fcweights[:-1].reshape(1000,2048)
fcbuf[:] = fcweights
#Move the data to the Alveo DDR
fcbuf.sync_to_device()
```
## Single Image Inference
In this example we perform inference on each of the images in a `pictures` folder and display the top predicted class overlaid onto the image. The code assumes the existence of this `pictures` folder, where you should put the images you want to classificate. There is no restriction on the images that you can use.
```
import shutil
import wget
import os
import glob
from itertools import chain
import cv2
import matplotlib.pyplot as plt
image_list = list(chain.from_iterable([glob.glob('pictures/*.%s' % ext) for ext in ["jpg","gif","png","tga"]]))
#get imagenet classes from file
import pickle
classes = pickle.load(open("labels.pkl",'rb'))
def infer_once(filename):
inbuf = pynq.allocate((224,224,3), dtype=np.int8, target=ol.bank0)
outbuf = pynq.allocate((5,), dtype=np.uint32, target=ol.bank0)
#preprocess image
img = cv2.resize(cv2.imread(filename), (224,224))
#transfer to accelerator
inbuf[:] = img
inbuf.sync_to_device()
#do inference
accelerator.call(inbuf, outbuf, fcbuf, 1)
#get results
outbuf.sync_from_device()
results = np.copy(outbuf)
return results
inf_results = []
for img in image_list:
inf_output = infer_once(img)
inf_result = [classes[i] for i in inf_output]
inf_results.append(inf_result)
plt.figure(figsize=(20,10))
columns = 3
for i, image in enumerate(image_list):
plt.subplot(len(image_list) / columns + 1, columns, i + 1)
top_class = inf_results[i][0].split(',', 1)[0]
display_image = cv2.cvtColor(cv2.resize(cv2.imread(image),(224,224)), cv2.COLOR_BGR2RGB)
plt.imshow(cv2.putText(display_image, top_class, (10,20), cv2.FONT_HERSHEY_TRIPLEX, 0.7, (255,255,255)))
```
## Plot Accelerator Board Power with PYNQ
We first set up data acquisition using PYNQ's PMBus API
```
import plotly
import plotly.graph_objs as go
import pandas as pd
from pynq import pmbus
import time
rails = pmbus.get_xrt_sysfs_rails(pynq.pl_server.Device.active_device)
#We create a recorder monitoring the three rails that have power measurement on Alveo.
#Total board power is obtained by summing together the PCI Express and Auxilliary 12V rails.
#While some current is also drawn over the PCIe 5V rail this is negligible compared to the 12V rails and isn't recorded.
#We also measure the VCC_INT power which is the primary supply to the FPGA.
recorder = pmbus.DataRecorder(rails["12v_aux"].power,
rails["12v_pex"].power,
rails["vccint"].power)
f = recorder.frame
powers = pd.DataFrame(index=f.index)
powers['board_power'] = f['12v_aux_power'] + f['12v_pex_power']
powers['fpga_power'] = f['vccint_power']
#Now we need to specify the layout for the graph. In this case it will be a simple Line/Scatter plot,
#autoranging on both axes with the Y axis having 0 at the bottom.
layout = {
'xaxis': {
'title': 'Time (s)'
},
'yaxis': {
'title': 'Power (W)',
'rangemode': 'tozero',
'autorange': True
}
}
#Plotly expects data in a specific format, namely an array of plotting objects.
#This helper function will update the data in a plot based.
#Th e `DataRecorder` stores the recording in a Pandas dataframe object with a time-based index.
#This makes it easy to pull out the results for a certain time range and compute a moving average.
#In this case we are going to give a 5-second moving average of the results as well as the raw input.
def update_data(frame, start, end, plot):
ranged = frame[start:end]
average_ranged = frame[start-pd.tseries.offsets.Second(5):end]
rolling = (average_ranged['12v_aux_power'] + average_ranged['12v_pex_power']).rolling(
pd.tseries.offsets.Second(5)
).mean()[ranged.index]
powers = pd.DataFrame(index=ranged.index)
powers['board_power'] = ranged['12v_aux_power'] + ranged['12v_pex_power']
powers['rolling'] = rolling
data = [
go.Scatter(x=powers.index, y=powers['board_power'], name="Board Power"),
go.Scatter(x=powers.index, y=powers['rolling'], name="5 Second Avg")
]
plot.update(data=data)
#Next we create an show the plot object, initially there will be no data to display but this plot will be updated after we start the recording.
#Once the plot is running it is possible to right click on it to pop out the graph into a separate window.
plot = go.FigureWidget(layout=layout)
plot
```
Next we create a dynamically-updating power graph:
```
recorder.record(0.1)
#In order to continue updating the graph we need a thread running in the background.
#The following thread will call our update function twice a second to display the most recently collected minute of data.
do_update = True
def thread_func():
while do_update:
now = pd.Timestamp.fromtimestamp(time.time())
past = now - pd.tseries.offsets.Second(60)
update_data(recorder.frame, past, now, plot)
time.sleep(0.5)
from threading import Thread
t = Thread(target=thread_func)
t.start()
```
To manually stop the power graph:
```
do_update = False
recorder.stop()
```
## Synthetic Throughput Test
We execute inference of a configurable-size batch of images, without data movement. We measure the latency and throughput.
```
import ipywidgets as widgets
from IPython.display import clear_output
bs = widgets.IntSlider(
value=128,
min=1,
max=1000,
step=1,
description='Batch Size:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d'
)
fps = widgets.IntProgress(min=0, max=2500, description='FPS: ')
latency = widgets.FloatProgress(min=0, max=0.1, description='Latency (ms): ')
button = widgets.Button(description='Stop')
stop_running = False
def on_button_clicked(_):
global stop_running
stop_running = True
# linking button and function together using a button's method
button.on_click(on_button_clicked)
out_fps = widgets.Text()
out_latency = widgets.Text()
ui_top = widgets.HBox([button, bs])
ui_bottom = widgets.HBox([fps, out_fps, latency, out_latency])
ui = widgets.VBox([ui_top, ui_bottom])
display(ui)
import time
import threading
def benchmark_synthetic():
import pynq
ibuf = pynq.allocate((1000,3,224,224), dtype=np.int8, target=ol.bank0)
obuf = pynq.allocate((1000,5), dtype=np.uint32, target=ol.bank0)
while True:
if stop_running:
print("Stopping")
return
duration = time.monotonic()
accelerator.call(ibuf, obuf, fcbuf, bs.value)
duration = time.monotonic() - duration
fps.value = int(bs.value/duration)
latency.value = duration
out_fps.value = str(fps.value)
out_latency.value = '%.2f' % (duration * 1000)
t = threading.Thread(target=benchmark_synthetic)
t.start()
```
| github_jupyter |
```
import twitter
import os
import yaml
import re
import time
import tweepy
import pandas as pd
from textblob import TextBlob
from collections import Counter
import pickle
credentials = yaml.load(open(os.path.expanduser('~/.ssh/api_credentials.yml')))
```
# Try Tweepy
```
#!/usr/bin/env python
# encoding: utf-8
import tweepy #https://github.com/tweepy/tweepy
import csv
#Twitter API credentials
consumer_key = ""
consumer_secret = ""
access_key = ""
access_secret = ""
def get_all_tweets(screen_name, api):
"""Download the last 3240 tweets from a user. Do text processign to remove URLs and the retweets from a user.
Adapted from https://gist.github.com/yanofsky/5436496"""
#Twitter only allows access to a users most recent 3240 tweets with this method
#authorize twitter, initialize tweepy
#initialize a list to hold all the tweepy Tweets
alltweets = []
#make initial request for most recent tweets (200 is the maximum allowed count)
new_tweets = api.user_timeline(screen_name = screen_name,count=200)
#save most recent tweets
alltweets.extend(new_tweets)
#save the id of the oldest tweet less one
oldest = alltweets[-1].id - 1
#keep grabbing tweets until there are no tweets left to grab
while len(new_tweets) > 0:
#all subsiquent requests use the max_id param to prevent duplicates
new_tweets = api.user_timeline(screen_name = screen_name,count=200,max_id=oldest)
#save most recent tweets
alltweets.extend(new_tweets)
#update the id of the oldest tweet less one
oldest = alltweets[-1].id - 1
print(f'Finished getting tweets for {screen_name}')
cleaned_text = [re.sub(r'http[s]?:\/\/.*[\W]*', '', i.text, flags=re.MULTILINE) for i in alltweets] # remove urls
cleaned_text = [re.sub(r'@[\w]*', '', i, flags=re.MULTILINE) for i in cleaned_text] # remove the @twitter mentions
cleaned_text = [re.sub(r'RT.*','', i, flags=re.MULTILINE) for i in cleaned_text] # delete the retweets
#transform the tweepy tweets into a 2D array that will populate the csv
outtweets = [[tweet.id_str, tweet.created_at, cleaned_text[idx]] for idx,tweet in enumerate(alltweets)]
return pd.DataFrame(outtweets,columns=["id","created_at","text"])
auth = tweepy.OAuthHandler(credentials['twitter']['consumer_key'], credentials['twitter']['consumer_secret'],)
auth.set_access_token(credentials['twitter']['token'], credentials['twitter']['token_secret'])
api = tweepy.API(auth,wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
tweets_df = get_all_tweets('ericries', api)
def scrub_text(tweets_df, return_bag_of_words = False, num_chars = 10):
"""Takes in a tweets DF and returns a list of sentences .
Also creates the bag of words from the text to be used at a later time.
num_chars specifies the minimum number of characters per sentence"""
bag_of_words = None
all_sentences = []
for row in tweets_df.iterrows():
blob = TextBlob(row[1]['text'])
blob = blob.lower()
blobl = blob.strip()# remove whitespace
for sent in blob.sentences: ## append each sentence
if len(sent) < num_chars: # sentences need to have at least ten characters
pass
else:
all_sentences.append(str(sent)+" ")
tokens = blob.tokenize()
if bag_of_words == None:
bag_of_words = Counter(tokens)
else:
words = Counter(tokens)
for k,v in words.items():
if k in bag_of_words:
bag_of_words[k]+=v
else:
bag_of_words[k]=v
if return_bag_of_words == True:
return all_sentences, bag_of_words
else:
return all_sentences
vc1_pitchbook_df = pd.read_csv( "../data/processed/PitchBook_CA_VCInvest=1.csv")
vc1_pitchbook_df[vc1_pitchbook_df['Primary Contact']=='Michael Leonard']
username_errors = []
# get text from the Twitter handle of these founder
for idx,row in enumerate(vc1_pitchbook_df.iterrows()):
total_rows = len(vc1_pitchbook_df)
founder = row[1]['Primary Contact']
company = row[1]['Company Name']
twitter_username = row[1]['Twitter_Username']
try:
tweets = get_all_tweets(twitter_username )
scrubbed_tweets = scrub_text(tweets)
with open(f"../data/raw/founders_tweets/vc_invest=1/{company}-{founder}-{twitter_username}", "wb") as output_file:
pickle.dump(scrubbed_tweets, output_file, protocol=pickle.HIGHEST_PROTOCOL)
except: # not authorized to see this user's timeline
username_errors.append(founder) ## eventually drop these usernames
print(f"{idx/total_rows:%} percent finished")
username_errors
scrubbed_tweets
```
# Python -Twitter
```
text = api.GetUserTimeline(screen_name='ashady',count =300, include_rts=False)
cleaned_text = [re.sub(r'http[s]?:\/\/.*[\W]*', '', i.text, flags=re.MULTILINE) for i in text] # remove the urls
cleaned_text = [re.sub(r'@[\w]*', '', i, flags=re.MULTILINE) for i in cleaned_text] # remove the @twitter mentions
cleaned_text
api.GetFollowers(screen_name='ashady')
def twitter_text(username):
"""Return the last 200 tweets' text from the given username"""
text = api.GetUserTimeline(screen_name=username,count =300, include_rts=False)
cleaned_text = [re.sub(r'http[s]?:\/\/.*[\W]*', '', i.text, flags=re.MULTILINE) for i in text] # remove the urls
cleaned_text = [re.sub(r'@[\w]*', '', i, flags=re.MULTILINE) for i in cleaned_text] # remove the @twitter mentions
return cleaned_text
twitter_text('olivercameron')
def twitter_followers(username):
"""REturn the number of followers for the given user. Need to sleep to not exceed the rate limit."""
for num in range(1_000_000):
if num %500 ==0:
time.sleep(5)
```
| github_jupyter |
## 绘制数组
```
import matplotlib.pyplot as plt
import numpy as np
a = np.zeros([2, 3])
print(a)
a[0, 0] = 1
a[0, 1] = 2
a[1, 1] = 4
a[1, 2] = 1
plt.imshow(a, interpolation="nearest") # 创建绘图
```
## 神经网络框架代码
- 构建一个神经网络类
- 包含3个函数
1. 初始化函数, 设定输入层节点,隐藏节点和输出层节点的数量
2. 训练,学习给定训练集样本后,优化权重
3. 查询,给定输出,从输出节点给出答案
```
import numpy as np
a = np.random.rand(3, 4) - 0.5 # 产生3*4的值在-0.5-0.5的随机数组
print(type(a))
print(a)
b = np.array([1, 2, 3, 4, 5], ndmin=2) # 将列表转换为数组
print(type(b))
print(b)
"""
权重是神经网络的固有部分,与神经网络共存亡,它不是一个临时数据集,不会随着
函数调用结束而消失
"""
import numpy as np
import matplotlib.pyplot as plt
import scipy.special
class NeuralNetwork(object):
"""神经网络类"""
def __init__(self, input_odes, hidden_odes, output_odes, learning_grate):
"""
input_odes: 输入层节点
hidden_odes: 隐藏层节点
output_odes: 输出层节点
learning_grate: 学习率
"""
self.in_odes = input_odes
self.h_odes = hidden_odes
self.out_odes = output_odes
self.lg = learning_grate
# 创建两个链接权重矩阵
self.w1 = np.random.rand(self.h_odes, self.in_odes) - 0.5
self.w2 = np.random.rand(self.out_odes, self.h_odes) - 0.5
# 激活函数
self.activation_func = lambda x: scipy.special.expit(x)
pass
def train(self, inputs_list, targets_list):
"""训练"""
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
hidden_inputs = np.dot(self.w1, inputs)
hidden_outputs = self.activation_func(hidden_inputs)
final_inputs = np.dot(self.w2, hidden_outputs)
final_outputs = self.activation_func(final_inputs)
output_errors = targets - final_outputs
hidden_errors = np.dot(self.w2.T, output_errors)
self.w2 += self.lg * np.dot((output_errors * final_outputs * (1.0 - final_outputs)),
np.transpose(hidden_outputs))
self.w1 += self.lg * np.dot((hidden_errors * hidden_outputs * (1.0 - hidden_outputs)),
np.transpose(inputs))
pass
def query(self, inputs_list):
"""查询"""
# 将输入的列表转换为2d数组
inputs = np.array(inputs_list, ndmin=2).T
# 计算进入隐藏层的信号
hidden_inputs = np.dot(self.w1, inputs)
# 计算从隐藏层出来的信号
hidden_outputs = self.activation_func(hidden_inputs)
# 计算进入最后输出层的信号
final_inputs = np.dot(self.w2, hidden_outputs)
# 计算从最后输出层出来的信号
final_outputs = self.activation_func(final_inputs)
return final_outputs
input_nodes = 784 # 输入节点为图片的像素值
hidden_nodes = 100 # 隐藏节点为100
output_nodes = 10 # 输出节点为10, 表示数字0-9
learning_grate = 0.3 # 学习率为0.3
n = NeuralNetwork(input_nodes, hidden_nodes, output_nodes, learning_grate)
# 打开手写数字数据集
train_data_file = open("D:/PythonProjects/JupyterNotebookProjects/Matplotlib/mnist_dataset/mnist_traindata_100.csv",
"r", encoding="utf8")
train_data_list = train_data_file.readlines()
train_data_file.close()
# print(train_data_list[0])
# 训练神经网络
for record in train_data_list:
all_values = record.split(",")
inputs = (np.asfarray(all_values[1:]) / 255 * 0.99) + 0.01
targets = np.zeros(output_nodes) + 0.01
targets[int(all_values[0])] = 0.99
n.train(inputs, targets)
pass
# 测试神经网络
test_data_file = open("D:/PythonProjects/JupyterNotebookProjects/Matplotlib/mnist_dataset/mnist_testdata_10.csv", "r",
encoding='utf8')
test_data_list = test_data_file.readlines()
test_data_file.close()
# 测试集中的10个数字均进行预测
for i in range(10):
all_values = test_data_list[i].split(",") # 测试数据集中第几个数
print(all_values[0]) # 应该预测的结果
image_array = np.asfarray(all_values[1:]).reshape((28, 28))
plt.imshow(image_array, cmap="Greys", interpolation="None")
plt.show() # 需要识别的数字
# 查询结果
result = n.query((np.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01)
print(result) # 对各个数字的预测概率
```
### 将数组转换为对应的图像
```
import numpy as np
import matplotlib.pyplot as plt
# 打开手写数字数据集
data_file = open("./minist_dataset/minist_train_100.csv", "r")
data_list = data_file.readlines()
data_file.close()
# 第一个元素为预期数字,将28*28的像素着色,
all_values = data_list[99].split(",")
image_array = np.asfarray(all_values[1:]).reshape(28, 28) # 文本字符串转换为实数并转换为28*28的数组
print(all_values[0])
plt.imshow(image_array, cmap="Greys", interpolation="None")
import numpy as np
import matplotlib.pyplot as plt
import scipy.special
class NeuralNetwork(object):
"""神经网络类"""
def __init__(self, input_odes, hidden_odes, output_odes, learning_grate):
"""
input_odes: 输入层节点
hidden_odes: 隐藏层节点
output_odes: 输出层节点
learning_grate: 学习率
"""
self.in_odes = input_odes
self.h_odes = hidden_odes
self.out_odes = output_odes
self.lg = learning_grate
# 创建两个链接权重矩阵
self.w1 = np.random.rand(self.h_odes, self.in_odes) - 0.5
self.w2 = np.random.rand(self.out_odes, self.h_odes) - 0.5
# 激活函数
self.activation_func = lambda x: scipy.special.expit(x)
pass
def train(self, inputs_list, targets_list):
"""训练"""
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
hidden_inputs = np.dot(self.w1, inputs)
hidden_outputs = self.activation_func(hidden_inputs)
final_inputs = np.dot(self.w2, hidden_outputs)
final_outputs = self.activation_func(final_inputs)
output_errors = targets - final_outputs
hidden_errors = np.dot(self.w2.T, output_errors)
self.w2 += self.lg * np.dot((output_errors * final_outputs * (1.0 - final_outputs)),
np.transpose(hidden_outputs))
self.w1 += self.lg * np.dot((hidden_errors * hidden_outputs * (1.0 - hidden_outputs)),
np.transpose(inputs))
pass
def query(self, inputs_list):
"""查询"""
# 将输入的列表转换为2d数组
inputs = np.array(inputs_list, ndmin=2).T
# 计算进入隐藏层的信号
hidden_inputs = np.dot(self.w1, inputs)
# 计算从隐藏层出来的信号
hidden_outputs = self.activation_func(hidden_inputs)
# 计算进入最后输出层的信号
final_inputs = np.dot(self.w2, hidden_outputs)
# 计算从最后输出层出来的信号
final_outputs = self.activation_func(final_inputs)
return final_outputs
input_nodes = 784 # 输入节点为图片的像素值
hidden_nodes = 100 # 隐藏节点为100
output_nodes = 10 # 输出节点为10, 表示数字0-9
learning_grate = 0.2 # 学习率为0.2
n = NeuralNetwork(input_nodes, hidden_nodes, output_nodes, learning_grate)
# 打开手写数字数据集
train_data_file = open("D:/PythonProjects/JupyterNotebookProjects/Matplotlib/mnist_dataset/mnist_traindata_100.csv",
"r", encoding="utf8")
train_data_list = train_data_file.readlines()
train_data_file.close()
# print(train_data_list[0])
# 训练神经网络
for record in train_data_list:
all_values = record.split(",")
inputs = (np.asfarray(all_values[1:]) / 255 * 0.99) + 0.01
targets = np.zeros(output_nodes) + 0.01
targets[int(all_values[0])] = 0.99
n.train(inputs, targets)
pass
# 测试神经网络
test_data_file = open("D:/PythonProjects/JupyterNotebookProjects/Matplotlib/mnist_dataset/mnist_testdata_10.csv", "r",
encoding='utf-8-sig')
test_data_list = test_data_file.readlines()
test_data_file.close()
score_card = []
# 测试集中的10个数字均进行预测
for record in test_data_list:
all_values = record.split(",")
# print(type(all_values[0]), all_values[0])
correct_label = int(all_values[0])
print(correct_label, "correct label")
inputs = (np.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
outputs = n.query(inputs)
label = np.argmax(outputs)
print(label, "network's answer")
if label == correct_label:
score_card.append(1)
else:
score_card.append(0)
pass
pass
print(score_card)
# 计算神经网络的性能(预测准确率)
score_card_array = np.asarray(score_card)
performance = score_card_array.sum() / score_card_array.size
print("performance=", performance)
```
### 导入自己手写的图片,将其转换为需要的格式然后用自己训练的神经网络进行识别
```
import scipy.misc
img_array = scipy.misc.imread(image_file_name, flatten = True)
img_data = 255.0 - img_array . reshape(784)
img_data = (img_data / 255.0 * 0.99 ) + 0.01
```
### 将图片转换为.csv格式
The format is:
label, pix-11, pix-12, pix-13, ...
where pix-ij is the pixel in the ith row and jth column.
For the curious, this is the script to generate the csv files from the original data.
```
def convert(imgf, labelf, outf, n):
f = open(imgf, "rb")
o = open(outf, "w")
l = open(labelf, "rb")
f.read(16)
l.read(8)
images = []
for i in range(n):
image = [ord(l.read(1))]
for j in range(28*28):
image.append(ord(f.read(1)))
images.append(image)
for image in images:
o.write(",".join(str(pix) for pix in image)+"\n")
f.close()
o.close()
l.close()
convert("train-images-idx3-ubyte", "train-labels-idx1-ubyte",
"mnist_train.csv", 60000)
convert("t10k-images-idx3-ubyte", "t10k-labels-idx1-ubyte",
"mnist_test.csv", 10000)
```
| github_jupyter |
```
import numpy as np
import tensorflow as tf
from tensorflow import keras
import pandas as pd
import seaborn as sns
from pylab import rcParams
import matplotlib.pyplot as plt
from matplotlib import rc
%matplotlib inline
%config InlineBackend.figure_format='retina'
RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
tf.random.set_seed(RANDOM_SEED)
```
# Preparing data
```
import tensorflow
from tensorflow.keras import layers
import tensorflow as tf
import sklearn
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error
import math
import pandas
import pandas as pd
from matplotlib import pyplot
import numpy as np
import matplotlib.pyplot as plt
import numpy
import seaborn as sns
from tqdm import tqdm
data = pandas.read_csv('ak_ts')
data1 = data.loc[:,['time_value','value']]
data1['time_value'] = pandas.to_datetime(data1['time_value'])
train_dates = data1['time_value']
data1.set_index('time_value',inplace = True)
print(data1.head(5))
data2 = data1.truncate(before='2020-4-1',after='2021-3-1')
print(data2.tail(5))
scaler = StandardScaler()
scaler = scaler.fit(data2)
data_scaled = scaler.transform(data2)
print(data_scaled)
print(len(data_scaled))
from tqdm import tqdm
def get_timeseries_inputs(data, window_size, horizon,
multivariate_output=False, shuffle=False, other_horizon=None):
"""
:param data: numpy.array
shape (n_samples, n_features) or (M, n_samples, n_features)
:param window_size: int
Fixed size of the look-back
:param horizon: int
Forecasting horizon, the number of future steps that have to be forecasted
:param multivariate_output: if True, the target array will not have shape
(n_samples, output_sequence_len) but (n_samples, output_sequence_len, n_features)
:param shuffle: if True shuffle the data on the first axis
:param other_horizon:
:return: tuple
Return two numpy.arrays: the input and the target for the model.
the inputs has shape (n_samples, input_sequence_len, n_features)
the target has shape (n_samples, output_sequence_len)
"""
if data.ndim == 2:
data = np.expand_dims(data, 0)
inputs = []
targets = []
for X in tqdm(data): # for each array of shape (n_samples, n_features)
n_used_samples = X.shape[0] - horizon - window_size + 1
for i in range(n_used_samples):
inputs.append(X[i: i + window_size])
# TARGET FEATURE SHOULD BE THE FIRST
if multivariate_output:
if other_horizon is None:
targets.append(
X[i + window_size: i + window_size + horizon])
else:
targets.append(
X[i + 1: i + window_size + 1])
else:
if other_horizon is None:
targets.append(
X[i + window_size: i + window_size + horizon, 0])
else:
targets.append(
X[i + 1: i + window_size + 1, 0])
encoder_input_data = np.asarray(inputs) # (n_samples, sequence_len, n_features)
decoder_target_data = np.asarray(targets) # (n_samples, horizon) or (n_samples, horizon, n_features) if multivariate_output
idxs = np.arange(encoder_input_data.shape[0])
if shuffle:
np.random.shuffle(idxs)
return encoder_input_data[idxs], decoder_target_data[idxs]
```
# Timestep = 45 & Horizon = 7
```
window_size = 45
horizon = 7
X_series,Y_series = get_timeseries_inputs(data_scaled, window_size, horizon)
print(X_series)
print(X_series.shape)
print(Y_series)
print(Y_series.shape)
train_dim = int(len(X_series)) -1
X_train, X_test = X_series[:train_dim], X_series[train_dim:]
print(X_train.shape)
print(X_test.shape)
Y_train, Y_test = Y_series[:train_dim], Y_series[train_dim:]
print(Y_train.shape)
print(Y_test.shape)
model = tf.keras.Sequential([
layers.LSTM(128,activation = 'relu',input_shape= (X_train.shape[1],X_train.shape[2]),return_sequences = True),
#layers.LSTM(64,activation = 'relu',return_sequences = True),
layers.LSTM(32,activation = 'relu',return_sequences = False),
#layers.LSTM(64,activation = 'relu',input_shape= (X_train.shape[1],X_train.shape[2])),
layers.Dropout(0.5),
layers.Dense(Y_train.shape[1])
])
model.compile(
optimizer = tf.keras.optimizers.Adam(0.01),
loss = tf.keras.metrics.mse,
)
model.summary()
earlystop_callback = tf.keras.callbacks.EarlyStopping(
monitor='val_loss', min_delta= 1, patience=5, verbose=1,
mode='auto', baseline=None, restore_best_weights=False
)
history = model.fit(X_train,Y_train,epochs=80,batch_size=20,callbacks=[earlystop_callback],validation_split=0.1,verbose=2,shuffle=True)
#history = model.fit(X_train,Y_train,epochs=100,batch_size=20, validation_split=0.1,verbose=2,shuffle= True)
plt.figure(figsize=(12,7))
plt.plot(history.history['loss'],label= 'training loss')
plt.plot(history.history['val_loss'],label = 'validation loss')
plt.legend()
plt.show()
forcast = model.predict(X_test)
print(forcast.shape)
print(forcast)
y_pred_true = scaler.inverse_transform(forcast)
print(y_pred_true)
print(y_pred_true.shape)
y_pred_true_reshape = y_pred_true.reshape((-1,))
y_pred_true_reshape.shape
day = horizon
forecast_period_dates = pd.date_range(list(train_dates)[-day], periods=day, freq='1d').tolist()
forecast_period_dates
# Convert timestamp to date
forecast_dates = []
for time_i in forecast_period_dates:
forecast_dates.append(time_i.date())
df_forecast = pd.DataFrame({'Date':np.array(forecast_dates), 'value':y_pred_true_reshape})
df_forecast['Date']=pd.to_datetime(df_forecast['Date'])
original = data[['time_value', 'value']]
original['time_value']=pd.to_datetime(original['time_value'])
#original1= original.copy()
original = original.loc[original['time_value'] >= '2020-12-1']
train = original.loc[original['time_value'] < '2021-2-23']
test = original.loc[original['time_value'] >= '2021-2-23']
rmse = math.sqrt(mean_squared_error(test['value'],df_forecast['value']))
print('Test RMSE : %.3f ' % rmse)
rmsee = str(round(rmse,3))
window_sizee = str(window_size)
horizonn = str(horizon)
plt.figure(figsize=(12,7))
plt.title('Prediction of mortality(deaths per 100,000 people) in the state ak with rmse '+ rmsee + ' timestep '+ window_sizee +' for horizon ' + horizonn)
plt.plot(train['time_value'], train['value'],label="original")
plt.plot(test['time_value'], test['value'],label="ture")
plt.plot(df_forecast['Date'], df_forecast['value'],label="predict")
"""open the grid"""
plt.grid(True)
plt.legend()
#plt.legend(bbox_to_anchor=(1.0, 1), loc=1, borderaxespad=0.)
plt.show()
# forcast_train = model.predict(X_train)
# forcast_train_copies = numpy.repeat(forcast_train,10,axis = -1)
# x_pred_true = scaler.inverse_transform(forcast_train_copies)[:,0]
# #x_pred_true
# periods = len(x_pred_true)
# day = X_test.shape[0] + periods
# print(periods,day)
# forecast_period_dates = pd.date_range(list(train_dates)[-day], periods=periods, freq='1d').tolist()
# forecast_period_dates
# # Convert timestamp to date
# forecast_dates = []
# for time_i in forecast_period_dates:
# forecast_dates.append(time_i.date())
# df_forecast_x = pd.DataFrame({'Date':np.array(forecast_dates), 'value':x_pred_true})
# df_forecast_x['Date']=pd.to_datetime(df_forecast_x['Date'])
# original_data1 = data1.truncate(before='2020-5-16',after='2021-2-14')
# original2 = original_data1['value']
# rmse_ori = math.sqrt(mean_squared_error(original2,df_forecast_x['value']))
# print('Test RMSE : %.3f ' % rmse_ori)
# rmsee_ori = str(round(rmse_ori,3))
# plt.figure(figsize=(12,7))
# plt.title('Fitting of mortality(deaths per 100,000 people) in the state ak with rmse '+ rmsee_ori )
# plt.plot(original2,label="original")
# #plt.plot(test['time_value'], test['value'],"_-",label="ture")
# plt.plot(df_forecast_x['Date'], df_forecast_x['value'],label="predict")
# """open the grid"""
# plt.grid(True)
# plt.legend()
# #plt.legend(bbox_to_anchor=(1.0, 1), loc=1, borderaxespad=0.)
# plt.show()
# #plt.savefig('Prediction of mortality(deaths per 100,000 people) in the state ak with rmse '+ rmsee )
```
| github_jupyter |
# Chapter 1: Pandas Foundations
## Recipes
* [Dissecting the anatomy of a DataFrame](#Dissecting-the-anatomy-of-a-DataFrame)
* [Accessing the main DataFrame components](#Accessing-the-main-DataFrame-components)
* [Understanding data types](#Understanding-data-types)
* [Selecting a single column of data as a Series](#Selecting-a-single-column-of-data-as-a-Series)
* [Calling Series methods](#Calling-Series-methods)
* [Working with operators on a Series](#Working-with-operators-on-a-Series)
* [Chaining Series methods together](#Chaining-Series-methods-together)
* [Making the index meaningful](#Making-the-index-meaningful)
* [Renaming row and column names](#Renaming-row-and-column-names)
* [Creating and deleting columns](#Creating-and-deleting-columns)
```
import pandas as pd
import numpy as np
```

# Dissecting the anatomy of a DataFrame
#### Change options to get specific output for book
```
pd.set_option('max_columns', 8, 'max_rows', 10)
movie = pd.read_csv('data/movie.csv')
movie.head()
```

# Accessing the main DataFrame components
```
columns = movie.columns
index = movie.index
data = movie.values
columns
index
data
type(index)
type(columns)
type(data)
issubclass(pd.RangeIndex, pd.Index)
```
## There's more
```
index.values
columns.values
```
# Understanding data types
```
movie = pd.read_csv('data/movie.csv')
movie.dtypes
movie.get_dtype_counts()
```
# Selecting a single column of data as a Series
```
movie = pd.read_csv('data/movie.csv')
movie['director_name']
movie.director_name
type(movie['director_name'])
```
## There's more
```
director = movie['director_name'] # save Series to variable
director.name
director.to_frame().head()
```
# Calling Series methods
## Getting ready...
```
s_attr_methods = set(dir(pd.Series))
len(s_attr_methods)
df_attr_methods = set(dir(pd.DataFrame))
len(df_attr_methods)
len(s_attr_methods & df_attr_methods)
```
## How to do it...
```
movie = pd.read_csv('data/movie.csv')
director = movie['director_name']
actor_1_fb_likes = movie['actor_1_facebook_likes']
director.head()
actor_1_fb_likes.head()
pd.set_option('max_rows', 8)
director.value_counts()
actor_1_fb_likes.value_counts()
director.size
director.shape
len(director)
director.count()
actor_1_fb_likes.count()
actor_1_fb_likes.quantile()
actor_1_fb_likes.min(), actor_1_fb_likes.max(), \
actor_1_fb_likes.mean(), actor_1_fb_likes.median(), \
actor_1_fb_likes.std(), actor_1_fb_likes.sum()
actor_1_fb_likes.describe()
director.describe()
actor_1_fb_likes.quantile(.2)
actor_1_fb_likes.quantile([.1, .2, .3, .4, .5, .6, .7, .8, .9])
director.isnull()
actor_1_fb_likes_filled = actor_1_fb_likes.fillna(0)
actor_1_fb_likes_filled.count()
actor_1_fb_likes_dropped = actor_1_fb_likes.dropna()
actor_1_fb_likes_dropped.size
```
## There's more...
```
director.value_counts(normalize=True)
director.hasnans
director.notnull()
```
# Working with operators on a Series
```
pd.options.display.max_rows = 6
5 + 9 # plus operator example. Adds 5 and 9
4 ** 2 # exponentiation operator. Raises 4 to the second power
a = 10 # assignment operator.
5 <= 9 # less than or equal to operator
'abcde' + 'fg' # plus operator for strings. C
not (5 <= 9) # not is an operator that is a reserved keyword and reverse a boolean
7 in [1, 2, 6] # in operator checks for membership of a list
set([1,2,3]) & set([2,3,4])
[1, 2, 3] - 3
a = set([1,2,3])
a[0] # the indexing operator does not work with sets
```
## Getting ready...
```
movie = pd.read_csv('data/movie.csv')
imdb_score = movie['imdb_score']
imdb_score
imdb_score + 1
imdb_score * 2.5
imdb_score // 7
imdb_score > 7
director = movie['director_name']
director == 'James Cameron'
```
## There's more...
```
imdb_score.add(1) # imdb_score + 1
imdb_score.mul(2.5) # imdb_score * 2.5
imdb_score.floordiv(7) # imdb_score // 7
imdb_score.gt(7) # imdb_score > 7
director.eq('James Cameron') # director == 'James Cameron'
imdb_score.astype(int).mod(5)
a = type(1)
type(a)
a = type(imdb_score)
a([1,2,3])
```
# Chaining Series methods together
```
movie = pd.read_csv('data/movie.csv')
actor_1_fb_likes = movie['actor_1_facebook_likes']
director = movie['director_name']
director.value_counts().head(3)
actor_1_fb_likes.isnull().sum()
actor_1_fb_likes.dtype
actor_1_fb_likes.fillna(0)\
.astype(int)\
.head()
```
## There's more...
```
actor_1_fb_likes.isnull().mean()
(actor_1_fb_likes.fillna(0)
.astype(int)
.head())
```
# Making the index meaningful
```
movie = pd.read_csv('data/movie.csv')
movie.shape
movie2 = movie.set_index('movie_title')
movie2
pd.read_csv('data/movie.csv', index_col='movie_title')
```
# There's more...
```
movie2.reset_index()
```
# Renaming row and column names
```
movie = pd.read_csv('data/movie.csv', index_col='movie_title')
idx_rename = {'Avatar':'Ratava', 'Spectre': 'Ertceps'}
col_rename = {'director_name':'Director Name',
'num_critic_for_reviews': 'Critical Reviews'}
movie.rename(index=idx_rename,
columns=col_rename).head()
```
# There's more
```
movie = pd.read_csv('data/movie.csv', index_col='movie_title')
index = movie.index
columns = movie.columns
index_list = index.tolist()
column_list = columns.tolist()
index_list[0] = 'Ratava'
index_list[2] = 'Ertceps'
column_list[1] = 'Director Name'
column_list[2] = 'Critical Reviews'
print(index_list[:5])
print(column_list)
movie.index = index_list
movie.columns = column_list
movie.head()
```
# Creating and deleting columns
```
movie = pd.read_csv('data/movie.csv')
movie['has_seen'] = 0
movie.columns
movie['actor_director_facebook_likes'] = (movie['actor_1_facebook_likes'] +
movie['actor_2_facebook_likes'] +
movie['actor_3_facebook_likes'] +
movie['director_facebook_likes'])
movie['actor_director_facebook_likes'].isnull().sum()
movie['actor_director_facebook_likes'] = movie['actor_director_facebook_likes'].fillna(0)
movie['is_cast_likes_more'] = (movie['cast_total_facebook_likes'] >=
movie['actor_director_facebook_likes'])
movie['is_cast_likes_more'].all()
movie = movie.drop('actor_director_facebook_likes', axis='columns')
movie['actor_total_facebook_likes'] = (movie['actor_1_facebook_likes'] +
movie['actor_2_facebook_likes'] +
movie['actor_3_facebook_likes'])
movie['actor_total_facebook_likes'] = movie['actor_total_facebook_likes'].fillna(0)
movie['is_cast_likes_more'] = movie['cast_total_facebook_likes'] >= \
movie['actor_total_facebook_likes']
movie['is_cast_likes_more'].all()
movie['pct_actor_cast_like'] = (movie['actor_total_facebook_likes'] /
movie['cast_total_facebook_likes'])
movie['pct_actor_cast_like'].min(), movie['pct_actor_cast_like'].max()
movie.set_index('movie_title')['pct_actor_cast_like'].head()
```
## There's more...
```
profit_index = movie.columns.get_loc('gross') + 1
profit_index
movie.insert(loc=profit_index,
column='profit',
value=movie['gross'] - movie['budget'])
movie.head()
```
| github_jupyter |
```
import tensorflow.compat.v2 as tf
import tensorflow_datasets as tfds
import tensorflow.keras.backend as kb
from backwardcompatibilityml import scores
from backwardcompatibilityml.tensorflow import helpers as tf_helpers
from backwardcompatibilityml.tensorflow.loss.strict_imitation import BCStrictImitationKLDivLoss
import copy
tf.enable_v2_behavior()
tf.random.set_seed(0)
(ds_train, ds_test), ds_info = tfds.load(
'mnist',
split=['train', 'test'],
shuffle_files=True,
as_supervised=True,
with_info=True,
)
def normalize_img(image, label):
"""Normalizes images: `uint8` -> `float32`."""
label_one_hot = tf.one_hot(label, 10)
return tf.cast(image, tf.float32) / 255., label_one_hot
ds_train = ds_train.map(
normalize_img, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds_train = ds_train.cache()
ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples)
ds_train = ds_train.batch(128)
ds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)
ds_test = ds_test.map(
normalize_img, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds_test = ds_test.batch(128)
ds_test = ds_test.cache()
ds_test = ds_test.prefetch(tf.data.experimental.AUTOTUNE)
kldiv_loss = tf.keras.losses.KLDivergence()
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
tf.keras.layers.Dense(128,activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=kldiv_loss,
optimizer=tf.keras.optimizers.Adam(0.001),
metrics=['accuracy'],
)
model.fit(
ds_train,
epochs=3,
validation_data=ds_test,
)
lambda_c = 0.9
model.trainable = False
h2 = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
tf.keras.layers.Dense(128,activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
bc_loss = BCStrictImitationKLDivLoss(model, h2, lambda_c)
optimizer = tf.keras.optimizers.Adam(0.001)
tf_helpers.bc_fit(h2, training_set=ds_train, testing_set=ds_test, epochs=6, bc_loss=bc_loss, optimizer=optimizer)
model.trainable = False
h2.trainable = False
h1_predicted_labels = []
h2_predicted_labels = []
ground_truth_labels = []
for x_batch_test, y_batch_test in ds_test:
h1_batch_predictions = tf.argmax(model(x_batch_test), axis=1)
h2_batch_predictions = tf.argmax(h2(x_batch_test), axis=1)
h1_predicted_labels += h1_batch_predictions.numpy().tolist()
h2_predicted_labels += h2_batch_predictions.numpy().tolist()
ground_truth_labels += y_batch_test.numpy().tolist()
btc = scores.trust_compatibility_score(h1_predicted_labels, h2_predicted_labels, ground_truth_labels)
bec = scores.error_compatibility_score(h1_predicted_labels, h2_predicted_labels, ground_truth_labels)
print(f"lambda_c: {lambda_c}")
print(f"BTC: {btc}")
print(f"BEC: {bec}")
```
| github_jupyter |
<a href="https://colab.research.google.com/github/ashikshafi08/Learning_Tensorflow/blob/main/Experiments/Generator_to_Dataset.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
For this experiment, we'll use a dataset from AI Crowd competition (live now) https://www.aicrowd.com/challenges/ai-blitz-8/problems/f1-team-classification
This is just for experiment purposes learning how to use `tf.data.Dataset.from_generators()` and this dataset was a suitable one to experiment with.
# Creating a Dataset object from ImageDataGenerator
Since I am new to tensorflow and the `tf.data` api I wasn't sure how to construct complex pipelines. It was easy using `ImageDataGenerator` (high-level api) especially with directory and dataframe to load in images.
I came over this handy method `tf.data.Dataset.from_generator()` which help us createa a dataset object from the generator object itself. How cool?
Try to wrap the `Dataset` class around this data generators.
We will be looking into `.flow_from_dataframe()` method.
### Things we'll be doing
- Use transfer learning fine tuning to train our model
- Use mixed_precision
- Use prefetch
```
# Checking the GPU
!nvidia-smi
# Getting some helper functions from Daniels' TensorFlow Course
!wget https://raw.githubusercontent.com/mrdbourke/tensorflow-deep-learning/main/extras/helper_functions.py
# Importing the needed functions for our use
from helper_functions import plot_loss_curves , compare_historys
# Using AI Crowd APi to download our data
!pip install aicrowd-cli
API_KEY = '#########'
!aicrowd login --api-key $API_KEY
# Downloading the dataset
!aicrowd dataset download --challenge f1-team-classification -j 3
!rm -rf data
!mkdir data
!unzip train.zip -d data/train
!unzip val.zip -d data/val
!unzip test.zip -d data/test
!mv train.csv data/train.csv
!mv val.csv data/val.csv
!mv sample_submission.csv data/sample_submission.csv
# Let's create a variable for our data paths
train_dir = 'data/train/'
test_dir = 'data/test/'
val_dir = 'data/val/'
# Our ImageID and label dataframes
import pandas as pd
import numpy as np
df_train = pd.read_csv('data/train.csv')
df_val = pd.read_csv('data/val.csv')
# Looking into our train dataframe
df_train.head()
```
## Becoming one with the data
Alright now we've got our data and it's time to visualize it and see how they look.
```
# Are the labels are well balanced?
df_train['label'].value_counts()
# How many images are there in the training directory?
df_train['ImageID'].shape
# Defining some parameters
import tensorflow as tf
BATCH_SIZE = 64
IMG_SIZE = (224 , 224)
# Creating our ImageDataGenerators for train and valid
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale = 1/255.)
valid_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale = 1/255.)
# How does our filenames looks like?
import os
print(os.listdir(train_dir)[:10])
# Adding the jpg extension to our ImageID in our train and valid dataframe
def append_ext(fn):
return f'{fn}.jpg'
# Now applying our function
df_train['ImageID'] = df_train['ImageID'].apply(append_ext)
df_val['ImageID'] = df_val['ImageID'].apply(append_ext)
# Looking into our ImageID column
df_train['ImageID'][:5]
# Now it's time to import our data into the generator
train_data_all = train_datagen.flow_from_dataframe(dataframe= df_train ,
directory = train_dir ,
x_col = 'ImageID' ,
y_col = 'label' ,
target_size = IMG_SIZE ,
class_mode = 'binary' ,
batch_size = 32 ,
shuffle = True)
val_data_all = valid_datagen.flow_from_dataframe(dataframe = df_val ,
directory = val_dir ,
x_col = 'ImageID' ,
y_col = 'label' ,
target_size = IMG_SIZE ,
class_mode = 'binary',
batch_size = 32 ,
shuffle = True)
# Without any transformations (batch_size , imgsize etc..)
train_data_none = train_datagen.flow_from_dataframe(dataframe= df_train ,
directory = train_dir ,
x_col = 'ImageID' ,
y_col = 'label' ,
batch_size = 32 ,
class_mode = 'binary' )
val_data_none = valid_datagen.flow_from_dataframe(dataframe = df_val ,
directory = val_dir ,
x_col = 'ImageID' ,
y_col = 'label' ,
batch_size = 32,
class_mode = 'binary')
# Checking the image , label shape and dtype (with transforms)
images, labels = next(train_data_all)
# Checking their shapes and dtypes
images.shape , labels.shape , images.dtype , labels.dtype
# Checking the image , label shapes an dtypes (without any transforms)
images_none , labels_none = next(train_data_none)
# Checking their shapes and dtypes
images_none.shape , labels_none.shape , images_none.dtype , labels_none.dtype
# Getting the class indices
train_data_all.class_indices
```
### Creating a dataset using `tf.data.Dataset.from_generators()`
Now we're going to convert the generator into Dataset object using the `tf.data.Dataset.from_generator()`
Things to be noted:
- In the place of `lambda` use your datagenerator object.
- The **output_shapes** is really important because our dataset object will returns the exact shape we mention inside the `output_shapes`.
This was the reason we examined our data types and shape above as soon as we built our generator.
#### Creating a dataset with the transforms here (just for experimentation)
```
train_dataset_all = tf.data.Dataset.from_generator(
lambda: train_data_all ,
output_types = (tf.float32 , tf.float32) ,
output_shapes = ([32 , 224 , 224 , 3] , [32 , ])
)
valid_dataset_all = tf.data.Dataset.from_generator(
lambda: val_data_all ,
output_types = (tf.float32 , tf.float32),
output_shapes = ([32 , 224 , 224 , 3] , [32 , ])
)
train_dataset_all , valid_dataset_all
```
#### Creating a dataset without any transforms (just for experimentations)
```
train_dataset_none = tf.data.Dataset.from_generator(
lambda: train_data_none ,
output_types = (tf.float32 , tf.float32) ,
output_shapes = ([32 , 256 , 256 , 3] , [32 , ])
)
valid_dataset_none = tf.data.Dataset.from_generator(
lambda: val_data_all ,
output_types = (tf.float32 , tf.float32),
output_shapes = ([32 , 256 , 256 , 3] , [32 , ])
)
train_dataset_none , valid_dataset_none
```
### **Note**
Since we're derived our from dataset object from a generator we won't be able to use `len()` function to know the number of samples in our dataset.
We can use cardinality to get the number of samples in our dataset. It's because in our case after the conversion the length is unknown and infinite.
`tf.data.experimental.cardinality` --> returns cardinality of the **dataset**
This will return -2 for now.

It should return **40000** (for train) because that was the number of samples (images) in our train directory.

But don't worry we can even fix this by using a similar function, since our length is unknown and it's the common case when you convert from generator to a dataset object.
We can explicitly enter our number of samples and even better we can use the `len()` function now on our dataset using,
`tf.data.experimental.assert_cardinality()` --> Asserts the cardinality of the dataset. Now will apply this to our dataset.
```
# Using assert_cardinality to add the number of samples (input)
train_dataset_all = train_dataset_all.apply(tf.data.experimental.assert_cardinality(40000))
valid_dataset_all = valid_dataset_all.apply(tf.data.experimental.assert_cardinality(4000))
# Same for our without transformations dataset
train_dataset_none = train_dataset_none.apply(tf.data.experimental.assert_cardinality(40000))
valid_dataset_none = valid_dataset_none.apply(tf.data.experimental.assert_cardinality(4000))
train_dataset_all , valid_dataset_all
# Now checkin the len
len(train_dataset_all) , len(valid_dataset_all)
# Setting up mixed precision
from tensorflow.keras import mixed_precision
mixed_precision.set_global_policy(policy = 'mixed_float16')
mixed_precision.global_policy() # should output "mixed_float16"
# How many classes are there?
train_data_all.class_indices
# Visualizing our images
import matplotlib.pyplot as plt
x , y = train_data.next()
for i in range(0, 4):
image = x[i]
label = y[i]
plt.axis(False)
# print(label) --> for checking whether it's plotting right ones
if label == 1.0:
label = 'redbull'
else:
label = 'mercedes'
plt.title(label)
plt.imshow(image)
plt.show()
# Getting our class names in a list
class_names = list(train_data_all.class_indices.keys())
len(class_names)
```
## Modelling
```
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
# Create base model
input_shape = (224, 224, 3)
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False # freeze base model layers
# Create Functional model
inputs = layers.Input(shape=input_shape, name="input_layer")
# Note: EfficientNetBX models have rescaling built-in but if your model didn't you could have a layer like below
# x = preprocessing.Rescaling(1./255)(x)
x = base_model(inputs, training=False) # set base_model to inference mode only
x = layers.GlobalAveragePooling2D(name="pooling_layer")(x)
x = layers.Dense(1)(x) # want one output neuron per class
# Separate activation of output layer so we can output float32 activations
outputs = layers.Activation("sigmoid", dtype=tf.float32, name="softmax_float32")(x)
model_1 = tf.keras.Model(inputs, outputs)
# Checking whether our layers are using mixed precision
for layer in model_1.layers:
print(layer.name , layer.trainable , layer.dtype , layer.dtype_policy)
# Tensorflow addons for f1-score
!pip install tensorflow_addons
import tensorflow_addons as tfa
f1_score = tfa.metrics.F1Score(average='macro' , num_classes= 1)
# Compile the model
model_1.compile(loss = tf.keras.losses.BinaryCrossentropy() ,
optimizer = tf.keras.optimizers.Adam() ,
metrics = ['accuracy' , f1_score])
```
Let's train the model again
> **Note**: Before using `len(train_data)` in the `steps_per_epoch` we should divide it by our **batch_size**.
```
# To get the actual steps for epochs for our train data
len(train_dataset_all) // 64
# Training a feature extraction model
history_feature_model_1 = model_1.fit(train_dataset_all ,
steps_per_epoch = len(train_dataset_all) // 32,
epochs = 3 ,
validation_data = valid_dataset_all,
validation_steps = int(0.15 * (len(valid_dataset_all))) )
# Gotta unfreeze all the layers
base_model.trainable = True
# Refreeze all layers except last 5
for layer in base_model.layers[:-3]:
layer.trainable = False
# Compiling the model again making the change
model_1.compile(loss = tf.keras.losses.BinaryCrossentropy() ,
optimizer = tf.keras.optimizers.Adam(lr = 0.0001) ,
metrics = ['accuracy' , f1_score])
# Creating learning rate reduction callback
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss",
factor=0.2, # multiply the learning rate by 0.2 (reduce by 5x)
patience=2,
verbose=1, # print out when learning rate goes down
min_lr=1e-7)
# Re-fit to fine tune the model
initial_epochs = 5
fine_tune_epochs = initial_epochs + 25
history_fine_model_1 = model_1.fit(train_dataset_all ,
steps_per_epoch = len(train_dataset_all) // 32 ,
epochs = fine_tune_epochs ,
initial_epoch = history_feature_model_1.epoch[-1] ,
validation_data = valid_dataset_all ,
validation_steps = int(0.15 * (len(valid_dataset_all))) ,
callbacks = [reduce_lr])
```
### Log (should be improved)
Epoch 3/30
1250/1250 [==============================] - 151s 118ms/step - loss: 0.6951 - accuracy: 0.5050 - f1_score: 0.6656 - val_loss: 0.6951 - val_accuracy: 0.4953 - val_f1_score: 0.6624
Epoch 4/30
1250/1250 [==============================] - 145s 116ms/step - loss: 0.6944 - accuracy: 0.5048 - f1_score: 0.6677 - val_loss: 0.6932 - val_accuracy: 0.5073 - val_f1_score: 0.6602
Epoch 5/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6947 - accuracy: 0.4983 - f1_score: 0.6681 - val_loss: 0.6939 - val_accuracy: 0.4971 - val_f1_score: 0.6641
Epoch 6/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6943 - accuracy: 0.5001 - f1_score: 0.6683 - val_loss: 0.6930 - val_accuracy: 0.5061 - val_f1_score: 0.6612
Epoch 7/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6941 - accuracy: 0.5010 - f1_score: 0.6701 - val_loss: 0.6933 - val_accuracy: 0.4938 - val_f1_score: 0.6611
Epoch 8/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6941 - accuracy: 0.4985 - f1_score: 0.6630 - val_loss: 0.6931 - val_accuracy: 0.5199 - val_f1_score: 0.6628
Epoch 00008: ReduceLROnPlateau reducing learning rate to 1.9999999494757503e-05.
Epoch 9/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6933 - accuracy: 0.4955 - f1_score: 0.6656 - val_loss: 0.6931 - val_accuracy: 0.5056 - val_f1_score: 0.6616
Epoch 10/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6932 - accuracy: 0.5005 - f1_score: 0.6648 - val_loss: 0.6932 - val_accuracy: 0.4961 - val_f1_score: 0.6632
Epoch 00010: ReduceLROnPlateau reducing learning rate to 3.999999898951501e-06.
Epoch 11/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6931 - accuracy: 0.5013 - f1_score: 0.6666 - val_loss: 0.6932 - val_accuracy: 0.4939 - val_f1_score: 0.6612
Epoch 12/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6931 - accuracy: 0.5037 - f1_score: 0.6677 - val_loss: 0.6930 - val_accuracy: 0.5054 - val_f1_score: 0.6619
Epoch 00012: ReduceLROnPlateau reducing learning rate to 7.999999979801942e-07.
Epoch 13/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6932 - accuracy: 0.4973 - f1_score: 0.6691 - val_loss: 0.6930 - val_accuracy: 0.5052 - val_f1_score: 0.6620
Epoch 14/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6932 - accuracy: 0.4954 - f1_score: 0.6728 - val_loss: 0.6931 - val_accuracy: 0.5091 - val_f1_score: 0.6610
Epoch 00014: ReduceLROnPlateau reducing learning rate to 1.600000018697756e-07.
Epoch 15/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6931 - accuracy: 0.5219 - f1_score: 0.6698 - val_loss: 0.6931 - val_accuracy: 0.4970 - val_f1_score: 0.6626
Epoch 16/30
1250/1250 [==============================] - 145s 116ms/step - loss: 0.6931 - accuracy: 0.5025 - f1_score: 0.6658 - val_loss: 0.6931 - val_accuracy: 0.5027 - val_f1_score: 0.6623
Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-07.
Epoch 17/30
1250/1250 [==============================] - 145s 116ms/step - loss: 0.6931 - accuracy: 0.5117 - f1_score: 0.6680 - val_loss: 0.6931 - val_accuracy: 0.4972 - val_f1_score: 0.6609
Epoch 18/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6931 - accuracy: 0.5091 - f1_score: 0.6684 - val_loss: 0.6931 - val_accuracy: 0.4991 - val_f1_score: 0.6626
Epoch 00018: ReduceLROnPlateau reducing learning rate to 1e-07.
Epoch 19/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6931 - accuracy: 0.5043 - f1_score: 0.6677 - val_loss: 0.6931 - val_accuracy: 0.4999 - val_f1_score: 0.6629
Epoch 20/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6931 - accuracy: 0.5048 - f1_score: 0.6670 - val_loss: 0.6931 - val_accuracy: 0.4975 - val_f1_score: 0.6609
Epoch 00020: ReduceLROnPlateau reducing learning rate to 1e-07.
Epoch 21/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6931 - accuracy: 0.5094 - f1_score: 0.6707 - val_loss: 0.6931 - val_accuracy: 0.4972 - val_f1_score: 0.6626
Epoch 22/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6931 - accuracy: 0.5023 - f1_score: 0.6662 - val_loss: 0.6931 - val_accuracy: 0.5002 - val_f1_score: 0.6630
Epoch 00022: ReduceLROnPlateau reducing learning rate to 1e-07.
Epoch 23/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6931 - accuracy: 0.5080 - f1_score: 0.6700 - val_loss: 0.6931 - val_accuracy: 0.4959 - val_f1_score: 0.6616
Epoch 24/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6931 - accuracy: 0.5017 - f1_score: 0.6656 - val_loss: 0.6931 - val_accuracy: 0.4978 - val_f1_score: 0.6614
Epoch 00024: ReduceLROnPlateau reducing learning rate to 1e-07.
Epoch 25/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6931 - accuracy: 0.5032 - f1_score: 0.6659 - val_loss: 0.6931 - val_accuracy: 0.4993 - val_f1_score: 0.6627
Epoch 26/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6931 - accuracy: 0.5052 - f1_score: 0.6682 - val_loss: 0.6931 - val_accuracy: 0.4979 - val_f1_score: 0.6613
Epoch 00026: ReduceLROnPlateau reducing learning rate to 1e-07.
Epoch 27/30
1250/1250 [==============================] - 147s 117ms/step - loss: 0.6931 - accuracy: 0.5065 - f1_score: 0.6702 - val_loss: 0.6931 - val_accuracy: 0.4976 - val_f1_score: 0.6628
Epoch 28/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6931 - accuracy: 0.5035 - f1_score: 0.6687 - val_loss: 0.6931 - val_accuracy: 0.4983 - val_f1_score: 0.6608
Epoch 00028: ReduceLROnPlateau reducing learning rate to 1e-07.
Epoch 29/30
1250/1250 [==============================] - 147s 118ms/step - loss: 0.6931 - accuracy: 0.5078 - f1_score: 0.6692 - val_loss: 0.6931 - val_accuracy: 0.4980 - val_f1_score: 0.6615
Epoch 30/30
1250/1250 [==============================] - 146s 117ms/step - loss: 0.6931 - accuracy: 0.5074 - f1_score: 0.6713 - val_loss: 0.6931 - val_accuracy: 0.5005 - val_f1_score: 0.6632
Epoch 00030: ReduceLROnPlateau reducing learning rate to 1e-07.
```
d
```
| github_jupyter |
# Machine Translation with Transformer
Tutorial from:
https://www.tensorflow.org/tutorials/text/transformer
```
import tensorflow_datasets as tfds
import tensorflow as tf
import time
import numpy as np
import matplotlib.pyplot as plt
examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
tokenizer_en = tfds.features.text.SubwordTextEncoder.build_from_corpus(
(en.numpy() for pt, en in train_examples), target_vocab_size=2**13)
tokenizer_pt = tfds.features.text.SubwordTextEncoder.build_from_corpus(
(pt.numpy() for pt, en in train_examples), target_vocab_size=2**13)
BUFFER_SIZE = 20000
BATCH_SIZE = 64
```
## Add start and end tokens to the input and target
```
def encode(lang1, lang2):
lang1 = [tokenizer_pt.vocab_size] + tokenizer_pt.encode(
lang1.numpy()) + [tokenizer_pt.vocab_size+1]
lang2 = [tokenizer_en.vocab_size] + tokenizer_en.encode(
lang2.numpy()) + [tokenizer_en.vocab_size+1]
return lang1, lang2
# Wrap the previous function in a tf.py_function.
def tf_encode(pt, en):
result_pt, result_en = tf.py_function(encode, [pt, en], [tf.int64, tf.int64])
result_pt.set_shape([None])
result_pt.set_shape([None])
return result_pt, result_en
MAX_LENGTH = 40
def filter_max_length(x, y, max_length=MAX_LENGTH):
return tf.logical_and(tf.size(x) <= max_length,
tf.size(y) <= max_length)
train_preprocessed = (
train_examples
.map(tf_encode)
.filter(filter_max_length)
# cache the dataset to memory to get a speedup while reading from it.
.cache()
.shuffle(BUFFER_SIZE))
val_preprocessed = (
val_examples
.map(tf_encode)
.filter(filter_max_length))
```
## Pad and batch examples
```
train_dataset = (train_preprocessed
.padded_batch(BATCH_SIZE, padded_shapes=([None], [None]))
.prefetch(tf.data.experimental.AUTOTUNE))
val_dataset = (val_preprocessed
.padded_batch(BATCH_SIZE, padded_shapes=([None], [None])))
```
## Get a batch from the validation set
```
pt_batch, en_batch = next(iter(val_dataset))
pt_batch, en_batch
```
## Positional encoding
```
def get_angles(pos, i, d_model):
angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
return pos * angle_rates
def positional_encoding(position, d_model):
angle_rads = get_angles(np.arange(position)[:, np.newaxis],
np.arange(d_model)[np.newaxis, :],
d_model)
# apply sin to even indices in the array; 2i
angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])
# apply cos to odd indices in the array; 2i+1
angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])
pos_encoding = angle_rads[np.newaxis, ...]
return tf.cast(pos_encoding, dtype=tf.float32)
pos_encoding = positional_encoding(50, 512)
print (pos_encoding.shape)
plt.pcolormesh(pos_encoding[0], cmap='RdBu')
plt.xlabel('Depth')
plt.xlim((0, 512))
plt.ylabel('Position')
plt.colorbar()
plt.show()
```
## Masking
Indicate where the pad value "0" is present. Output is "1" for these locations and "0" otherwise.
```
def create_padding_mask(seq):
seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
# Add extra dimensions to add the padding to the
# attention logits.
return seq[:, tf.newaxis, tf.newaxis, :] # (batchSize, 1, 1, seqLen)
x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
create_padding_mask(x)
def create_look_ahead_mask(size):
mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
return mask # (seq_len, seq_len)
```
## Scaled dot product attention.
```
def scaled_dot_product_attention(q, k, v, mask):
"""Calculate the attention weights.
q, k, v must have matching leading dimensions.
k, v must have matching penultimate dimension, i.e.: seq_len_k = seq_len_v.
The mask has different shapes depending on its type(padding or look ahead)
but it must be broadcastable for addition.
Args:
q: query shape == (..., seq_len_q, depth)
k: key shape == (..., seq_len_k, depth)
v: value shape == (..., seq_len_v, depth_v)
mask: Float tensor with shape broadcastable
to (..., seq_len_q, seq_len_k). Defaults to None.
Returns:
output, attention_weights
"""
matmul_qk = tf.matmul(q, k, transpose_b=True) # (..., seq_len_q, seq_len_k)
# scale matmul_qk
dk = tf.cast(tf.shape(k)[-1], tf.float32)
scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)
# add the mask to the scaled tensor.
if mask is not None:
scaled_attention_logits += (mask * -1e9)
# softmax is normalized on the last axis (seq_len_k) so that the scores
# add up to 1.
attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1) # (..., seq_len_q, seq_len_k)
output = tf.matmul(attention_weights, v) # (..., seq_len_q, depth_v)
return output, attention_weights
def print_out(q, k, v):
temp_out, temp_attn = scaled_dot_product_attention(
q, k, v, None)
print ('Attention weights are:')
print (temp_attn)
print ('Output is:')
print (temp_out)
np.set_printoptions(suppress=True)
temp_k = tf.constant([[10,0,0],
[0,10,0],
[0,0,10],
[0,0,10]], dtype=tf.float32) # (4, 3)
temp_v = tf.constant([[ 1,0],
[ 10,0],
[ 100,5],
[1000,6]], dtype=tf.float32) # (4, 2)
# This `query` aligns with the second `key`,
# so the second `value` is returned.
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32) # (1, 3)
print_out(temp_q, temp_k, temp_v)
# This query aligns with a repeated key (third and fourth),
# so all associated values get averaged.
temp_q = tf.constant([[0, 0, 10]], dtype=tf.float32) # (1, 3)
print_out(temp_q, temp_k, temp_v)
# This query aligns equally with the first and second key,
# so their values get averaged.
temp_q = tf.constant([[10, 10, 0]], dtype=tf.float32) # (1, 3)
print_out(temp_q, temp_k, temp_v)
# Pass all queries together.
temp_q = tf.constant([[0, 0, 10], [0, 10, 0], [10, 10, 0]], dtype=tf.float32) # (3, 3)
print_out(temp_q, temp_k, temp_v)
```
## Multi-head attention
```
class MultiHeadAttention(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads):
super(MultiHeadAttention, self).__init__()
self.num_heads = num_heads
self.d_model = d_model
assert d_model % self.num_heads == 0
self.depth = d_model // self.num_heads
self.wq = tf.keras.layers.Dense(d_model)
self.wk = tf.keras.layers.Dense(d_model)
self.wv = tf.keras.layers.Dense(d_model)
self.dense = tf.keras.layers.Dense(d_model)
def split_heads(self, x, batch_size):
"""Split the last dimension into (num_heads, depth).
Transpose the result such that the shape is (batch_size, num_heads, seq_len, depth)
"""
x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
return tf.transpose(x, perm=[0, 2, 1, 3])
def call(self, v, k, q, mask):
batch_size = tf.shape(q)[0]
q = self.wq(q) # (batch_size, seq_len, d_model)
k = self.wk(k) # (batch_size, seq_len, d_model)
v = self.wv(v) # (batch_size, seq_len, d_model)
q = self.split_heads(q, batch_size) # (batch_size, num_heads, seq_len_q, depth)
k = self.split_heads(k, batch_size) # (batch_size, num_heads, seq_len_k, depth)
v = self.split_heads(v, batch_size) # (batch_size, num_heads, seq_len_v, depth)
# scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)
# attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)
scaled_attention, attention_weights = scaled_dot_product_attention(
q, k, v, mask)
scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3]) # (batch_size, seq_len_q, num_heads, depth)
concat_attention = tf.reshape(scaled_attention,
(batch_size, -1, self.d_model)) # (batch_size, seq_len_q, d_model)
output = self.dense(concat_attention) # (batch_size, seq_len_q, d_model)
return output, attention_weights
# Try the MultiHeadAttention class.
temp_mha = MultiHeadAttention(d_model=512, num_heads=8)
y = tf.random.uniform((1, 60, 512)) # (batch_size, encoder_sequence, d_model)
out, attn = temp_mha(y, k=y, q=y, mask=None)
out.shape, attn.shape
```
## Point wise feed forward network
```
def point_wise_feed_forward_network(d_model, dff):
return tf.keras.Sequential([
tf.keras.layers.Dense(dff, activation='relu'), # (batch_size, seq_len, dff)
tf.keras.layers.Dense(d_model) # (batch_size, seq_len, d_model)
])
sample_ffn = point_wise_feed_forward_network(512, 2048)
sample_ffn(tf.random.uniform((64, 50, 512))).shape
```
## Encoder layer
```
class EncoderLayer(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, dff, rate=0.1):
super(EncoderLayer, self).__init__()
self.mha = MultiHeadAttention(d_model, num_heads)
self.ffn = point_wise_feed_forward_network(d_model, dff)
self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = tf.keras.layers.Dropout(rate)
self.dropout2 = tf.keras.layers.Dropout(rate)
def call(self, x, training, mask):
attn_output, _ = self.mha(x, x, x, mask) # (batch_size, input_seq_len, d_model)
attn_output = self.dropout1(attn_output, training=training)
out1 = self.layernorm1(x + attn_output) # (batch_size, input_seq_len, d_model)
ffn_output = self.ffn(out1) # (batch_size, input_seq_len, d_model)
ffn_output = self.dropout2(ffn_output, training=training)
out2 = self.layernorm2(out1 + ffn_output) # (batch_size, input_seq_len, d_model)
return out2
```
## Decoder Layer
```
class DecoderLayer(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, dff, rate=0.1):
super(DecoderLayer, self).__init__()
self.mha1 = MultiHeadAttention(d_model, num_heads)
self.mha2 = MultiHeadAttention(d_model, num_heads)
self.ffn = point_wise_feed_forward_network(d_model, dff)
self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = tf.keras.layers.Dropout(rate)
self.dropout2 = tf.keras.layers.Dropout(rate)
self.dropout3 = tf.keras.layers.Dropout(rate)
def call(self, x, enc_output, training,
look_ahead_mask, padding_mask):
# enc_output.shape == (batch_size, input_seq_len, d_model)
attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask) # (batch_size, target_seq_len, d_model)
attn1 = self.dropout1(attn1, training=training)
out1 = self.layernorm1(attn1 + x)
attn2, attn_weights_block2 = self.mha2(
enc_output, enc_output, out1, padding_mask) # (batch_size, target_seq_len, d_model)
attn2 = self.dropout2(attn2, training=training)
out2 = self.layernorm2(attn2 + out1) # (batch_size, target_seq_len, d_model)
ffn_output = self.ffn(out2) # (batch_size, target_seq_len, d_model)
ffn_output = self.dropout3(ffn_output, training=training)
out3 = self.layernorm3(ffn_output + out2) # (batch_size, target_seq_len, d_model)
return out3, attn_weights_block1, attn_weights_block2
```
## Encoder
```
class Encoder(tf.keras.layers.Layer):
def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
maximum_position_encoding, rate=0.1):
super(Encoder, self).__init__()
self.d_model = d_model
self.num_layers = num_layers
self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)
self.pos_encoding = positional_encoding(maximum_position_encoding,
self.d_model)
self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate)
for _ in range(num_layers)]
self.dropout = tf.keras.layers.Dropout(rate)
def call(self, x, training, mask):
seq_len = tf.shape(x)[1]
# adding embedding and position encoding.
x = self.embedding(x) # (batch_size, input_seq_len, d_model)
x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
x += self.pos_encoding[:, :seq_len, :]
x = self.dropout(x, training=training)
for i in range(self.num_layers):
x = self.enc_layers[i](x, training, mask)
return x # (batch_size, input_seq_len, d_model)
```
## Decoder
```
class Decoder(tf.keras.layers.Layer):
def __init__(self, num_layers, d_model, num_heads, dff, target_vocab_size,
maximum_position_encoding, rate=0.1):
super(Decoder, self).__init__()
self.d_model = d_model
self.num_layers = num_layers
self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
self.pos_encoding = positional_encoding(maximum_position_encoding, d_model)
self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate)
for _ in range(num_layers)]
self.dropout = tf.keras.layers.Dropout(rate)
def call(self, x, enc_output, training,
look_ahead_mask, padding_mask):
seq_len = tf.shape(x)[1]
attention_weights = {}
x = self.embedding(x) # (batch_size, target_seq_len, d_model)
x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
x += self.pos_encoding[:, :seq_len, :]
x = self.dropout(x, training=training)
for i in range(self.num_layers):
x, block1, block2 = self.dec_layers[i](x, enc_output, training,
look_ahead_mask, padding_mask)
attention_weights['decoder_layer{}_block1'.format(i+1)] = block1
attention_weights['decoder_layer{}_block2'.format(i+1)] = block2
# x.shape == (batch_size, target_seq_len, d_model)
return x, attention_weights
```
## Create the Transformer
```
class Transformer(tf.keras.Model):
def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
target_vocab_size, pe_input, pe_target, rate=0.1):
super(Transformer, self).__init__()
self.encoder = Encoder(num_layers, d_model, num_heads, dff,
input_vocab_size, pe_input, rate)
self.decoder = Decoder(num_layers, d_model, num_heads, dff,
target_vocab_size, pe_target, rate)
self.final_layer = tf.keras.layers.Dense(target_vocab_size)
def call(self, inp, tar, training, enc_padding_mask,
look_ahead_mask, dec_padding_mask):
enc_output = self.encoder(inp, training, enc_padding_mask) # (batch_size, inp_seq_len, d_model)
# dec_output.shape == (batch_size, tar_seq_len, d_model)
dec_output, attention_weights = self.decoder(
tar, enc_output, training, look_ahead_mask, dec_padding_mask)
final_output = self.final_layer(dec_output) # (batch_size, tar_seq_len, target_vocab_size)
return final_output, attention_weights
```
## Set Hyperparameters
```
num_layers = 4
d_model = 128
dff = 512
num_heads = 8
input_vocab_size = tokenizer_pt.vocab_size + 2
target_vocab_size = tokenizer_en.vocab_size + 2
dropout_rate = 0.1
```
## Optimizer
Adam optimizer with custom learning rate scheduler.
```
class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
def __init__(self, d_model, warmup_steps=4000):
super(CustomSchedule, self).__init__()
self.d_model = d_model
self.d_model = tf.cast(self.d_model, tf.float32)
self.warmup_steps = warmup_steps
def __call__(self, step):
arg1 = tf.math.rsqrt(step)
arg2 = step * (self.warmup_steps ** -1.5)
return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
learning_rate = CustomSchedule(d_model)
optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98,
epsilon=1e-9)
```
## Loss and metrics
```
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_sum(loss_)/tf.reduce_sum(mask)
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
name='train_accuracy')
```
## Training and checkpointing
```
transformer = Transformer(num_layers, d_model, num_heads, dff,
input_vocab_size, target_vocab_size,
pe_input=input_vocab_size,
pe_target=target_vocab_size,
rate=dropout_rate)
def create_masks(inp, tar):
# Encoder padding mask
enc_padding_mask = create_padding_mask(inp)
# Used in the 2nd attention block in the decoder.
# This padding mask is used to mask the encoder outputs.
dec_padding_mask = create_padding_mask(inp)
# Used in the 1st attention block in the decoder.
# It is used to pad and mask future tokens in the input received by
# the decoder.
look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
dec_target_padding_mask = create_padding_mask(tar)
combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)
return enc_padding_mask, combined_mask, dec_padding_mask
checkpoint_path = "./checkpoints/train"
ckpt = tf.train.Checkpoint(transformer=transformer,
optimizer=optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)
# if a checkpoint exists, restore the latest checkpoint.
if ckpt_manager.latest_checkpoint:
ckpt.restore(ckpt_manager.latest_checkpoint)
print ('Latest checkpoint restored!!')
EPOCHS = 20
# The @tf.function trace-compiles train_step into a TF graph for faster
# execution. The function specializes to the precise shape of the argument
# tensors. To avoid re-tracing due to the variable sequence lengths or variable
# batch sizes (the last batch is smaller), use input_signature to specify
# more generic shapes.
train_step_signature = [
tf.TensorSpec(shape=(None, None), dtype=tf.int64),
tf.TensorSpec(shape=(None, None), dtype=tf.int64),
]
@tf.function(input_signature=train_step_signature)
def train_step(inp, tar):
tar_inp = tar[:, :-1]
tar_real = tar[:, 1:]
enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp)
with tf.GradientTape() as tape:
predictions, _ = transformer(inp, tar_inp,
True,
enc_padding_mask,
combined_mask,
dec_padding_mask)
loss = loss_function(tar_real, predictions)
gradients = tape.gradient(loss, transformer.trainable_variables)
optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))
train_loss(loss)
train_accuracy(tar_real, predictions)
for epoch in range(EPOCHS):
start = time.time()
train_loss.reset_states()
train_accuracy.reset_states()
# inp -> portuguese, tar -> english
for (batch, (inp, tar)) in enumerate(train_dataset):
train_step(inp, tar)
if batch % 50 == 0:
print ('Epoch {} Batch {} Loss {:.4f} Accuracy {:.4f}'.format(
epoch + 1, batch, train_loss.result(), train_accuracy.result()))
if (epoch + 1) % 5 == 0:
ckpt_save_path = ckpt_manager.save()
print ('Saving checkpoint for epoch {} at {}'.format(epoch+1,
ckpt_save_path))
print ('Epoch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1,
train_loss.result(),
train_accuracy.result()))
print ('Time taken for 1 epoch: {} secs\n'.format(time.time() - start))
```
## Evaluate
```
def evaluate(inp_sentence):
start_token = [tokenizer_pt.vocab_size]
end_token = [tokenizer_pt.vocab_size + 1]
# inp sentence is portuguese, hence adding the start and end token
inp_sentence = start_token + tokenizer_pt.encode(inp_sentence) + end_token
encoder_input = tf.expand_dims(inp_sentence, 0)
# as the target is english, the first word to the transformer should be the
# english start token.
decoder_input = [tokenizer_en.vocab_size]
output = tf.expand_dims(decoder_input, 0)
for i in range(MAX_LENGTH):
enc_padding_mask, combined_mask, dec_padding_mask = create_masks(
encoder_input, output)
# predictions.shape == (batch_size, seq_len, vocab_size)
predictions, attention_weights = transformer(encoder_input,
output,
False,
enc_padding_mask,
combined_mask,
dec_padding_mask)
# select the last word from the seq_len dimension
predictions = predictions[: ,-1:, :] # (batch_size, 1, vocab_size)
predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
# return the result if the predicted_id is equal to the end token
if predicted_id == tokenizer_en.vocab_size+1:
return tf.squeeze(output, axis=0), attention_weights
# concatentate the predicted_id to the output which is given to the decoder
# as its input.
output = tf.concat([output, predicted_id], axis=-1)
return tf.squeeze(output, axis=0), attention_weights
def plot_attention_weights(attention, sentence, result, layer):
fig = plt.figure(figsize=(16, 8))
sentence = tokenizer_pt.encode(sentence)
attention = tf.squeeze(attention[layer], axis=0)
for head in range(attention.shape[0]):
ax = fig.add_subplot(2, 4, head+1)
# plot the attention weights
ax.matshow(attention[head][:-1, :], cmap='viridis')
fontdict = {'fontsize': 10}
ax.set_xticks(range(len(sentence)+2))
ax.set_yticks(range(len(result)))
ax.set_ylim(len(result)-1.5, -0.5)
ax.set_xticklabels(
['<start>']+[tokenizer_pt.decode([i]) for i in sentence]+['<end>'],
fontdict=fontdict, rotation=90)
ax.set_yticklabels([tokenizer_en.decode([i]) for i in result
if i < tokenizer_en.vocab_size],
fontdict=fontdict)
ax.set_xlabel('Head {}'.format(head+1))
plt.tight_layout()
plt.show()
def translate(sentence, plot=''):
result, attention_weights = evaluate(sentence)
predicted_sentence = tokenizer_en.decode([i for i in result
if i < tokenizer_en.vocab_size])
print('Input: {}'.format(sentence))
print('Predicted translation: {}'.format(predicted_sentence))
if plot:
plot_attention_weights(attention_weights, sentence, result, plot)
translate("este é um problema que temos que resolver.")
print ("Real translation: this is a problem we have to solve .")
translate("os meus vizinhos ouviram sobre esta ideia.")
print ("Real translation: and my neighboring homes heard about this idea .")
```
| github_jupyter |
```
import glob
import xml.etree.ElementTree as ET
import re
folder="/pi/proto/framework/applications/datamodel/entitydef"
inputfilepattern=folder+"/*.xml"
files=glob.glob(inputfilepattern)
entities=[]
for filename in files:
print("process {} ..".format(filename))
tree = ET.parse(filename)
root = tree.getroot()
for entity in root.findall('entity'):
entities.append(entity)
print("total entities", len(entities))
r = re.compile(".*Person.*")
results = list(filter(lambda e:re.match(r, e.get("entity-name")) , entities))
for e in results:
title="*"+e.get('title')
print(e.get('entity-name'), title)
import xml.etree.ElementTree as ET
party_model_file=folder+"/party-entitymodel.xml"
tree = ET.parse(party_model_file)
root = tree.getroot()
print(root.tag)
# for child in root:
# print(child.tag, child.attrib)
def desc_entity(entity, index=1):
# rank = country.find('rank').text
name = entity.get('entity-name')
title=entity.get('title')
print(index, name,":", title)
for field in entity.findall('field'):
print(' |-->', field.get('name'))
for field in entity.findall('prim-key'):
print(' |-->', "℗", field.get('field'))
for field in entity.findall('relation'):
type=field.get("type")
sign="⊙"
if type=="many":
sign="⊕"
print(' |-->', sign, field.get('rel-entity-name'))
for field in entity.findall('index'):
print(' |-->', "☇", field.find("./index-field").get('name'))
index=1
for entity in root.findall('entity'):
desc_entity(entity, index)
index=index+1
person_m=root.find("./entity[@entity-name='Person']")
print(person_m.get("title"))
desc_entity(person_m)
import yaml
yaml.dump({"id":{'name': 'Silenthand, Olleander',
'race': 'Human'}})
field={"type":"integer", "primary":True,
"doc":"Id of the planet",
"input": False}
field2={"type":"integer",
"doc":"Id of the planet"}
print(yaml.dump({"id":field}, default_flow_style=False))
fields={}
fields["id"]=field
fields["name"]=field2
print(yaml.dump(fields, default_flow_style=False))
help(yaml.dump)
from sagas.util.str_converters import to_camel_case, to_snake_case
def s_name(name):
return to_snake_case(name)
def proc_relation(relation):
# relation one side attr:
# foreign_key: planet.id
# relation many side attr:
# model: people
# backref: planet
pass
def proc_model(entity):
# rank = country.find('rank').text
name = entity.get('entity-name')
title=entity.get('title')
model={"name":name, "title":title}
# print(name,":", title)
fields={}
for field in entity.findall('field'):
field_name= s_name(field.get('name'))
fields[field_name]={"type":field.get('type'),
"doc":field.get('name')}
for field in entity.findall('prim-key'):
field_name= s_name(field.get('field'))
fld=fields[field_name]
fld["primary"]=True
fld["input"]=False
for field in entity.findall('relation'):
type=field.get("type")
rel_ent=s_name(field.get('rel-entity-name'))
field_name=rel_ent+"_id"
field_type="string"
if type=="many":
field_name=rel_ent+"_list"
field_type="relation"
else:
# get the child 'key-map' node
proc_relation(field)
fields[field_name]={"type":field_type,
"doc":"related "+rel_ent}
for field in entity.findall('index'):
print(' |-->', "☇", field.find("./index-field").get('name'))
model["fields"]=fields
return model
# person_m=root.find("./entity[@entity-name='Person']")
person_m=root.find("./entity[@entity-name='Party']")
print(person_m.get("title"))
model=proc_model(person_m)
models={}
model_name=to_snake_case(model['name'])
models[model_name]=model
print(yaml.dump(models, default_flow_style=False))
# PartyContactMech and PartyContactMechPurpose
```
| github_jupyter |
# Searching
Try running it in a live notebook for animation!
* peakSearch
* bracketSearch
* binarySearch
```
# Reload modules every time code is called. Set autoreload 0 to disable
%load_ext autoreload
%autoreload 2
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(0)
from lightlab.util.search import peakSearch, binarySearch, SearchRangeError
livePlots = False
```
## You want to find a peak? Sweeping is not good enough
```
center = .82
amp = .7
fwhm = .2
defaultNoise = amp * 5e-3
noise = defaultNoise
assertionTolerance = .2
def myPeakedFun(x):
y = amp / (1 + (2 * (x - center) / fwhm) ** 2) + noise * np.random.randn()
return y
xq = np.linspace(0,3, 10)
plt.plot(xq, myPeakedFun(xq))
plt.title('Poor, low-res sampling of underlying peak')
```
## Peak search
This demonstrates noise tolerance when `nSwarm` is greater than 3
```
for noi, nSwarm in zip([defaultNoise, 5e-2], [3, 7]):
noise = noi
xPeak, yPeak = peakSearch(evalPointFun=myPeakedFun, startBounds=[0,3],
nSwarm=nSwarm, xTol=assertionTolerance/4, livePlot=livePlots)
assert abs(xPeak - center) < assertionTolerance
assert abs(yPeak - amp) < assertionTolerance
noise = defaultNoise
```
## Interactive peak descent through binary search
```
binSearchOpts = dict(evalPointFun=myPeakedFun, xTol=.005, livePlot=livePlots)
```
### This is easy, well bounded
```
rightBounds = [xPeak, 3]
leftBounds = [0, xPeak]
hwhmKwargs = dict(targetY=0.5*yPeak, **binSearchOpts)
xRightHalf = binarySearch(startBounds=rightBounds, **hwhmKwargs)
xLeftHalf = binarySearch(startBounds=leftBounds, **hwhmKwargs)
assert abs(xLeftHalf - (center - fwhm/2)) < assertionTolerance
assert abs(xRightHalf - (center + fwhm/2)) < assertionTolerance
```
### Non-monotonic but still well defined
There is only one value in the domain that satisfies. It starts off bracketed
No test for when there is a peak in the middle and it starts *not* bracketed,
i.e. if rightStart fwhm was 0.75
To handle this, bracketSearch would have to report that it bracketed on both sides
```
rightStart = center + fwhm*.4
for leftStart in [0, center - fwhm, center - 0.6 * fwhm]:
xLeftHalf = binarySearch(startBounds=[leftStart, rightStart], **hwhmKwargs)
assert abs(xLeftHalf - (center - fwhm/2)) < assertionTolerance
```
### Bad bound conditioning saved by `bracketSearch`
```
noise = defaultNoise / 10 # turn down noise a little bit
# Bad domain that totally misses peak
xLeftHalf = binarySearch(startBounds=[0, xPeak/2], **hwhmKwargs)
assert abs(xLeftHalf - (center - fwhm/2)) < assertionTolerance
# Target very close to peak
for trialAgainstNoise in range(5):
try:
xRightOnPeak = binarySearch(startBounds=[0, xPeak/4], targetY=0.99*amp, **binSearchOpts)
break
except RangeError as err:
if 'probably noise' in err.args[0]:
continue
else:
raise err
else:
raise Exception('We tried multiple times but noise killed this one')
assert abs(xRightOnPeak - center) < assertionTolerance
noise = defaultNoise
```
### Graceful failures
```
# Targeting something too high, with peak within startBounds
goodAsItGets = binarySearch(startBounds=[0, center + .5 * fwhm], targetY=2, **binSearchOpts)
assert abs(goodAsItGets - center) < assertionTolerance
# Peak starts outside of startBounds
goodAsItGets = binarySearch(startBounds=[center + .5 * fwhm, 3], targetY=2, **binSearchOpts)
assert abs(goodAsItGets - center) < assertionTolerance
```
### These should generate errors
```
# Targeting outside of hard constrain domain
try:
binarySearch(startBounds=[xPeak, xPeak+.1], targetY=0, hardConstrain=True, **binSearchOpts)
assert False
except SearchRangeError as err:
assert err.args[1] == 'low'
```
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
import datetime
import os, sys
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.transforms import Affine2D
import pickle
import copy as cp
import scipy.optimize
import casadi as cas
PROJECT_PATH = '/home/nbuckman/Dropbox (MIT)/DRL/2020_01_cooperative_mpc/mpc-multiple-vehicles/'
sys.path.append(PROJECT_PATH)
import src.vehicle as veh
import src.traffic_world as tw
# import src.IterativeBestResponseMPCMultiple as mibr
# import src.car_plotting_multiple as cmplot
import src.multiagent_mpc as mpc
np.set_printoptions(precision=2)
ego_veh = veh.Vehicle(.1)
ado_veh = veh.Vehicle(.1)
def get_ellipse(L, W):
min_elipse_a = lambda a: (1 - L**2/(2*a)**2 - W**2/(2*a + W - L)**2)
ax = scipy.optimize.fsolve(min_elipse_a, L/2.0)
by = ax + .5*(W-L)
return ax, by
def R(phi):
return np.array([[np.cos(phi), np.sin(phi)],[-np.sin(phi), np.cos(phi)]])
def Q_minkowski(R_e, M_e, R_a, M_a):
'''Aaron's Code'''
M_e_curr = R_e @ M_e
M_a_curr = R_a @ M_a
Q1 = M_e_curr @ M_e_curr.T
Q2 = M_a_curr @ M_a_curr.T
beta = np.sqrt(np.trace(Q1) / np.trace(Q2))
Q_minkowski = (1+1/beta) * Q1 + (1+beta) * Q2
return Q_minkowski
def dist_squared(Xe, Xa, Q_minkowski):
### Ellipse is Ego centered
return (Xa - Xe).T @ np.linalg.inv(Q_minkowski) @ (Xa - Xe)
R_e = np.eye(2)
M_e = np.eye(2)*2
R_a = np.eye(2)
M_a = np.eye(2)*3
Q = Q_minkowski(R_e, M_e, R_a, M_a)
Q
c = np.array([[0],[0]])
xt = np.array([[4],[0]])
xt.T @ np.linalg.inv(Q) @ xt <= 1
W = 1.8
L = 4.5
phi_e = 0 * np.pi/180
phi_a = 5 * np.pi/180
a_e, b_e = get_ellipse(ego_veh.L, ego_veh.W)
a_a, b_a = get_ellipse(ado_veh.L, ado_veh.W)
# M_e = np.array([[1/a_e**2, 0],[0, 1/b_e**2]], dtype=float)
# M_a = np.array([[1/a_a**2, 0],[0, 1/b_a**2]], dtype=float)
M_e = np.array([[a_e, 0],[0, b_e]], dtype=float)
M_a = np.array([[a_a, 0],[0, b_a]], dtype=float)
R_e = R(phi_e)
R_a = R(phi_a)
x_e, y_e = 0, 1
x_a, y_a = 6.5, 0
Xe = np.array([[x_e],[y_e]])
Xa = np.array([[x_a], [y_a]])
```
## Calculate the Ellipse Obstacle
### Plot the Ellipses
```
fig, ax = plt.subplots(1,1,figsize=(8,8))
for i_test_point in range(30):
# fig = plt.figure(edgecolor='red')
x_e = x_a + np.random.uniform(-6,6)
y_e = y_a + np.random.uniform(-6, 6)
phi_e = 0 * np.pi/180
Xa = np.array([[x_a], [y_a]])
Xe = np.array([[x_e],[y_e]])
plt.plot(x_a, y_a, 'x', color='blue')
ax = plt.gca()
ax.axis('square')
# ax.add_patch(c)
plt.xlim([0, 20])
plt.ylim([-10.5, 7.5])
e2 = patches.Ellipse((x_a, y_a), 2*a_a, 2*b_a, angle=np.rad2deg(phi_a), fill=False, edgecolor='blue')
ax.add_patch(e2)
Q_m = Q_minkowski(R_e, M_e, R_a, M_a)
d = dist_squared(Xa, Xe, Q_m)
collision = d <= 1
dist_squared1 = mpc.minkowski_ellipse_collision_distance(ego_veh, ado_veh, x_e, y_e, phi_e, x_a, y_a, phi_a)
collision1 = dist_squared1 <= 1
if collision1:
color = 'red'
else:
color = 'green'
plt.plot(x_e, y_e, 'x', color=color)
if not collision1:
e1 = patches.Ellipse((x_e, y_e), 2*a_e, 2*b_e, angle=np.rad2deg(phi_e), fill=False, edgecolor=color)
ax.add_patch(e1)
print(d, dist_squared1)
w, v = np.linalg.eig(Q_m)
a_q, b_q = np.sqrt(w[0]), np.sqrt(w[1])
v = v.T
phi_q = np.arctan2(v[1,0], v[0,0])
qp = patches.Ellipse((x_a, y_a), 2*a_q, 2*b_q, angle=np.rad2deg(phi_q), fill=False, edgecolor='blue', linestyle='--')
ax.add_patch(qp)
# plt.setp(ax.spines.values(), color=color)
# fig.set_edgecolor('red')
plt.show()
dist_squared1 = mpc.minkowski_ellipse_collision_distance(ego_veh, ado_veh, x_e, y_e, phi_e, x_a, y_a, phi_a, numpy=False)
float(dist_squared1)
Q = Q_minkowski(R_e, M_e, R_a, M_a)
e = np.linalg.eig(np.linalg.inv(Q))
e
(Xa - Xe).T @ np.linalg.inv(Q_m) @ (Xa-Xe)
phi_e = 0 * np.pi/180
Xe = np.array([[x_e],[y_e]])
plt.plot(x_e, y_e, 'x', color='green')
plt.plot(x_a, y_a, 'x', color='red')
ax = plt.gca()
ax.axis('equal')
# ax.add_patch(c)
plt.xlim([-5, 15])
plt.ylim([-5, 7.5])
e1 = patches.Ellipse((x_e, y_e), 2*a_e, 2*b_e, angle=np.rad2deg(phi_e), fill=False, edgecolor='green')
ax.add_patch(e1)
e2 = patches.Ellipse((x_a, y_a), 2*a_a, 2*b_a, angle=np.rad2deg(phi_a), fill=False, edgecolor='red')
ax.add_patch(e2)
Q_m = Q_minkowski(R_e, M_e, R_a, M_a)
w, v = np.linalg.eig(Q_m)
a_q, b_q = np.sqrt(w[0]), np.sqrt(w[1])
# v = v.T
phi_q = np.arctan2(v[1,0], v[0,0])
qp = patches.Ellipse((x_e, y_e), 2*a_q, 2*b_q, angle=np.rad2deg(phi_q), fill=False, edgecolor='blue', linestyle='--')
ax.add_patch(qp)
collision = dist_squared(Xa, Xe, Q_m) <= 1
print(collision)
if collision:
color = 'red'
else:
color = 'green'
plt.setp(ax.spines.values(), color=color)
# fig.set_edgecolor('red')
plt.show()
t = np.linspace(0, 2*np.pi)
n = len(t)
x1 = np.cos(t).reshape(1,n)
x2 = np.sin(t).reshape(1,n)
X = np.concatenate((x1,x2), axis=0)
Y = Q_m @ (X - Xe)
plt.plot(Y[0,:], Y[1,:])
ax = plt.gca()
ax.axis('equal')
np.rad2deg(phi_q)
Xe, Xa,((Xa - Xe).T @ np.linalg.inv(Q_minkowski) @ (Xa - Xe))
((Xe - Xa)/T * np.linalg.inv(Q_minkowski) * (Xe - Xa).T)
### Another way to see it as following
plt.plot(x_e, y_e,'o')
c = patches.Circle((x_e, y_e), radius=r, fill=None)
ax = plt.gca()
ax.axis('square')
ax.add_patch(c)
plt.xlim([-5, 10])
plt.ylim([-2.5, 7.5])
plt.plot(x_o, y_o)
a = 2.832
b = 1.4820
# a_val = 4.5
# b_val = 1.8
W = 1.8
L = 4.5
rec = patches.Rectangle((x_o-L/2, y_o-W/2), L, W, fill=None, edgecolor='blue')
ax.add_patch(rec)
a_val = 2.832
b_val = 1.4820
delta = 0.0447
# a_val, b_val = 3.897114298575689, 1.1022703788869466
e = patches.Ellipse((x_o, y_o), 2*(a_val), 2*(b_val), fill=False, edgecolor='green')
e_b = patches.Ellipse((x_o, y_o), 2*(a_val+delta+r), 2*(b_val+delta+r), fill=False, edgecolor='red')
ax.add_patch(e)
ax.add_patch(e_b)
collision_free = (np.array([[dx],[dy]]).T @ R_o.T @ np.array([[1/alpha**2, 0],[0, 1/beta**2]]) @ R_o @ np.array([[dx],[dy]])) > 1
print(collision_free)
import casadi as cas
opti = cas.Opti()
a = opti.variable()
b = opti.variable()
opti.minimize(a*b**2)
opti.subject_to( ((L/2)**2/a**2 + (W/2)**2/b**2) <= 1)
opti.subject_to(a>0)
opti.subject_to(b>0)
opti.solver('ipopt')
opti.set_initial(a, 1)
opti.set_initial(b, 1.5)
solution = opti.solve()
a_val = solution.value(a)
b_val = solution.value(b)
print(a_val, b_val)
(L/2)**2/a_val**2 + (W/2)**2/b_val**2
plt.plot([a_val*np.cos(t) for t in np.linspace(0, 2*np.pi, 50)], [b_val*np.sin(t) for t in np.linspace(0, 2*np.pi, 50)])
```
## From MATLAB
```
clear all
close all
clc
%%
W = 1.8;
L = 4.5;
min_elipse_box = @(a) (1 - L^2/(2*a)^2 - W^2/(2*a + W - L)^2) ;
a = fzero(min_elipse_box, 10)
b = a + 1/2*(W-L)
t = 0:.1:2*pi;
x = a*cos(t);
y = b*sin(t);
plot(x,y)
rectangle('Position',[-L/2 -W/2 L W])
axis('equal')
hold on
%% ellipsoid and circle dimensions
%a = 10;
%b = 2;
r = 1.5;
M = 400;
dtheta = 2*pi / M;
theta_M = (0 : dtheta : 2*pi)';
minimal_positive_root = @(delta) (2*(delta + r)^2*(2*a*b + a*(delta + r) + b*(delta + r)))/((a + b)*(a + b + 2*delta + 2*r))-r^2;
x0 = 1.5; % initial guess must be always positive
delta = fzero(minimal_positive_root,x0)
disp(delta)
a_new = a+delta+r
b_new = b+delta+r
x = a_new*cos(t);
y = b_new*sin(t);
plot(x,y)
%%
for i = 1 : M+1
theta = theta_M(i);
%% ellipse coordinates
x_M(i) = a * cos(theta);
y_M(i) = b * sin(theta);
alpha = a+delta+r;
beta = b+delta+r;
%% bounding ellipse
x_1_M(i) = alpha * cos(theta);
y_1_M(i) = beta * sin(theta);
%% Minkowsky sum of ellipse (a,b) and circle r
x_2_M(i) = a*cos(theta) + r*cos(theta)/(sqrt((cos(theta))^2 + (a^2/b^2)*(sin(theta))^2));
y_2_M(i) = b*sin(theta) + r*sin(theta)/(sqrt((b^2/a^2)*(cos(theta))^2 + (sin(theta))^2));
%% previously used bounding ellipse
a_3 = a + r;
b_3 = b + r;
x_3_M(i) = a_3 * cos(theta);
y_3_M(i) = b_3 * sin(theta);
%% circle coordinates
x_4_M(i) = r * cos(theta);
y_4_M(i) = r * sin(theta);
end
h=figure;
hold all;
box on;
grid on;
axis equal;
plot(x_M, y_M, '-r')
plot(x_1_M, y_1_M, '-k')
plot(x_2_M, y_2_M, '-b')
plot(x_3_M, y_3_M, '-r')
legend("Ellipse","Minimal Bounding ellipse","Minkowski Sum","Bound Ellipse + Circle","Circle")
% legend(h,'off')
plot(x_4_M, y_4_M, '-g')
% circle(0,0,a+r)
K = randi([floor(M/4/4), ceil(M/4/2)]);
theta = theta_M(K);
x = a * cos(theta);
y = b * sin(theta);
plot([0,x], [0, y], '-k')
normal = r*[2*cos(theta)/a; 2*sin(theta)/b] / norm([2*cos(theta)/a; 2*sin(theta)/b]);
plot([0, normal(1)], [0, normal(2)], '-b')
plot([x, normal(1)+x], [y, normal(2)+y], '-b')
circle(normal(1)+x, normal(2)+y, r)
hold all
% theta_T = theta_M(1 : ceil(M/2)+1);
% theta_T = theta_M;
% delta_x_T = cos(theta_T) .* (r ./ (sqrt((cos(theta_T)).^2 + (a^2/b^2)*(sin(theta_T)).^2)) - 1);
% delta_y_T = sin(theta_T) .* (r ./ (sqrt((b^2/a^2)*(cos(theta_T)).^2 + (sin(theta_T)).^2)) - 1);
% figure;
% hold all;
% grid on;
% box on;
% plot(delta_x_T, '-k');
% plot(delta_y_T, '-r');
%
% delta_a_T = r ./ (sqrt((cos(theta_T)).^2 + (a^2/b^2)*(sin(theta_T)).^2));
% delta_b_T = r ./ (sqrt((b^2/a^2)*(cos(theta_T)).^2 + (sin(theta_T)).^2));
% figure;
% hold all;
% grid on;
% box on;
% plot(delta_a_T, '-k');
% plot(delta_b_T, '-r');
%%Curvature calculation
% k=a*b/(sqrt(a^2/2+b^2/2)^3)
% k_r=(a+r)*(b+r)/(sqrt((a+r)^2/2+(b+r)^2/2)^3)
% t=0:0.01:2*pi
% figure;
% plot(t,a.*b./(sqrt(a.^2.*cos(t).^2+b.^2.*sin(t).^2).^3))
% hold on;
% ar=a+r;
% br=b+r;
% plot(t,ar.*br./(sqrt(ar.^2.*cos(t).^2+br.^2.*sin(t).^2).^3),'b')
% figure;
% plot(t,a^2.*cos(t).^2+b^2.*sin(t).^2)
function h = circle(x,y,r)
hold on
th = 0:pi/50:2*pi;
xunit = r * cos(th) + x;
yunit = r * sin(th) + y;
h = plot(xunit, yunit);
% hold off
end
```
| github_jupyter |
# Use a custom parser
While many of the parsers included within this libary may be useful, we do not have parsers for **every** dataset out there. If you are interested in adding your own parser (and hopefully contributing that parser to the main repo 😊 ), check out this walkthrough of how to build one!
## What is a Parser?
Basically, a parser collects information from two main sources:
* The file string
* The dataset itself
This means there are two main steps:
* Parsing out the file string, separating based on some symbol
* Opening the file, and extracting variables and their attributes, or even global attributes
The result from a "parser" is a dictionary of fields to add to the catalog, stored in a `pandas.DataFrame`
It would probably be **more helpful** to walk through a concrete example of this...
## Example of Building a Parser
Let's say we have a list of files which we wanted to parse! In this example, we are using a set of observational data on NCAR HPC resources. A full blog post detailing this dataset and comparison is [included here](https://ncar.github.io/esds/posts/2021/intake-obs-cesm2le-comparison/)
### Imports
```
import glob
import pathlib
import traceback
from datetime import datetime
import xarray as xr
from ecgtools import Builder
from ecgtools.builder import INVALID_ASSET, TRACEBACK
files = sorted(glob.glob('/glade/p/cesm/amwg/amwg_diagnostics/obs_data/*'))
files[::20]
```
Observational datasetsets in this directory follow the convention `source_(month/season/annual)_climo.nc.`
Let’s open up one of those datasets
```
ds = xr.open_dataset('/glade/p/cesm/amwg/amwg_diagnostics/obs_data/CERES-EBAF_01_climo.nc')
ds
```
We see that this dataset is gridded on a global 0.5° grid, with several variables related to solar fluxes (ex. `TOA net shortwave`)
### Parsing the Filepath
As mentioned before, the first step is parsing out information from the filepath. Here, we use [pathlib](https://docs.python.org/3/library/pathlib.html) which can be helpful when working with filepaths generically
```
path = pathlib.Path(files[0])
path.stem
```
This path can be split using `.split('_')`, separates the path into the following:
* Observational dataset source
* Month Number, Season, or Annual
* “climo”
```
path.stem.split('_')
```
### Open the File for More Information
We can also gather useful insight by opening the file!
```
ds = xr.open_dataset(files[0])
ds
```
Let’s look at the variable “Temperature” (`T`)
```
ds.T
```
In this case, we want to include the list of variables available from this single file, such that each entry in our catalog represents a single file. We can search for variables in this dataset using the following:
```
variable_list = [var for var in ds if 'long_name' in ds[var].attrs]
variable_list
```
### Assembling These Parts into a Function
Now that we have methods of extracting the relevant information, we can assemble this into a function which returns a dictionary. You'll notice the addition of the exception handling, which will add the unparsable file to a `pandas.DataFrame` with the unparsable file, and the associated traceback error.
```
def parse_amwg_obs(file):
"""Atmospheric observational data stored in"""
file = pathlib.Path(file)
info = {}
try:
stem = file.stem
split = stem.split('_')
source = split[0]
temporal = split[-2]
if len(temporal) == 2:
month_number = int(temporal)
time_period = 'monthly'
temporal = datetime(2020, month_number, 1).strftime('%b').upper()
elif temporal == 'ANN':
time_period = 'annual'
else:
time_period = 'seasonal'
with xr.open_dataset(file, chunks={}, decode_times=False) as ds:
variable_list = [var for var in ds if 'long_name' in ds[var].attrs]
info = {
'source': source,
'temporal': temporal,
'time_period': time_period,
'variable': variable_list,
'path': str(file),
}
return info
except Exception:
return {INVALID_ASSET: file, TRACEBACK: traceback.format_exc()}
```
### Test this Parser on Some Files
We can try this parser on a single file, to make sure that it returns a dictionary
```
parse_amwg_obs(files[0])
```
Now that we made sure that it works, we can implement in `ecgtools`!
First, we setup the `Builder` object
```
b = Builder('/glade/p/cesm/amwg/amwg_diagnostics/obs_data')
```
Next, we build the catalog using our newly created parser!
```
b.build(parse_amwg_obs)
```
Let's take a look at our resultant catalog...
```
b.df
```
| github_jupyter |
# Source reconstruction with lens mass fitting
Runs MCMC over lens model parameters, using SLIT to reconstruct the source at each iteration.
```
import os
import sys
import copy
import time
import numpy as np
import matplotlib.pyplot as plt
import astropy.io.fits as pf
import pysap
import corner
import pickle as pkl
from lenstronomy.Data.psf import PSF
from lenstronomy.Data.imaging_data import ImageData
from lenstronomy.ImSim.image_model import ImageModel
from lenstronomy.LensModel.lens_model import LensModel
from lenstronomy.LightModel.light_model import LightModel
from lenstronomy.Util import class_creator
from lenstronomy.Workflow.fitting_sequence import FittingSequence
from lenstronomy.Plots.model_plot import ModelPlot
from lenstronomy.Plots import chain_plot
from lenstronomy.Util import kernel_util
import lenstronomy.Util.simulation_util as sim_util
import lenstronomy.Util.image_util as image_util
import lenstronomy.Util.util as lenstro_util
from lenstronomy.LightModel.Profiles.starlets import Starlets
from slitronomy.Util.plot_util import nice_colorbar, log_cmap
from TDLMCpipeline.Util.plots import plot_convergence_by_walker
from TDLMCpipeline.Util.params import model_from_mcmc_sample
%matplotlib inline
subgrid_res_source = 2
use_threshold_mask = False
start_wayoff = False
n_burn = 0
n_run = 100
walker_ratio = 10
num_threads = 8
# uncomment parameters to fix those to truth
mass_fixed_list = [
#'gamma',
#'theta_E',
#'e1', 'e2',
#'center_x', 'center_y'
]
lin_scale = lambda x: x
log_scale = lambda x: np.log10(x)
sqrt_scale = lambda x: np.sqrt(x)
# data specifics
num_pix = 99 # cutout pixel size
delta_pix = 0.08 # pixel size in arcsec (area per pixel = deltaPix**2)
#background_rms = 0.05 # background noise per pixel
#exp_time = 0 # exposure time (arbitrary units, flux per pixel is in units #photons/exp_time unit)
psf_fwhm = 0.2 # full width half max of PSF, in delta_pix units
psf_num_pix = 15
# data specification (coordinates, etc.)
_, _, ra_at_xy_0, dec_at_xy_0, _, _, Mpix2coord, _ \
= lenstro_util.make_grid_with_coordtransform(numPix=num_pix, deltapix=delta_pix, subgrid_res=1,
inverse=False, left_lower=False)
kwargs_data = {
#'background_rms': background_rms,
#'exposure_time': np.ones((num_pix, num_pix)) * exp_time, # individual exposure time/weight per pixel
'ra_at_xy_0': ra_at_xy_0, 'dec_at_xy_0': dec_at_xy_0,
'transform_pix2angle': Mpix2coord,
'image_data': np.zeros((num_pix, num_pix))
}
data_class = ImageData(**kwargs_data)
# PSF specification
no_convolution = False
if no_convolution:
kwargs_psf = {'psf_type': 'NONE'}
else:
psf_kernel = kernel_util.kernel_gaussian(psf_num_pix, delta_pix, psf_fwhm)
print(psf_kernel.shape)
kwargs_psf = {'psf_type': 'PIXEL', 'kernel_point_source': psf_kernel}
#kwargs_psf = {'psf_type': 'GAUSSIAN', 'fwhm': psf_fwhm, 'pixel_size': delta_pix, 'truncation': 11}
psf_class = PSF(**kwargs_psf)
plt.title("PSF kernel")
im = plt.imshow(psf_class.kernel_point_source, origin='lower')
nice_colorbar(im)
plt.show()
lens_model_list = ['SPEMD']
kwargs_spemd = {'theta_E': 1.8, 'gamma': 2, 'center_x': 0, 'center_y': 0, 'e1': 0.1, 'e2': -0.2}
kwargs_lens = [kwargs_spemd]
lens_model_class = LensModel(lens_model_list=lens_model_list)
# list of source light profiles from Galsim (COSMOS galaxy)
galsim_index = 1
snr = 500
galsim_data_path = ('data/ring_sims/sims_SNR{}/simring_galsim{}_all.pkl'.format(snr, galsim_index))
[data, truth, lens_model] = pkl.load(open(galsim_data_path, 'rb'))
galsim_source_highres = truth['source_galsim_3']
background_rms = data['background_rms']
galsim_num_pix = data['num_pix']
galsim_delta_pix = data['delta_pix']
source_model_list = ['INTERPOL']
kwargs_interpol_source = {'image': galsim_source_highres, 'amp': 3000, 'center_x': +0.3, 'center_y': -0.1, 'phi_G': 0,
'scale': galsim_delta_pix/3}
kwargs_source = [kwargs_interpol_source]
source_model_class = LightModel(light_model_list=source_model_list)
kwargs_truth = {
'kwargs_lens': kwargs_lens,
'kwargs_source': kwargs_source,
'kwargs_special': {'delta_x_source_grid': 0, 'delta_y_source_grid': 0},
}
kwargs_numerics_sim = {'supersampling_factor': 3, 'supersampling_convolution': False}
# get the simalated lens image (i.e. image plane)
imageModel = ImageModel(data_class, psf_class, lens_model_class, source_model_class,
kwargs_numerics=kwargs_numerics_sim)
image_sim_no_noise = imageModel.image(kwargs_lens, kwargs_source)
bkg = image_util.add_background(image_sim_no_noise, sigma_bkd=background_rms)
#poisson = image_util.add_poisson(image_sim_no_noise, exp_time=exp_time)
noise = bkg # + poisson
image_sim = image_sim_no_noise + noise
image_sim_1d = lenstro_util.image2array(image_sim)
kwargs_data['image_data'] = image_sim
kwargs_data['background_rms'] = background_rms
kwargs_data['noise_map'] = background_rms * np.ones_like(image_sim)
data_class.update_data(image_sim)
# get the coordinates arrays of source plane (those are 'thetas' but in source plane !)
x_grid_src_1d, y_grid_src_1d = lenstro_util.make_grid(numPix=num_pix, deltapix=delta_pix,
subgrid_res=subgrid_res_source)
# get the light distribution in source plane on high resolution grid
source_sim_1d_hd = source_model_class.surface_brightness(x_grid_src_1d, y_grid_src_1d, kwargs_source)
source_sim_hd = lenstro_util.array2image(source_sim_1d_hd)
# get the light distribution in source plane at the image plane resolution
source_sim = imageModel.source_surface_brightness(kwargs_source, unconvolved=True, de_lensed=True)
source_sim_1d = lenstro_util.image2array(source_sim)
# get an automatic mask that includes the lensed source light
threshold_noise = 5
image_mask_1d = np.zeros_like(image_sim_1d)
mask_indices = np.where(image_sim_1d > threshold_noise * background_rms)
image_mask_1d[mask_indices] = 1
image_mask = lenstro_util.array2image(image_mask_1d)
fig = plt.figure(figsize=(20, 4))
ax = plt.subplot2grid((1, 3), (0, 0), fig=fig)
ax.set_title("image plane, convolved")
im = ax.imshow(lin_scale(image_sim), origin='lower', cmap='cubehelix')
nice_colorbar(im)
ax = plt.subplot2grid((1, 3), (0, 1))
ax.set_title("source plane, unconvolved")
im = ax.imshow(lin_scale(source_sim), origin='lower', cmap=log_cmap('cubehelix', 0.03, 1))
nice_colorbar(im)
ax = plt.subplot2grid((1, 3), (0, 2))
ax.set_title("mask from threshold {}$\sigma$".format(threshold_noise))
im = ax.imshow(image_mask*image_sim, origin='lower', cmap='gray_r')
nice_colorbar(im)
#ax = plt.subplot2grid((1, 4), (0, 2))
#ax.set_title(r"$\alpha_x$")
#im = ax.imshow(alpha_x, origin='lower', cmap='seismic')
#nice_colorbar(im)
#ax = plt.subplot2grid((1, 4), (0, 3))
#ax.set_title(r"$\alpha_y$")
#im = ax.imshow(alpha_y, origin='lower', cmap='seismic')
#nice_colorbar(im)
plt.show()
fig.savefig("last_mock.png")
```
## Refinement step using starlets (pixel-based)
```
kwargs_numerics = {'supersampling_factor': 1, 'supersampling_convolution': False}
kwargs_data_joint = {
'multi_band_list': [[kwargs_data, kwargs_psf, kwargs_numerics]],
'multi_band_type': 'single-band-sparse',
}
kwargs_model = {
'lens_model_list': lens_model_list,
'source_light_model_list': ['STARLETS'],
}
kwargs_lens_wayoff = [{'theta_E': 1.65, 'gamma': 1.8, 'center_x': 0, 'center_y': 0, 'e1': 0, 'e2': 0}]
if start_wayoff:
kwargs_lens_init = kwargs_lens_wayoff
else:
kwargs_lens_init = kwargs_truth['kwargs_lens']
kwargs_lens_sigma = [{'theta_E': 0.1, 'gamma': 0.05, 'center_x': 0.05, 'center_y': 0.05, 'e1': 0.05, 'e2': 0.05}]
kwargs_lens_lower = [{'theta_E': 1.6, 'gamma': 1.7, 'center_x': -0.5, 'center_y': -0.5, 'e1': -0.5, 'e2': -0.5}]
kwargs_lens_upper = [{'theta_E': 2, 'gamma': 2.2, 'center_x': 0.5, 'center_y': 0.5, 'e1': 0.5, 'e2': 0.5}]
kwargs_lens_fixed = [{}]
for i in range(len(kwargs_lens)):
for fixed_name in mass_fixed_list:
kwargs_lens_fixed[i][fixed_name] = kwargs_lens[i][fixed_name]
if len(kwargs_lens_fixed[0]) == len(kwargs_lens[0]) and len(kwargs_lens_fixed[1]) == len(kwargs_lens[1]):
print("All parameters are fixed !")
raise
kwargs_source_init = [{'coeffs': 1}] # starlet coeffs that are optimized for
kwargs_source_sigma = [{}]
kwargs_source_lower = [{}]
kwargs_source_upper = [{}]
kwargs_source_fixed = [
{
'n_scales': 6, 'n_pixels': num_pix**2 * subgrid_res_source**2,
'scale': 1, 'center_x': 0, 'center_y': 0,
}
]
kwargs_special_init = {'delta_x_source_grid': 0, 'delta_y_source_grid': 0}
kwargs_special_sigma = {'delta_x_source_grid': delta_pix/4., 'delta_y_source_grid': delta_pix/4.}
kwargs_special_lower = {'delta_x_source_grid': -1, 'delta_y_source_grid': -1}
kwargs_special_upper = {'delta_x_source_grid': 1, 'delta_y_source_grid': 1}
kwargs_special_fixed = {}
kwargs_params = {
'lens_model': [kwargs_lens_init, kwargs_lens_sigma, kwargs_lens_fixed, kwargs_lens_lower, kwargs_lens_upper],
'source_model': [kwargs_source_init, kwargs_source_sigma, kwargs_source_fixed, kwargs_source_lower, kwargs_source_upper],
'special': [kwargs_special_init, kwargs_special_sigma, kwargs_special_fixed, kwargs_special_lower, kwargs_special_upper]
}
kwargs_init = {
'kwargs_lens': kwargs_lens_init,
'kwargs_source': kwargs_source_init,
'kwargs_special': kwargs_special_init,
}
kwargs_constraints = {
'solver_type': 'NONE',
'image_plane_source_list': [False],
'source_grid_offset': False, # sample over offset of source plane grid
}
kwargs_sparse_solver = {
'source_interpolation': 'bilinear',
'include_regridding_error': True,
'subgrid_res_source': subgrid_res_source,
'minimal_source_plane': True,
'fix_minimal_source_plane': True, # if False, update source plane grid size when mass model changes (!)
'min_num_pix_source': 130,
'min_threshold': 3,
'threshold_decrease_type': 'exponential',
'num_iter_source': 15,
'num_iter_weights': 3,
'verbose': False,
'show_steps': False,
'thread_count': 1,
}
kwargs_likelihood = {
'image_likelihood': True,
'check_bounds': True,
'kwargs_sparse_solver': kwargs_sparse_solver,
}
if use_threshold_mask:
kwargs_likelihood['image_likelihood_mask_list'] = [image_mask.astype(bool)]
fitting_seq = FittingSequence(kwargs_data_joint, kwargs_model, kwargs_constraints,
kwargs_likelihood, kwargs_params, verbose=True)
fitting_seq.param_class.print_setting()
fitting_list = [
['MCMC', {'n_burn': n_burn, 'n_run': n_run, 'walkerRatio': walker_ratio, 'sampler_type': 'EMCEE',
'sigma_scale': 1, 'threadCount': num_threads}],
]
chain_list = fitting_seq.fit_sequence(fitting_list)
# get MCMC chains
sampler_type, samples_mcmc, param_mcmc, dist_mcmc = chain_list[-1]
print("(num samples, num params) :", samples_mcmc.shape)
walker_ratio = fitting_list[0][1]['walkerRatio']
num_param_nonlinear = len(param_mcmc)
plt.plot(dist_mcmc)
plt.show()
for i in range(len(chain_list)):
chain_plot.plot_chain_list(chain_list, i, num_average=walker_ratio*num_param_nonlinear)
plt.show()
# best fit from MCMC
kwargs_result = fitting_seq.best_fit()
print(kwargs_result)
def corner_add_values_indic(fig, values, color='green', linewidth=1):
# Extract the axes
ndim = len(values)
axes = np.array(fig.axes).reshape((ndim, ndim))
# Loop over the diagonal
for i in range(ndim):
ax = axes[i, i]
ax.axvline(values[i], color=color, linewidth=linewidth)
# Loop over the histograms
for yi in range(ndim):
for xi in range(yi):
ax = axes[yi, xi]
ax.axvline(values[xi], color=color, linewidth=linewidth)
ax.axhline(values[yi], color=color, linewidth=linewidth)
ax.plot(values[xi], values[yi], color=color, marker='s')
# get init/best/true parameter values as list
init_params = fitting_seq.param_class.kwargs2args(**kwargs_init)
print("initial", init_params)
bestlogL_params = fitting_seq.param_class.kwargs2args(**kwargs_result)
print("best logL", bestlogL_params)
truth_params = fitting_seq.param_class.kwargs2args(**kwargs_truth)
print("truth", truth_params)
fig = corner.corner(samples_mcmc, labels=param_mcmc, show_titles=True, quantiles=[0.5], smooth=0.6, smooth1d=0.6)
corner_add_values_indic(fig, truth_params, color='green', linewidth=2)
corner_add_values_indic(fig, bestlogL_params, color='red', linewidth=1)
corner_add_values_indic(fig, init_params, color='gray', linewidth=1)
plt.show()
fig.savefig("last_corner.png")
# convergence by walkers
[fig] = plot_convergence_by_walker(samples_mcmc, param_mcmc, walker_ratio, verbose=True)
plt.show()
fig.savefig("last_mcmc_conv.png")
```
### Update Starlets parameters from best fit
```
multi_band_list = kwargs_data_joint['multi_band_list']
multi_band_type = kwargs_data_joint['multi_band_type']
likelihood_mask_list = kwargs_likelihood.get('image_likelihood_mask_list', None)
kwargs_sparse_solver = kwargs_likelihood['kwargs_sparse_solver']
im_sim = class_creator.create_im_sim(multi_band_list, multi_band_type, kwargs_model,
likelihood_mask_list=likelihood_mask_list,
kwargs_sparse_solver=kwargs_sparse_solver)
# compute starlets "sparse" parameters and update corresponding kwargs
model, model_error, _, _ = im_sim.image_linear_solve(**kwargs_result)
print(kwargs_result, kwargs_result['kwargs_source'][0]['amp'].shape)
reduced_residuals = im_sim.reduced_residuals(model)
source = im_sim.source_surface_brightness(kwargs_result['kwargs_source'], kwargs_lens=None,
unconvolved=True, de_lensed=True)
kwargs_source_result = kwargs_result['kwargs_source'][0]
starlets_class = Starlets(second_gen=False)
x_grid_hd, y_grid_hd = lenstro_util.make_grid(numPix=np.sqrt(kwargs_source_result['n_pixels']),
deltapix=kwargs_source_result['scale'])
source_hd = lenstro_util.array2image(starlets_class.function(x_grid_hd, y_grid_hd,
**kwargs_source_result))
fig, axes = plt.subplots(1, 4, figsize=(20, 4))
ax = axes[0]
im = ax.imshow(source, origin='lower', cmap=log_cmap('cubehelix', 0.03, 1))
nice_colorbar(im)
ax = axes[1]
im = ax.imshow(source_sim, origin='lower', cmap=log_cmap('cubehelix', 0.03, 1))
nice_colorbar(im)
ax = axes[2]
im = ax.imshow(model, origin='lower', cmap='cubehelix')
nice_colorbar(im)
ax = axes[3]
im = ax.imshow(reduced_residuals, origin='lower', cmap='bwr', vmin=-6, vmax=6)
nice_colorbar(im)
#plt.show()
fig.savefig("last_starlets_recon.png")
```
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
import cvxpy as cp
import pandas as pd
from tqdm import tqdm
plt.rcParams.update({
"text.usetex": True,
"font.family": "sans-serif",
"font.sans-serif": ["Helvetica Neue"],
"font.size": 20,
})
np.random.seed(0)
# Load data from MNIST dataset (please uncompress data.zip)
# in csv format (Kaggle)
# https://www.kaggle.com/oddrationale/mnist-in-csv/home
# First column = Label
# Other columns = Image
df_train = pd.read_csv('data/mnist_train.csv', header=None, index_col=None)
df_test = pd.read_csv('data/mnist_test.csv', header=None, index_col=None)
# Reduce size (to reduce size computations)
df_train = df_train.iloc[:2000]
df_test = df_test.iloc[:2000]
# Split data in X and y
X_train, y_train = df_train.iloc[:, 1:], df_train.iloc[:, 0]
X_test, y_test = df_test.iloc[:, 1:], df_test.iloc[:, 0]
n = len(X_train.columns)
# Example image
x_i = X_train.iloc[0].to_numpy().reshape(-1, 28)
plt.imshow(x_i, cmap='gray')
```
## Try to distinguish some digit from the others
```
# Classify for some integer
TrainingInteger = 5
# The training data set
X0_train = X_train.to_numpy()
y0_train = (y_train == TrainingInteger).astype(int).to_numpy() #converts to 0/1
y0_train[y0_train == 0] = -1 #tag the non-zeros as -1
# The testing data set
X0_test = X_test.to_numpy()
y0_test = (y_test == TrainingInteger).astype(int).to_numpy()
y0_test[y0_test == 0] = -1
m_train = len(y0_train)
m_test = len(y0_test)
# Define cvxpy problem
a = cp.Variable(n)
b = cp.Variable()
loss = cp.sum(cp.pos(1 - cp.multiply(y0_train, X0_train @ a + b)))/m_train
reg = cp.norm(a, 1)
lam = cp.Parameter(nonneg=True)
problem = cp.Problem(cp.Minimize(loss + lam * reg))
def predict(X, a, b):
theValue = X @ a + b
theSign = np.sign(theValue)
return theSign
# Compute a trade-off curve and record train and test error.
n_trials = 40 # try more steps (it might take 15 min)
train_error = np.zeros(n_trials)
test_error = np.zeros(n_trials)
lambda_vals = np.logspace(-2, 1, n_trials)
for i in tqdm(range(n_trials)):
lam.value = lambda_vals[i]
problem.solve()
y0_train_pred = predict(X0_train, a.value, b.value)
y0_test_pred = predict(X0_test, a.value, b.value)
train_error[i] = (y0_train != y0_train_pred).sum()/m_train
test_error[i] = (y0_test != y0_test_pred).sum()/m_test
# Plot the train and test error over the trade-off curve.
plt.figure()
plt.plot(lambda_vals, train_error, label="Train error")
plt.plot(lambda_vals, test_error, label="Test error")
plt.xscale('log')
plt.legend(loc='upper left')
plt.xlabel(r"$\lambda$")
plt.show()
#run it one more time with the best lambda value
lam.value = lambda_vals[np.argmin(train_error)]
problem.solve()
y0_test_pred = predict(X0_test, a.value, b.value)
#a bunch of random examples
np.random.seed(0)
nrows = 7
ncols = 3
fig, ax = plt.subplots(nrows, ncols, figsize=(10, 50))
rand_idx = np.random.randint(0, m_test, nrows*ncols)
axes = ax.ravel()
for i in range(nrows*ncols):
idx = rand_idx[i]
ax = axes[i]
x_i = np.reshape(X0_test[idx], (-1, 28))
if y0_test_pred[idx] == 1:
ax.set_title('predicted label: %i' %TrainingInteger)
else:
ax.set_title('predicted label: not %i' %TrainingInteger)
ax.imshow(x_i, cmap='gray')
plt.tight_layout()
```
| github_jupyter |
# Discrete Fourier Transform in Python
This notebook is a quick refresher on how to perform FFT in python/scipy.
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.fftpack import fft
```
We define:
- $N$: number of samples
- $f_s$: sampling frequency/rate in samples/second
```
N = 1000
f_s = 100
```
Period between samples $T_s$:
```
T_s = 1/f_s
print(T_s, "seconds")
print(T_s*1000, "ms")
```
Create time vector, each element corresponds to a measurement
```
t = np.linspace(0, T_s*N, N)
```
The signal which we are sampling:
\begin{align}
s(t) = 0.1 sin(2\pi 5t) + sin(2\pi 3t - 0.25\pi)
\end{align}
```
x_t = 0.1*np.sin(2*np.pi*5*t) + np.sin(2*np.pi*3*t-np.pi/4)
plt.figure(figsize=(15,5))
plt.plot(t, x_t)
plt.plot(t, x_t, "k+")
plt.xlabel("time [s]")
plt.xlim([0, 2])
plt.grid()
plt.title("Visualizing samples")
```
Note that we can describe the **period** of each sinus component in number of samples:
- $0.1 sin(2\pi 5t)$: **20** samples ($f=5Hz$ leads to $T=1/5Hz=200ms$ with $T_s = 10ms$, $T/T_s = 20$)
- $sin(2\pi 3t - 0.25\pi)$ : **33** samples
Alternatively we can express the frequency in the reciprocal:
- $0.1 sin(2\pi 5t)$: **1/20 = 0.05**
- $sin(2\pi 3t - 0.25\pi)$ : **1/33 = 0.0303**
Alternatively we can express the frequency relative to the number of samples $N=1000$:
- $0.1 sin(2\pi 5t)$: **1000/20 = 50**
- $sin(2\pi 3t - 0.25\pi)$ : **1000/33 = 30.30**
You can think of the last representation as a reference of the highest $T_s$ (or lowest $f_s$) we can extract from FFT. I.e. the FFT method cannot extract frequency information lower than $\frac{f_s}{2}$ (ignore the $\frac{1}{2}$ for now).
## FFT
We perform the FFT on the sample array, note that the time vector ${t}$ is not used in the `fft` call:
```
a_f = fft(x_t)
a_f.dtype
```
FFT returns a symmetric shape with positive frequencies on the right side and negative on the left:
```
plt.figure(figsize=(10,5))
plt.plot(np.abs(a_f)) # we take abs in order to get the magnitude of a complex number
plt.axvline(N//2, color="red", label="left: positive frequencies | right: negative, from high to low")
plt.xlabel("index k")
plt.legend();
```
The index $k$ represents a frequency component.
Because we are interested in positive frequencies for now we cut the returned array in half:
```
a_f_positive = a_f[:N//2]
a_f_positive.shape
```
Each element in `a_f` represents the real and imaginary part (amplitude $A_i$ and phase $\phi_i$) for a specific frequency $f_i$.
The "frequency" after the FFT is defined as $\frac{N}{s_i}$ in the period of specific sinus component. The period $s_i$ is expressed in number of samples.
I.e. a sinus component with a frequency of $5 Hz$ or period of $\frac{1}{5Hz} = 0.2s$ is $\frac{0.2s}{T_s} = \frac{0.2s}{0.01s} = 20$ samples long. Thus its magnitude peak should appear at $\frac{N}{s_i} = \frac{1000}{20} = 50$.
- $0.1 sin(2\pi 5t)$: low peak (because of $0.1$) at $k=50$
- $sin(2\pi 3t - 0.25\pi)$: greater peak at $k= 30.303 \approx 30$
```
plt.figure(figsize=(10,5))
plt.plot(np.abs(a_f_positive))
plt.xlim([0, 100])
plt.xticks(range(0, 101, 10))
plt.grid()
plt.xlabel("frequency in $k = N/s_i$")
```
In order to relate the sample-frequencies (as $N/1$) into time domain we need to convert the $k$ into frequencies as $1/s$.
\begin{align}
k = \frac{N}{s_i} = \frac{N}{T_i/T_s} = \frac{N f_i}{1/T_s} = \frac{N f_i}{f_s}
\end{align}
Our translation formula from $k$ to frequency is the following
\begin{align}
\Rightarrow f_i =& f_s\frac{k}{N}
\end{align}
```
f_i = np.arange(0, N//2)*f_s/N
plt.figure(figsize=(10,5))
plt.plot(f_i, np.abs(a_f_positive))
plt.grid()
plt.xlabel("frequency in $1/s$")
plt.xticks(range(0, f_s//2, 1));
plt.xlim([0, 10]);
```
We need to normalize the magnitude of the peaks by the factor of $\frac{2}{N}$:
```
plt.figure(figsize=(10,5))
plt.plot(f_i, 2/N*np.abs(a_f_positive))
plt.grid()
plt.xlabel("frequency in $1/s$ (Hz)")
plt.ylabel("amplitude [1]")
plt.xticks(range(0, f_s//2, 1));
plt.xlim([0, 10]);
plt.ylim([-0.2, 1.2]);
plt.title("Final DFT result.")
plt.text(3, 1.02, "$sin(2\pi 3t - 0.25\pi)$", fontdict={"size": 15})
plt.text(5, 0.12, "$0.1 sin(2\pi 5t)$", fontdict={"size": 15});
```
As you can see we found both sinus components.
## Phase
We could find the magnitudes and the frequencies of both signals but not the $45^\circ$ phase of the slower $3Hz$ signal.
In the previous section we saw that the result of the FFT algorithm is a complex array. Let's plot the real and imaginary parts relative to frequency.
```
plt.figure(figsize=(15, 5))
plt.subplot(2, 1, 1)
plt.title("real")
plt.plot(f_i, 2/N*np.real(a_f_positive))
plt.grid()
plt.xlim([0, 10])
plt.subplot(2, 1, 2)
plt.title("imag")
plt.plot(f_i, 2/N*np.imag(a_f_positive))
plt.grid()
plt.xlim([0, 10])
```
Lets calculate the angle of the complex number:
\begin{align}
\alpha = \text{arctan} \frac{imag}{real}
\end{align}
There is a handy function: `np.angle` which does it for us.
```
angle = np.angle(a_f_positive, deg=True)
# OR manually
# angle = np.arctan2(2/N*np.imag(a_f_positive),(2/N*np.real(a_f_positive)))*grad_to_degree_factor
```
and plot it again
```
plt.figure(figsize=(15, 10))
plt.subplot(3, 1, 1)
plt.ylabel("real-component [1]")
plt.plot(f_i, 2/N*np.real(a_f_positive))
plt.grid()
plt.xlim([0, 10])
plt.subplot(3, 1, 2)
plt.ylabel("imag component [1]")
plt.plot(f_i, 2/N*np.imag(a_f_positive))
plt.grid()
plt.xlim([0, 10])
plt.subplot(3, 1, 3)
plt.plot(f_i, angle)
plt.grid()
plt.ylabel("phase [°]")
plt.xlabel("frequency [Hz]")
plt.xlim([0, 10])
plt.scatter(f_i[[30, 50]], angle[[30, 50]], color="k")
plt.text(f_i[30] + 0.1 , angle[30], "%d°" % int(angle[30]))
plt.text(f_i[50] + 0.1 , angle[50], "%d°" % int(angle[50]))
plt.ylim([-150, 100])
```
The $5Hz$ sinus wave with zero phase has an $\alpha \approx -90^\circ$, since a sine wave is a $90^\circ$-shifted cos wave.
The $3Hz$ sinus component with $45^\circ$ phase has an $\alpha \approx -90^\circ-45^\circ = -135^\circ$
## FFT on complex numbers
Because within the multi-chirp FMCW algorithm we do a FFT on a series of complex numbers we want to make a simple example here.
Our example function of interest will be:
\begin{align}
f(t) = 0.25\text{sin}(2\pi 5 t + \phi) \\
\phi = \phi(t) = -\frac{\pi}{8}t = vt
\end{align}
The phase shift is time dependent in this example.
**Goal**: find parameter $v$ via FFT.
```
def f(t, phi=0):
return 0.25*np.sin(2*np.pi*5*t + phi)
```
Let's visualize how the sinus wave develops over time ...
```
t = np.linspace(0, 10, 10000)
plt.figure(figsize=(15,5))
plt.plot(t, f(t), label="$\phi=0$")
plt.plot(t, f(t, -np.pi/8*t), label="$\phi=-\pi/8 \cdot t$")
plt.xlim([0, 4])
plt.xlabel("$t$ [s]")
plt.grid()
plt.legend();
```
For the sake of our example we will run the FFT each $T_{cycle}$ seconds.
```
T_cycle = 2 # seconds
n_cycles = 200
f_cycle = 1/T_cycle
```
Per cycle FFT config
```
f_s = 100
T_s = 1/f_s
N = int(T_cycle/T_s)
print("Sample frequency:", f_s, "Hz")
print("Sample period:", T_s, "sec")
print("Number samples:", N)
```
We run FFT in each cycle and save the results in a list.
```
fft_cycle_results = list() # result list
# for each cycle
for c in range(n_cycles):
# determine start and end of a cycle
t_start = c*T_cycle
t_end = (c+1)*T_cycle
# sample the signal at according timesteps
t_sample = np.arange(t_start, t_end, T_s)
f_sample = f(t_sample, -np.pi/8*t_sample)
# run FFT and append results
fft_res = fft(f_sample)
fft_cycle_results.append(fft_res)
```
We cut the positive frequency range and normalize the amplitudes (see introdcutory example above).
```
fft_cycle_results = [2/N*r[:N//2] for r in fft_cycle_results]
freq = np.arange(0, N//2)*f_s/N
freq
```
**Note**: The FFT frequency resolution is at 1Hz. That's important because the frequency shift by $-\frac{1}{8}Hz$ introduced by $\phi(t)$ is not visible in the FFT!
The FFT will show a peak at 5Hz with a different phase each time.
Because the frequency is almost the same in each cycle, we expect the same behaviour in each result:
```
n_cycles_to_display = 4
fft_res_display = fft_cycle_results[:n_cycles_to_display]
fig, ax = plt.subplots(ncols=len(fft_res_display), figsize=(15, 3), sharex=True, sharey=True)
for i, ax, res in zip(range(n_cycles_to_display), ax, fft_res_display):
res_abs = np.abs(res)
ax.plot(freq, res_abs)
ax.grid(True)
ax.set_xlim([0, 10])
ax.set_xlabel("frequency [Hz]")
k = np.argmax(res_abs)
magn_max = res_abs[k]
freq_max = freq[k]
ax.set_title("Cycle %d:\n%.2f at %.2f Hz" % (i, magn_max, freq_max))
```
Looks fine for the first 4 cycles ... Let's look at all cycles by picking the frequency with max. magnitude from each cycle:
```
freq_list = list()
for res in fft_cycle_results:
res_abs = np.abs(res)
k = np.argmax(res_abs)
freq_list.append(freq[k])
plt.figure(figsize=(15,3))
plt.plot(freq_list)
plt.xlabel("cycle nr.")
plt.ylabel("frequency [Hz]")
plt.title("Frequency with max. peak in FFT domain vs. cycle");
```
It seems that the position (frequency) of the peaks remains **eqal**, despite the changing real and imaginary components.
Let's collect the max. frequency component from each cycle
```
cycle_max_list = list()
for res in fft_cycle_results:
# calc. the magnitude
res_abs = np.abs(res)
# find frequency index
k = np.argmax(res_abs)
cycle_max_list.append(res[k])
```
... and visualize the complex numbers:
```
n_cycles_to_display = 4
cycle_max_list_display = cycle_max_list[:n_cycles_to_display]
fig, ax = plt.subplots(ncols=len(cycle_max_list_display), figsize=(15, 30),
subplot_kw={'projection': "polar"}, sharey=True)
for i, ax, res in zip(range(n_cycles_to_display), ax, cycle_max_list_display):
ax.plot([0, np.angle(res)], [0, np.abs(res)], marker="o")
ax.text(np.angle(res)+0.1, np.abs(res), "%d°" % int(np.angle(res, deg=True)))
ax.set_ylim([0, 0.4])
ax.set_title("Cycle %d:\n" % (i, ))
```
We can observe that the angle moves in negative direction with $-45^\circ = T_{cycle}v = 2\frac{\pi}{8} = \pi/4$ per cycle.
### Solution via phase differences
Now we could calculate ange velocity by taking differences between cycles and put them relative to cycle duration:
```
angle_diff = np.diff(np.angle(cycle_max_list, deg=True))
angle_vel = angle_diff/T_cycle
print(angle_vel[:10])
```
Let's look at the parameter $v = -\frac{\pi}{8}$
```
v = -np.pi/8*360/(2*np.pi)
print(v)
```
Let's calculate the differences right (to remove the $157^\circ-(-157^\circ)$ effect).
```
angle_vel[angle_vel>0] -= 180
print("Angle velocities:", angle_vel[:10])
plt.figure(figsize=(15,3))
plt.plot(angle_vel)
plt.xlabel("cycle nr.")
plt.ylabel("°/s")
plt.title("angular velocity derived by cycle FFT phase differences")
plt.ylim([-40, 0]);
```
As you can see, the phases of the FFT output from each cycle give a hint over the phase velocity $v$ of the signal in time domain.
**Summary**: We found $v$!
### Solution via second FFT
The core idea of this alternative approach is to extract the periodic change of phase $\phi(t)$.
We can find the phase velocity via a **second FFT over the cycle results**, too. Consider the first FFT result as a measurement/sample for the second FFT.
Remember, those are our results (FFT-magnitude from the $5Hz$-component):
```
cycle_max_list[:5]
# here, we take only the positive side of fft
second_fft_res = fft(cycle_max_list)[:n_cycles//2]
second_fft_res[:5]
```
Like in the introductory example, each element of `second_fft_res` represents a frequency component.
```
freq_second = np.arange(0, n_cycles//2)*f_cycle/n_cycles
omega_second = 360*freq_second # same as 2*np.pi*
omega_second
plt.figure(figsize=(10,5))
plt.plot(omega_second, np.abs(second_fft_res))
plt.grid()
plt.xlabel("angle velocity $\omega$ [°/s]")
plt.xticks(range(0, 90, 5));
```
As you could see we could detect the phase change $v=22.5^{\circ}$ with a second FFT on the results of the first FFT.
| github_jupyter |
# Plot Kmeans clusters stored in a GeoTiff
This is a notebook plots the GeoTiffs created out of [kmeans](../stable/kmeans.ipynb). Such GeoTiffs contains the Kmeans cluster IDs.
## Dependencies
```
import sys
sys.path.append("/usr/lib/spark/python")
sys.path.append("/usr/lib/spark/python/lib/py4j-0.10.4-src.zip")
sys.path.append("/usr/lib/python3/dist-packages")
import os
os.environ["HADOOP_CONF_DIR"] = "/etc/hadoop/conf"
import os
os.environ["PYSPARK_PYTHON"] = "python3"
os.environ["PYSPARK_DRIVER_PYTHON"] = "ipython"
from pyspark.mllib.clustering import KMeans, KMeansModel
from pyspark import SparkConf, SparkContext
from osgeo import gdal
from io import BytesIO
import matplotlib.pyplot as plt
import rasterio
from rasterio import plot
from rasterio.io import MemoryFile
```
## Spark Context
```
appName = "plot_kmeans_clusters"
masterURL="spark://pheno0.phenovari-utwente.surf-hosted.nl:7077"
try:
sc.stop()
except NameError:
print("A new Spark Context will be created.")
sc = SparkContext(conf = SparkConf().setAppName(appName).setMaster(masterURL))
```
## Mode of Operation setup
The user should modify the following variables to define which GeoTiffs should be loaded. In case it (s)he wants to visualize results that just came out of [kmeans](kmeans.ipnyb) laste execution, just copy the values set at its [**Mode of Operation Setup**](../stable/kmeans.ipynb#mode_of_operation_setup).
```
#GeoTiffs to be read from "hdfs:///user/hadoop/modis/"
offline_dir_path = "hdfs:///user/pheno/"
#
#Choose all and then the band or the dir which has the band extracted.
#0: Onset_Greenness_Increase
#1: Onset_Greenness_Maximum
#2: Onset_Greenness_Decrease
#3: Onset_Greenness_Minimum
#4: NBAR_EVI_Onset_Greenness_Minimum
#5: NBAR_EVI_Onset_Greenness_Maximum
#6: NBAR_EVI_Area
#7: Dynamics_QC
#
#for example:
#var geoTiff_dir = "Onset_Greenness_Increase"
#var band_num = 0
geoTiff_dir = "kmeans_BloomFinal_LeafFinal_test"
band_num = 3
#Satellite years between (inclusive) 1989 - 2014
#Model years between (inclusive) 1980 - 2015
first_year = 1980
last_year = 2015
#Kmeans number of iterations and clusters
numIterations = 75
minClusters = 60
maxClusters = 60
stepClusters = 1
```
## Mode of Operation verification
```
geotiff_hdfs_paths = []
if minClusters > maxClusters:
maxClusters = minClusters
stepClusters = 1
if stepClusters < 1:
stepClusters = 1
#Satellite years between (inclusive) 1989 - 2014
#Model years between (inclusive) 1980 - 2015
years = list(range(1980,2015))
numClusters_id = 1
numClusters = minClusters
while numClusters <= maxClusters :
path = offline_dir_path + geoTiff_dir + '/clusters_' + str(band_num) + '_' + str(numClusters) + '_' + str(numIterations) + '_' + str(first_year) + '_' + str(last_year) + '_' + str(years[numClusters_id]) + '.tif'
geotiff_hdfs_paths.append(path)
numClusters_id += 1
numClusters += stepClusters
```
## Load GeoTiffs
Load the GeoTiffs into MemoryFiles.
```
clusters_dataByteArrays = []
numClusters_id = 0
numClusters = minClusters
while numClusters <= maxClusters :
print(geotiff_hdfs_paths[numClusters_id])
clusters_data = sc.binaryFiles(geotiff_hdfs_paths[numClusters_id]).take(1)
clusters_dataByteArrays.append(bytearray(clusters_data[0][1]))
numClusters_id += 1
numClusters += stepClusters
```
## Check GeoTiffs metadata
```
for val in clusters_dataByteArrays:
#Create a Memory File
memFile = MemoryFile(val).open()
print(memFile.profile)
memFile.close()
```
## Plot GeoTiffs
```
%matplotlib inline
numClusters_id = 0
numClusters = minClusters
while numClusters <= maxClusters :
print ("Plot for " + str(numClusters) + " clusters!!!")
memFile = MemoryFile(clusters_dataByteArrays[numClusters_id]).open()
plt = plot.get_plt()
plt.figure(figsize=(20,20))
plot.show((memFile,1))
if (numClusters < maxClusters) :
_ = input("Press [enter] to continue.")
memFile.close()
numClusters_id += 1
numClusters += stepClusters
```
### Histogram
```
%matplotlib inline
numClusters_id = 0
numClusters = minClusters
while numClusters <= maxClusters :
print ("Plot for " + str(numClusters) + " clusters!!!")
memFile = MemoryFile(clusters_dataByteArrays[numClusters_id]).open()
plt = plot.get_plt()
plt.figure(figsize=(20,20))
plot.show_hist(memFile, bins=numClusters)
if (numClusters < maxClusters) :
_ = input("Press [enter] to continue.")
memFile.close()
numClusters_id += 1
numClusters += stepClusters
%pylab inline
from ipywidgets import interactive
def wave(i):
x = np.linspace(0, np.pi * 2)
y = np.sin(x * i)
plt.plot(x,y)
plt.show()
interactive_plot = interactive(wave, i=(1,3))
interactive_plot
import ipywidgets
ipywidgets.__version__
```
| github_jupyter |
# Dynamics 365 Business Central Trouble Shooting Guide (TSG) - Login issues (SaaS)
This notebook contains Kusto queries that can help getting to the root cause of a login issue for an environment in the online version of Business Central (SaaS). Each section in the notebook contains links to the TSG part of the authorization telemetry documentation in [aka.ms/bctelemetry](aka.ms/bctelemetry), as well as Kusto queries that help dive into a specific area.
NB! The signal used in this notebook is only available in version 16.2 (or newer) of Business Central online, so check the version of your environment.
## 1. Connect to Application Insights
First you need to set the notebook Kernel to Python3, load the KQLmagic module (did you install it?) and connect to your Application Insights resource (get appid and appkey from the API access page in the Application Insights portal)
```
# load the KQLmagic module
%reload_ext Kqlmagic
# Connect to the Application Insights API
#%kql appinsights://appid='<add app id from the Application Insights portal>';appkey='<add API key from the Application Insights portal>'
```
## 2. Define filters
This workbook is designed for troubleshooting a single environment. Please provide values for aadTenantId and environmentName:
```
# aadTenantId = "<Add AAD tenant id here>"
# environmentName = "<add environment name here>"
aadTenantId = "de612f9e-c3a1-4e62-a824-84eff03fd3af"
environmentName = "Production"
```
# Analyze the login flow
Now you can run Kusto queries to look for possible root causes for login issues.
Either click **Run All** above to run all sections, or scroll down to the type of analysis you want to do and manually run queries
## Authentication
Authentication in the online version of Business Central happens strictly in Azure Active Directory (AAD). Only when a user is authenticated in AAD, a session is attempted to be created in the Business Central server (NST). When dealing with login issues, check for *absense* of signal in eventIds for Authorization in the pre-open company of opening a session in the NST (eventIds RT0001 and RT0003) to determine if the issue is related to AAD (e.g. user is disabled, wrong password, failed MFA) or maybe something happening in the customer network (could be a DNS issue, or a changed firewall rule).
**If you do not see any signal for eventIds RT0001 and RT0003, then start troubleshooting network issues first.**
Read more in the Security Guide here: https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/security/security-application#authentication
```
%%kql
let _aadTenantId = aadTenantId;
let _environmentName = environmentName;
traces
| where 1==1
and timestamp > ago(1d) // adjust accordingly to your analysis
and customDimensions.aadTenantId == _aadTenantId
and customDimensions.environmentName == _environmentName
and customDimensions.eventId in ('RT0001', 'RT0003' )
| summarize request_count=count() by bin(timestamp, 1h) | render timechart title= 'Number of pre-open company authorization attempts in the last day'
```
## Authorization failures (pre-open company)
A user can fail authorization before the open company trigger is executed for a number of different reasons:
* The user was successfully authenticated in Azure Active Directory but the user account is disabled in Business Central.
* A user successfully authenticated in Azure Active Directory but the user does not have any entitlements in Business Central (license issue)
Read more about these types of failures in the authorization signal docs here: https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/administration/telemetry-authorization-trace#authorizationfailedpreopencompany
```
%%kql
let _aadTenantId = aadTenantId;
let _environmentName = environmentName;
traces
| where timestamp > ago(1d) // adjust accordingly to your analysis
and customDimensions.aadTenantId == _aadTenantId
and customDimensions.environmentName == _environmentName
and customDimensions.eventId == 'RT0001'
| project timestamp
, guestUser = customDimensions.guestUser
, userType = customDimensions.userType
, failureReason = customDimensions.failureReason
, entitlementSetIds = customDimensions.entitlementSetIds
| order by timestamp desc
| limit 100
```
## Authorization failures (in the open company process)
Events show up here for a number of different reasons
* The company name is invalid
* User has no permission to access the company
* The environment is locked
* The license has expired or the trial period has ended
* The user's license is not valid for use on production companies
Read more in the authorization signal docs here: https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/administration/telemetry-authorization-trace#authorization-failed-open-company
```
%%kql
let _aadTenantId = aadTenantId;
let _environmentName = environmentName;
traces
| where timestamp > ago(1d) // adjust accordingly to your analysis
and customDimensions.aadTenantId == _aadTenantId
and customDimensions.environmentName == _environmentName
and customDimensions.eventId == 'RT0002'
| project timestamp
, clientType = customDimensions.clientType
, companyName = customDimensions.companyName
, failureReason = customDimensions.failureReason
| order by timestamp desc
| limit 100
```
## Successful logins (authentication in AAD succeded, authorization in the Business Central server succeeded)
If the user can authenticate against AAD and the two authorization steps inside the Business Central server succeeds, then a session is created and the user has successfully logged in.
Read more about application security in the Security Guide here: https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/security/security-application#authentication
```
%%kql
let _aadTenantId = aadTenantId;
let _environmentName = environmentName;
traces
| where timestamp > ago(1d) // adjust accordingly to your analysis
and customDimensions.aadTenantId == _aadTenantId
and customDimensions.environmentName == _environmentName
and customDimensions.eventId == 'RT0004'
| project timestamp
, clientType = customDimensions.clientType
, companyName = customDimensions.companyName
, totalTimeInMS = toreal(totimespan(customDimensions.totalTime))/10000 // totalTime is measured in ticks, divide by 10000 to get milliseconds
| order by timestamp desc
| limit 100
%%kql
let _aadTenantId = aadTenantId;
let _environmentName = environmentName;
traces
| where 1==1
and customDimensions.aadTenantId == _aadTenantId
and customDimensions.environmentName == _environmentName
and customDimensions.eventId == 'RT0004'
and timestamp > ago(1d)
| extend clientType = tostring( customDimensions.clientType )
| summarize count=count() by clientType, bin(timestamp, 1h)
| render timechart title= 'Number of successful logins the last day (shown by client/session type)'
%%kql
let _aadTenantId = aadTenantId;
let _environmentName = environmentName;
traces
| where 1==1
and customDimensions.aadTenantId == _aadTenantId
and customDimensions.environmentName == _environmentName
and customDimensions.eventId in ('RT0001', 'RT0002', 'RT0004')
and timestamp > ago(1d)
| extend attemptType = case(
customDimensions.eventId == 'RT0001', 'Failure before open company' ,
customDimensions.eventId == 'RT0002', 'Failure in open company trigger' ,
customDimensions.eventId == 'RT0004', 'Successful login' ,
'Unknown reason'
)
| summarize count=count() by attemptType, bin(timestamp, 1h)
| render timechart title= 'Number of login attempts the last day (shown by success/failure)'
```
| github_jupyter |
# Introduction to Glue-Viz
**version 0.1**
***
By AA Miller (Northwestern CIERA/Adler Planetarium)
03 May 2018
## Introduction
[All of my slides from Tuesday morning]
... that is all
## Glue
As a point of review, on Tuesday we learned about ParaView. I'd summarize the major strength of ParaView as providing an interface to create really nice 3D representations of data (and we barely scratched the surface of the most complex renderings that you can create).
On Wednesday, we learned about `bokeh`. I would summarize the major strengths of `bokeh` as being the ability to create linked plots, as well as the relative ease of getting the output from bokeh into a server and on the web.
Today we are going to learn about [`glue`](http://glueviz.org), which is a pure python library that designed to explore the relationships between related datasets. `glue` is actually developed by astronomers (in collaboration with medical imaging researchers), so a lot of the functionality is designed with *our* needs in mind.
(though note - it is created as a general purpose tool. But, if there is something that you'd like to see in `glue` that does not exist, then you can reach out and maybe they will develop it)
`glue` includes elements that we have already explored this week. In particular, `glue`, due to the medical imaging connection, provides nice functionality for visualizing 3D data sets. Additionally, given the large collection of heterogenous catalogs in astronomy, `glue` is designed to make linking between data sets very straightforward.
You should have already installed `glue`, but if not
conda install -c glue glue
Furthermore, our first example will use the data included in this tarball: https://northwestern.box.com/s/uiwq47ir8r4h6njlxv6njtx174wdeoox
## Problem 1) Using Glue
**Problem 1a**
Open `glue`. The standard way to do this is to launch the program from the command line: `glue`.
At this stage you will notice 4 primary windows within the application:
* upper left ––– data collection (lists all the open data, as well as the selected subsets)
* middle left ––– viewer layers (shows the different layers, and allows control over which are displayed)
* lower left ––– viewer options (includes global options for the active viewer)
* right ––– visualization canvas (this is where the data renderings are actually shown)
**Problem 1b**
Open the w5 fits image in `glue`.
As a quick note - this image is from the [*WISE*](https://www.nasa.gov/mission_pages/WISE/main/index.html) satellite and it is showing the [Westerhout 5 (W5)](https://en.wikipedia.org/wiki/Westerhout_5) star forming region.
*Hint* - you can drag and drop, or select the file path in [*File $\rightarrow$ Open Data Set.*]
**Problem 1c**
Render the image, by dragging the w5 entry in the data collection window to the visualization canvas.
The will pop up a drop down menu asking what type of render you would like. Select the best option.
*Hint* - you may want to resize the window within the visualization canvas.
As previously noted, one of the great strengths of `glue` is the ability to drill down on subsets of linked data.
At the top of the 2D image window there are 5 different methods for selecting subsets of the data. From left to right they include: rectangular selection, vertical selection, horizontal selection, circular selection, and finally freeform selection [this has similar functionality to `bokeh`'s lasso.].
**Problem 1d**
Use the horizontal selection tool to select the subset of the data near the center of the image (this is done via drag and click).
Then, use the vertical selection tool to select the subset of the data near the center of the image.
Notice that there are now 2 subsets in the data collection panel, as well as additional entries in the view layers panels.
**Problem 1e**
Adjust the color of subset 1 to be "DSFP blue" and adjust the transparency bar to its maximum (i.e. minimize the transparency of the selection.
Adjust the color of subset 2 to be "DSFP light grey" and make this selection more transparent.
At this point, it is a little difficult to see the emission under the selected subsets.
**Problem 1f**
Select the w5 data in the data collection, and adjust the data scaling to match the optimal choice for astronomical images.
*Hint* - think back to Zolt's lecture.
[You may want to adjust the colorbar and range of the data being displayed. Be sure the subset panels can still be seen after making these changes.]
There is a bright knot of emission in the northwest portion of the nebula, we will now focus on that.
**Problem 1g**
Adjust the subset selections to be centered on the bright emission knot in the northwest portion of the nebula. This can be done by selecting the subset in the data collection and then holding *cntl* while dragging the mouse over a new region to redefine the subset.
**Problem 1h**
Create a histogram of the brightness data in the fits image [drag the w5 data from the data collection into the visualization canvas and select the appropriate option from the drop down menu].
Notice that you now have histograms in 3 different colors. This is because the data linking in `glue` is (in some cases) automatic. By creating the histrogram for the data, you have also automatically created a histogram for the two subsets of the data.
You will also notice that the histogram, as currently constructed, is not particularly informative.
**Problem 1i**
Update the range of the histogram to extend to a maximum value of 1000. Increase the number of bins to 25. Finally, normalize the histogram.
Does the resulting histogram make sense?
The current histograms are strongly polluted by background pixels. We can improve this with the selection tools.
**Problem 1j**
Select the pixels in the bright knot by changing the selection mode to "remove" (5th option). Then select the horizontal selection tool in the histogram plot. Drag and click to select the region with pixel values less than 500 to remove those from the selection.
How do the histograms look now? Does the resulting image layers/histrogram make sense?
*Note* - don't forget to return to the default selection mode after you have done this.
## Problem 2) Linking Data Sets
So far we have only employed automatic linking via data subsets. This has some utility (for instance, I could imagine teaching non-experts about source detection using the steps we just covered regarding the removal of faint pixels), but the real power of `glue` is in linking heterogeneous data sets.
**Problem 2a**
Open the second data file from the tarball `w5_psc.vot`.
*Aside* - this VO table file includes sources in the W5 region that were detected by the [*Spitzer*](http://www.spitzer.caltech.edu/) space telescope. One reason for comparing *WISE* to *Spitzer* is that *WISE* covers the entire sky, while *Spitzer* offers higher resolution and greater depth, so it has more complete catalogs in the areas that it has observed.
Given that the catalog and image are heterogeneous, linking will not be automatic (as it was for the subsets created in problem 1).
**Problem 2b**
Link the data sets by selecting the *Link Data* option in the top of the menu bar.
Select an appropriate component from the image and catalog data, and then link those components by clicking on the *glue* button.
Get it? Link the things by "glueing" them together, using `glue`.
.
.
.
Get it?
No seriously,
**Do you get it?**
Be sure that you glue both of the relevant variables that connect these two data sets.
Hold on, now it's about to get real.
With the catalog and image now linked, subsets selected in either space (e.g., the bright emission knot selected in Problem 1) will automatically be reflected in the other space.
**Problem 2c**
Create a scatter plot of the the catalog data by dragging `w5_psc.vot` into the visualization canvas.
For the scatter plot show the [4.5] - [5.8] vs. [3.6] color magnitude diagram.
**Problem 2d**
Remove the previously created subsets. In the 2D image, choose the circular selection tool and highlight a small region centered on the bright know in the northwest portion of the nebula.
What do you notice when you make this selection?
**Problem 2e**
Show the individual *Spitzer* point sources on the image by selecting the subset in the data collection and dragging it onto the 2D image.
Look at the overlap of the sources relative to the bright knot - does this make sense?
**Problem 2f**
Adjust the plot of the subset of points to provide a linear colormap for the data.
Color the points by their [3.6] magnitude? Does the image make sense?
What about the reverse? Can we select interesting sources in CMD space and highlight their spatial positions in the cluster?
This could be useful, for example, to identify the location of the youngest stars within the W5 star-forming region.
**Problem 2g**
Select the *Spitzer* point source catalog in the data collection. Then, using the rectangular selection tool in the CMD, choose all the red sources with [4.5] - [5.8] > 1 mag.
What can you say about the positions of the red sources relative to the 12 micron emission?
## Problem 3) Reproducibility
Hopefully at this point it is clear that `glue` can be very powerful in the way that it allows linking across image and catalog data.
However, everything we have done has been in an interactive mode that may be hard to reproduce.
Fortunately, `glue` provides multiple different ways to save your work.
You can either save your entire session, save specific plots from your session, or save subsets created via the various selection tools from your session.
## Problem 4) Easy (?) False Color Images
You should have already unpacked a tarball with 5 fits images: https://northwestern.box.com/s/hmitigmvcfi2tuzlgt1psatebkyrk0e3
**Problem 4a**
Open each of the 5 fits files (named g, r, i, z, y) in glue.
*Note* - as you open each image after the first you will be prompted to "merge" the data. Select no on that option for now.
**Problem 4b**
Create a 2D image of the g-band data.
**Problem 4c**
Drag and drop the data from each of the other filters on to the g band image.
**Problem 4d**
Change the color option from colorbar to "one color per channel". Then select 3 layers for the 2D image, assigning RGB to one of each of the layers.
**Problem 4e**
Adjust the scalings (and colors if necessary) to create a nice false color image of the galaxy.
## Problem 5) 3D scatter plots in glue
### Warning 1
3D viewing in `glue` is relatively new, and as such it does not provide the full range of functionality that is eventually expected within the package.
[Read the docs](http://docs.glueviz.org/en/latest/gui_guide/3d_viewers.html) for caveats regarding the use of the 3D viewer.
### Warning 2
There is a very very good chance that you may have a non-working version of `glue` on your machine if you have not updated your `anaconda` software since session 4. At this point, please proceed carefully to make sure you have the correct install for 3D rendering in `glue`.
As a first test, please try:
conda list glue
If that returns something like this:
# Name Version Build Channel
glue-core 0.13.2 py36_0 glueviz
glue-vispy-viewers 0.10 py36_1 glueviz
glueviz 0.13.2 0 glueviz
Namely, `glue-core` and `glueviz` versions 0.13.x **AND** `glue-vispy-viewers` version 0.10 –– then you are safe and ready to proceed.
Alternatively, if you have something like this:
# Name Version Build Channel
glue-core 0.12.5 py36_0 glueviz
glue-vispy-viewers 0.10 py36_1 glueviz
glueviz 0.13.0 0 glueviz
Or, any combination of `glue-core` or `glueviz` <= 0.12.x **AND** `glue-vispy-viewers` version 0.10 –– then 3D viewing is likely not going to be supported in your installation.
The easiest way to address right now is to roll back your `glueviz` packages:
conda install -c glueviz glueviz=0.12.4
conda install -c glueviz glueviz=0.12.4
conda install -c glueviz glue-vispy-viewers=0.9
If you are unsure about any of this, please raise your hand and I'll stop by to make sure everything is set up correctly.
As an example of a 3D scatter plot in `glue`, we will create a fits table using the training data from the feature engineering then render the data.
**Problem 5a**
Create `astropy.io.fits` columns for each of the 3 data arrays.
*Hint* - `fits.Column` takes `name`, `format`, and `array` as optional arguments. For `format` "D" = double precision, and "J" = integer. You'll want to pass `np.array`s to the `array` argument.
```
import pandas as pd
from astropy.io import fits
import numpy as np
train_df = pd.read_csv("training_sources.csv")
col1 = fits.Column(name="mean", format="D", array=np.array(train_df["mean"]))
col2 = fits.Column(name="nobs", format="J", array=np.array(train_df["nobs"]))
col3 = fits.Column(name="duration", format="J", array=np.array(train_df["duration"]))
```
**Problem 5b**
Merge the columns into a `fits` hdu object.
```
hdu = fits.BinTableHDU.from_columns([col1, col2, col3])
```
**Problem 5c**
Write the hdu object to a fits file.
```
hdu.writeto("training_set.fits")
```
**Problem 5d**
Open the fits file in `glue`.
Drag the file to the canvas, and select 3D scatter plot.
Zoom, rotate, adjust the rendering to get a sense of how this 3d scatter plot compares to ParaView.
**Problem 5e**
Adjust the size of the points and color the data via the value of the mean. Choose a colorbar that makes it easier to see the variation in the data.
**Problem 5f**
Identify the predominant axis in the data. Adjust its stretch value to 10.
Change the limits on duration to extend from 2000 to 2500.
Do these changes help or harm your visualization?
**Problem 5g**
Create a new fits table including more informative features from Tuesday's lecture, as well as the class information for each source.
Open the fits table in `glue` and create a 3D scatterplot highlighting 3 very useful features with the points colored by the classification of each star.
**Problem 5h**
Use the circle selection tool to create a subset of the data. Generate a histogram of that subset showing a variable that is not displayed in the 3D render.
## Problem 6) 3D volume renderings
You should have already downloaded the [astropy fits cube for L1448](https://northwestern.box.com/s/plr126cuuag8dqff0qk2wugeoqyw37t7), which includes $^{13}$CO data.
**Problem 6a**
Open the `l1448_13co.fits` file in glue.
Adjust the scaling parameters and rotate the cube to get a sense of the data.
**Problem 6b**
Create a 1D histogram of the same data. Use the horizontal select tool to select the brightest pixels with `PRIMARY` > 1.5.
How does this change the appearance of the data cube?
**Problem 6c**
Click the start/stop rotation button.
Preeeeeeeeeeeeeeety.
**Problem 6d**
Record a movie as you move around the cube. Press the camera button, choose a location to save the movie. At that point you will see a red button which means you are recording –– select this button to stop the recording.
The movie file that this generates is an animated gif. Open it in a browser to see the output.
*Note* - I could not get the record and rotation button (**6c**) to work simultaneously.
Finally - unfortunately, I do not have the table data to make a link here, but if you did, you could then select information within the data cube and highlight it in the table, in much the same way we did in Problem **2**.
## Problem 7) Python Scripting
Like the other tools introduced this week `glue` supports python scripting.
The easiest way to launch this is to click on the *IPython Terminal* button at the top of the `glue` window.
This launches an `IPython` instace, from which you can perform several operations, so [read the docs](http://docs.glueviz.org/en/latest/python_guide/data_tutorial.html). Here we will re-create some of the operations from **Problem 2** using the command line.
If nesssary, reload the W5 data (both fits and image).
**Problem 7a**
Launch the *IPython Terminal* in `glue` and examine the data collection:
dc
The data collection is effectively a list. We will focus on the catalog data.
**Problem 7b**
Select the catalog data by creating a variable `psc_cat` which is equal to the second list entry in `dc`.
Examine the contents of `psc_cat`
psc_cat.components
You can now manipulate `psc_cat` in the same way that you would a dictionary. This also means that `np` style indexing can be performed on the data.
**Problem 7c**
To test this, print out the "Hmag" values from the catalog.
Now that we have the data in `python` we can combine multiple attributes within the same data set.
**Problem 7d**
Create an array that is equal to the [3.6] - [4.5] color. Add this array to the `psc_cat`.
*Hint* - this is handled in exactly the same way as it would be for a ditionary.
**Problem 7e**
Create a new subset of the data where the [3.6] - [4.5] color is > 1.2.
state = psc_cat.id["__3.6__-__4.5__"] > 1.2
label = "[3.6] - [4.5] > 1.2"
subset_group = dc.new_subset_group(label, state)
You should now see a new subset in the data collection panel. In this way it is possible to create precise selections based on the parameters in the catalog. Or to create new variables within the catalog.
There is more functionality for adjusting the subsets within `IPython`, to learn more about those details [read the docs](http://docs.glueviz.org/en/latest/python_guide/data_tutorial.html)
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
```
# Transformer
> Training a Timesformer model for UCR video classif.
Tbaks to Phil wang (@lucidrains) we have a bunch of attention based models to trian:
- `Is Space-Time Attention All You Need for Video Understanding?`: This paper looks pretty cool, as it is the first full attention model for video. The training is complicated without pretrained models, but this will be solved soon. The model code comes from LucidRains [implementation](https://github.com/lucidrains/TimeSformer-pytorch)
- `STAM (Space Time Attention Model)`: This is a ddiferent type of joint space-time attention. Code [here](https://github.com/lucidrains/STAM-pytorch)
- `ViVit` : Our google friends made this model that I have not tried yet, and thanks to Phil and rishikksh20 we have some code [here](https://github.com/rishikksh20)
All this models are difficult to train, so if you find ways to train them from scratch, or you make loading ViT weights on the image encoder part let me know.
```
from fastai.vision.all import *
from action_recognition.core import *
from action_recognition.models import *
torch.cuda.set_device(0)
torch.cuda.get_device_name()
data_path = Path.home()/'.fastai/data/UCF101-frames'
instances = get_instances(data_path)
seq_len = 20
step=5
image_size = 128
bs = 16
```
you could put this split on a text file:
```
dls = get_action_dataloaders(instances, bs=bs, image_size=image_size, seq_len=seq_len, step=step).cuda()
```
## TimesFormer
```
model = TimeSformer(
dim = 128,
image_size = 128,
patch_size = 16,
num_frames = 20,
num_classes = dls.c,
depth = 12,
heads = 8,
dim_head = 64,
attn_dropout = 0.1,
ff_dropout = 0.1
)
model = model.cuda()
model(x).shape
learn = Learner(dls, model, metrics=[accuracy, top_k_accuracy], wd=0.1, opt_func=ranger).to_fp16()
learn.lr_find()
learn.fit_flat_cos(10, 1e-3)
```
It needs ImageNet pretraining of the encoder. It is documented on the paper.
> First, we attempted to train TimeSformer on video datasets
directly, without ImageNet pretraining. For these exper-
iments, we followed the training-from-scratch protocol
of Touvron et al. (2020) and we also evaluated some vari-
ants of it. However, the model failed to learn meaningful
features.
```
learn.show_results()
```
## STAM
```
model = STAM(
dim = 256,
image_size = 128, # size of image
patch_size = 16, # patch size
num_frames = 20, # number of image frames, selected out of video
space_depth = 6, # depth of vision transformer
space_heads = 8, # heads of vision transformer
space_mlp_dim = 512, # feedforward hidden dimension of vision transformer
time_depth = 6, # depth of time transformer (in paper, it was shallower, 6)
time_heads = 4, # heads of time transformer
time_mlp_dim = 512, # feedforward hidden dimension of time transformer
num_classes = dls.c, # number of output classes
space_dim_head = 64, # space transformer head dimension
time_dim_head = 64, # time transformer head dimension
dropout = 0.1, # dropout
emb_dropout = 0.1 # embedding dropout
)
learn = Learner(dls, model, metrics=[accuracy, top_k_accuracy], wd=0.1, opt_func=ranger).to_fp16()
learn.lr_find()
learn.fit_flat_cos(10, 1e-4)
learn.show_results()
```
| github_jupyter |
# Dive
wrapped in a python class
## Implementation
- Dive profile is usually shown as a series of depth and time, in `MM:SS` format, points.
- Need to convert the latter into decimal minutes,
- and convert it to time and current depth.
- constructor: initialize the model to ZH-L16C w/ 5-minute compartment,
- diving air,
- at sea level,
- using $ RQ = 0.9 $
- and fixed gradient factor 0.85
- `segment()` takes the new depth and time and updates gas loadings.
This also keeps track of the ceiling together with $ P_t $
```
import pprint
import re
import diyzhl
#
#
class dive( object ) :
# ctor
#
def __init__( self, verbose = False ) :
self._verbose = bool( verbose )
self._timepat = re.compile( r"(?:(\d{1,2}):)?(\d{1,2}):(\d{1,2})" )
# air, sea level, USN RQ. S is surface pressure (const), P is current pressure (var)
self._T = 0
self._S = 1.0
self._P = self._S
self._Q = 0.79
self._RQ = 0.9
self._GFHi = 0.85
self._TCs = []
# starting Pt (same for all TCs)
sp = diyzhl.palv( Pamb = self._P, Q = self._Q, RQ = self._RQ )
# use ZH-L16Cb (skip over 4-minute TC)
for tc in diyzhl.ZHL16N.keys() :
if tc == 1 : continue
self._TCs.append( {
"t" : diyzhl.ZHL16N[tc]["t"],
"a" : diyzhl.ZHL16N[tc]["a"]["C"],
"b" : diyzhl.ZHL16N[tc]["b"],
"P" : sp
} )
# init. ceiling
for i in range( len( self._TCs ) ) :
self._TCs[i]["C"] = diyzhl.buhlmann( Pn = self._TCs[i]["P"],
an = self._TCs[i]["a"],
bn = self._TCs[i]["b"],
gf = self._GFHi )
if self._verbose :
pprint.pprint( self._TCs )
# helpers for plotting
# (could actually do this in a more "pythonic" way but this is more obvious)
#
@property
def compartments( self ) :
rc = []
for i in range( len( self._TCs ) ) :
rc.append( self._TCs[i]["t"] )
return rc
@property
def loadings( self ) :
rc = []
for i in range( len( self._TCs ) ) :
rc.append( self._TCs[i]["P"] )
return rc
# helper function: takes human-readable time string like "1:30" and returns minutes: 1.5
#
def _time( self, t = "0:0" ) :
if t is None : return 0
m = self._timepat.search( str( t ).strip() )
if not m : raise Exception( "Invalid time string %s" % (t,) )
rc = 0.0
if m.group( 1 ) is not None :
rc = float( m.group( 1 ) ) * 60.0
rc += float( m.group( 2 ) )
rc += float( m.group( 3 ) ) / 60.0
return round( rc, 1 )
# newdepth is new depth in 0.1 bar
# timestr is time as [hours:]minutes:seconds string. *it is the total elapsed* time
#
def segment( self, newdepth = 0.0, newtimestr = "1:0" ) :
assert float( newdepth ) >= 0.0
if float( newdepth ) == 0.0 :
newP = self._P
else :
newP = round( self._S + float( newdepth ) / 10, 1 )
t = self._time( newtimestr ) - self._T
for i in range( len( self._TCs ) ) :
p = diyzhl.schreiner( Pi = self._TCs[i]["P"],
Palv = diyzhl.palv( Pamb = self._P, Q = self._Q, RQ = self._RQ ),
t = t,
R = diyzhl.arr( d0 = self._P, dt = newP, t = t, Q = self._Q ),
k = diyzhl.kay( Th = self._TCs[i]["t"] ) )
self._TCs[i]["P"] = p
self._TCs[i]["C"] = diyzhl.buhlmann( Pn = self._TCs[i]["P"],
an = self._TCs[i]["a"],
bn = self._TCs[i]["b"],
gf = self._GFHi )
self._P = newP
self._T += t
if self._verbose :
sys.stdout.write( "* At time %f, P %f:\n" % (self._T, self._P,) )
pprint.pprint( self._TCs )
```
## Animate
The profile is from a real dive, simplified.
(TODO: change bar colour when TC has a ceiling)
```
%pylab inline
import matplotlib.pyplot as plt
import numpy as np
from IPython import display
d = dive( verbose = False )
idx = np.arange( len( d.compartments ) )
plt.xticks( idx, d.compartments, rotation = 30 )
PROFILE = [(24,"1:40"),(30,"6:10"),(29,"10:20"),(23,"16:20"),(15,"21:30"),(10,"25:30"),(3,"40:0")]
for i in PROFILE :
display.clear_output( wait = True )
d.segment( newdepth = i[0], newtimestr = i[1] )
plt.ylim( 0, 3.0 )
plt.bar( idx, d.loadings )
plt.show()
display.display( plt.gcf() )
time.sleep( 1.5 )
```
| github_jupyter |
```
import os, sys
import matplotlib.pyplot as plt
import numpy as np
from sklearn import decomposition, manifold
% matplotlib notebook
def compute_distance(x,y):
x = x / np.linalg.norm(x)
y = y / np.linalg.norm(y)
return np.linalg.norm(x-y)
def compute_xcorr(x,y):
return x.dot(y.T).sum()
def print_percentage(n, t):
sys.stdout.write('\r')
sys.stdout.write("[%-20s] %d%%" % ('=' * ((n * 20/t) + 1) , n * 100/t + 1 ))
sys.stdout.flush()
def norm_feat(feat):
norms = np.linalg.norm( feat, axis=1)
feat = (feat.T / norms.T).T
return feat[ ~np.isnan(feat[:,0]) ]
# def norm_agg_feat(feat):
def norm_agg_feat(x):
normx = np.linalg.norm(x.sum(axis=0))
return x / normx
```
### Sum of Radial Basis Function similarities (RBFsim) effectively estimating kernel density estimate (KDE)
1. For a given feature directory, read in numpy arrays
2. Calculate upper right hand comparisons with RBF kernel function
```
feature_dir = '/fileserver/nmec-handwriting/localfeatures/nmec_bw_denoised_cc_deNNiam_fiel657_min500/'
#feature_dir = '/fileserver/nmec-handwriting/localfeatures/nmec_bw_cc_deNNiam_fiel657_min500'
#feature_dir = '/fileserver/nmec-handwriting/localfeatures/nmec_bw_crop_cc_deNNiam120_fiel657-120'
files = os.listdir(feature_dir)
files.sort()
files = files[:-1]
C = np.zeros((len(files),len(files)))
feature_map = {}
for i,filei in enumerate(files):
feati = norm_agg_feat( np.load(feature_dir+"/"+filei) )
feature_map[filei] = feati
# for j, filej in enumerate(files):
# feati = norm_feat( np.load(feature_dir + "/" + filei) )
# featj = norm_feat( np.load(feature_dir + "/" + filej) )
# Cij = feati.dot(featj.T)
# C[i,j] = Cij.max(axis=0).mean()
# C[i,j] = feati.mean(axis=0).dot(featj.mean(axis=0))
# feati = np.load(feature_dir+"/"+filei).mean(axis=0)
# featj = np.load(feature_dir+"/"+filej).mean(axis=0)
# C[i,j] = feati.dot(featj)
print_percentage(i, len(files))
metric = []
for i, image in enumerate(feature_map):
featsi = feature_map[image]
metricline = [np.array([compute_xcorr(featsi, feature_map[other]) for other in feature_map])]
metric += metricline
print_percentage(i, len(feature_map))
metric = np.array(metric)
F = -metric
np.fill_diagonal(F, -sys.maxint)
Csym = np.zeros(C.shape)
Csym[:] = C
for i in xrange(len(files)):
for j in xrange(len(files)):
if j > i:
Csym[j,i] = C[i,j]
F = Csym
#feati.mean(axis=0).dot(featj.mean(axis=0))
F = -metric
np.fill_diagonal(F,-1.0)
print F
soft_correct = 0
hard_correct = 0
total_num = 0
k = 10
g = 8
max_top = 1
for j, i in enumerate(F):
#if not files[j][7:10]=='004':
# continue
total_num += 1
topk = i.argsort()[-k:]
if files[j][:6] in (files[index][:6] for index in topk):
soft_correct += 1
hardsample = list(files[index][3:6] for index in topk[-max_top:])
if len(set(hardsample)) == 1 and hardsample[0] == files[j][3:6]:
print "%s matched %s" % (files[j][3:10], hardsample)
hard_correct += 1
print "%-30s" % ( "-" * 37 )
print "SOFT CRITERIA: Top %d\t= %f" %(k, (soft_correct + 0.0) / total_num)
print "HARD CRITERIA: Top %d\t= %f" %(max_top, (hard_correct + 0.0) / total_num)
feature_map[filei].dot(feature_map[filei].T).sum()
feature_map['FR-003-001.bin.tif.npy'].dot(feature_map['FR-010-001.bin.tif.npy'].T).sum()
compute_xcorr(feature_map['FR-004-002.bin.tif.npy'],feature_map['FR-004-004.bin.tif.npy'])
plt.imshow(metric)
```
| github_jupyter |
# Naas - NLP Examples
<a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Naas/Naas_NLP_Examples.ipynb" target="_parent"><img src="https://img.shields.io/badge/-Open%20in%20Naas-success?labelColor=000000&logo=data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPHN2ZyB3aWR0aD0iMTAyNHB4IiBoZWlnaHQ9IjEwMjRweCIgdmlld0JveD0iMCAwIDEwMjQgMTAyNCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB4bWxuczp4bGluaz0iaHR0cDovL3d3dy53My5vcmcvMTk5OS94bGluayIgdmVyc2lvbj0iMS4xIj4KIDwhLS0gR2VuZXJhdGVkIGJ5IFBpeGVsbWF0b3IgUHJvIDIuMC41IC0tPgogPGRlZnM+CiAgPHRleHQgaWQ9InN0cmluZyIgdHJhbnNmb3JtPSJtYXRyaXgoMS4wIDAuMCAwLjAgMS4wIDIyOC4wIDU0LjUpIiBmb250LWZhbWlseT0iQ29tZm9ydGFhLVJlZ3VsYXIsIENvbWZvcnRhYSIgZm9udC1zaXplPSI4MDAiIHRleHQtZGVjb3JhdGlvbj0ibm9uZSIgZmlsbD0iI2ZmZmZmZiIgeD0iMS4xOTk5OTk5OTk5OTk5ODg2IiB5PSI3MDUuMCI+bjwvdGV4dD4KIDwvZGVmcz4KIDx1c2UgaWQ9Im4iIHhsaW5rOmhyZWY9IiNzdHJpbmciLz4KPC9zdmc+Cg=="/></a>
**Tags:** #naas #nlp
## How it works?
Naas NLP formulas follow this format.
```
nlp.get(task, model, tokenizer)(inputs)
```
The supported tasks are the following:
- text-generation (model: GPT2)
- summarization (model: t5-small)
- fill-mask (model: distilroberta-base)
- text-classification (model: distilbert-base-uncased-finetuned-sst-2-english)
- feature-extraction (model: distilbert-base-cased)
- token-classification (model: dslim/bert-base-NER)
- question-answering
- translation
We use [Hugging Face API](https://huggingface.co/models) under the hood to access the models.
## Input
### Import library
```
from naas_drivers import nlp
```
## Model
### Text Generation
```
nlp.get("text-generation", model="gpt2", tokenizer="gpt2")("What is the most important thing in your life right now?")
```
### Text Summarization
Summarize the text given, maximum lenght (number of tokens/words) is set to 200.
```
nlp.get("summarization", model="t5-small", tokenizer="t5-small")('''
There will be fewer and fewer jobs that a robot cannot do better.
What to do about mass unemployment this is gonna be a massive social challenge and
I think ultimately we will have to have some kind of universal basic income.
I think some kind of a universal basic income is going to be necessary
now the output of goods and services will be extremely high
so with automation they will they will come abundance there will be or almost everything will get very cheap.
The harder challenge much harder challenge is how do people then have meaning like a lot of people
they find meaning from their employment so if you don't have if you're not needed if
there's not a need for your labor how do you what's the meaning if you have meaning
if you feel useless these are much that's a much harder problem to deal with.
''')
```
### Text Classification
Basic sentiment analysis on a text.<br>
Returns a "label" (negative/neutral/positive), and score between -1 and 1.
```
nlp.get("text-classification",
model="distilbert-base-uncased-finetuned-sst-2-english",
tokenizer="distilbert-base-uncased-finetuned-sst-2-english")('''
It was a weird concept. Why would I really need to generate a random paragraph?
Could I actually learn something from doing so?
All these questions were running through her head as she pressed the generate button.
To her surprise, she found what she least expected to see.
''')
```
### Fill Mask
Fill the blanks ('< mask >') in a sentence given with multiple proposals. <br>
Each proposal has a score (confidence of accuracy), token value (proposed word in number), token_str (proposed word)
```
nlp.get("fill-mask",
model="distilroberta-base",
tokenizer="distilroberta-base")('''
It was a beautiful <mask>.
''')
```
### Feature extraction
This generate a words embedding (extract numbers out of the text data).<br>
Output is a list of numerical values.
```
nlp.get("feature-extraction", model="distilbert-base-cased", tokenizer="distilbert-base-cased")("Life is a super cool thing")
```
### Token classification
Basically NER. If you give names, location, or any "entity" it can detect it.<br>
| Entity abreviation | Description |
|--------------|------------------------------------------------------------------------------|
| O | Outside of a named entity |
| B-MIS | Beginning of a miscellaneous entity right after another miscellaneous entity |
| I-MIS | Miscellaneous entity |
| B-PER | Beginning of a person’s name right after another person’s name |
| I-PER | Person’s name |
| B-ORG | Beginning of an organization right after another organization |
| I-ORG | organization |
| B-LOC | Beginning of a location right after another location |
| I-LOC | Location |
Full documentation : https://huggingface.co/dslim/bert-base-NER.<br>
## Output
### Display result
```
nlp.get("token-classification", model="dslim/bert-base-NER", tokenizer="dslim/bert-base-NER")('''
My name is Wolfgang and I live in Berlin
''')
```
| github_jupyter |
```
import os
import pandas as pd
import sys
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split, StratifiedKFold
import tensorflow as tf
sys.path.append("../../DNN-RE-new/src")
raw_data = pd.read_csv('raw_data/MBdata_33CLINwMiss_1KfGE_1KfCNA.csv')
def to_categorical(data, dtype=None):
val_to_cat = {}
cat = []
index = 0
for val in data:
if val not in val_to_cat:
val_to_cat[val] = index
cat.append(index)
index += 1
else:
cat.append(val_to_cat[val])
return np.array(cat)
```
# MB-1004-GE-2Hist
```
all_genes_df = pd.read_csv('raw_data/all_genes.csv')
df_4 = all_genes_df[['METABRIC_ID','CDH1', 'MKI67', 'FOXA1', 'PTEN']]
df_4.rename(columns={"CDH1": "GE_CDH1", 'MKI67':'GE_MKI67', 'FOXA1':'GE_FOXA1', 'PTEN':'GE_PTEN'}, inplace=True)
df_4_raw_data = pd.merge(df_4, raw_data, on='METABRIC_ID')
df_4_raw_data_2Hist = df_4_raw_data[(df_4_raw_data['Histological_Type'] == 'IDC') | (df_4_raw_data['Histological_Type'] == 'ILC')].copy()
df_4_raw_data_2Hist['Histological_Type'].replace({"IDC":0,"ILC":1}, inplace=True)
genes_1000_2Hist = df_4_raw_data_2Hist.iloc[:,38:1038]
genes_4_2Hist = df_4_raw_data_2Hist.iloc[:,1:5]
labels_2Hist = df_4_raw_data_2Hist.loc[:,'Histological_Type']
MB_1004_GE_2Hist = pd.concat([genes_1000_2Hist, genes_4_2Hist, labels_2Hist ], axis=1)
MB_1004_GE_2Hist["Histological_Type"].value_counts()
MB_1004_GE_2Hist.to_csv('MB-1004-GE-2Hist.csv', index=False)
```
## save
```
dataset_name = 'MB-1004-GE-2Hist'
target_col_name = 'Histological_Type'
data = MB_1004_GE_2Hist
from model.generation.helpers import init_dataset_dir
path_to_data_folder = '../../'
init_dataset_dir.run(dataset_name=dataset_name, path_to_data_folder=path_to_data_folder)
data_path = '../../' + dataset_name + '/'
data.to_csv(data_path + 'data.csv', index=False)
```
# MB-ClinP-ER
```
clin = raw_data.iloc[:,2:19]
clin.drop(["Date_Of_Diagnosis", "Last_Followup_Status", "Breast_Surgery", "ER_Status"], axis =1, inplace=True)
clin['Age_At_Diagnosis'] = clin['Age_At_Diagnosis'].astype(float)
clin["Breast_Tumour_Laterality"] = to_categorical(clin["Breast_Tumour_Laterality"])
clin['NPI'] = clin['NPI'].astype(float)
clin["Inferred_Menopausal_State"] = to_categorical(clin["Inferred_Menopausal_State"])
clin['Lymph_Nodes_Positive'] = clin['Lymph_Nodes_Positive'].astype(int)
clin["CT"] = to_categorical(clin["CT"])
clin["HT"] = to_categorical(clin["HT"])
clin["RT"] = to_categorical(clin["RT"])
clin["Grade"].replace("?", np.NaN, inplace=True)
clin.fillna(clin['Grade'].value_counts().index[0], inplace=True)
clin['Grade'] = clin['Grade'].astype(int)
clin["Size"].replace("?", np.NaN, inplace=True)
clin['Size'] = clin['Size'].astype(float)
clin['Size'].fillna(clin['Size'].mean(), inplace=True)
clin["Histological_Type"] = to_categorical(clin["Histological_Type"])
clin["Stage"].replace("?", np.NaN, inplace=True)
clin.fillna(clin['Stage'].value_counts().index[0], inplace=True)
clin['Stage'] = clin['Stage'].astype(int)
clin["Cellularity"] = to_categorical(clin["Cellularity"])
er_data_df = raw_data.loc[:,"ER_Expr"]
er_data_df.replace({"-":0,"+":1}, inplace=True)
clin["ER_Expr"] = er_data_df.values
```
## save
```
dataset_name = 'MB-ClinP-ER'
target_col_name = 'ER_Expr'
init_dataset_dir.run(dataset_name=dataset_name, path_to_data_folder=path_to_data_folder)
clin.to_csv(data_path + 'data.csv', index=False)
```
# MB-GE-ER
```
raw_data = pd.read_csv('MBdata_33CLINwMiss_1KfGE_1KfCNA.csv')
ge = raw_data.iloc[:,34:1034]
er = raw_data.loc[:, "ER_Expr"]
MB_GE_ER = pd.concat([ge, er], axis=1)
MB_GE_ER["ER_Expr"].replace({'+':1, '-': 0}, inplace=True)
MB_GE_ER.to_csv('MB-GE-ER.csv', index=False)
```
## save
```
dataset_name = 'MB-GE-ER'
target_col_name = 'ER_Expr'
data = MB_GE_ER
init_dataset_dir.run(dataset_name=dataset_name, path_to_data_folder=path_to_data_folder)
data.to_csv(data_path + 'data.csv', index=False)
```
# MB-GE-Clin-ER
```
GE_data = pd.read_csv("../../MB-GE-ER/data.csv")
ClinP_data = pd.read_csv("../../MB-ClinP-ER/data.csv")
GE_data = GE_data.drop(columns=["ER_Expr"])
GE_ClinP_data = pd.concat([GE_data, ClinP_data], axis=1)
```
## save
```
dataset_name = 'MB-GE-ClinP-ER'
target_col_name = 'ER_Expr'
init_dataset_dir.run(dataset_name=dataset_name, path_to_data_folder=path_to_data_folder)
GE_ClinP_data.to_csv(data_path + 'data.csv', index=False)
```
| github_jupyter |
Arun Das
Research Fellow
Secure AI and Autonomy Laboratory
University of Texas at San Antonio
# Rotational Invariance in Convolutional Neural Networks
Over the course of history, convolution operation has helped accelerate science and signal processing in a variety of ways. With the advent of deep learning, computer vision researchers began exploring the use of 2D and 3D convolutional neural networks (CNNs) directly on 2D or 3D images to reduce the parameters involved with fully connected deep neural networks. With large amount of data and computation at their disposal, supervised CNN learning algorithms tackled problems which were almost impossible to generalize in the past decade.
CNNs are impressive feature extractors, extracting features heirarchically from the training images during the learning process. First few layers close to the input data learns kernels related to high contrast points, edges, and lines. Layers further in the network learns to map these primitive kernels together to understand countours and other shapes. This heirarchical way of learning by representation enables complex pattern recognition that was impossible using traditional signal processing and machine learning algorithms.
Invariances in input data distribution used for training is mapped in to the CNN as weights, which are infact learned by the kernels. For example, if a face classifier is trained on images with face cropped, aligned, and centered in the center of the image, the CNN will learn to map the input pixels accordingly, and generalize on providing impressive results on faces which are preprocessed and centered properly. However, the interesting question arises on the robustness of CNNs on slighly invariant input images which are from outside the data distribution. This is where our discussion on rotational invariance starts - and in my opinion, the many questions we ask are translated from this bigger topic of robustness and safe artificial intelligence (AI).
<h2><center>How to follow this blog/report series</h2></center>
I am planning to tackle this problem of rotational invariance in 3 or 4 parts. The first part, which is this post, will cover some of the early work and foundation that we need to set up to further our research and understanding. This involves coding the CNNs, writing some generic functions to visualize the weights and activations of the CNN. We will focus on creating an easily modifiable CNN architecture so that we can add or remove layers at ease. Inorder to study rotational invariance, we must be able to rotate our training and testing data by a specific angle. Unfortunately, I didn't find a preprocessor that does that in pytorch, which works well with pytorch `transform.compose` functionalities. Hence, we will spend some time creating a custom rotation function that allows us to create training and testing data rotated at our convenience. Also, we will see how we can write a model weight and actication visualization function, that can take any kernel and output the visualizations automatically. Let me break it down below:
Part 1:
a. Model definitions
b. Custom rotation function
c. Custom weight and activation visualization function
d. Research leading to part 2: Train on 0 degree and 90 degree images separately.
Test mixed. Are there any correlations, change in accuracy, or difference
in kernels and activations?
Part 2:
a. Double down on weights. What are the fundamental differences in the
weights learned?
b. Double down on activations. Do you find any significant changes?
Are there something popping out to you? Something obvious?
c. What happens if I add another CONV layer, maxpool, or other layers?
...
...
<h2><center>Rotational Invariance in CNNs - Part 1</h2></center>
Let's start coding right away. First goal is to define the models and train one with pure MNIST dataset -> Rotation = $0^o$.
### Import Libraries
```
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable
from torchsummary import summary
import numpy as np
import matplotlib.pyplot as plt
from torchvision.utils import make_grid
import math
```
### Define the hyperparameters
We define the hyperparameters as keys in an `args` dictionary. This way, it is easy to add and remove hyperparameters, and also to use them.
```
args={}
kwargs={}
args['batch_size']=1000
args['test_batch_size']=1000
args['epochs']=20 # The number of Epochs is the number of times you go
# through the full dataset.
args['lr']=0.01 # Learning rate is how fast it will decend.
args['momentum']=0.5 # SGD momentum (default: 0.5) Momentum is a moving
# average of our gradients (helps to keep direction).
args['seed']=1 # random seed
args['log_interval']=40
args['cuda']=True # False if you don't have a CUDA w/ NVIDIA GPU available.
args['train_now']=False
```
### Define custom rotation function
```
class CustomRotation(object):
"""Rotate image by a fixed angle which is ready for tranform.Compose()
"""
def __init__(self, degrees, resample=False, expand=False, center=None):
self.degrees = degrees
self.resample = resample
self.expand = expand
self.center = center
def __call__(self, img):
return transforms.ToTensor()(
transforms.functional.rotate(
transforms.ToPILImage()(img),
self.degrees, self.resample, self.expand, self.center))
```
### Define data loaders
```
rotation = 0 # Specifies the rotation of images.
# Define the train and test loader
# Here we are adding our CustomRotation function to the transformations
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('data/', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
CustomRotation(rotation),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=args['batch_size'], shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('data/', train=False, transform=transforms.Compose([
transforms.ToTensor(),
CustomRotation(rotation),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=args['test_batch_size'], shuffle=False, **kwargs)
```
### Define the CNN model
We'll gracefully build upon existing OOP paradigms and define the CNN as a Class. Later, we could change the layers and configurations the way we want. This is our base class for the CNN model.
```
class Net(nn.Module):
#This defines the structure of the NN.
def __init__(self):
super(Net, self).__init__()
# These are all operations that we are defining.
# Unlike keras, this is not the network definition.
# This is just initialization of the variables that
# we are going to use in the `forward()` function.
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
# https://pytorch.org/docs/stable/nn.html#dropout2d
# If adjacent pixels within feature maps are strongly correlated
# (as is normally the case in early convolution layers) then
# i.i.d. dropout will not regularize the activations and will
# otherwise just result in an effective learning rate decrease.
# In this case, nn.Dropout2d() will help promote independence
# between feature maps and should be used instead.
self.conv3 = nn.Conv2d(20, 40, kernel_size=3)
self.conv4 = nn.Conv2d(40, 40, kernel_size=3)
self.conv4_drop = nn.Dropout2d()
# Fix the number of neurons in the linear (fully connected)
# layer by studying x.shape[1]*x.shape[2]*x.shape[3] in
# the `forward()` function.
self.fc1 = nn.Linear(40, 20)
self.fc2 = nn.Linear(20, 10)
def forward(self, x):
x = F.relu(
F.max_pool2d(
self.conv1(x), 2)) # stride of 2 for max pool
x = F.relu(
self.conv2_drop(
self.conv2(x)))
x = F.relu(
F.max_pool2d(
self.conv3(x), 2))
x = F.relu(
self.conv4_drop(
self.conv4(x)))
# Since input dimensions to the fully connected
# layer is set as 80 in the init function
# above, we have to reshape the CONV output to reflect that.
x = x.view(-1, x.shape[1]*x.shape[2]*x.shape[3])
# Fully Connected Layer/Activation
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
# Fully Connected Layer/Activation
x = self.fc2(x)
# Softmax gets probabilities.
return F.log_softmax(x, dim=1)
model = Net()
if args['cuda']:
model.cuda()
summary(model, (1, 28, 28))
```
We will write functions to train and test the model we created above. Here, we are taking the defined `model` variable as a global variable. Hence, we use it directly within the function, and doesn't pass it as an argument to the function. This helps in updating model parameters easily, but could be made better by adding model as a parameter to the function and thinking about ways to make it a global variable accessible from anywhere.
```
def train(epoch):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
if args['cuda']:
data, target = data.cuda(), target.cuda()
#Variables in Pytorch are differenciable.
data, target = Variable(data), Variable(target)
#This will zero out the gradients for this batch.
optimizer.zero_grad()
output = model(data)
# Calculate the loss The negative log likelihood loss.
# It is useful to train a classification problem with C classes.
loss = F.nll_loss(output, target)
#dloss/dx for every Variable
loss.backward()
#to do a one-step update on our parameter.
optimizer.step()
#Print out the loss periodically.
if batch_idx % args['log_interval'] == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.data))
def test():
model.eval()
test_loss = 0
correct = 0
for data, target in test_loader:
if args['cuda']:
data, target = data.cuda(), target.cuda()
with torch.no_grad(): # volatile was removed and now
# has no effect. Use `with torch.no_grad():` instead.
data= Variable(data)
target = Variable(target)
output = model(data)
# sum up batch loss # size_average and reduce args will
# be deprecated, please use reduction='sum' instead.
test_loss += F.nll_loss(output, target, reduction='sum').data
# get the index of the max log-probability
pred = output.data.max(1, keepdim=True)[1]
correct += pred.eq(target.data.view_as(pred)).long().cpu().sum()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
```
### Train the CNN model on normal MNIST images
We'll use stocastic gradient descend (SGD) as the optimizer and use momentum to lead the way. The hyperparameters are passed using `args` dictionary and the required key.
```
optimizer = optim.SGD(model.parameters(),
lr=args['lr'], momentum=args['momentum'])
# Training loop.
# Change `args['log_interval']` if you want to change logging behavior.
# We test the network in each epoch.
# Setting the bool `args['train_now']` to not run training all the time.
# We'll save the weights and use the saved weights instead of
# training the network everytime we load the jupyter notebook.
if args['train_now']:
for epoch in range(1, args['epochs'] + 1):
train(epoch)
test()
torch.save(model.state_dict(), 'models/model_normal_mnist.pytrh')
else:
model = Net()
if args['cuda']:
device = torch.device("cuda")
model.load_state_dict(torch.load('models/model_normal_mnist.pytrh'))
model.to(device)
else:
model.load_state_dict(torch.load('models/model_normal_mnist.pytrh'))
model.eval()
```
## Kernel weight visualizations
Inorder to understand how the network learns, it is not only important to log the training and testing accuracies but also to visualize what the network learns. As we get over the deep learning hype, we should invest time in learning the intricate features which makes these networks what they are. As a first step, we shall write a custom visualization function to plot the kernels and activations of the CNN - whatever the size. This is a key piece of code that will drive us forward and unfortunately isn't available in Pytorch or internet :) So custom indeed.
```
def custom_viz(kernels, path=None, cols=None, size=None, verbose=False):
"""Visualize weight and activation matrices learned
during the optimization process. Works for any size of kernels.
Arguments
=========
kernels: Weight or activation matrix. Must be a high dimensional
Numpy array. Tensors will not work.
path: Path to save the visualizations.
cols: Number of columns (doesn't work completely yet.)
size: Tuple input for size. For example: size=(5,5)
verbose: Print information about the input.
Example
=======
kernels = model.conv1.weight.cpu().detach().clone()
kernels = kernels - kernels.min()
kernels = kernels / kernels.max()
custom_viz(kernels, 'results/conv1_weights.png', 5)
"""
def set_size(w,h, ax=None):
""" w, h: width, height in inches """
if not ax: ax=plt.gca()
l = ax.figure.subplotpars.left
r = ax.figure.subplotpars.right
t = ax.figure.subplotpars.top
b = ax.figure.subplotpars.bottom
figw = float(w)/(r-l)
figh = float(h)/(t-b)
ax.figure.set_size_inches(figw, figh)
N = kernels.shape[0]
C = kernels.shape[1]
if verbose:
print("Shape of input: ", kernels.shape)
# If single channel kernel with HxW size,
# plot them in a row.
# Else, plot image with C number of columns.
if cols==None:
req_cols = C
elif cols:
req_cols = cols
elif C>1:
req_cols = C
total_cols = N*C
req_cols = cols
num_rows = int(np.ceil(total_cols/req_cols))
pos = range(1,total_cols + 1)
fig = plt.figure(1)
fig.tight_layout()
k=0
for i in range(kernels.shape[0]):
for j in range(kernels.shape[1]):
img = kernels[i][j]
ax = fig.add_subplot(num_rows,req_cols,pos[k])
ax.imshow(img, cmap='gray')
plt.axis('off')
k = k+1
if size:
size_h,size_w = size
set_size(size_h,size_w,ax)
if path:
plt.savefig(path, dpi=100)
plt.show()
kernels = model.conv1.weight.cpu().detach().clone()
kernels = kernels - kernels.min()
kernels = kernels / kernels.max()
custom_viz(kernels, 'results/conv1_weights.png', 4)
kernels = model.conv2.weight.cpu().detach().clone()
kernels = kernels - kernels.min()
kernels = kernels / kernels.max()
custom_viz(kernels, 'results/conv2_weights.png', cols=5)
kernels = model.conv3.weight.cpu().detach().clone()
kernels = kernels - kernels.min()
kernels = kernels / kernels.max()
custom_viz(kernels, 'results/conv3_weights.png')
kernels = model.conv4.weight.cpu().detach().clone()
kernels = kernels - kernels.min()
kernels = kernels / kernels.max()
custom_viz(kernels, 'results/conv4_weights.png')
```
## Activation Visualization
```
examples = enumerate(test_loader)
batch_idx, (example_data, example_targets) = next(examples)
def rotate_tensor(_in_tensor, plot=True):
in_tensor = _in_tensor.clone()
# Add one more channel to the beginning. Tensor shape = 1,1,28,28
in_tensor.unsqueeze_(0)
# Convert to Pytorch variable
in_tensor = Variable(in_tensor, requires_grad=True)
in_tensor_90 = in_tensor.transpose(2, 3).flip(3)
in_tensor_180 = in_tensor.flip(2).flip(3)
in_tensor_270 = in_tensor.transpose(2, 3).flip(2)
if plot:
plt.figure(1)
plt.subplot(221)
plt.gca().set_title('0 degree')
plt.imshow(in_tensor[0][0].cpu().detach().clone(), cmap='gray')
plt.subplot(222)
plt.gca().set_title('+90 degree')
plt.imshow(in_tensor_90[0][0].cpu().detach().clone(), cmap='gray')
plt.subplot(223)
plt.gca().set_title('+270 degree')
plt.imshow(in_tensor_270[0][0].cpu().detach().clone(), cmap='gray')
plt.subplot(224)
plt.gca().set_title('+180 degree')
plt.imshow(in_tensor_180[0][0].cpu().detach().clone(), cmap='gray')
plt.tight_layout()
plt.show()
return(in_tensor, in_tensor_90, in_tensor_180, in_tensor_270)
number, number_90, number_180, number_270 = rotate_tensor(example_data[4])
print("Predicted Class: ",
np.argmax(model.forward(number.cuda()).cpu().detach().numpy()))
conv1_out = model.conv1.forward(number.cuda())
custom_viz(conv1_out.cpu().detach().clone(), 'results/conv1_actv.png')
conv2_out = model.conv2.forward(conv1_out.cuda())
custom_viz(conv2_out.cpu().detach().clone(), 'results/conv2_actv.png')
conv3_out = model.conv3.forward(conv2_out.cuda())
custom_viz(conv3_out.cpu().detach().clone(), 'results/conv3_actv.png')
conv4_out = model.conv4.forward(conv3_out.cuda())
custom_viz(conv4_out.cpu().detach().clone(), 'results/conv4_actv.png')
```
# Study +90 degree rotation on MNIST
```
# Specify the rotation
rotation = 90
# Load the data
train_loader_90 = torch.utils.data.DataLoader(
datasets.MNIST('data/', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
CustomRotation(rotation),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=args['batch_size'], shuffle=True, **kwargs)
test_loader_90 = torch.utils.data.DataLoader(
datasets.MNIST('data/', train=False, transform=transforms.Compose([
transforms.ToTensor(),
CustomRotation(rotation),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=args['test_batch_size'], shuffle=False, **kwargs)
# Get some example data from test loader
examples_90 = enumerate(test_loader_90)
batch_idx, (example_data_90, example_targets_90) = next(examples_90)
# Specify and account for GPU usage
model_90 = Net()
if args['cuda']:
model_90.cuda()
# Define train and test functions as before.
# TODO: Consider adding model as an argument.
def train_90(epoch):
model_90.train()
for batch_idx, (data, target) in enumerate(train_loader_90):
if args['cuda']:
data, target = data.cuda(), target.cuda()
#Variables in Pytorch are differenciable.
data, target = Variable(data), Variable(target)
#This will zero out the gradients for this batch.
optimizer.zero_grad()
output = model_90(data)
# Calculate the loss The negative log likelihood loss.
# It is useful to train a classification problem with C classes.
loss = F.nll_loss(output, target)
#dloss/dx for every Variable
loss.backward()
#to do a one-step update on our parameter.
optimizer.step()
#Print out the loss periodically.
if batch_idx % args['log_interval'] == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader_90.dataset),
100. * batch_idx / len(train_loader_90), loss.data))
def test_90():
model_90.eval()
test_loss = 0
correct = 0
for data, target in test_loader_90:
if args['cuda']:
data, target = data.cuda(), target.cuda()
with torch.no_grad(): # volatile was removed and now
# has no effect. Use `with torch.no_grad():` instead.
data= Variable(data)
target = Variable(target)
output = model_90(data)
# sum up batch loss # size_average and reduce args will be
# deprecated, please use reduction='sum' instead.
test_loss += F.nll_loss(output, target, reduction='sum').data
# get the index of the max log-probability
pred = output.data.max(1, keepdim=True)[1]
correct += pred.eq(target.data.view_as(pred)).long().cpu().sum()
test_loss /= len(test_loader_90.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader_90.dataset),
100. * correct / len(test_loader_90.dataset)))
# Define optimizer and train the model.
# If the model is already trained, try to load the model.
# Will give an error if trained model doesn't exist.
optimizer = optim.SGD(model_90.parameters(),
lr=args['lr'], momentum=args['momentum'])
if args['train_now']:
for epoch in range(1, args['epochs'] + 1):
train_90(epoch)
test_90()
torch.save(model_90.state_dict(), 'models/model_90_mnist.pytrh')
else:
model_90 = Net()
if args['cuda']:
device = torch.device("cuda")
model_90.load_state_dict(torch.load('models/model_90_mnist.pytrh'))
model_90.to(device)
else:
model_90.load_state_dict(torch.load('models/model_90_mnist.pytrh'))
model_90.eval()
```
## Kernel Weight Visualization
```
kernels = model_90.conv1.weight.cpu().detach().clone()
kernels = kernels - kernels.min()
kernels = kernels / kernels.max()
custom_viz(kernels, 'results/conv1_weights_90.png', 4)
kernels = model_90.conv2.weight.cpu().detach().clone()
kernels = kernels - kernels.min()
kernels = kernels / kernels.max()
custom_viz(kernels, 'results/conv2_weights_90.png')
kernels = model_90.conv3.weight.cpu().detach().clone()
kernels = kernels - kernels.min()
kernels = kernels / kernels.max()
custom_viz(kernels, 'results/conv3_weights_90.png')
kernels = model.conv4.weight.cpu().detach().clone()
kernels = kernels - kernels.min()
kernels = kernels / kernels.max()
custom_viz(kernels, 'results/conv4_weights_90.png')
```
#### Activation Visualization
```
print("Predicted Class: ",
np.argmax(model_90.forward(number_90.cuda()).cpu().detach().numpy()))
```
We see that the prediction itself is wrong! Why?
#### CNN Layer 1
```
conv1_out_90 = model_90.conv1.forward(number_90.cuda())
custom_viz(conv1_out_90.cpu().detach().clone(), 'results/conv1_actv_90.png')
```
#### CNN Layer 2
```
conv2_out_90 = model_90.conv2.forward(conv1_out_90.cuda())
custom_viz(conv2_out_90.cpu().detach().clone(), 'results/conv2_actv_90.png')
```
#### CNN Layer 3
```
conv3_out_90 = model_90.conv3.forward(conv2_out_90.cuda())
custom_viz(conv3_out_90.cpu().detach().clone(), 'results/conv3_actv_90.png')
```
#### CNN Layer 4
```
conv4_out_90 = model_90.conv4.forward(conv3_out_90.cuda())
custom_viz(conv4_out_90.cpu().detach().clone(), 'results/conv4_actv_90.png')
```
# Questions for part 2
1. What happens if I try to evaluate `model` with 90 rotated MNIST,
and `model_90` with normal MNIST? Will I see a drop in accuracy?
2. What does the activations show? Are the features look similar?
3. What do you understand from the weight matrices? Can you come
to a conclusion based on them?
4. Why do you think the model predicted wrongly?
5. Come up with an architecture that can learn rotated MNIST properly.
6. What are the relationships, think.., think!
| github_jupyter |
# COVID-19 Exploratory Data Analysis
> (Almost) Everything You Want To Know About COVID-19.
- author: Devakumar kp
- comments: true
- categories: [EDA]
- permalink: /corona-eda/
- toc: true
- image: images/copied_from_nb/covid-eda-2-1.png
These visualizations were made by [Devakumar kp](https://twitter.com/imdevskp). Original notebook is [here](https://www.kaggle.com/imdevskp/covid-19-analysis-viz-prediction-comparisons).
```
#hide
# essential libraries
import json
import random
from urllib.request import urlopen
# storing and anaysis
import numpy as np
import pandas as pd
# visualization
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
import plotly.graph_objs as go
import plotly.figure_factory as ff
import folium
# color pallette
cnf = '#393e46' # confirmed - grey
dth = '#ff2e63' # death - red
rec = '#21bf73' # recovered - cyan
act = '#fe9801' # active case - yellow
# converter
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
# hide warnings
import warnings
warnings.filterwarnings('ignore')
# html embedding
from IPython.display import Javascript
from IPython.core.display import display, HTML
#hide
# importing datasets
url = 'https://raw.githubusercontent.com/imdevskp/covid_19_jhu_data_web_scrap_and_cleaning/master/covid_19_clean_complete.csv'
full_table = pd.read_csv(url,
parse_dates=['Date'])
full_table.head()
#hide
# cases
cases = ['Confirmed', 'Deaths', 'Recovered', 'Active']
# Active Case = confirmed - deaths - recovered
full_table['Active'] = full_table['Confirmed'] - full_table['Deaths'] - full_table['Recovered']
# replacing Mainland china with just China
full_table['Country/Region'] = full_table['Country/Region'].replace('Mainland China', 'China')
# filling missing values
full_table[['Province/State']] = full_table[['Province/State']].fillna('')
full_table[cases] = full_table[cases].fillna(0)
#hide
# cases in the ships
ship = full_table[full_table['Province/State'].str.contains('Grand Princess')|full_table['Province/State'].str.contains('Diamond Princess cruise ship')]
# china and the row
china = full_table[full_table['Country/Region']=='China']
row = full_table[full_table['Country/Region']!='China']
# latest
full_latest = full_table[full_table['Date'] == max(full_table['Date'])].reset_index()
china_latest = full_latest[full_latest['Country/Region']=='China']
row_latest = full_latest[full_latest['Country/Region']!='China']
# latest condensed
full_latest_grouped = full_latest.groupby('Country/Region')['Confirmed', 'Deaths', 'Recovered', 'Active'].sum().reset_index()
china_latest_grouped = china_latest.groupby('Province/State')['Confirmed', 'Deaths', 'Recovered', 'Active'].sum().reset_index()
row_latest_grouped = row_latest.groupby('Country/Region')['Confirmed', 'Deaths', 'Recovered', 'Active'].sum().reset_index()
```
# World-Wide Totals
```
#hide
temp = full_table.groupby(['Country/Region', 'Province/State'])['Confirmed', 'Deaths', 'Recovered', 'Active'].max()
# temp.style.background_gradient(cmap='Reds')
#hide_input
temp = full_table.groupby('Date')['Confirmed', 'Deaths', 'Recovered', 'Active'].sum().reset_index()
temp = temp[temp['Date']==max(temp['Date'])].reset_index(drop=True)
temp.style.background_gradient(cmap='Pastel1')
```
# Progression of Virus Over Time
```
#hide_input
# https://app.flourish.studio/visualisation/1571387/edit
HTML('''<div class="flourish-embed flourish-bar-chart-race" data-src="visualisation/1571387"><script src="https://public.flourish.studio/resources/embed.js"></script></div>''')
```
## Cumalitive Outcomes
```
#hide
temp = full_table.groupby('Date')['Recovered', 'Deaths', 'Active'].sum().reset_index()
temp = temp.melt(id_vars="Date", value_vars=['Recovered', 'Deaths', 'Active'],
var_name='Case', value_name='Count')
temp.head()
fig = px.area(temp, x="Date", y="Count", color='Case',
title='Cases over time', color_discrete_sequence = [rec, dth, act])
fig.write_image('covid-eda-2-1.png')
```

## Recovery and Mortality Rate
```
#hide
temp = full_table.groupby('Date').sum().reset_index()
# adding two more columns
temp['No. of Deaths to 100 Confirmed Cases'] = round(temp['Deaths']/temp['Confirmed'], 3)*100
temp['No. of Recovered to 100 Confirmed Cases'] = round(temp['Recovered']/temp['Confirmed'], 3)*100
# temp['No. of Recovered to 1 Death Case'] = round(temp['Recovered']/temp['Deaths'], 3)
temp = temp.melt(id_vars='Date', value_vars=['No. of Deaths to 100 Confirmed Cases', 'No. of Recovered to 100 Confirmed Cases'],
var_name='Ratio', value_name='Value')
fig = px.line(temp, x="Date", y="Value", color='Ratio', log_y=True,
title='Recovery and Mortality Rate Over The Time', color_discrete_sequence=[dth, rec])
fig.write_image('covid-eda-2-2.png')
```

## No. of Places To Which COVID-19 spread
```
#hide
c_spread = china[china['Confirmed']!=0].groupby('Date')['Province/State'].unique().apply(len)
c_spread = pd.DataFrame(c_spread).reset_index()
fig = px.line(c_spread, x='Date', y='Province/State', text='Province/State',
title='Number of Provinces/States/Regions of China to which COVID-19 spread over the time',
color_discrete_sequence=[cnf,dth, rec])
fig.update_traces(textposition='top center')
fig.write_image('covid-eda-3-1.png')
# ------------------------------------------------------------------------------------------
spread = full_table[full_table['Confirmed']!=0].groupby('Date')['Country/Region'].unique().apply(len)
spread = pd.DataFrame(spread).reset_index()
fig = px.line(spread, x='Date', y='Country/Region', text='Country/Region',
title='Number of Countries/Regions to which COVID-19 spread over the time',
color_discrete_sequence=[cnf,dth, rec])
fig.update_traces(textposition='top center')
fig.write_image('covid-eda-3-2.png')
```


# Maps
```
#hide
# Confirmed
fig = px.choropleth(full_latest_grouped, locations="Country/Region",
locationmode='country names', color="Confirmed",
hover_name="Country/Region", range_color=[1,7000],
color_continuous_scale="aggrnyl",
title='Countries with Confirmed Cases')
fig.update(layout_coloraxis_showscale=False)
fig.write_image('covid-eda-1-1.png')
#hide
# Deaths
fig = px.choropleth(full_latest_grouped[full_latest_grouped['Deaths']>0],
locations="Country/Region", locationmode='country names',
color="Deaths", hover_name="Country/Region",
range_color=[1,50], color_continuous_scale="agsunset",
title='Countries with Deaths Reported')
fig.update(layout_coloraxis_showscale=False)
fig.write_image('covid-eda-1-2.png')
```


# Top 20 Countries
```
#hide
flg = full_latest_grouped
flg.head()
#hide
fig = px.bar(flg.sort_values('Confirmed', ascending=False).head(20).sort_values('Confirmed', ascending=True),
x="Confirmed", y="Country/Region", title='Confirmed Cases', text='Confirmed', orientation='h',
width=700, height=700, range_x = [0, max(flg['Confirmed'])+10000])
fig.update_traces(marker_color=cnf, opacity=0.6, textposition='outside')
fig.write_image('covid-eda-4-1.png')
#hide
fig = px.bar(flg.sort_values('Deaths', ascending=False).head(20).sort_values('Deaths', ascending=True),
x="Deaths", y="Country/Region", title='Deaths', text='Deaths', orientation='h',
width=700, height=700, range_x = [0, max(flg['Deaths'])+500])
fig.update_traces(marker_color=dth, opacity=0.6, textposition='outside')
fig.write_image('covid-eda-4-2.png')
#hide
fig = px.bar(flg.sort_values('Recovered', ascending=False).head(20).sort_values('Recovered', ascending=True),
x="Recovered", y="Country/Region", title='Recovered', text='Recovered', orientation='h',
width=700, height=700, range_x = [0, max(flg['Recovered'])+10000])
fig.update_traces(marker_color=rec, opacity=0.6, textposition='outside')
fig.write_image('covid-eda-4-3.png')
#hide
fig = px.bar(flg.sort_values('Active', ascending=False).head(20).sort_values('Active', ascending=True),
x="Active", y="Country/Region", title='Active', text='Active', orientation='h',
width=700, height=700, range_x = [0, max(flg['Active'])+3000])
fig.update_traces(marker_color=act, opacity=0.6, textposition='outside')
fig.write_image('covid-eda-4-4.png')
#hide
# (Only countries with more than 100 case are considered)
flg['Mortality Rate'] = round((flg['Deaths']/flg['Confirmed'])*100, 2)
temp = flg[flg['Confirmed']>100]
temp = temp.sort_values('Mortality Rate', ascending=False)
fig = px.bar(temp.sort_values('Mortality Rate', ascending=False).head(15).sort_values('Mortality Rate', ascending=True),
x="Mortality Rate", y="Country/Region", text='Mortality Rate', orientation='h',
width=700, height=600, range_x = [0, 8], title='No. of Deaths Per 100 Confirmed Case')
fig.update_traces(marker_color=act, opacity=0.6, textposition='outside')
fig.write_image('covid-eda-4-5.png')
```





# Composition of Cases
```
#hide_input
fig = px.treemap(full_latest.sort_values(by='Confirmed', ascending=False).reset_index(drop=True),
path=["Country/Region", "Province/State"], values="Confirmed", height=700,
title='Number of Confirmed Cases',
color_discrete_sequence = px.colors.qualitative.Prism)
fig.data[0].textinfo = 'label+text+value'
fig.write_image('covid-eda-8-1.png')
fig = px.treemap(full_latest.sort_values(by='Deaths', ascending=False).reset_index(drop=True),
path=["Country/Region", "Province/State"], values="Deaths", height=700,
title='Number of Deaths reported',
color_discrete_sequence = px.colors.qualitative.Prism)
fig.data[0].textinfo = 'label+text+value'
fig.write_image('covid-eda-8-2.png')
```


# Epidemic Span
Note : In the graph, last day is shown as one day after the last time a new confirmed cases reported in the Country / Region
```
#hide_input
# first date
# ----------
first_date = full_table[full_table['Confirmed']>0]
first_date = first_date.groupby('Country/Region')['Date'].agg(['min']).reset_index()
# first_date.head()
from datetime import timedelta
# last date
# ---------
last_date = full_table.groupby(['Country/Region', 'Date', ])['Confirmed', 'Deaths', 'Recovered']
last_date = last_date.sum().diff().reset_index()
mask = last_date['Country/Region'] != last_date['Country/Region'].shift(1)
last_date.loc[mask, 'Confirmed'] = np.nan
last_date.loc[mask, 'Deaths'] = np.nan
last_date.loc[mask, 'Recovered'] = np.nan
last_date = last_date[last_date['Confirmed']>0]
last_date = last_date.groupby('Country/Region')['Date'].agg(['max']).reset_index()
# last_date.head()
# first_last
# ----------
first_last = pd.concat([first_date, last_date[['max']]], axis=1)
# added 1 more day, which will show the next day as the day on which last case appeared
first_last['max'] = first_last['max'] + timedelta(days=1)
# no. of days
first_last['Days'] = first_last['max'] - first_last['min']
# task column as country
first_last['Task'] = first_last['Country/Region']
# rename columns
first_last.columns = ['Country/Region', 'Start', 'Finish', 'Days', 'Task']
# sort by no. of days
first_last = first_last.sort_values('Days')
# first_last.head()
# visualization
# --------------
# produce random colors
clr = ["#"+''.join([random.choice('0123456789ABC') for j in range(6)]) for i in range(len(first_last))]
#plot
fig = ff.create_gantt(first_last, index_col='Country/Region', colors=clr, show_colorbar=False,
bar_width=0.2, showgrid_x=True, showgrid_y=True, height=1600,
title=('Gantt Chart'))
fig.write_image('covid-eda-9-1.png')
```

# China vs. Not China
```
#hide
# In China
temp = china.groupby('Date')['Confirmed', 'Deaths', 'Recovered'].sum().diff()
temp = temp.reset_index()
temp = temp.melt(id_vars="Date",
value_vars=['Confirmed', 'Deaths', 'Recovered'])
fig = px.bar(temp, x="Date", y="value", color='variable',
title='In China',
color_discrete_sequence=[cnf, dth, rec])
fig.update_layout(barmode='group')
fig.write_image('covid-eda-10-1.png')
#-----------------------------------------------------------------------------
# ROW
temp = row.groupby('Date')['Confirmed', 'Deaths', 'Recovered'].sum().diff()
temp = temp.reset_index()
temp = temp.melt(id_vars="Date",
value_vars=['Confirmed', 'Deaths', 'Recovered'])
fig = px.bar(temp, x="Date", y="value", color='variable',
title='Outside China',
color_discrete_sequence=[cnf, dth, rec])
fig.update_layout(barmode='group')
fig.write_image('covid-eda-10-2.png')
#hide
def from_china_or_not(row):
if row['Country/Region']=='China':
return 'From China'
else:
return 'Outside China'
temp = full_table.copy()
temp['Region'] = temp.apply(from_china_or_not, axis=1)
temp = temp.groupby(['Region', 'Date'])['Confirmed', 'Deaths', 'Recovered']
temp = temp.sum().diff().reset_index()
mask = temp['Region'] != temp['Region'].shift(1)
temp.loc[mask, 'Confirmed'] = np.nan
temp.loc[mask, 'Deaths'] = np.nan
temp.loc[mask, 'Recovered'] = np.nan
fig = px.bar(temp, x='Date', y='Confirmed', color='Region', barmode='group',
text='Confirmed', title='Confirmed', color_discrete_sequence= [cnf, dth, rec])
fig.update_traces(textposition='outside')
fig.write_image('covid-eda-10-3.png')
fig = px.bar(temp, x='Date', y='Deaths', color='Region', barmode='group',
text='Confirmed', title='Deaths', color_discrete_sequence= [cnf, dth, rec])
fig.update_traces(textposition='outside')
fig.update_traces(textangle=-90)
fig.write_image('covid-eda-10-4.png')
#hide
gdf = full_table.groupby(['Date', 'Country/Region'])['Confirmed', 'Deaths', 'Recovered'].max()
gdf = gdf.reset_index()
temp = gdf[gdf['Country/Region']=='China'].reset_index()
temp = temp.melt(id_vars='Date', value_vars=['Confirmed', 'Deaths', 'Recovered'],
var_name='Case', value_name='Count')
fig = px.bar(temp, x="Date", y="Count", color='Case', facet_col="Case",
title='China', color_discrete_sequence=[cnf, dth, rec])
fig.write_image('covid-eda-10-5.png')
temp = gdf[gdf['Country/Region']!='China'].groupby('Date').sum().reset_index()
temp = temp.melt(id_vars='Date', value_vars=['Confirmed', 'Deaths', 'Recovered'],
var_name='Case', value_name='Count')
fig = px.bar(temp, x="Date", y="Count", color='Case', facet_col="Case",
title='ROW', color_discrete_sequence=[cnf, dth, rec])
fig.write_image('covid-eda-10-6.png')
```





# Data By Country
### Top 50 Countries By Confirmed Cases
```
#hide_input
temp_f = full_latest_grouped.sort_values(by='Confirmed', ascending=False).head(50)
temp_f = temp_f.reset_index(drop=True)
temp_f.style.background_gradient(cmap='Reds')
```
### Top 25 Countries By Deaths Reported
```
#hide_input
temp_flg = temp_f[temp_f['Deaths']>0][['Country/Region', 'Deaths']].head(25)
temp_flg.sort_values('Deaths', ascending=False).reset_index(drop=True).style.background_gradient(cmap='Reds')
```
## Top 25 Chinese Provinces By Confirmed Cases
```
#hide_input
temp_f = china_latest_grouped[['Province/State', 'Confirmed', 'Deaths', 'Recovered']]
temp_f = temp_f.sort_values(by='Confirmed', ascending=False)
temp_f = temp_f.reset_index(drop=True)
temp_f.style.background_gradient(cmap='Pastel1_r')
```
# Related Work
1. https://www.kaggle.com/imdevskp/mers-outbreak-analysis
2. https://www.kaggle.com/imdevskp/sars-2003-outbreak-analysis
3. https://www.kaggle.com/imdevskp/western-africa-ebola-outbreak-analysis
| github_jupyter |
```
import numpy as np
import pandas as pd
from tqdm import tqdm
from rdkit import Chem
import seaborn as sns
from sklearn.cluster import AgglomerativeClustering, DBSCAN, SpectralClustering
from scipy.stats import ks_2samp, chisquare, power_divergence
import tmap, os
from faerun import Faerun
from mhfp.encoder import MHFPEncoder
from rdkit.Chem import AllChem
#from map4 import MAP4Calculator, to_mol
import matplotlib.pyplot as plt
%matplotlib inline
tqdm.pandas(ascii=True)
np.random.seed(123)
def GetJacarrdD(tmlf, VectorUint1, VectorUint2):
M = len(VectorUint1)
N = len(VectorUint2)
Jacarrd_d =[]
for fp1 in tqdm(VectorUint1, ascii=True):
for fp2 in VectorUint2:
s = tmlf.get_distance(fp1, fp2)
Jacarrd_d.append(s)
Jacarrd_d = np.array(Jacarrd_d)
return Jacarrd_d.reshape(M, N)
dim = 1024
n_clusters = 5
#https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ks_2samp.html
#https://reneshbedre.github.io/blog/chisq.html#-chi-square-%CF%872-test-for-independence-pearson-chi-square-test
from chembench import dataset
data = dataset.load_BACE() #load_ESOL, load_Lipop, load_Malaria, load_PDBF, load_HIV, load_BACE,load_BBBP
task_name = data.task_name
data_save_folder = './cluster_split_results/%s' % task_name
if not os.path.exists(data_save_folder):
os.makedirs(data_save_folder)
mols = [Chem.MolFromSmiles(s) for s in data.x]
ECFP4_fps = [AllChem.GetMorganFingerprintAsBitVect(x,2,dim) for x in tqdm(mols, ascii=True)]
ecfps = [tmap.VectorUchar(list(fp)) for fp in ECFP4_fps]
enc = tmap.Minhash(dim,seed = 42)
lf = tmap.LSHForest(dim)
lf.batch_add(enc.batch_from_binary_array(ecfps))
lf.index()
# # # Calculate the MAP4 fp
# calc = MAP4Calculator(dimensions=dim)
# fps = calc.calculate_many([to_mol(s) for s in data.x])
# # # Calculate the MHFP
# # # enc = MHFPEncoder(dim)
# # # fps = [tmap.VectorUint(enc.encode(s)) for s in data.x]
# # Initialize the LSH Forest
# lf = tmap.LSHForest(dim)
# # Add the Fingerprints to the LSH Forest and index
# lf.batch_add(fps)
# lf.index()
# # # Calculate the MAP4 fp
# calc = MAP4Calculator(dimensions=dim)
# fps = calc.calculate_many([to_mol(s) for s in data.x])
# # # Calculate the MHFP
# # # enc = MHFPEncoder(dim)
# # # fps = [tmap.VectorUint(enc.encode(s)) for s in data.x]
# # Initialize the LSH Forest
# lf = tmap.LSHForest(dim)
# # Add the Fingerprints to the LSH Forest and index
# lf.batch_add(fps)
# lf.index()
x, y, s, t, gp = tmap.layout_from_lsh_forest(lf)
X = np.array([x,y]).T
def adj_list_to_matrix(adj_list):
n = len(adj_list)
adj_matrix = np.zeros((n,n))
for i,c in enumerate(adj_list):
for (j, weight) in c:
adj_matrix[i, j] = weight
return adj_matrix
adj_csr = adj_list_to_matrix(gp.adjacency_list)
clustering = AgglomerativeClustering(n_clusters = n_clusters, connectivity = adj_csr,).fit(X)
# clustering= SpectralClustering(n_clusters = n_clusters, random_state = 2, n_init = 100).fit(X)
dft = pd.concat([pd.Series(clustering.labels_), pd.Series(x)], axis=1)
order_dict = dft.groupby(0)[1].apply(np.min).sort_values().argsort().to_dict()
clustering.labels_ = pd.Series(clustering.labels_).map(order_dict).values
pd.Series(clustering.labels_).value_counts()
mapd = {}
for k, v in pd.Series(clustering.labels_ + 1).value_counts().items():
mapd.update({k:'%s(%s)'% (k,v)})
branch_name = 'Group'
df = data.df
df = pd.DataFrame(data.y, columns = [task_name])
df[branch_name]= (clustering.labels_ + 1)
df['TMAP1'] = x
df['TMAP2'] = y
df[branch_name] = df[branch_name].map(mapd)
df['smiles'] = data.x
df[[branch_name]].to_pickle(os.path.join(data_save_folder, 'cluster_split_%s.idx' % task_name))
sns.set(style='white', font_scale = 1.3)
size = 12
palette = sns.color_palette("Set1", n_clusters)
order = df[branch_name].unique()
order.sort()
fig, axes = plt.subplots(ncols=3,figsize=(20,6))
ax1, ax2, ax3 = axes
sns.set(style="white")
_ = sns.scatterplot('TMAP1', 'TMAP2', hue = branch_name, palette = palette, hue_order = order, s = size,
data = df, ax = ax1, linewidth = 0)
ax1.legend(loc='upper right')
if data.task_type == 'regression':
num = 6
_ = sns.catplot(x = branch_name, y = task_name, kind="swarm", palette = palette,order = order, data=df, ax= ax2 , )
else:
num = 1
gb = df.groupby([branch_name, task_name]).size().unstack()
gb.columns = gb.columns.astype(int)
# _ = gb.plot(kind='bar', stacked = True, cmap = 'rainbow', ax= ax2)
gbb = gb[1]/gb[0]
gbb.plot(kind = 'bar', color = palette, ax= ax2, rot=0)
ax2.set_ylabel('Ratio(positive/negative)')
im3 = ax3.scatter(x = df.TMAP1, y = df.TMAP2, alpha = .8, c = df[task_name].tolist(), cmap = 'rainbow', s = size)
ax3.set_xlabel('TMAP1')
ax3.set_ylabel('TMAP2')
# fig.colorbar(im, ax=ax3)
lg3 = ax3.legend(*im3.legend_elements(num = num), loc="upper right", title=task_name,)
ax3.add_artist(lg3)
# fig.tight_layout()
fig.show()
plt.close(2)
plt.tight_layout()
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=0.25, hspace=None)
fig.savefig(os.path.join(data_save_folder, '%s.png' % task_name), dpi=300, format='png')
fig.savefig(os.path.join(data_save_folder, '%s.pdf' % task_name), dpi=300, format='pdf')
sns.set(style='white', font_scale = 1.2)
fig, axes = plt.subplots(ncols=2,figsize=(16,6))
ax1, ax2, = axes
fontsize = 16
if data.task_type == 'regression':
gb = df.groupby('Group')[task_name].apply(lambda x:x.values)
ks_values = []
p_values = []
for i in gb.index:
for j in gb.index:
expected = gb.loc[i]
observed = gb.loc[j]
ks, p = ks_2samp(expected, observed)
ks_values.append(ks)
p_values.append(p)
arrv = np.array(ks_values).reshape(len(gb), len(gb)).astype('float16')
arrp = np.array(p_values).reshape(len(gb), len(gb))
dfv = pd.DataFrame(arrv, index = gb.index, columns = gb.index)
dfp = pd.DataFrame(arrp, index = gb.index, columns = gb.index)
vax = sns.heatmap(dfv, annot=True, cmap = 'Greens', fmt='.3g', ax = ax1,
linewidths = 0.5, linecolor='0.9', cbar_kws={'label': 'KS value'})
vax.figure.axes[-1].yaxis.label.set_size(fontsize)
vax.collections[0].colorbar.ax.tick_params(labelsize=15) #cbar ticklabel size
pax = sns.heatmap(dfp, vmax = 0.05, annot=True, cmap = 'Greens', fmt='.3g', ax= ax2,
linewidths = 0.5, linecolor='0.9', cbar_kws={'label': 'p value', })
pax.figure.axes[-1].yaxis.label.set_size(fontsize)
pax.collections[0].colorbar.ax.tick_params(labelsize=15) #cbar ticklabel size
else:
gb = df.groupby([branch_name, task_name]).size().unstack()
gb.columns = gb.columns.astype(int)
chisq_values = []
p_values = []
for i in gb.index:
for j in gb.index:
expected = gb.loc[i].values
observed = gb.loc[j].values
# adjust the number of the expected
expected_adjust = (expected / expected.sum()) * observed.sum()
chisq, p = chisquare(expected_adjust, observed)
chisq_values.append(chisq)
p_values.append(p)
arrv = np.array(chisq_values).reshape(len(gb), len(gb)).astype('float16')
arrp = np.array(p_values).reshape(len(gb), len(gb))
dfv = pd.DataFrame(arrv, index = gb.index, columns = gb.index)
dfp = pd.DataFrame(arrp, index = gb.index, columns = gb.index)
vax = sns.heatmap(dfv, vmax = 10, annot=True, cmap = 'Greens', fmt='.3g', ax = ax1,
linewidths = 0.5, linecolor='0.9', cbar_kws={'label': 'chi-square value'})
vax.figure.axes[-1].yaxis.label.set_size(fontsize)
vax.collections[0].colorbar.ax.tick_params(labelsize=15) #cbar ticklabel size
pax = sns.heatmap(dfp, vmax = 0.05, annot=True, cmap = 'Greens', fmt='.3g', ax= ax2,
linewidths = 0.5, linecolor='0.9', cbar_kws={'label': 'p value',})
pax.figure.axes[-1].yaxis.label.set_size(fontsize)
pax.collections[0].colorbar.ax.tick_params(labelsize=15) #cbar ticklabel size
for ax in [ax1, ax2]:
ax.set_yticklabels(dfv.index, rotation=0, fontsize="15", va="center")
ax.set_xticklabels(dfv.index, rotation=0, fontsize="15", va="center")
ax.axhline(y=0, color='0.9',lw= 0.5, ls = '--')
ax.axhline(y=dfv.shape[0], color='0.9',lw= 0.5, ls = '--')
ax.autoscale()
ax.axvline(x=dfv.shape[1], color='0.9',lw= 0.5, ls = '--')
ax.axvline(x=0, color='0.9',lw= 0.5, ls = '--')
ax.set_xlabel('Group', fontsize = 16)
ax.set_ylabel('Group', fontsize = 16)
fig.tight_layout()
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=0.3, hspace=None)
fig.savefig(os.path.join(data_save_folder, '%s_stat_test.png' % task_name), dpi=300, format='png')
fig.savefig(os.path.join(data_save_folder, '%s_stat_test.pdf' % task_name), dpi=300, format='pdf')
dfv['Value'] = 'statistic value'
dfv = dfv.reset_index().set_index(['Value', 'Group'])
dfp['Value'] = 'p value'
dfp = dfp.reset_index().set_index(['Value', 'Group'])
dfv.append(dfp).to_excel(os.path.join(data_save_folder, '%s_stat_test.xlsx' % task_name))
# Now plot interactive results
if data.task_type == 'regression':
categorical=[False, True,]
else:
categorical = [True, True,]
faerun = Faerun(view="front", clear_color='#111111',coords=False) #'#ffffff'
faerun.add_scatter(
task_name,
{ "x": x,
"y": y,
"c": [data.y.reshape(-1, ), clustering.labels_],
"labels": data.x},
point_scale=5,
colormap = ['rainbow', 'Set1'],
has_legend=True,
categorical = categorical,
series_title = [task_name, branch_name],
legend_labels = [None, [(i, "%s" % (i+1)) for i in range(n_clusters)]],
shader = 'smoothCircle'
)
faerun.add_tree(task_name + "_tree", {"from": s, "to": t}, point_helper=task_name, color='#666666', ) #colors when no value
# Choose the "smiles" template to display structure on hover
faerun.plot(task_name, path = data_save_folder, template="smiles", notebook_height=750)
```
| github_jupyter |
```
from __future__ import print_function # to use Python 3 features in Python 2
%matplotlib inline
import matplotlib as mpl
from matplotlib import pyplot as plt
import numpy as np
from astropy import constants as const
```
# Line Plot
```
def gaussian(x, sigma=2):
y = (2*np.pi*sigma**2)**-0.5 * np.exp(- x**2 / (2 * sigma**2))
return y
x = np.linspace(-10,10)
y = gaussian(x)
plt.plot(x, y, label="Gaussian")
plt.title("Sample Plot #1")
plt.xlabel("x [arbitrary units]")
plt.ylabel("y [arbitrary units]")
plt.legend(loc="best")
plt.yscale("log")
```
# Scatter Plot
```
import sys
sys.path.insert(0, "../day2") # to access exoplanets.py
import exoplanets
exoplanets.download_data()
data = exoplanets.parse_data()
data.dtype.names
# pull up `plt.errorbar` documentation
plt.errorbar?
planet_distances = data["pl_orbsmax"]
planet_distances_err = np.array([data["pl_orbsmaxerr1"],
data["pl_orbsmaxerr2"] * -1])
planet_masses = data["pl_bmassj"] *(const.M_jup / const.M_earth)
planet_masses_err = np.array([data["pl_bmassjerr1"],
data["pl_bmassjerr2"] *-1])*(const.M_jup / const.M_earth)
plt.errorbar(planet_distances,
planet_masses,
fmt=".",
xerr = planet_distances_err,
yerr = planet_masses_err)
plt.xscale("log")
plt.yscale("log")
plt.xlabel("Distance to the star (AU)")
plt.ylabel("Planet mass ($M_E$)")
plt.xlim(10**-2, 10**4)
plt.ylim(10**-2, 10**4)
```
# Subplots
```
N_samples = 1000
lambda_1 = 1.5
lambda_2 = 5.0
poisson_samples_1 = np.random.poisson(lam=lambda_1, size=N_samples)
poisson_samples_2 = np.random.poisson(lam=lambda_2, size=N_samples)
bin_edges = np.arange(-.5, 11.5)
f, (ax1, ax2) = plt.subplots(1,2)
ax1.hist(poisson_samples_1, bins = bin_edges)
ax2.hist(poisson_samples_2, bins = bin_edges)
ax1.set_xlim(bin_edges.min(), bin_edges.max())
ax2.set_xlim(bin_edges.min(), bin_edges.max())
ax1.set_title("mean = " + str(lambda_1))
ax2.set_title("mean = " + str(lambda_2))
```
### Seaborn distribution plotting:
```
rc_orig = mpl.rcParams.copy()
import seaborn as sns
sns.set_style(rc = rc_orig) # keep matplotlib default aesthetics
sns.distplot?
# creates a histogram, along with a "KDE" curve,
# which estimates the shape of the distribution
f, (ax1, ax2) = plt.subplots(1,2)
sns.distplot(poisson_samples_1,
bins=bin_edges,
kde_kws={"bw":1}, # set smoothing width of KDE
ax=ax1)
sns.distplot(poisson_samples_2,
bins=bin_edges,
kde_kws={"bw":1}, # set smoothing width of KDE
ax=ax2)
ax1.set_xlim(bin_edges.min(), bin_edges.max())
ax2.set_xlim(bin_edges.min(), bin_edges.max())
ax1.set_title("mean = " + str(lambda_1))
ax2.set_title("mean = " + str(lambda_2))
```
# 2D hist
```
means = [1,2]
covariances = [[5,1],[1,1]]
data1 = np.random.multivariate_normal(mean=means, cov=covariances, size=100000)
means = [6.75, 4.5]
data2 = np.random.multivariate_normal(mean=means, cov=covariances, size=100000)
data = np.append(data1, data2, axis=0)
data = data.T
plt.scatter(data[0], data[1])
plt.hist2d(data[0], data[1], bins=100, normed=True)
plt.colorbar(label="density of points")
```
| github_jupyter |
# Structure learning with cause2e
This notebook shows how ```cause2e``` can be used for learning causal graphs. Structure learning (also called causal discovery) can be performed by the ```discovery.StructureLearner``` after reading data and specifying domain knowledge. If we only want to perform a quick exploratory search, we can use the provided reasonable default parameters for the search procedure. However, if we need to finetune the settings for the causal search after encountering problems with the default option, this notebooks shows how to do it. The search is mostly based on the ```py-causal``` package, a wrapper around the well-known JAVA ```TETRAD``` software. ```Cause2e``` aims to use ```py-causal``` algorithms only for the search itself, in order to spare the user from dealing with JAVA error messages when dealing with peripheral tasks.
### Imports
```
import os
from cause2e import path_mgr, discovery, knowledge
```
## Set up paths to data and output directories
This step is conveniently handled by the ```PathManager``` class, which avoids having to wrestle with paths throughout the multistep causal analysis. If we want to perform the analysis in a directory ```'dirname'``` that contains ```'dirname/data'``` and ```'dirname/output'``` as subdirectories, we can also use ```PathManagerQuick``` for an even easier setup. The experiment name is used for generating output files with meaningful names, in case we want to study multiple scenarios (e.g. with varying model parameters). For this analysis, we use the sprinkler dataset.
```
cwd = os.getcwd()
wd = os.path.dirname(cwd)
paths = path_mgr.PathManagerQuick(experiment_name='sprinkler',
data_name='sprinkler.csv',
directory=wd
)
```
## Initialize the StructureLearner
As in the other notebooks, we set up a ```StructureLearner``` and read our data.
```
learner = discovery.StructureLearner(paths)
learner.read_csv(index_col=0)
```
The first step in the analysis should be an assessment of which variables we are dealing with. In the sprinkler dataset, each sample tells us
- the current season
- whether it is raining
- whether our lawn sprinkler is activated
- whether our lawn is slippery
- whether our lawn is wet.
```
print(learner.variables)
```
It necessary to communicate to the ```StructureLearner``` if the variables are discrete, continuous, or both. We check how many unique values each variable takes on in our sample and deduce that all variables are discrete.
```
print(learner.data.nunique())
```
This information is passed to the ```StructureLearner``` by indicating the exact sets of discrete and continuous variables.
```
learner.discrete = set(learner.variables)
learner.continuous = set()
```
### Provide domain knowledge
Humans can often infer parts of the causal graph from domain knowledge. The nodes are always just the variables in the data, so the problem of finding the right graph comes down to selecting the right edges between them.
As a reminder: The correct causal graph has an edge from variable A to variable B if and only if variable A directly influences variable B (changing the value of variable A changes the value of variable B if we keep all other variables fixed).
There are three ways of passing domain knowledge:
- Indicate which edges must be present in the causal graph.
- Indicate which edges must not be present in the causal graph.
- Indicate a temporal order in which the variables have been created. This is then used to generate forbidden edges, since the future can never influence the past.
In this example, we only assume that the current season is directly influencing the weather and the probability that the sprinkler is on. This makes sense: During the summer, it is less likely to rain and sprinklers are more likely to be activated.
```
required = {('Season', 'Rain'), ('Season', 'Sprinkler')}
edge_creator = knowledge.EdgeCreator()
edge_creator.forbid_edges(required)
```
We pass the knowledge to the ```StructureLearner``` and check if it has been correctly received.
```
learner.set_knowledge(edge_creator)
print(learner.knowledge)
```
## Select and use a structure learning algorithm
Now that the ```StructureLearner``` has received the data and the domain knowledge, we can try to recover the original graph using causal discovery methods provided by the internally called ```py-causal``` package. There are many parameters that can be tuned (choice of algorithm, search score, independence test, hyperparameters, ...) and we can get an overview by calling some informative methods of the learner.
```
learner.show_search_algos()
learner.show_search_scores()
learner.show_independence_tests()
```
To make an informed choice, we can browse through the proposed search algorithms and decide which one fits our problem. Let us have a look at the FGES algorithm, which is a well known score-based algorithm that is suitable for a mix of continuous and discrete data. Note that it also accepts domain knowledge, which makes it a good starting point for many datasets.
```
learner.show_algo_info('fges')
```
The description tells us that we can select a search score and pass our domain knowledge. If we actually want to call the algorithm, we need to know if it requires additional hyperparameters and what they mean. These can be inspected via another utility method. Since FGES requires a score, we need to pass one to ```show_algo_params```, but it seems that the choice does not affect the output, so we just choose one at random from the above list.
```
learner.show_algo_params('fges', score_name='cg-bic-score')
```
Let us try out a possible search configuration.
```
learner.run_search(algo='fges', scoreId='cg-bic-score', maxDegree=5, faithfulnessAssumed=True, symmetricFirstStep=True)
```
The output of the search is a proposed causal graph. We can ignore the warning about stopping the Java Virtual Machine (needed by ```py-causal``` which is a wrapper around the ```TETRAD``` software that is written in Java) if we do not run into any problems. If the algorithm cannot orient all edges, we need to do this manually. Therefore, the output includes a list of all undirected edges, so we do not miss them in complicated graphs with many variables and edges. In our case, all the edges are already oriented.
The result seems reasonable:
- The weather depends on the season.
- The sprinkler use also depends on the season.
- The lawn will be wet if it rains or if the sprinkler is activated.
- The lawn will be slippery if it is wet.
We can also see that the result is automatically saved to different file formats and that our graph respects the previously indicated domain knowledge.
In order to spare users from the pain of going through all the above reading whenever they want to perform just a quick exploratory analysis, we have provided the above configuration as default arguments (FGES with CG-BIC score for possibly mixed datatypes, respecting domain knowledge, assuming faithfulness, using symmetric first step) that let us start the search without any finetuning. Just call ```run_quick_search()``` and you are good to go.
```
learner.run_quick_search()
```
In this notebook we want to show how to switch to a different algorithm, e.g. a variant of the constraint-based PC algorithm, which can be found under the name ```pc-all``` in the above listing of algorithms. The procedure is the same as above, it just requires some reading of ```py-causal```'s algorithm descriptions.
```
learner.show_algo_info('pc-all')
```
In this case, we need to pass a ```test_name``` to ```show_algo_params``` instead of a ```score_name```. Again, we pick one at random since it does not seem to change the description.
```
learner.show_algo_params('pc-all', test_name='bdeu-test')
```
Now that we know possible configuration options, we can select a few at random and check the result.
```
learner.run_search(algo='pc-all', stableFAS=True, conflictRule=1, save_graph=False)
```
The output is worse than the one with FGES, but the situation might be reversed for a different problem, so having the ability to quickly switch between algorithms and hyperparameters is a handy tool. Some algorithms are only suited for certain types of data, some cannot accept domain knowledge, some produce outputs that differ from the mixed graph format (other options such as PAG are currently not supported by ```cause2e```'s graph handling). Feel free to play around with different algorithms and configurations to explore the possibilities of causal discovery!
| github_jupyter |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.