# Introduction
Try writing some **SELECT** statements of your own to explore a large dataset of air pollution measurements.
Run the cell below to set up the feedback system.
```
# Set up feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.sql.ex2 import *
print("Setup Complete")
```
The code cell below fetches the `global_air_quality` table from the `openaq` dataset. We also preview the first five rows of the table.
```
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "openaq" dataset
dataset_ref = client.dataset("openaq", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# Construct a reference to the "global_air_quality" table
table_ref = dataset_ref.table("global_air_quality")
# API request - fetch the table
table = client.get_table(table_ref)
# Preview the first five lines of the "global_air_quality" table
client.list_rows(table, max_results=5).to_dataframe()
```
# Exercises
### 1) Units of measurement
Which countries have reported pollution levels in units of "ppm"? In the code cell below, set `first_query` to an SQL query that pulls the appropriate entries from the `country` column.
In case it's useful to see an example query, here's some code from the tutorial:
```
query = """
SELECT city
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE country = 'US'
"""
```
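As a further hint (not the exercise answer): `SELECT DISTINCT` returns each matching value only once. Here is the tutorial query with duplicates removed — this is only an illustrative sketch of the syntax:

```
# Same query as above, but each city appears at most once in the result
distinct_cities_query = """
                        SELECT DISTINCT city
                        FROM `bigquery-public-data.openaq.global_air_quality`
                        WHERE country = 'US'
                        """
```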
```
# Query to select countries with units of "ppm"
first_query = ____ # Your code goes here
# Set up the query (cancel the query if it would use too much of
# your quota, with the limit set to 1 GB)
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=1e9)
first_query_job = client.query(first_query, job_config=safe_config)
# API request - run the query, and return a pandas DataFrame
first_results = first_query_job.to_dataframe()
# View top few rows of results
print(first_results.head())
# Check your answer
q_1.check()
```
For the solution, uncomment the line below.
```
#q_1.solution()
```
### 2) High air quality
Which pollution levels were reported to be exactly 0?
- Set `zero_pollution_query` to select **all columns** of the rows where the `value` column is 0.
- Set `zero_pollution_results` to a pandas DataFrame containing the query results.
```
# Query to select all columns where pollution levels are exactly 0
zero_pollution_query = ____ # Your code goes here
# Set up the query
query_job = client.query(zero_pollution_query, job_config=safe_config)
# API request - run the query and return a pandas DataFrame
zero_pollution_results = ____ # Your code goes here
print(zero_pollution_results.head())
# Check your answer
q_2.check()
```
For the solution, uncomment the line below.
```
#q_2.solution()
```
That query wasn't too complicated, and it got the data you want. But these **SELECT** queries don't organize data in a way that answers the most interesting questions. For that, we'll need the **GROUP BY** command.
If you know how to use [`groupby()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html) in pandas, this is similar. But BigQuery works quickly with far larger datasets.
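For instance, averaging a value per country looks nearly the same in both worlds. A small sketch with a toy DataFrame (the column names here are illustrative, not pulled from the real table):

```
import pandas as pd

# Toy stand-in for query results
df = pd.DataFrame({
    "country": ["US", "US", "IN", "IN", "IN"],
    "value": [10.0, 12.0, 40.0, 35.0, 50.0],
})

# pandas: group rows by country, then average the values within each group
means = df.groupby("country")["value"].mean()
print(means)

# The equivalent SQL uses GROUP BY:
#   SELECT country, AVG(value)
#   FROM `some_table`
#   GROUP BY country
```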
Fortunately, that's next.
# Keep going
**[GROUP BY](#$NEXT_NOTEBOOK_URL$)** clauses and their extensions give you the power to pull interesting statistics out of data, rather than receiving it in just its raw format.
```
import numpy as np
import pandas as pd
import json
import shap
import matplotlib.pyplot as plt
from matplotlib import rc
from colour import Color
from matplotlib.colors import ListedColormap, LinearSegmentedColormap
import collections
import pickle
colors = ['#3f7f93','#da3b46','#F6AE2D', '#98b83b', '#825FC3']
cmp_5 = LinearSegmentedColormap.from_list('my_list', [Color(c1).rgb for c1 in colors], N=len(colors))
seed = 42
def abs_shap(df_shap, df, shap_plot, names, class_names, cmp):
''' A function to plot the bar plot for the mean abs SHAP values
arguments:
df_shap: the dataframe of the SHAP values
df: the dataframe for the feature values for which the SHAP values have been determined
shap_plot: The name of the output file for the plot
names: The names of the variables
class_names: names of the classes
cmp: the colour map
'''
rc('text', usetex=True)
plt.rcParams['text.latex.preamble'] = r"\usepackage{amsmath}"
plt.figure(figsize=(5,5))
shap.summary_plot(df_shap, df, color=cmp, class_names=class_names, class_inds='original', plot_size=(5,5), show=False)#, feature_names=names)
ax = plt.gca()
handles, labels = ax.get_legend_handles_labels()
ax.legend(reversed(handles), reversed(labels), loc='lower right', fontsize=15)
plt.xlabel(r'$\overline{|S_v|}$', fontsize=15)
ax = plt.gca()
ax.spines["top"].set_visible(True)
ax.spines["right"].set_visible(True)
ax.spines["left"].set_visible(True)
vals = ax.get_xticks()
ax.tick_params(axis='both', which='major', labelsize=15)
for tick in vals:
ax.axvline(x=tick, linestyle='dashed', alpha=0.7, color='#808080', zorder=0, linewidth=0.5)
plt.tight_layout()
plt.savefig(shap_plot, dpi=300)
rc('text', usetex=False)
def get_mclass(i, df_array, weight_array, ps_exp_class, seed=seed):
""" This function is used to create the confusion matrix
arguments:
i: integer corresponding to the class number
df_array: the array of the dataframes of the different classes
weight_array: the array of the weights for the different classes
ps_exp_class: the collection of the pseudo experiment events
seed: the seed for the random number generator
returns:
nevents: the number of events
sif: the significance
"""
mclass = []
nchannels = len(df_array)
for j in range(nchannels):
mclass.append(collections.Counter(classifier.predict(df_array[j].iloc[:,:-2].values))[i]/len(df_array[j])*weight_array[j]/weight_array[i])
sig = np.sqrt(ps_exp_class[i])*mclass[i]/np.sum(mclass)
nevents = np.round(ps_exp_class[i]/np.sum(mclass)*np.array(mclass)).astype(int)
if nchannels == 5: print('sig: {:2.2f}, klam events: {}, hhsm events: {}, tth events: {}, bbh events: {}, bbxaa events: {}'.format(sig, nevents[4], nevents[3], nevents[2], nevents[1], nevents[0]))
if nchannels == 4: print('sig: {:2.2f}, hhsm events: {}, tth events: {}, bbh events: {}, bbxaa events: {}'.format(sig, nevents[3], nevents[2], nevents[1], nevents[0]))
if nchannels == 2: print('sig: {:2.2f}, ku events: {}, hhsm events: {}'.format(sig, nevents[1], nevents[0]))
return nevents, sig
prefix = '../WORK/klm1/'
df_sig_test = pd.read_json(prefix+'test_files/sig_test.json')
df_bkg_test = pd.read_json(prefix+'test_files/bkg_test.json')
df_bbh_test = pd.read_json(prefix+'test_files/bbh_test.json')
df_tth_test = pd.read_json(prefix+'test_files/tth_test.json')
df_bbxaa_test = pd.read_json(prefix+'test_files/bbxaa_test.json')
X_shap = pd.read_json(prefix+'shapley_files/shapley_X.json')
with open(prefix+'shapley_files/shapley_values.json', 'r') as f:
shapley_values = json.load(f)['shap_values']
shapley_values = [np.array(elem) for elem in shapley_values]
weight_sig = df_sig_test['weight'].sum()
weight_bkg = df_bkg_test['weight'].mean()
weight_bbh = df_bbh_test['weight'].mean()
weight_tth = df_tth_test['weight'].mean()
weight_bbxaa = df_bbxaa_test['weight'].mean()
classifier = pickle.load(open(prefix+'hbb-BDT-5class-hhsm-klm1.csv.pickle.dat', 'rb'))
with open(prefix+'test_files/weights.json', 'r') as f:
weights = json.load(f)
class_names = [r'$bb\gamma\gamma$', r'$b\bar{b}h$', r'$t\bar{t}h$', r'$hh^{SM}$', r'$hh^{\kappa_u}$']
names = list(df_bbxaa_test.columns)[:-2]
shap_plot = '../plots/shap-klm1.pdf'
abs_shap(shapley_values, X_shap, shap_plot, names, class_names, cmp=cmp_5)
df_array = [df_bbxaa_test, df_bbh_test, df_tth_test, df_bkg_test, df_sig_test]
weight_array = [weights['weight_bbxaa']*1.5, weights['weight_bbh'],
weights['weight_tth']*1.2, weights['weight_bkg']*1.72, weights['weight_sig']*1.28]
ps_exp_class = collections.Counter(classifier.predict(pd.concat([df_array[4].iloc[:,:-2].sample(n=round(weight_array[4]), random_state=seed, replace=True),
df_array[3].iloc[:,:-2].sample(n=round(weight_array[3]), random_state=seed, replace=True),
df_array[2].iloc[:,:-2].sample(n=round(weight_array[2]), random_state=seed, replace=True),
df_array[1].iloc[:,:-2].sample(n=round(weight_array[1]), random_state=seed, replace=True),
df_array[0].iloc[:,:-2].sample(n=round(weight_array[0]), random_state=seed, replace=True)]).values))
nevents_ku, sig_ku = get_mclass(4, df_array, weight_array, ps_exp_class)
nevents_hhsm, sig_hhsm = get_mclass(3, df_array, weight_array, ps_exp_class)
nevents_tth, sig_tth = get_mclass(2, df_array, weight_array, ps_exp_class)
nevents_bbh, sig_bbh = get_mclass(1, df_array, weight_array, ps_exp_class)
nevents_bbxaa, sig_bbxaa = get_mclass(0, df_array, weight_array, ps_exp_class)
confusion = np.column_stack((nevents_ku, nevents_hhsm, nevents_tth, nevents_bbh, nevents_bbxaa))
```
<a href="https://www.bigdatauniversity.com"><img src="https://ibm.box.com/shared/static/cw2c7r3o20w9zn8gkecaeyjhgw3xdgbj.png" width="400" align="center"></a>
<h1 align=center><font size="5"> SVM (Support Vector Machines)</font></h1>
In this notebook, you will use SVM (Support Vector Machines) to build and train a model using human cell records, and classify cells according to whether the samples are benign or malignant.
SVM works by mapping data to a high-dimensional feature space so that data points can be categorized, even when the data are not otherwise linearly separable. A separator between the categories is found, then the data is transformed in such a way that the separator could be drawn as a hyperplane. Following this, characteristics of new data can be used to predict the group to which a new record should belong.
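As a small illustration of that idea, here is a sketch on synthetic, non-linearly-separable data (two concentric rings — not the cell dataset used below):

```
import numpy as np
from sklearn import svm

# Two concentric rings: no straight line in 2-D separates them
rng = np.random.RandomState(0)
angles = rng.uniform(0, 2 * np.pi, 200)
radii = np.where(np.arange(200) < 100, 1.0, 3.0)
X = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
y = (radii > 2).astype(int)

# An RBF kernel implicitly maps the points into a space where a
# hyperplane does separate the two rings
clf = svm.SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```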
<h1>Table of contents</h1>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ol>
<li><a href="#load_dataset">Load the Cancer data</a></li>
<li><a href="#modeling">Modeling</a></li>
<li><a href="#evaluation">Evaluation</a></li>
<li><a href="#practice">Practice</a></li>
</ol>
</div>
<br>
<hr>
```
import pandas as pd
import pylab as pl
import numpy as np
import scipy.optimize as opt
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
%matplotlib inline
import matplotlib.pyplot as plt
```
<h2 id="load_dataset">Load the Cancer data</h2>
The example is based on a dataset that is publicly available from the [UCI Machine Learning Repository](http://mlearn.ics.uci.edu/MLRepository.html) (Asuncion and Newman, 2007). The dataset consists of several hundred human cell sample records, each of which contains the values of a set of cell characteristics. The fields in each record are:
|Field name|Description|
|--- |--- |
|ID|Patient identifier|
|Clump|Clump thickness|
|UnifSize|Uniformity of cell size|
|UnifShape|Uniformity of cell shape|
|MargAdh|Marginal adhesion|
|SingEpiSize|Single epithelial cell size|
|BareNuc|Bare nuclei|
|BlandChrom|Bland chromatin|
|NormNucl|Normal nucleoli|
|Mit|Mitoses|
|Class|Benign or malignant|
<br>
<br>
For the purposes of this example, we're using a dataset that has a relatively small number of predictors in each record. To download the data, we will use `!wget` to download it from IBM Object Storage.
__Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 TB of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
```
#Click here and press Shift+Enter
!wget -O cell_samples.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/cell_samples.csv
```
### Load Data From CSV File
```
cell_df = pd.read_csv("cell_samples.csv")
cell_df.head()
cell_df.describe()
```
The ID field contains the patient identifiers. The characteristics of the cell samples from each patient are contained in fields Clump to Mit. The values are graded from 1 to 10, with 1 being the closest to benign.
The Class field contains the diagnosis, as confirmed by separate medical procedures, as to whether the samples are benign (value = 2) or malignant (value = 4).
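A quick way to check that encoding is to count the values in the Class column. A sketch on a tiny synthetic frame (the real data comes from the CSV above):

```
import pandas as pd

# Tiny stand-in for cell_df; on the real data use: cell_df["Class"].value_counts()
toy = pd.DataFrame({"Class": [2, 2, 4, 2, 4]})
counts = toy["Class"].value_counts()
print(counts)
```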
Let's look at the distribution of the classes based on Clump thickness and Uniformity of cell size:
```
ax = cell_df[cell_df['Class'] == 4][0:50].plot(kind='scatter', x='Clump', y='UnifSize', color='DarkBlue', label='malignant');
cell_df[cell_df['Class'] == 2][0:50].plot(kind='scatter', x='Clump', y='UnifSize', color='Yellow', label='benign', ax=ax);
plt.show()
```
## Data pre-processing and selection
Let's first look at the columns' data types:
```
cell_df.dtypes
```
It looks like the __BareNuc__ column includes some values that are not numerical. We can drop those rows:
```
cell_df = cell_df[pd.to_numeric(cell_df['BareNuc'], errors='coerce').notnull()]
cell_df['BareNuc'] = cell_df['BareNuc'].astype('int')
cell_df.dtypes
feature_df = cell_df[['Clump', 'UnifSize', 'UnifShape', 'MargAdh', 'SingEpiSize', 'BareNuc', 'BlandChrom', 'NormNucl', 'Mit']]
X = np.asarray(feature_df)
X[0:5]
```
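As an aside, here is what `pd.to_numeric(..., errors='coerce')` does on its own — a tiny standalone illustration of the cleaning step above:

```
import pandas as pd

s = pd.Series(["1", "10", "?", "5"])
converted = pd.to_numeric(s, errors="coerce")  # non-numeric "?" becomes NaN
mask = converted.notnull()                     # True only for the numeric rows
print(converted.tolist(), mask.tolist())
```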
We want the model to predict the value of Class (that is, benign (=2) or malignant (=4)). As this field can have one of only two possible values, we need to change its measurement level to reflect this.
```
cell_df['Class'] = cell_df['Class'].astype('int')
y = np.asarray(cell_df['Class'])
y [0:5]
```
## Train/Test dataset
Okay, we split our dataset into train and test sets:
```
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4)
print ('Train set:', X_train.shape, y_train.shape)
print ('Test set:', X_test.shape, y_test.shape)
```
<h2 id="modeling">Modeling (SVM with Scikit-learn)</h2>
The SVM algorithm offers a choice of kernel functions for performing its processing. Basically, mapping data into a higher dimensional space is called kernelling. The mathematical function used for the transformation is known as the kernel function, and can be of different types, such as:
1. Linear
2. Polynomial
3. Radial basis function (RBF)
4. Sigmoid
Each of these functions has its characteristics, its pros and cons, and its equation, but as there's no easy way of knowing which function performs best with any given dataset, we usually choose different functions in turn and compare the results. Let's just use the default, RBF (Radial Basis Function) for this lab.
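If you do want to try each kernel in turn, the loop could look like the sketch below — shown on synthetic data, not this lab's dataset, so the scores are illustrative only:

```
from sklearn import svm
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem
X, y = make_classification(n_samples=300, n_features=6, random_state=4)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=4)

# Fit one SVM per kernel and compare held-out accuracy
scores = {}
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = svm.SVC(kernel=kernel).fit(X_tr, y_tr)
    scores[kernel] = clf.score(X_te, y_te)
print(scores)
```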
```
from sklearn import svm
clf = svm.SVC(kernel='rbf')
clf.fit(X_train, y_train)
```
After being fitted, the model can then be used to predict new values:
```
yhat = clf.predict(X_test)
yhat [0:5]
```
<h2 id="evaluation">Evaluation</h2>
```
from sklearn.metrics import classification_report, confusion_matrix
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, yhat, labels=[2,4])
np.set_printoptions(precision=2)
print (classification_report(y_test, yhat))
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Benign(2)','Malignant(4)'],normalize= False, title='Confusion matrix')
```
You can also easily use the __f1_score__ from sklearn library:
```
from sklearn.metrics import f1_score
f1_score(y_test, yhat, average='weighted')
```
Let's try the Jaccard index for accuracy:
```
from sklearn.metrics import jaccard_similarity_score
jaccard_similarity_score(y_test, yhat)
```
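Note that `jaccard_similarity_score` was deprecated and removed in newer scikit-learn releases; the replacement is `jaccard_score`, which for binary problems needs the positive label to be specified. A sketch:

```
from sklearn.metrics import jaccard_score

y_true = [2, 2, 4, 4]
y_pred = [2, 4, 4, 4]
# Treat malignant (4) as the positive class
print(jaccard_score(y_true, y_pred, pos_label=4))
```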
<h2 id="practice">Practice</h2>
Can you rebuild the model, but this time with a __linear__ kernel? You can use the __kernel='linear'__ option when you define the SVM. How does the accuracy change with the new kernel function?
```
# write your code here
```
Double-click __here__ for the solution.
<!-- Your answer is below:
clf2 = svm.SVC(kernel='linear')
clf2.fit(X_train, y_train)
yhat2 = clf2.predict(X_test)
print("Avg F1-score: %.4f" % f1_score(y_test, yhat2, average='weighted'))
print("Jaccard score: %.4f" % jaccard_similarity_score(y_test, yhat2))
-->
<h2>Want to learn more?</h2>
IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: <a href="http://cocl.us/ML0101EN-SPSSModeler">SPSS Modeler</a>
Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href="https://cocl.us/ML0101EN_DSX">Watson Studio</a>
<h3>Thanks for completing this lesson!</h3>
<h4>Author: <a href="https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a></h4>
<p><a href="https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a>, PhD is a Data Scientist in IBM with a track record of developing enterprise level applications that substantially increases clients’ ability to turn data into actionable knowledge. He is a researcher in data mining field and expert in developing advanced analytic methods like machine learning and statistical modelling on large datasets.</p>
<hr>
<p>Copyright © 2018 <a href="https://cocl.us/DX0108EN_CC">Cognitive Class</a>. This notebook and its source code are released under the terms of the <a href="https://bigdatauniversity.com/mit-license/">MIT License</a>.</p>
# Use Spark to predict credit risk with `ibm-watson-machine-learning`
This notebook introduces commands for model persistence to the Watson Machine Learning repository, model deployment, and scoring.
Some familiarity with Python is helpful. This notebook uses Python 3.6 and Apache® Spark 2.4.
You will use the **German Credit Risk** dataset.
## Learning goals
The learning goals of this notebook are:
- Load a CSV file into an Apache® Spark DataFrame.
- Explore data.
- Prepare data for training and evaluation.
- Persist a pipeline and model in Watson Machine Learning repository from tar.gz files.
- Deploy a model for online scoring using the Watson Machine Learning API.
- Score sample scoring data using the Watson Machine Learning API.
- Explore and visualize prediction result using the plotly package.
## Contents
This notebook contains the following parts:
1. [Set up](#setup)
2. [Load and explore data](#load)
3. [Persist model](#persistence)
4. [Predict locally](#visualization)
5. [Deploy and score](#scoring)
6. [Clean up](#cleanup)
7. [Summary and next steps](#summary)
<a id="setup"></a>
## 1. Set up the environment
Before you use the sample code in this notebook, you must perform the following setup tasks:
- Contact your Cloud Pak for Data administrator and ask for your account credentials
### Connection to WML
Authenticate the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform `url`, your `username`, and your `password`.
```
username = 'PASTE YOUR USERNAME HERE'
password = 'PASTE YOUR PASSWORD HERE'
url = 'PASTE THE PLATFORM URL HERE'
wml_credentials = {
"username": username,
"password": password,
"url": url,
"instance_id": 'openshift',
"version": '3.5'
}
```
### Install and import the `ibm-watson-machine-learning` package
**Note:** `ibm-watson-machine-learning` documentation can be found <a href="http://ibm-wml-api-pyclient.mybluemix.net/" target="_blank" rel="noopener noreferrer">here</a>.
```
!pip install -U ibm-watson-machine-learning
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)
```
### Working with spaces
First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use `{PLATFORM_URL}/ml-runtime/spaces?context=icp4data` to create one.
- Click New Deployment Space
- Create an empty space
- Go to space `Settings` tab
- Copy `space_id` and paste it below
**Tip**: You can also use SDK to prepare the space for your work. More information can be found [here](https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd3.5/notebooks/python_sdk/instance-management/Space%20management.ipynb).
**Action**: Assign space ID below
```
space_id = 'PASTE YOUR SPACE ID HERE'
```
You can use `list` method to print all existing spaces.
```
client.spaces.list(limit=10)
```
To be able to interact with all resources available in Watson Machine Learning, you need to set **space** which you will be using.
```
client.set.default_space(space_id)
```
### Test Spark
```
try:
from pyspark.sql import SparkSession
except:
print('Error: Spark runtime is missing. If you are using Watson Studio change the notebook runtime to Spark.')
raise
```
<a id="load"></a>
## 2. Load and explore data
In this section you will load the data as an Apache® Spark DataFrame and perform a basic exploration.
The csv file for German Credit Risk is available in the same repository as this notebook. Load the file into an Apache® Spark DataFrame using the code below.
```
import os
from wget import download
sample_dir = 'spark_sample_model'
if not os.path.isdir(sample_dir):
os.mkdir(sample_dir)
filename = os.path.join(sample_dir, 'credit_risk_training.csv')
if not os.path.isfile(filename):
filename = download('https://github.com/IBM/watson-machine-learning-samples/raw/master/cpd3.5/data/credit_risk/credit_risk_training.csv', out=sample_dir)
spark = SparkSession.builder.getOrCreate()
df_data = spark.read\
.format('org.apache.spark.sql.execution.datasources.csv.CSVFileFormat')\
.option('header', 'true')\
.option('inferSchema', 'true')\
.load(filename)
```
Explore the loaded data by using the following Apache® Spark DataFrame methods:
- print schema
- print top ten records
- count all records
```
df_data.printSchema()
```
As you can see, the data contains 21 fields. The Risk field is the one we would like to predict (the label).
```
df_data.show(n=5, truncate=False, vertical=True)
print("Number of records: " + str(df_data.count()))
```
As you can see, the data set contains 5000 records.
### 2.1 Prepare data
In this subsection you will split your data into train, test, and predict datasets.
```
splitted_data = df_data.randomSplit([0.8, 0.18, 0.02], 24)
train_data = splitted_data[0]
test_data = splitted_data[1]
predict_data = splitted_data[2]
print("Number of training records: " + str(train_data.count()))
print("Number of testing records : " + str(test_data.count()))
print("Number of prediction records : " + str(predict_data.count()))
```
As you can see our data has been successfully split into three datasets:
- The train data set, which is the largest group, is used for training.
- The test data set will be used for model evaluation and is used to test the assumptions of the model.
- The predict data set will be used for prediction.
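Outside of Spark, the same three-way split can be sketched with plain numpy (illustrative only; note that Spark's `randomSplit` treats the list as weights, so the result is approximate in both cases):

```
import numpy as np

rng = np.random.RandomState(24)
n = 5000
# Assign each row index to bucket 0/1/2 with probabilities 0.8 / 0.18 / 0.02
bucket = rng.choice(3, size=n, p=[0.8, 0.18, 0.02])
train_idx, test_idx, predict_idx = (np.where(bucket == k)[0] for k in range(3))
print(len(train_idx), len(test_idx), len(predict_idx))
```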
<a id="persistence"></a>
## 3. Persist model
In this section you will learn how to store your pipeline and model in Watson Machine Learning repository by using python client libraries.
**Note**: Apache® Spark 2.4 is required.
### 3.1: Save pipeline and model
In this subsection you will learn how to save pipeline and model artifacts to your Watson Machine Learning instance.
**Download pipeline and model archives**
```
import os
from wget import download
sample_dir = 'spark_sample_model'
if not os.path.isdir(sample_dir):
os.mkdir(sample_dir)
pipeline_filename = os.path.join(sample_dir, 'credit_risk_spark_pipeline.tar.gz')
if not os.path.isfile(pipeline_filename):
pipeline_filename = download('https://github.com/IBM/watson-machine-learning-samples/raw/master/cpd3.5/models/spark/credit-risk/model/credit_risk_spark_pipeline.tar.gz', out=sample_dir)
model_filename = os.path.join(sample_dir, 'credit_risk_spark_model.gz')
if not os.path.isfile(model_filename):
model_filename = download('https://github.com/IBM/watson-machine-learning-samples/raw/master/cpd3.5/models/spark/credit-risk/model/credit_risk_spark_model.gz', out=sample_dir)
```
**Store pipeline and model**
To be able to store your Spark model, you need to provide a training data reference; this allows the model schema to be read automatically.
```
training_data_references = [
{
"type": "fs",
"connection": {},
"location": {},
"schema": {
"id": "training_schema",
"fields": [
{
"metadata": {},
"name": "CheckingStatus",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "LoanDuration",
"nullable": True,
"type": "integer"
},
{
"metadata": {},
"name": "CreditHistory",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "LoanPurpose",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "LoanAmount",
"nullable": True,
"type": "integer"
},
{
"metadata": {},
"name": "ExistingSavings",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "EmploymentDuration",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "InstallmentPercent",
"nullable": True,
"type": "integer"
},
{
"metadata": {},
"name": "Sex",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "OthersOnLoan",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "CurrentResidenceDuration",
"nullable": True,
"type": "integer"
},
{
"metadata": {},
"name": "OwnsProperty",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "Age",
"nullable": True,
"type": "integer"
},
{
"metadata": {},
"name": "InstallmentPlans",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "Housing",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "ExistingCreditsCount",
"nullable": True,
"type": "integer"
},
{
"metadata": {},
"name": "Job",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "Dependents",
"nullable": True,
"type": "integer"
},
{
"metadata": {},
"name": "Telephone",
"nullable": True,
"type": "string"
},
{
"metadata": {},
"name": "ForeignWorker",
"nullable": True,
"type": "string"
},
{
"metadata": {
"modeling_role": "target"
},
"name": "Risk",
"nullable": True,
"type": "string"
}
]
}
}
]
published_model_details = client.repository.store_model(
model=model_filename,
meta_props={
client.repository.ModelMetaNames.NAME:'Credit Risk model',
client.repository.ModelMetaNames.TYPE: "mllib_2.4",
client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: client.software_specifications.get_id_by_name('spark-mllib_2.4'),
client.repository.ModelMetaNames.TRAINING_DATA_REFERENCES: training_data_references,
client.repository.ModelMetaNames.LABEL_FIELD: "Risk",
},
training_data=train_data,
pipeline=pipeline_filename)
model_uid = client.repository.get_model_uid(published_model_details)
print(model_uid)
client.repository.get_model_details(model_uid)
```
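The long schema literal above can also be generated from a `(name, type)` list. A sketch — the field names come from the dataset, but the helper function itself is our own, not part of the `ibm-watson-machine-learning` API:

```
def make_training_reference(columns, label):
    """Build the training_data_references structure from (name, type) pairs."""
    fields = []
    for name, col_type in columns:
        field = {"metadata": {}, "name": name, "nullable": True, "type": col_type}
        if name == label:
            # Mark the label column so WML knows the prediction target
            field["metadata"]["modeling_role"] = "target"
        fields.append(field)
    return [{
        "type": "fs",
        "connection": {},
        "location": {},
        "schema": {"id": "training_schema", "fields": fields},
    }]

# Abbreviated column list for illustration; the real one has 21 entries
refs = make_training_reference(
    [("CheckingStatus", "string"), ("LoanDuration", "integer"), ("Risk", "string")],
    label="Risk",
)
```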
Get saved model metadata from Watson Machine Learning.
**Tip**: Use `client.repository.ModelMetaNames.show()` to get the list of available props.
```
client.repository.ModelMetaNames.show()
```
### 3.2: Load model
In this subsection you will learn how to load back saved model from specified instance of Watson Machine Learning.
```
loaded_model = client.repository.load(model_uid)
```
You can print for example model name to make sure that model has been loaded correctly.
```
print(type(loaded_model))
```
<a id="visualization"></a>
## 4. Predict locally
In this section you will learn how to score test data using loaded model.
### 4.1: Make local prediction using previously loaded model and test data
In this subsection you will score *predict_data* data set.
```
predictions = loaded_model.transform(predict_data)
```
Preview the results by calling the *show()* method on the predictions DataFrame.
```
predictions.show(5)
```
By tabulating a count, you can see how the predictions are distributed across the labels.
```
predictions.select("predictedLabel").groupBy("predictedLabel").count().show(truncate=False)
```
<a id="scoring"></a>
## 5. Deploy and score
In this section you will learn how to create online scoring and to score a new data record using `ibm-watson-machine-learning`.
**Note:** You can also use REST API to deploy and score.
For more information about REST APIs, see the [Swagger Documentation](https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Deployments/deployments_create).
### 5.1: Create online scoring endpoint
Now you can create an online scoring endpoint.
#### Create online deployment for published model
```
deployment_details = client.deployments.create(
model_uid,
meta_props={
client.deployments.ConfigurationMetaNames.NAME: "Credit Risk model deployment",
client.deployments.ConfigurationMetaNames.ONLINE: {}
}
)
deployment_details
```
Now, you can send new scoring records (new data) for which you would like to get predictions. To do that, execute the following sample code:
```
fields = ["CheckingStatus", "LoanDuration", "CreditHistory", "LoanPurpose", "LoanAmount", "ExistingSavings",
"EmploymentDuration", "InstallmentPercent", "Sex", "OthersOnLoan", "CurrentResidenceDuration",
"OwnsProperty", "Age", "InstallmentPlans", "Housing", "ExistingCreditsCount", "Job", "Dependents",
"Telephone", "ForeignWorker"]
values = [
["no_checking", 13, "credits_paid_to_date", "car_new", 1343, "100_to_500", "1_to_4", 2, "female", "none", 3,
"savings_insurance", 46, "none", "own", 2, "skilled", 1, "none", "yes"],
["no_checking", 24, "prior_payments_delayed", "furniture", 4567, "500_to_1000", "1_to_4", 4, "male", "none",
4, "savings_insurance", 36, "none", "free", 2, "management_self-employed", 1, "none", "yes"],
["0_to_200", 26, "all_credits_paid_back", "car_new", 863, "less_100", "less_1", 2, "female", "co-applicant",
2, "real_estate", 38, "none", "own", 1, "skilled", 1, "none", "yes"],
["0_to_200", 14, "no_credits", "car_new", 2368, "less_100", "1_to_4", 3, "female", "none", 3, "real_estate",
29, "none", "own", 1, "skilled", 1, "none", "yes"],
["0_to_200", 4, "no_credits", "car_new", 250, "less_100", "unemployed", 2, "female", "none", 3,
"real_estate", 23, "none", "rent", 1, "management_self-employed", 1, "none", "yes"],
["no_checking", 17, "credits_paid_to_date", "car_new", 832, "100_to_500", "1_to_4", 2, "male", "none", 2,
"real_estate", 42, "none", "own", 1, "skilled", 1, "none", "yes"],
["no_checking", 33, "outstanding_credit", "appliances", 5696, "unknown", "greater_7", 4, "male",
"co-applicant", 4, "unknown", 54, "none", "free", 2, "skilled", 1, "yes", "yes"],
["0_to_200", 13, "prior_payments_delayed", "retraining", 1375, "100_to_500", "4_to_7", 3, "male", "none", 3,
"real_estate", 37, "none", "own", 2, "management_self-employed", 1, "none", "yes"]
]
payload_scoring = {"input_data": [{"fields": fields, "values": values}]}
deployment_id = client.deployments.get_id(deployment_details)
client.deployments.score(deployment_id, payload_scoring)
```
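If your new records live as dictionaries, the scoring payload can be assembled programmatically. A sketch (the helper is ours; `fields` must match the training columns and their order):

```
def build_scoring_payload(records, fields):
    """Turn a list of {column: value} dicts into the WML input_data payload."""
    values = [[rec[f] for f in fields] for rec in records]
    return {"input_data": [{"fields": fields, "values": values}]}

# Abbreviated field list for illustration
fields = ["CheckingStatus", "LoanDuration"]
records = [{"CheckingStatus": "no_checking", "LoanDuration": 13}]
payload = build_scoring_payload(records, fields)
```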
<a id="cleanup"></a>
## 6. Clean up
If you want to clean up all created assets:
- experiments
- trainings
- pipelines
- model definitions
- models
- functions
- deployments
please follow this sample [notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd3.5/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb).
<a id="summary"></a>
## 7. Summary and next steps
You successfully completed this notebook! You learned how to use Apache Spark machine learning as well as Watson Machine Learning for model creation and deployment.
Check out our [Online Documentation](https://dataplatform.cloud.ibm.com/docs/content/analyze-data/wml-setup.html) for more samples, tutorials, documentation, how-tos, and blog posts.
### Authors
**Amadeusz Masny**, Python Software Developer in Watson Machine Learning at IBM
Copyright © 2020, 2021 IBM. This notebook and its source code are released under the terms of the MIT License.
```
%pylab inline
import pandas as pd
import numpy as np
import pickle,itertools,sys,pdb
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import graphviz
from ultron.factor.genetic.accumulators import mutated_pool, cross_pool
from ultron.sentry.Analysis.SecurityValueHolders import SecurityValueHolder
from ultron.sentry.Analysis.SecurityValueHolders import SecurityCombinedValueHolder
from ultron.sentry.Analysis.SecurityValueHolders import SecurityLatestValueHolder
from ultron.sentry.Analysis.SecurityValueHolders import SecurityCurrentValueHolder
from ultron.sentry.Analysis.TechnicalAnalysis.StatelessTechnicalAnalysers import SecurityDiffValueHolder
from ultron.sentry.Analysis.TechnicalAnalysis.StatelessTechnicalAnalysers import SecuritySignValueHolder
from ultron.sentry.Analysis.TechnicalAnalysis.StatelessTechnicalAnalysers import SecurityExpValueHolder
from ultron.sentry.Analysis.TechnicalAnalysis.StatelessTechnicalAnalysers import SecurityLogValueHolder
from ultron.sentry.Analysis.TechnicalAnalysis.StatelessTechnicalAnalysers import SecuritySqrtValueHolder
from ultron.sentry.Analysis.TechnicalAnalysis.StatelessTechnicalAnalysers import SecurityAbsValueHolder
from ultron.sentry.Analysis.TechnicalAnalysis.StatelessTechnicalAnalysers import SecurityNormInvValueHolder
from ultron.sentry.Analysis.TechnicalAnalysis.StatelessTechnicalAnalysers import SecurityCeilValueHolder
from ultron.sentry.Analysis.TechnicalAnalysis.StatelessTechnicalAnalysers import SecurityFloorValueHolder
from ultron.sentry.Analysis.TechnicalAnalysis.StatelessTechnicalAnalysers import SecurityRoundValueHolder
from ultron.sentry.Analysis.TechnicalAnalysis.StatelessTechnicalAnalysers import SecuritySigmoidValueHolder
from ultron.sentry.Analysis.TechnicalAnalysis.StatelessTechnicalAnalysers import SecurityTanhValueHolder
from ultron.sentry.Analysis.CrossSectionValueHolders import CSRankedSecurityValueHolder
from ultron.sentry.Analysis.CrossSectionValueHolders import CSZScoreSecurityValueHolder
from ultron.sentry.Analysis.CrossSectionValueHolders import CSPercentileSecurityValueHolder
from ultron.sentry.Analysis.CrossSectionValueHolders import CSResidueSecurityValueHolder
from ultron.sentry.Analysis.SecurityValueHolders import SecurityAddedValueHolder
from ultron.sentry.Analysis.SecurityValueHolders import SecuritySubbedValueHolder
from ultron.sentry.Analysis.SecurityValueHolders import SecurityMultipliedValueHolder
from ultron.sentry.Analysis.SecurityValueHolders import SecurityDividedValueHolder
from ultron.sentry.Analysis.SecurityValueHolders import SecurityLtOperatorValueHolder
from ultron.sentry.Analysis.SecurityValueHolders import SecurityLeOperatorValueHolder
from ultron.sentry.Analysis.SecurityValueHolders import SecurityGtOperatorValueHolder
from ultron.sentry.Analysis.SecurityValueHolders import SecurityGeOperatorValueHolder
from ultron.sentry.Analysis.SecurityValueHolders import SecurityEqOperatorValueHolder
from ultron.sentry.Analysis.SecurityValueHolders import SecurityNeOperatorValueHolder
from ultron.sentry.Analysis.SecurityValueHolders import SecurityAndOperatorValueHolder
from ultron.sentry.Analysis.SecurityValueHolders import SecurityOrOperatorValueHolder
# Load the operator pools
mutated_list = list(mutated_pool.values())
cross_list = list(cross_pool.values())
with open('factor_data.pkl','rb') as file2:
total_data = pickle.load(file2)
facotr_sets = [i for i in list(set(total_data.columns)) if i not in ['trade_date','code','ret']]
# Merge the function sets
mutated_sets = [{'activy':1,'function': f} for f in mutated_list]
cross_sets = [{'activy':2,'function': f} for f in cross_list]
function_sets = mutated_sets + cross_sets
def calcu_program(max_depth=4):
n_features = 2
function_obj = function_sets[np.random.randint(0,len(function_sets)-1)] # 随机选择函数
program = [function_obj]
terminal_stack = [function_obj['activy']]
while terminal_stack:
depth = len(terminal_stack)
choice = n_features + len(function_sets)
choice = np.random.randint(0,choice)
if depth < max_depth and choice <= len(function_sets):
function_obj = function_sets[np.random.randint(0,len(function_sets)-1)]
program.append(function_obj)
terminal_stack.append(function_obj['activy'])
else:
factor = facotr_sets[np.random.randint(0,len(facotr_sets)-1)]
program.append(factor)
terminal_stack[-1] -= 1
while terminal_stack[-1] == 0:
terminal_stack.pop()
if not terminal_stack:
return program
terminal_stack[-1] -= 1
def draw_program(program):
fade_nodes = None
terminals = []
if fade_nodes is None:
fade_nodes = []
output = 'digraph program {\nnode [style=filled]\n'
for i, node in enumerate(program):
fill = '#cecece'
if node in function_sets:
if i not in fade_nodes:
fill = '#2a5caa'
terminals.append([node['activy'], i])
output += ('%d [label="%s", fillcolor="%s"] ;\n'
% (i, node['function'].__name__, fill))
else:
if i not in fade_nodes:
fill = '#60a6f6'
if node in facotr_sets:
feature_name = node
else:
feature_name = 'X%s' % node
output += ('%d [label="%s", fillcolor="%s"] ;\n'
% (i, feature_name, fill))
if i == 0 :
output += '}'
return output
terminals[-1][0] -= 1
terminals[-1].append(i)
while terminals[-1][0] == 0:
output += '%d -> %d ;\n' % (terminals[-1][1],
terminals[-1][-1])
terminals[-1].pop()
if len(terminals[-1]) == 2:
parent = terminals[-1][-1]
terminals.pop()
if not terminals:
output += '}'
return output
terminals[-1].append(parent)
terminals[-1][0] -= 1
graph = graphviz.Source(draw_program(calcu_program()))
graph
graph.render('test-table3.gv', view=True)
```
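`calcu_program` above emits a random program in prefix order, using `terminal_stack` to track how many arguments each pending function still needs. The same bookkeeping, reduced to dummy unary/binary operators (stand-ins for the `ultron` mutated/cross pools), can be sketched as:

```python
import numpy as np

# Dummy operators: 'activy' is the arity (1 = unary, 2 = binary)
function_sets = [{"activy": 1, "function": abs}, {"activy": 2, "function": max}]
factor_sets = ["f1", "f2", "f3"]

def random_program(max_depth=4, rng=np.random):
    func = function_sets[rng.randint(len(function_sets))]
    program, stack = [func], [func["activy"]]
    while stack:
        depth = len(stack)
        choice = rng.randint(len(function_sets) + len(factor_sets))
        if depth < max_depth and choice < len(function_sets):
            # Grow the tree: push a function and record its arity
            func = function_sets[choice]
            program.append(func)
            stack.append(func["activy"])
        else:
            # Close a branch with a terminal (a factor name)
            program.append(factor_sets[rng.randint(len(factor_sets))])
            stack[-1] -= 1
            while stack[-1] == 0:
                stack.pop()
                if not stack:
                    return program
                stack[-1] -= 1

def is_valid(program):
    # A prefix program is valid when arities and terminals balance to zero
    need = 1
    for node in program:
        need -= 1
        if isinstance(node, dict):
            need += node["activy"]
    return need == 0

prog = random_program()
assert is_valid(prog)
```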
# Breeding computations
```
program = calcu_program()
graphviz.Source(draw_program(program))
def get_subtree(program):
probs = np.array([0.9 if node in function_sets else 0.1 for node in program])
probs = np.cumsum(probs / probs.sum())
start = np.searchsorted(probs, np.random.uniform(0, 1))
stack = 1
end = start
while stack > end - start:
node = program[end]
if node in function_sets:
stack += node['activy']
end += 1
return start, end
```
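`get_subtree` relies on the fact that a subtree in prefix order is complete exactly when the number of consumed nodes equals the running arity count. A self-contained version of that walk over toy nodes (plain dicts standing in for the value-holder functions) makes the invariant easy to verify:

```python
# Toy nodes: dicts are functions with arity 'activy', strings are terminals
ADD = {"activy": 2}
NEG = {"activy": 1}
program = [ADD, NEG, "x", ADD, "y", "z"]   # add(neg(x), add(y, z))

def subtree_span(program, start):
    # Walk forward until the subtree rooted at `start` is complete:
    # `stack` counts how many nodes the subtree still owes.
    stack, end = 1, start
    while stack > end - start:
        node = program[end]
        if isinstance(node, dict):
            stack += node["activy"]
        end += 1
    return start, end

assert subtree_span(program, 0) == (0, 6)   # whole tree
assert subtree_span(program, 1) == (1, 3)   # neg(x)
assert subtree_span(program, 3) == (3, 6)   # add(y, z)
```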
## Crossover
```
copy_program = program
donor_program = calcu_program()
start, end = get_subtree(copy_program)
removed = range(start, end)
donor_start, donor_end = get_subtree(donor_program)
donor_removed = list(set(range(len(donor_program))) -
set(range(donor_start, donor_end)))
crossover_program = copy_program[:start] + donor_program[donor_start:donor_end] + copy_program[end:]
graphviz.Source(draw_program(crossover_program))
```
## Mutation
#### Subtree mutation - equivalent to crossover
```
copy_program = program
chicken_program = calcu_program()
start, end = get_subtree(copy_program)
removed = range(start, end)
chicken_start, chicken_end = get_subtree(chicken_program)
chicken_removed = list(set(range(len(chicken_program))) -
set(range(chicken_start, chicken_end)))
crossover_program = copy_program[:start] + chicken_program[chicken_start:chicken_end] + copy_program[end:]
graphviz.Source(draw_program(crossover_program))
```
#### Hoist mutation
```
copy_program = program
start, end = get_subtree(copy_program)
subtree = program[start:end]
sub_start, sub_end = get_subtree(subtree)
hoist = subtree[sub_start:sub_end]
hoist_program = copy_program[:start] + hoist + copy_program[end:]
graphviz.Source(draw_program(hoist_program))
```
#### Point mutation
```
copy_program = list(program)  # shallow copy so the original program is not mutated
mutate = np.where(np.random.uniform(size=len(copy_program)) < 0.5)[0]
for node in mutate:
if copy_program[node] in function_sets:
activy = copy_program[node]['activy']
        # Replace with an operator of the same arity
if activy == 1:
replace_node = mutated_sets[np.random.randint(0,len(mutated_sets)-1)]
else:
replace_node = cross_sets[np.random.randint(0,len(cross_sets)-1)]
copy_program[node] = replace_node
else:
factor = facotr_sets[np.random.randint(0,len(facotr_sets)-1)]
copy_program[node] = factor
graphviz.Source(draw_program(copy_program))
```
## Computing factor values
```
def create_formual(apply_formual):
function = apply_formual[0]
formula = function['function'].__name__
formula +='('
for i in range(0,function['activy']):
if i != 0:
formula += ','
if apply_formual[i+1] in facotr_sets:
formula += '\'' + apply_formual[i+1] + '\''
else:
formula += apply_formual[i+1]
formula += ')'
return formula
apply_stack = []
for node in program:
if node in function_sets:
apply_stack.append([node])
else:
apply_stack[-1].append(node)
while len(apply_stack[-1]) == apply_stack[-1][0]['activy'] + 1:
result = create_formual(apply_stack[-1])
if len(apply_stack) != 1:
apply_stack.pop()
apply_stack[-1].append(result)
else:
print(result)
break
%%time
rt = eval(result).transform(total_data.set_index(['trade_date']), category_field='code', dropna=False)
rt
```
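The cell above folds the prefix program into a nested formula string with an `apply_stack`. The same folding on toy nodes, independent of the `ultron` value holders, is sketched below (`ADD` and `NEG` are made-up operators):

```python
# Fold a prefix-order program into a nested formula string using a stack.
ADD = {"activy": 2, "name": "ADD"}
NEG = {"activy": 1, "name": "NEG"}

def to_formula(program):
    stack = []
    for node in program:
        if isinstance(node, dict):
            stack.append([node])       # open a new frame for the function
        else:
            stack[-1].append(node)     # terminals become arguments
        # Collapse every frame that has received all of its arguments
        while len(stack[-1]) == stack[-1][0]["activy"] + 1:
            frame = stack.pop()
            expr = "{}({})".format(frame[0]["name"], ",".join(frame[1:]))
            if not stack:
                return expr            # root collapsed: done
            stack[-1].append(expr)

program = [ADD, NEG, "x", ADD, "y", "z"]
assert to_formula(program) == "ADD(NEG(x),ADD(y,z))"
```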
```
import sys
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from astropy import constants as const
# remove this line if you installed platypos with pip
sys.path.append('/work2/lketzer/work/gitlab/platypos_group/platypos/')
import platypos
from platypos import Planet_LoFo14
from platypos import Planet_Ot20
# import the classes with fixed step size for completeness
from platypos.planet_LoFo14_PAPER import Planet_LoFo14_PAPER
from platypos.planet_Ot20_PAPER import Planet_Ot20_PAPER
import platypos.planet_models_LoFo14 as plmoLoFo14
```
# Create Planet object and stellar evolutionary track
## Example planet 1.1 - V1298Tau c with 5 Earth-mass core and measured radius (var. step)
```
# (David et al. 2019, Chandra observation)
L_bol, mass_star, radius_star = 0.934, 1.101, 1.345 # solar units
age_star = 23. # Myr
Lx_age = Lx_chandra = 1.3e30 # erg/s in energy band: (0.1-2.4 keV)
Lx_age_error = 1.4e29
# use dictionary to store star-parameters
star_V1298Tau = {'star_id': 'V1298Tau', 'mass': mass_star, 'radius': radius_star, 'age': age_star, 'L_bol': L_bol, 'Lx_age': Lx_age}
Lx_1Gyr, Lx_5Gyr = 2.10*10**28, 1.65*10**27
track_low = {"t_start": star_V1298Tau["age"], "t_sat": star_V1298Tau["age"], "t_curr": 1000., "t_5Gyr": 5000., "Lx_max": Lx_age,
"Lx_curr": Lx_1Gyr, "Lx_5Gyr": Lx_5Gyr, "dt_drop": 20., "Lx_drop_factor": 16.}
track_med = {"t_start": star_V1298Tau["age"], "t_sat": star_V1298Tau["age"], "t_curr": 1000., "t_5Gyr": 5000., "Lx_max": Lx_age,
"Lx_curr": Lx_1Gyr, "Lx_5Gyr": Lx_5Gyr, "dt_drop": 0., "Lx_drop_factor": 0.}
track_high = {"t_start": star_V1298Tau["age"], "t_sat": 240., "t_curr": 1000., "t_5Gyr": 5000., "Lx_max": Lx_age,
"Lx_curr": Lx_1Gyr, "Lx_5Gyr": Lx_5Gyr, "dt_drop": 0., "Lx_drop_factor": 0.}
# planet c
planet = {"core_mass": 5.0, "radius": 5.59, "distance": 0.0825, "metallicity": "solarZ"}
pl = Planet_LoFo14(star_V1298Tau, planet)
pl.__dict__
```
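The track dictionaries pin the star's X-ray luminosity at a few ages (saturation until `t_sat`, `Lx_curr` at 1 Gyr, `Lx_5Gyr` at 5 Gyr), and platypos interpolates between these anchors internally. The sketch below mimics such a saturated-then-power-law track with log-log interpolation; it is an illustrative assumption, not the platypos implementation:

```python
import numpy as np

def lx_track(t, t_sat, lx_max, anchors):
    """Piecewise Lx(t): constant (saturated) before t_sat, then
    log-log interpolation through (age, Lx) anchor points."""
    ages, lxs = zip(*sorted(anchors))
    t = np.asarray(t, dtype=float)
    out = 10.0 ** np.interp(np.log10(t), np.log10(ages), np.log10(lxs))
    return np.where(t <= t_sat, lx_max, out)

# Anchor points taken from the track_high values above (ages in Myr)
anchors = [(240.0, 1.3e30), (1000.0, 2.10e28), (5000.0, 1.65e27)]
lx = lx_track([23.0, 240.0, 1000.0, 5000.0],
              t_sat=240.0, lx_max=1.3e30, anchors=anchors)
```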
### Example planet 1.1.1 - V1298Tau c with 5 Earth-mass core and measured radius (fixed step)
```
pl = Planet_LoFo14_PAPER(star_V1298Tau, planet)
```
## Example planet 1.2 - V1298Tau c with mass estimate from Otegi et al. (2020) and measured radius (var step)
```
pl = Planet_Ot20(star_V1298Tau, planet)
pl.__dict__
```
### Example planet 1.2.1 - V1298Tau c with mass estimate from Otegi et al. (2020) and measured radius (fixed step)
```
pl = Planet_Ot20_PAPER(star_V1298Tau, planet)
pl.__dict__
```
## Example planet 2 - artificial planet with specified core mass and envelope mass fraction
```
Lx_1Gyr, Lx_5Gyr = 2.10*10**28, 1.65*10**27
dict_star = {'star_id': 'star_age1.0_mass0.89',
'mass': 0.8879632311581124,
'radius': None,
'age': 1.0,
'L_bol': 1.9992811847525246e+33/const.L_sun.cgs.value,
'Lx_age': 1.298868513129789e+30}
dict_pl = {'distance': 0.12248611607793611,
'metallicity': 'solarZ',
'fenv': 3.7544067802231664,
'core_mass': 4.490153906104026}
track = {"t_start": dict_star["age"], "t_sat": 100., "t_curr": 1000., "t_5Gyr": 5000., "Lx_max": Lx_age,
"Lx_curr": Lx_1Gyr, "Lx_5Gyr": Lx_5Gyr, "dt_drop": 0., "Lx_drop_factor": 0.}
pl = Planet_LoFo14(dict_star, dict_pl)
#pl.__dict__
```
# Evolve & create outputs
```
%%time
folder_id = "dummy"
path_save = os.getcwd() + "/" + folder_id +"/"
if not os.path.exists(path_save):
os.makedirs(path_save)
else:
os.system("rm -r " + path_save[:-1])
os.makedirs(path_save)
t_final = 5007.
pl.evolve_forward_and_create_full_output(t_final, 0.1, 0.1, "yes", "yes", track_high, path_save, folder_id)
```
# Read in results and plot
```
df_pl = pl.read_results(path_save)
df_pl.head()
df_pl.tail()
# fig, ax = plt.subplots(figsize=(10,5))
# ax.plot(df_pl["Time"], df_pl["Lx"])
# ax.loglog()
# plt.show()
fig, ax = plt.subplots(figsize=(10,5))
age_arr = np.logspace(np.log10(pl.age), np.log10(t_final), 100)
if (type(pl) == platypos.planet_LoFo14.Planet_LoFo14
or type(pl) == platypos.planet_LoFo14_PAPER.Planet_LoFo14_PAPER):
ax.plot(age_arr, plmoLoFo14.calculate_planet_radius(pl.core_mass, pl.fenv, age_arr, pl.flux, pl.metallicity), \
lw=2.5, label='thermal contraction only', color="blue")
ax.plot(df_pl["Time"], df_pl["Radius"],
marker="None", ls="--", label='with photoevaporation', color="red")
else:
ax.plot(df_pl["Time"], df_pl["Radius"], marker="None", ls="--", label='with photoevaporation', color="red")
ax.legend(fontsize=10)
ax.set_xlabel("Time [Myr]", fontsize=16)
ax.set_ylabel("Radius [R$_\oplus$]", fontsize=16)
ax.set_xscale('log')
#ax.set_ylim(5.15, 5.62)
plt.show()
```
### Prepare the Dataset for Building a Predictive Model
As a first step, we will build a graph convolution model to predict ERK2 activity. We will train the model to distinguish a set of ERK2 active compounds from a set of decoy compounds. The active and decoy compounds are derived from the DUD-E database. In order to generate the best model, we would like decoys with property distributions similar to those of our active compounds. Suppose this were not the case and the inactive compounds had lower molecular weight than the active compounds. Our classifier might then learn simply to separate low molecular weight compounds from high molecular weight compounds, and such a classifier would have very limited utility in practice.
Before building the model, we will examine a few calculated properties of our active and decoy molecules. In order to build a reliable model, we need to ensure that the properties of the active molecules are similar to those of the decoy molecules.
First, let's import the libraries we will need.
```
from rdkit import Chem
from rdkit.Chem import Draw
from rdkit.Chem.Draw import IPythonConsole
import pandas as pd
from rdkit.Chem import PandasTools
from rdkit.Chem import Descriptors
from rdkit.Chem import rdmolops
import seaborn as sns
```
Now we can read a SMILES file into a Pandas dataframe and add an RDKit molecule to the dataframe.
```
active_df = pd.read_csv("mk01/actives_final.ism",header=None,sep=" ")
active_rows,active_cols = active_df.shape
active_df.columns = ["SMILES","ID","ChEMBL_ID"]
active_df["label"] = ["Active"]*active_rows
PandasTools.AddMoleculeColumnToFrame(active_df,"SMILES","Mol")
```
Let's define a function to add calculated properties to a dataframe.
```
def add_property_columns_to_df(df_in):
df_in["mw"] = [Descriptors.MolWt(mol) for mol in df_in.Mol]
df_in["logP"] = [Descriptors.MolLogP(mol) for mol in df_in.Mol]
df_in["charge"] = [rdmolops.GetFormalCharge(mol) for mol in df_in.Mol]
```
With this function in hand, we can calculate the molecular weight, LogP and formal charge of the molecules. Once we have these properties we can compare the distributions for the active and decoy sets.
```
add_property_columns_to_df(active_df)
```
Let's look at the first few rows of our dataframe to ensure that it makes sense.
```
active_df.head()
```
Now let's do the same thing with the decoy molecules
```
decoy_df = pd.read_csv("mk01/decoys_final.ism",header=None,sep=" ")
decoy_df.columns = ["SMILES","ID"]
decoy_rows, decoy_cols = decoy_df.shape
decoy_df["label"] = ["Decoy"]*decoy_rows
PandasTools.AddMoleculeColumnToFrame(decoy_df,"SMILES","Mol")
add_property_columns_to_df(decoy_df)
tmp_df = active_df.append(decoy_df)
```
With properties calculated for both the active and the decoy sets, we can compare the properties of the two compound sets. To do the comparison, we will use violin plots. A violin plot can be thought of as analogous to a boxplot. The violin plot provides a mirrored, horizontal view of a frequency distribution. Ideally, we would
like to see similar distributions for the active and decoy sets.
```
sns.violinplot(tmp_df["label"],tmp_df["mw"])
```
An examination of the distributions in the figures above shows that the molecular weight distributions for the two sets are roughly equivalent. The decoy set has more low molecular weight molecules, but the center of the distribution, shown as a box in the middle of each violin plot, is in a similar location in both plots.
We can use violin plots to perform a similar comparison of the LogP distributions. Again, we can see that the
distributions are similar with a few more decoys at the lower end of the distribution.
```
sns.violinplot(tmp_df["label"],tmp_df["logP"])
```
Finally, we will do the same comparison with the formal charges of the molecules.
```
sns.violinplot(tmp_df["label"],tmp_df["charge"])
```
In this case, we see a significant difference: all of the active molecules are neutral, while some of the decoys are charged. Let's see what fraction of the decoy molecules are charged. We can do this by creating a new dataframe with just the charged molecules.
```
charged = decoy_df[decoy_df["charge"] != 0]
```
A pandas dataframe has a property, shape, that returns the number of rows and columns in the dataframe. As such,
element[0] in the shape property will be the number of rows. Let's divide the number of rows in our dataframe of
charged molecules by the total number of rows in the decoy dataframe.
```
charged.shape[0]/decoy_df.shape[0]
```
The fact that 16% of the decoy compounds are charged, while none of the active compounds are is a concern. An examination of both sets indicate that charge states were assigned to the decoys, but not to the active molecules. In order to be consistent, we will use some code from the RDKit Cookbook to neutralize the molecules. First, we will import an RDKit function to neutralize charges.
```
from neutralize import NeutraliseCharges
```
Now we will create a new dataframe with the SMILES, ID, and label for the decoys.
```
revised_decoy_df = decoy_df[["SMILES","ID","label"]].copy()
```
With this new dataframe in hand, we can replace the SMILES with the SMILES for the neutral form of the molecule. The
NeutraliseCharges function returns two values. The first is the SMILES for the neutral form of the molecule and the second is a boolean variable indicating whether the molecule was changed. In the code below, we only need the SMILES, so we will use the first element of the tuple returned by NeutraliseCharges.
```
revised_decoy_df["SMILES"] = [NeutraliseCharges(x)[0] for x in revised_decoy_df["SMILES"]]
```
Once we've replaced the SMILES, we can add a molecule column to our new dataframe and calculated properties again.
```
PandasTools.AddMoleculeColumnToFrame(revised_decoy_df,"SMILES","Mol")
add_property_columns_to_df(revised_decoy_df)
```
We can now append the dataframe with the active molecules to the one with the revised, neutral decoys and generate another violin plot.
```
new_tmp_df = active_df.append(revised_decoy_df)
sns.violinplot(new_tmp_df["label"],new_tmp_df["charge"])
```
An examination of the plot above shows that there are very few charged molecules in the decoy set. We can use the same technique we used above to create a dataframe with only the charged molecules, and then use this dataframe to determine the number of charged molecules remaining in the set.
```
charged = revised_decoy_df[revised_decoy_df["charge"] != 0]
charged.shape[0]/revised_decoy_df.shape[0]
```
We have now reduced the fraction of charged compounds from 16% to 0.3%. We can now be confident that our active and decoy sets are reasonably well balanced.
In order to use these datasets with DeepChem we need to write the molecules out as a csv file consisting of SMILES, Name, and an integer value indicating whether the compounds are active (labeled as 1) or inactive (labeled as 0).
```
active_df["is_active"] = [1] * active_df.shape[0]
revised_decoy_df["is_active"] = [0] * revised_decoy_df.shape[0]
combined_df = active_df.append(revised_decoy_df)[["SMILES","ID","is_active"]]
combined_df.head()
```
Our final step in this section is to save our new combined_df as a csv file. The index=False option causes Pandas to not include the row number in the first column.
```
combined_df.to_csv("dude_erk1_mk01.csv", index=False)
```
```
import cv2
import numpy as np
from matplotlib import pyplot as plt
import os
import xlsxwriter
import pandas as pd # Excel
import struct # Binary writing
import scipy.io as sio # Read .mat files
import h5py
import time
from grading__old import *
from ipywidgets import FloatProgress
from IPython.display import display
import scipy.signal
import scipy.ndimage
import sklearn.metrics as skmet
import sklearn.decomposition as skdec
import sklearn.linear_model as sklin
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import normalize
from sklearn import svm
from sklearn import neighbors
def pipeline_lbp(impath, savepath, save, dtype='dat'):
#Start time
start_time = time.time()
# Calculate MRELBP from dataset
# Parameters
dict = {'N':8, 'R':9,'r':3,'wc':5,'wr':(5,5)}
mapping = getmapping(dict['N']) # mapping
files = os.listdir(impath)
files.sort()
#print(files[32 * 2])
#files.pop(32 * 2)
#files.pop(32 * 2)
#print(files)
features = None # Reset feature array
p = FloatProgress(min=0, max=len(files), description='Features:')
display(p)
for k in range(len(files)):
#Load file
if dtype == 'dat':
p.value += 2
if k > len(files) / 2 - 1:
break
file = os.path.join(impath,files[2 * k])
try:
Mz = loadbinary(file, np.float64)
except:
continue
file = os.path.join(impath,files[2 * k + 1])
try:
sz = loadbinary(file, np.float64)
except:
continue
else:
file = os.path.join(impath,files[k])
p.value += 1
try:
file = sio.loadmat(file)
Mz = file['Mz']
sz = file['sz']
except NotImplementedError:
file = h5py.File(file)
Mz = file['Mz'][()]
sz = file['sz'][()]
#Combine mean and sd images
image = Mz+sz
#Grayscale normalization
image = localstandard(image,23,5,5,1)
# LBP
Chist,Lhist,Shist,Rhist, lbpIL, lbpIS, lbpIR = MRELBP(image,dict['N'],dict['R'],dict['r'],dict['wc'],dict['wr'])
f1 = Chist
f2 = maplbp(Lhist,mapping)
f3 = maplbp(Shist,mapping)
f4 = maplbp(Rhist,mapping)
#Concatenate features
f = np.concatenate((f1.T,f2.T,f3.T,f4.T),axis=0)
try:
features = np.concatenate((features,f),axis=1)
except ValueError:
features = f
# Save images
if dtype == 'dat':
cv2.imwrite(savepath + '\\' + files[2 * k][:-9] + '.png', lbpIS)
else:
cv2.imwrite(savepath + '\\' + files[k][:-9] + '.png', lbpIS)
# Plot LBP images
#plt.imshow(lbpIS); plt.show()
#plt.imshow(lbpIL); plt.show()
#plt.imshow(lbpIR); plt.show()
# Save features
writer = pd.ExcelWriter(save + r'\LBP_features_python.xlsx')
df1 = pd.DataFrame(features)
df1.to_excel(writer, sheet_name='LBP_features')
writer.save()
t = time.time()-start_time
print('Elapsed time: {0}s'.format(t))
def pipeline_load(featurepath, gpath, save, choice):
#Start time
start_time = time.time()
# Load grades to array
grades = pd.read_excel(gpath, 'Sheet1')
grades = pd.DataFrame(grades).values
fnames = grades[:,0].astype('str')
g = list(grades[:,choice].astype('int'))
#g.pop(32)
g = np.array(g)
print('Max grade: {0}, min grade: {1}'.format(max(g), min(g)))
# Load features
features = pd.read_excel(featurepath, 'LBP_features')
features = pd.DataFrame(features).values.astype('int')
print(features.shape)
#PCA
# PCA parameters: whitening, svd solver (auto/full)
pca, score = ScikitPCA(features.T, 10, True, 'auto')
#pca, score = PCA(features,10)
print(score[0,:])
print(score.shape)
# Regression
if min(g) > 0:
g = g - min(g)
pred1 = regress(score, g)
pred2 = logreg(score, g>min(g))
for p in range(len(pred1)):
if pred1[p]<0:
pred1[p] = 0
if pred1[p] > max(g):
pred1[p]=max(g)
#Plotting PCA
a = g
b = np.round(pred1).astype('int')
# ROC curve
C1 = skmet.confusion_matrix(a,b)
MSE1 = skmet.mean_squared_error(a,pred1)
fpr, tpr, thresholds = skmet.roc_curve(a>0, np.round(pred1)>0, pos_label=1)
AUC1 = skmet.auc(fpr,tpr)
AUC2 = skmet.roc_auc_score(a>0,pred2)
m, b = np.polyfit(a, pred1.flatten(), 1)
R2 = skmet.r2_score(a,pred1.flatten())
fig0 = plt.figure(figsize=(6,6))
ax0 = fig0.add_subplot(111)
ax0.plot(fpr,tpr)
# Save prediction
stats = np.zeros(len(g))
stats[0] = MSE1
stats[1] = AUC1
stats[2] = AUC2
tuples = list(zip(fnames, g, pred1[:,0], abs(g - pred1[:,0]), pred2, stats))
writer = pd.ExcelWriter(save + r'\prediction_python.xlsx')
df1 = pd.DataFrame(tuples, columns=['Sample', 'Actual grade', 'Prediction', 'Difference', 'Logistic prediction', 'MSE, AUC1, AUC2'])
df1.to_excel(writer, sheet_name='Prediction')
writer.save()
print('Confusion matrix')
print(C1)
print('Mean squared error, Area under curve 1 and 2')
print(MSE1, AUC1, AUC2)#,MSE2,MSE3,MSE4)
print('R2 score')
print(R2)
#print('Sample, grade, prediction')
#for k in range(len(fnames)):
# print(fnames[k],a[k],pred1[k])#,pred3[k])
x = score[:,0]
y = score[:,1]
fig = plt.figure(figsize=(6,6))
ax1 = fig.add_subplot(111)
ax1.scatter(score[g<2,0],score[g<2,1],marker='o',color='b',label='Normal')
ax1.scatter(score[g>1,0],score[g>1,1],marker='s',color='r',label='OA')
for k in range(len(g)):
txt = fnames[k][0:-4]+str(g[k])
if g[k] >= 2:
ax1.scatter(x[k],y[k],marker='s',color='r')
else:
ax1.scatter(x[k],y[k],marker='o',color='b')
# Scatter plot actual vs prediction
fig = plt.figure(figsize=(6,6))
ax2 = fig.add_subplot(111)
ax2.scatter(a,pred1.flatten())
ax2.plot(a,m*a,'-',color='r')
ax2.set_xlabel('Actual grade')
ax2.set_ylabel('Predicted')
for k in range(len(g)):
txt = fnames[k]
txt = txt+str(g[k])
ax2.annotate(txt,xy=(a[k],pred1[k]),color='r')
plt.show()
```
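`getmapping` and `maplbp` come from the local `grading` module. A common choice for such a mapping, assumed here purely for illustration, is the rotation-invariant uniform (riu2) LBP mapping, in which an N-bit pattern with at most two 0↔1 transitions keeps its own bin and every other pattern shares one "non-uniform" bin:

```python
def uniform_mapping(n_bits=8):
    """Rotation-invariant uniform (riu2) LBP mapping: a pattern with at
    most two circular 0<->1 transitions maps to its count of set bits,
    everything else to the shared non-uniform bin n_bits + 1.
    (An assumed stand-in for getmapping() from the grading module.)"""
    table = []
    for pattern in range(2 ** n_bits):
        bits = [(pattern >> i) & 1 for i in range(n_bits)]
        transitions = sum(bits[i] != bits[(i + 1) % n_bits]
                          for i in range(n_bits))
        table.append(sum(bits) if transitions <= 2 else n_bits + 1)
    return table

mapping = uniform_mapping(8)
assert mapping[0b00000000] == 0      # uniform, zero ones
assert mapping[0b11111111] == 8      # uniform, eight ones
assert mapping[0b00001111] == 4      # two transitions -> uniform
assert mapping[0b01010101] == 9      # many transitions -> non-uniform bin
```

With N=8 this yields 10 distinct labels (0..8 plus the non-uniform bin), which is why riu2 histograms are so much shorter than raw 256-bin LBP histograms.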
### Load features
```
featurepath = r'Z:\3DHistoData\Grading\LBP_features_surface.xlsx'
gpath = r'Z:\3DHistoData\Grading\PTAgreiditjanaytteet.xls'
save = r'Z:\3DHistoData\Grading'
total = 1
surf = 2
deep = 5
cc = 6
deepcell = 7
deepECM = 8
ccECM = 9
ccVasc = 10
choice = surf
pipeline_load(featurepath, gpath, save, choice)
```
### Calculate LBP features from .dat mean and std images
```
impath = r'Z:\3DHistoData\SurfaceImages\Deep'
impath = r'V:\Tuomas\PTASurfaceImages'
dtype = 'dat'
dtype = 'mat'
savepath = r'Z:\3DHistoData\Grading\LBP'
save = r'Z:\3DHistoData\Grading'
pipeline_lbp(impath, savepath, save, dtype)
```
# Exercises
## Simple array manipulation
Investigate the behavior of the statements below by looking
at the values of the arrays a and b after assignments:
```
a = np.arange(5)
b = a
b[2] = -1
b = a[:]
b[1] = -1
b = a.copy()
b[0] = -1
```
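If you want to check your conclusions: for NumPy arrays, plain assignment aliases the array and basic slicing returns a *view* (unlike Python lists), so only `copy()` detaches the data. The same statements with assertions spelling this out:

```python
import numpy as np

a = np.arange(5)
b = a            # alias: b IS a
b[2] = -1
assert a[2] == -1

a = np.arange(5)
b = a[:]         # basic slice: a view of a's data (unlike Python lists!)
b[1] = -1
assert a[1] == -1

a = np.arange(5)
b = a.copy()     # true copy: independent data
b[0] = -1
assert a[0] == 0
```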
Generate a 1D NumPy array containing numbers from -2 to 2 in increments of 0.2. Use the optional start and step arguments of the **np.arange()** function.
Generate another 1D NumPy array containing 11 equally spaced values between 0.5 and 1.5. Extract every second element of the array.
Create a 4x4 array with arbitrary values.
Extract every element from the second row
Extract every element from the third column
Assign a value of 0.21 to upper left 2x2 subarray.
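One possible solution sketch for the exercises above (variable names are arbitrary):

```python
import numpy as np

# -2 to 2 in steps of 0.2 (arange excludes the stop value, so go past 2)
x = np.arange(-2.0, 2.0 + 0.1, 0.2)

# 11 equally spaced values in [0.5, 1.5], then every second element
y = np.linspace(0.5, 1.5, 11)
y2 = y[::2]

# 4x4 array: row/column extraction and sub-array assignment
m = np.arange(16.0).reshape(4, 4)
row2 = m[1, :]       # second row
col3 = m[:, 2]       # third column
m[:2, :2] = 0.21     # upper-left 2x2 block
```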
## Simple plotting
Plot the **sin** and **cos** functions on the same graph in the interval $[-\pi/2, \pi/2]$. Use $\theta$ as the x-label and include a legend.
## Pie chart
The file "../data/csc_usage.txt" contains the usage of CSC servers by different disciplines. Plot a pie chart about the resource usage.
## Bonus exercises
### Numerical derivative with finite differences
Derivatives can be calculated numerically with the finite-difference method
as:
$$ f'(x_i) = \frac{f(x_i + \Delta x)- f(x_i - \Delta x)}{2 \Delta x} $$
Construct 1D Numpy array containing the values of xi in the interval $[0, \pi/2]$ with spacing
$\Delta x = 0.1$. Evaluate numerically the derivative of **sin** in this
interval (excluding the end points) using the above formula. Try to avoid
`for` loops. Compare the result to function **cos** in the same interval.
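A vectorized sketch of the central difference, checked against **cos** (the maximum error should be of order $\Delta x^2$):

```python
import numpy as np

dx = 0.1
x = np.arange(0.0, np.pi / 2, dx)
f = np.sin(x)

# Central difference at the interior points, no for loops:
# f'(x_i) ~ (f(x_{i+1}) - f(x_{i-1})) / (2 dx)
df = (f[2:] - f[:-2]) / (2 * dx)

err = np.max(np.abs(df - np.cos(x[1:-1])))   # O(dx^2) truncation error
```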
### Game of Life
Game of Life is a cellular automaton devised by John Conway in the 1970s: http://en.wikipedia.org/wiki/Conway's_Game_of_Life
The game consists of a two-dimensional orthogonal grid of cells. Cells are in two possible states, alive or dead. Each cell interacts with its eight neighbours, and at each time step the following transitions occur:
* Any live cell with fewer than two live neighbours dies, as if
caused by underpopulation
* Any live cell with more than three live neighbours dies, as if
by overcrowding
* Any live cell with two or three live neighbours lives on to
the next generation
* Any dead cell with exactly three live neighbours becomes a
live cell
The initial pattern constitutes the seed of the system, and the system is left to evolve according to the rules. Deaths and births happen simultaneously.
Implement the Game of Life using NumPy, and visualize the evolution with Matplotlib's **imshow**. Try first a 32x32 square grid and a cross-shaped initial pattern:

Try also other grids and initial patterns (e.g. random
pattern). Try to avoid **for** loops.
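One loop-free sketch: count each cell's eight neighbours by summing shifted copies of the board with `np.roll` (which gives wrap-around boundaries, one of several possible boundary choices), then apply the rules with boolean masks. The blinker below oscillates with period 2:

```python
import numpy as np

def life_step(board):
    # Count the eight neighbours of every cell with shifted copies
    nbrs = sum(np.roll(np.roll(board, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))
    # Birth on exactly 3 neighbours, survival on 2 or 3
    return ((nbrs == 3) | ((board == 1) & (nbrs == 2))).astype(np.uint8)

# Blinker: a row of three live cells flips between horizontal and vertical
board = np.zeros((5, 5), dtype=np.uint8)
board[2, 1:4] = 1
step1 = life_step(board)
step2 = life_step(step1)
```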
<a href="https://colab.research.google.com/github/Tessellate-Imaging/monk_v1/blob/master/study_roadmaps/2_transfer_learning_roadmap/3_effect_of_number_of_classes_in_dataset/3)%20Understand%20transfer%20learning%20and%20the%20role%20of%20number%20of%20dataset%20classes%20in%20it%20-%20Keras.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Goals
### 1. Visualize deep learning network
### 2. Understand how the final layer changes when the number of classes in the dataset changes
# What do you do with a deep learning model in transfer learning
- These are the steps already done by contributors in pytorch, keras and mxnet
- You take a deep learning architecture, such as resnet, densenet, or even custom network
- Train the architecture on large datasets such as imagenet, coco, etc
- The trained weights become your starting point for transfer learning
- The final layer of this pretrained model has number of neurons = number of classes in the large dataset
- In transfer learning
- You take the network and load the pretrained weights on the network
- Then remove the final layer, which has too many (or too few) neurons for your task
- You add a new layer with number of neurons = number of classes in your custom dataset
- Optionally you can add more layers in between this newly added final layer and the old network
- Now you have two parts in your network
- One that already existed, the pretrained one, the base network
- The new sub-network or a single layer you added
- The hyper-parameter we can see here: Freeze base network
- Freezing base network makes the base network untrainable
- The base network now acts as a feature extractor and only the new sub-network is trained
- If you do not freeze the base network the entire network is trained
(You will cover this part in later sessions)
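The freezing idea can be sketched without any deep learning framework: treat the pretrained base as a fixed feature extractor and fit only the newly added final layer. Everything below (the toy base, data, and dimensions) is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained base": a fixed feature extractor that we do NOT update
def base_network(x):
    return np.stack([x, x ** 2], axis=1)

# New head: one linear layer, the only trainable part
w = np.zeros(2)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + 3.0 * x ** 2          # target the head must learn

feats = base_network(x)             # base frozen: features computed once
for _ in range(500):
    grad = feats.T @ (feats @ w - y) / len(x)
    w -= 0.5 * grad                 # gradient step on the head only

# w should approach [2, 3]: the head learned the task on frozen features
```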
# Table of Contents
## [Install](#0)
## [Setup Default Params with Cats-Dogs dataset](#1)
## [Visualize network](#2)
## [Reset Default Params with new dataset - Logo classification](#3)
## [Visualize the new network](#4)
<a id='0'></a>
# Install Monk
## Using pip (Recommended)
- colab (gpu)
    - All backends: `pip install -U monk-colab`
- kaggle (gpu)
- All backends: `pip install -U monk-kaggle`
- cuda 10.2
- All backends: `pip install -U monk-cuda102`
- Gluon bakcned: `pip install -U monk-gluon-cuda102`
- Pytorch backend: `pip install -U monk-pytorch-cuda102`
- Keras backend: `pip install -U monk-keras-cuda102`
- cuda 10.1
- All backend: `pip install -U monk-cuda101`
- Gluon bakcned: `pip install -U monk-gluon-cuda101`
- Pytorch backend: `pip install -U monk-pytorch-cuda101`
- Keras backend: `pip install -U monk-keras-cuda101`
- cuda 10.0
- All backend: `pip install -U monk-cuda100`
- Gluon bakcned: `pip install -U monk-gluon-cuda100`
- Pytorch backend: `pip install -U monk-pytorch-cuda100`
- Keras backend: `pip install -U monk-keras-cuda100`
- cuda 9.2
- All backend: `pip install -U monk-cuda92`
- Gluon bakcned: `pip install -U monk-gluon-cuda92`
- Pytorch backend: `pip install -U monk-pytorch-cuda92`
- Keras backend: `pip install -U monk-keras-cuda92`
- cuda 9.0
- All backend: `pip install -U monk-cuda90`
- Gluon bakcned: `pip install -U monk-gluon-cuda90`
- Pytorch backend: `pip install -U monk-pytorch-cuda90`
- Keras backend: `pip install -U monk-keras-cuda90`
- cpu
- All backend: `pip install -U monk-cpu`
- Gluon bakcned: `pip install -U monk-gluon-cpu`
- Pytorch backend: `pip install -U monk-pytorch-cpu`
- Keras backend: `pip install -U monk-keras-cpu`
## Install Monk Manually (Not recommended)
### Step 1: Clone the library
- git clone https://github.com/Tessellate-Imaging/monk_v1.git
### Step 2: Install requirements
- Linux
- Cuda 9.0
- `cd monk_v1/installation/Linux && pip install -r requirements_cu90.txt`
- Cuda 9.2
- `cd monk_v1/installation/Linux && pip install -r requirements_cu92.txt`
- Cuda 10.0
- `cd monk_v1/installation/Linux && pip install -r requirements_cu100.txt`
- Cuda 10.1
- `cd monk_v1/installation/Linux && pip install -r requirements_cu101.txt`
- Cuda 10.2
- `cd monk_v1/installation/Linux && pip install -r requirements_cu102.txt`
- CPU (Non gpu system)
- `cd monk_v1/installation/Linux && pip install -r requirements_cpu.txt`
- Windows
- Cuda 9.0 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu90.txt`
- Cuda 9.2 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu92.txt`
- Cuda 10.0 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu100.txt`
- Cuda 10.1 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu101.txt`
- Cuda 10.2 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu102.txt`
- CPU (Non gpu system)
- `cd monk_v1/installation/Windows && pip install -r requirements_cpu.txt`
- Mac
- CPU (Non gpu system)
- `cd monk_v1/installation/Mac && pip install -r requirements_cpu.txt`
- Misc
- Colab (GPU)
- `cd monk_v1/installation/Misc && pip install -r requirements_colab.txt`
- Kaggle (GPU)
- `cd monk_v1/installation/Misc && pip install -r requirements_kaggle.txt`
### Step 3: Add to system path (Required for every terminal or kernel run)
- `import sys`
- `sys.path.append("monk_v1/");`
## Dataset - Sample
- one dataset having 2 classes (cats vs dogs)
- another having 16 classes (logos)
```
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1jE-ckk0JbrdbJvIBaKMJWkTfbRDR2MaF' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1jE-ckk0JbrdbJvIBaKMJWkTfbRDR2MaF" -O study_classes.zip && rm -rf /tmp/cookies.txt
! unzip -qq study_classes.zip
```
# Imports
```
#Using keras backend
# When installed using pip
from monk.keras_prototype import prototype
# When installed manually (Uncomment the following)
#import os
#import sys
#sys.path.append("monk_v1/");
#sys.path.append("monk_v1/monk/");
#from monk.keras_prototype import prototype
```
### Creating and managing experiments
- Provide project name
- Provide experiment name
```
gtf = prototype(verbose=1);
gtf.Prototype("Project", "study-num-classes");
```
### This creates files and directories as per the following structure
workspace
|
|--------Project
|
|-----study-num-classes
|
|-----experiment-state.json
|
|-----output
|
|------logs (All training logs and graphs saved here)
|
|------models (all trained models saved here)
<a id='1'></a>
# Setup Default Params with Cats-Dogs dataset
```
gtf.Default(dataset_path="study_classes/dogs_vs_cats",
model_name="resnet50",
num_epochs=5);
```
### From Data summary - Num classes: 2
<a id='2'></a>
# Visualize network
```
gtf.Visualize_With_Netron(data_shape=(3, 224, 224), port=8081);
```
## The final layer
```
from IPython.display import Image
Image(filename='imgs/2_classes_base_keras.png')
```
<a id='3'></a>
# Reset Default Params with new dataset - Logo classification
```
gtf = prototype(verbose=1);
gtf.Prototype("Project", "study-num-classes");
gtf.Default(dataset_path="study_classes/logos",
model_name="resnet50",
num_epochs=5);
```
### From Data summary - Num classes: 16
<a id='4'></a>
# Visualize the new network
```
gtf.Visualize_With_Netron(data_shape=(3, 224, 224), port=8082);
```
## The final layer
```
from IPython.display import Image
Image(filename='imgs/16_classes_base_keras.png')
```
# Goals Completed
### 1. Visualize deep learning network
### 2. Understand how the final layer would change when number of classes in dataset changes
```
import swat
import pandas as pd
import os
from sys import platform
import riskpy
from os.path import join as path
if "CASHOST" in os.environ:
# Create a session to the CASHOST and CASPORT variables set in your environment
conn = riskpy.SessionContext(session=swat.CAS(),
caslib="CASUSER")
else:
# Otherwise set this to your host and port:
host = "riskpy.rqs-cloud.sashq-d.openstack.sas.com"
port = 5570
conn = riskpy.SessionContext(session=swat.CAS(host, port), caslib="CASUSER")
base_dir = '.'
# Set output location
if platform == "win32":
# Windows...
output_dir = 'u:\\temp'
else:
# platform == "linux" or platform == "linux2" or platform == "darwin":
output_dir = '/tmp'
mkt_data = riskpy.MarketData(
current = pd.DataFrame(data={'uerate': 6.0}, index=[0]),
risk_factors = ['uerate'])
my_scens = riskpy.Scenarios(
name = "my_scens",
market_data = mkt_data,
data = path("datasources","CreditRisk",'uerate_scenario.xlsx'))
my_scens
cpty_df = pd.read_excel(path("datasources","CreditRisk",'uerate_cpty.xlsx'))
loan_groups = riskpy.Counterparties(data=pd.read_excel(
path("datasources","CreditRisk",'uerate_cpty.xlsx')))
loan_groups.mapping = {"cpty1": "score_uerate"}
loan_groups
score_code_file=(path("methods","CreditRisk",'score_uerate.sas'))
scoring_methods = riskpy.MethodLib(
method_code=path("methods","CreditRisk",'score_uerate.sas'))
scoring_methods
my_scores = riskpy.Scores(counterparties=loan_groups,
scenarios=my_scens,
method_lib=scoring_methods)
my_scores.generate(session_context=conn, write_allscore=True)
print(my_scores.allscore.head())
allscore_file = path(output_dir, 'simple_allscores.xlsx')
my_scores.allscore.to_excel(allscore_file)
my_scores
portfolio = riskpy.Portfolio(
data=path("datasources","CreditRisk",'retail_portfolio.xlsx'),
class_variables = ["region", "cptyid"])
eval_methods = riskpy.MethodLib(
method_code=path("methods","CreditRisk",'credit_method2.sas'))
my_values = riskpy.Values(
session_context=conn,
portfolio=portfolio,
output_variables=["Expected_Credit_Loss"],
scenarios=my_scens,
scores=my_scores,
method_lib=eval_methods,
mapping = {"Retail": "ecl_method"})
my_values
my_values.evaluate(write_prices=True)
allprice_df = my_values.fetch_prices(max_rows=100000)
print(my_values.allprice.head())
allprice_file = path(output_dir, 'creditrisk_allprice.xlsx')
allprice_df.to_excel(allprice_file)
results = riskpy.Results(
session_context=conn,
values=my_values,
requests=["_TOP_", ["region"]],
out_type="values"
)
results_df = results.query().to_frame()
print(results_df.head())
rollup_file = path(output_dir, 'creditrisk_rollup_by_region.xlsx')
results_df.to_excel(rollup_file)
results
```
# Semantic Function Species (part 2)
```
from scripts.imports import *
out = Exporter(
paths['outdir'],
'semantics'
)
from IPython.display import HTML, display
df.columns
```
# Miscellaneous Functions
```
df[df.funct_type == 'secondary'].function.value_counts()
funct2names = {
'purposive_ext':['purpext', 'Purposive Extent'],
'dist_posterior': ['distpost', 'Distance Posterior'],
'anterior_limitive': ['antlimit', 'Anterior Limitive'],
'dist_prospective': ['distprosp', 'Distance Prospective'],
'purposive': ['purp', 'Purposive'],
'anterior_dur_except': ['antdurex', 'Anterior Durative with "Except"'],
'posterior_dur_future': ['postdurfut', 'Posterior Durative Future'],
}
# automatically show examples
for funct in funct2names:
exdf = df[df.function == funct].sort_values(by='notes').head(10)
print(funct)
display(
ts.show(exdf, extra=['notes'], spread=-1)
)
print('-'*50)
```
# Pull Out Examples
## Purposive Extent
```
purpext_df = df[df.function == 'purposive_ext']
out.number(
purpext_df.shape[0],
'purpext_N'
)
antlim_df = df[df.function == 'anterior_limitive']
out.number(
antlim_df.shape[0],
'antlim_N'
)
```
# Difficult Cases
# Compound Time Adverbials
```
compound_ct = df[df.funct_type == 'compound'].function.value_counts()
out.table(
compound_ct,
'compound_funct_ct',
caption='Sampled Compound Time Adverbial Frequencies',
)
comp_clusters = {
'begin-to-end': [
'begin_to_end',
'habitual + begin_to_end',
'begin_to_end_habitual',
'simultaneous + begin_to_end',
'simultaneous + multi_begin_to_end',
'posterior_dur + begin_to_end + atelic_ext',
'begin_to_end + multi_antdur',
],
'coordinated location': [
'simultaneous_calendar',
'multi_simuls',
'simultaneous + anterior',
'simul_to_end',
'simultaneous + anterior_limitive?',
'simultaneous + anterior_dist',
'simultaneous + posterior',
'simultaneous + posteriors',
'simultaneous + dist_posterior',
'posterior + simultaneous',
'multi_antdur',
'anterior_dur + anterior',
'anterior + posterior',
'multi_posterior_dur',
'simultaneous + purposive_ext',
],
'coordinated extent': [
'multi_atelic_ext',
],
'location + extent': [
'simultaneous + atelic_ext',
'atelic_ext + simultaneous',
'anterior + atelic_ext',
'anterior + distance',
'atelic_ext + anterior + atelic_ext',
'anterior_dur + duration',
'dur_to_end',
'posterior + atelic_ext',
'dist_fut + atelic_ext',
'reg_recurr + atelic_ext',
'atelic_ext + habitual',
],
'distance sequential': [
'posterior + distance',
],
}
attested = set(cl for name, functs in comp_clusters.items() for cl in functs)
set(compound_ct.index) - attested
```
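The set difference computed above is a general coverage check: any function label that appears in the sampled counts but is missing from the cluster mapping shows up in the difference. A minimal self-contained sketch of the same idiom (the labels here are invented for illustration):

```python
# Labels observed in the data vs. labels accounted for by the cluster mapping
observed = {"begin_to_end", "multi_atelic_ext", "brand_new_label"}
clusters = {
    "begin-to-end": ["begin_to_end"],
    "coordinated extent": ["multi_atelic_ext"],
}

# Flatten the mapping's labels, then diff: anything left over is unclassified
attested = {label for labels in clusters.values() for label in labels}
unclassified = observed - attested
print(unclassified)  # {'brand_new_label'}
```

An empty difference means every sampled compound function has been assigned to a cluster.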
## Auto Export Examples for Compounds
```
for cluster, labels in comp_clusters.items():
display(HTML(f'<h2>{cluster.title()}</h2>'))
for label in labels:
print(label)
ex_df = df[df.function == label]
display(
ts.show(
ex_df
)
)
display(HTML('<hr>'))
```
# Manually Extract Specific Cases
## Begin-to-end
```
b2edf = df[df.function.isin(comp_clusters['begin-to-end'])]
out.number(
b2edf.shape[0],
'begintoend_N'
)
```
## Calendricals
```
caldf = df[df.function == 'simultaneous_calendar']
out.number(
caldf.shape[0],
'N_simul_calendar'
)
caldf.times_utf8.value_counts()
```
TSG108 - View the controller upgrade config map
===============================================
Description
-----------
When running a Big Data Cluster upgrade using `azdata bdc upgrade`:
`azdata bdc upgrade --name <namespace> --tag <tag>`
It may fail with:
> Upgrading cluster to version 15.0.4003.10029\_2
>
> NOTE: Cluster upgrade can take a significant amount of time depending
> on configuration, network speed, and the number of nodes in the
> cluster.
>
> Upgrading Control Plane. Control plane upgrade failed. Failed to
> upgrade controller.
Steps
-----
Use these steps to troubleshoot the problem.
### Common functions
Define helper functions used in this notebook.
```
# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows
import sys
import os
import re
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
retry_hints = {} # Output in stderr known to be transient, therefore automatically retry
error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help
install_hint = {} # The SOP to help install the executable if it cannot be found
def run(cmd, return_output=False, no_output=False, retry_count=0, base64_decode=False, return_as_json=False, regex_mask=None):
"""Run shell command, stream stdout, print stderr and optionally return output
NOTES:
1. Commands that need this kind of ' quoting on Windows e.g.:
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}
Need to actually pass in as '"':
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name}
The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:
`iter(p.stdout.readline, b'')`
The shlex.split call does the right thing for each platform, just use the '"' pattern for a '
"""
MAX_RETRIES = 5
output = ""
retry = False
# When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see:
#
# ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')
#
if platform.system() == "Windows" and cmd.startswith("azdata sql query"):
cmd = cmd.replace("\n", " ")
# shlex.split is required on bash and for Windows paths with spaces
#
cmd_actual = shlex.split(cmd)
# Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries
#
user_provided_exe_name = cmd_actual[0].lower()
# When running python, use the python in the ADS sandbox ({sys.executable})
#
if cmd.startswith("python "):
cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)
# On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
# with:
#
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
#
# Setting it to a default value of "en_US.UTF-8" enables pip install to complete
#
if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
os.environ["LC_ALL"] = "en_US.UTF-8"
# When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`
#
if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ:
cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc")
# To aid supportability, determine which binary file will actually be executed on the machine
#
which_binary = None
# Special case for CURL on Windows. The version of CURL in Windows System32 does not work to
# get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance
# of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost
# always the first curl.exe in the path, and it can't be uninstalled from System32, so here we
# look for the 2nd installation of CURL in the path)
if platform.system() == "Windows" and cmd.startswith("curl "):
path = os.getenv('PATH')
for p in path.split(os.path.pathsep):
p = os.path.join(p, "curl.exe")
if os.path.exists(p) and os.access(p, os.X_OK):
if p.lower().find("system32") == -1:
cmd_actual[0] = p
which_binary = p
break
# Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
# seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
#
# NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
#
if which_binary == None:
which_binary = shutil.which(cmd_actual[0])
# Display an install HINT, so the user can click on a SOP to install the missing binary
#
if which_binary == None:
print(f"The path used to search for '{cmd_actual[0]}' was:")
print(sys.path)
if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
else:
cmd_actual[0] = which_binary
start_time = datetime.datetime.now().replace(microsecond=0)
cmd_display = cmd
if regex_mask is not None:
regex = re.compile(regex_mask)
cmd_display = re.sub(regex, '******', cmd)
print(f"START: {cmd_display} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
print(f" cwd: {os.getcwd()}")
# Command-line tools such as CURL and AZDATA HDFS commands output
# scrolling progress bars, which causes Jupyter to hang forever, to
# workaround this, use no_output=True
#
# Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait
#
wait = True
try:
if no_output:
p = Popen(cmd_actual)
else:
p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
with p.stdout:
for line in iter(p.stdout.readline, b''):
line = line.decode()
if return_output:
output = output + line
else:
if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
regex = re.compile(' "(.*)"\: "(.*)"')
match = regex.match(line)
if match:
if match.group(1).find("HTML") != -1:
display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
else:
display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
wait = False
break # otherwise infinite hang, have not worked out why yet.
else:
print(line, end='')
if wait:
p.wait()
except FileNotFoundError as e:
if install_hint is not None:
display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e
exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()
if not no_output:
for line in iter(p.stderr.readline, b''):
try:
line_decoded = line.decode()
except UnicodeDecodeError:
# NOTE: Sometimes we get characters back that cannot be decoded(), e.g.
#
# \xa0
#
# For example see this in the response from `az group create`:
#
# ERROR: Get Token request returned http error: 400 and server
# response: {"error":"invalid_grant",# "error_description":"AADSTS700082:
# The refresh token has expired due to inactivity.\xa0The token was
# issued on 2018-10-25T23:35:11.9832872Z
#
# which generates the exception:
#
# UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte
#
print("WARNING: Unable to decode stderr line, printing raw bytes:")
print(line)
line_decoded = ""
pass
else:
# azdata emits a single empty line to stderr when doing an hdfs cp, don't
# print this empty "ERR:" as it confuses.
#
if line_decoded == "":
continue
print(f"STDERR: {line_decoded}", end='')
if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"):
exit_code_workaround = 1
# inject HINTs to next TSG/SOP based on output in stderr
#
if user_provided_exe_name in error_hints:
for error_hint in error_hints[user_provided_exe_name]:
if line_decoded.find(error_hint[0]) != -1:
display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))
# Verify if a transient error, if so automatically retry (recursive)
#
if user_provided_exe_name in retry_hints:
for retry_hint in retry_hints[user_provided_exe_name]:
if line_decoded.find(retry_hint) != -1:
if retry_count < MAX_RETRIES:
print(f"RETRY: {retry_count} (due to: {retry_hint})")
retry_count = retry_count + 1
output = run(cmd, return_output=return_output, retry_count=retry_count)
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
elapsed = datetime.datetime.now().replace(microsecond=0) - start_time
# WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so
# don't wait here, if success known above
#
if wait:
if p.returncode != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd_display} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
else:
if exit_code_workaround !=0 :
raise SystemExit(f'Shell command:\n\n\t{cmd_display} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n')
print(f'\nSUCCESS: {elapsed}s elapsed.\n')
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
# Hints for tool retry (on transient fault), known errors and install guide
#
retry_hints = {'azdata': ['Endpoint sql-server-master does not exist', 'Endpoint livy does not exist', 'Failed to get state for cluster', 'Endpoint webhdfs does not exist', 'Adaptive Server is unavailable or does not exist', 'Error: Address already in use', 'Login timeout expired (0) (SQLDriverConnect)', 'SSPI Provider: No Kerberos credentials available', ], 'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond', ], 'python': [ ], }
error_hints = {'azdata': [['Please run \'azdata login\' to first authenticate', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['The token is expired', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Reason: Unauthorized', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Max retries exceeded with url: /api/v1/bdc/endpoints', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Look at the controller logs for more details', 'TSG027 - Observe cluster deployment', '../diagnose/tsg027-observe-bdc-create.ipynb'], ['provided port is already allocated', 'TSG062 - Get tail of all previous container logs for pods in BDC namespace', '../log-files/tsg062-tail-bdc-previous-container-logs.ipynb'], ['Create cluster failed since the existing namespace', 'SOP061 - Delete a big data cluster', '../install/sop061-delete-bdc.ipynb'], ['Failed to complete kube config setup', 'TSG067 - Failed to complete kube config setup', '../repair/tsg067-failed-to-complete-kube-config-setup.ipynb'], ['Data source name not found and no default driver specified', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Can\'t open lib \'ODBC Driver 17 for SQL Server', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Control plane upgrade failed. Failed to upgrade controller.', 'TSG108 - View the controller upgrade config map', '../diagnose/tsg108-controller-failed-to-upgrade.ipynb'], ['NameError: name \'azdata_login_secret_name\' is not defined', 'SOP013 - Create secret for azdata login (inside cluster)', '../common/sop013-create-secret-for-azdata-login.ipynb'], ['ERROR: No credentials were supplied, or the credentials were unavailable or inaccessible.', 'TSG124 - \'No credentials were supplied\' error from azdata login', '../repair/tsg124-no-credentials-were-supplied.ipynb'], ['Please accept the license terms to use this product through', 'TSG126 - azdata fails with \'accept the license terms to use this product\'', '../repair/tsg126-accept-license-terms.ipynb'], ], 'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb'], ], 'python': [['Library not loaded: /usr/local/opt/unixodbc', 'SOP012 - Install unixodbc for Mac', '../install/sop012-brew-install-odbc-for-sql-server.ipynb'], ['WARNING: You are using pip version', 'SOP040 - Upgrade pip in ADS Python sandbox', '../install/sop040-upgrade-pip.ipynb'], ], }
install_hint = {'azdata': [ 'SOP063 - Install azdata CLI (using package manager)', '../install/sop063-packman-install-azdata.ipynb' ], 'kubectl': [ 'SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb' ], }
print('Common functions defined successfully.')
```
### Get the Kubernetes namespace for the big data cluster
Get the namespace of the Big Data Cluster using the kubectl command line
interface.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting
Azure Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = run(f'kubectl get namespace --selector=MSSQL_CLUSTER -o jsonpath={{.items[0].metadata.name}}', return_output=True)
except:
from IPython.display import Markdown
print(f"ERROR: Unable to find a Kubernetes namespace with label 'MSSQL_CLUSTER'. SQL Server Big Data Cluster Kubernetes namespaces contain the label 'MSSQL_CLUSTER'.")
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print(f'The SQL Server Big Data Cluster Kubernetes namespace is: {namespace}')
```
### View the upgrade configmap
```
run(f'kubectl get configmap -n {namespace} controller-upgrade-configmap -o yaml')
print("Notebook execution is complete.")
```
Related
-------
- [TSG109 - Set upgrade timeouts](../repair/tsg109-upgrade-stalled.ipynb)
```
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import os
import numpy as np
import random
import math
import string
import tensorflow as tf
import zipfile
from six.moves import range
from six.moves.urllib.request import urlretrieve
url = 'http://mattmahoney.net/dc/'
def maybe_download(filename, expected_bytes):
"""Download a file if not present, and make sure it's the right size."""
if not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified %s' % filename)
else:
print(statinfo.st_size)
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
filename = maybe_download('text8.zip', 31344016)
def read_data(filename):
with zipfile.ZipFile(filename) as f:
name = f.namelist()[0]
data = tf.compat.as_str(f.read(name))
return data
text = read_data(filename)
print('Data size %d' % len(text))
valid_size = 1000
valid_text = text[:valid_size]
train_text = text[valid_size:]
train_size = len(train_text)
print(train_size, train_text[:64])
print(valid_size, valid_text[:64])
vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' '
first_letter = ord(string.ascii_lowercase[0])
def char2id(char):
if char in string.ascii_lowercase:
return ord(char) - first_letter + 1
elif char == ' ':
return 0
else:
print('Unexpected character: %s' % char)
return 0
def id2char(dictid):
if dictid > 0:
return chr(dictid + first_letter - 1)
else:
return ' '
print(char2id('a'), char2id('z'), char2id(' '), char2id('ï'))
print(id2char(1), id2char(26), id2char(0))
batch_size=64
num_unrollings=10
embedding_size=27
class BatchGenerator(object):
def __init__(self, text, batch_size, num_unrollings):
self._text = text
self._text_size = len(text)
self._batch_size = batch_size
self._num_unrollings = num_unrollings
segment = self._text_size // batch_size
self._cursor = [ offset * segment for offset in range(batch_size)]
self._last_batch = self._next_batch()
def _next_batch(self):
"""Generate a single batch from the current cursor position in the data."""
batch = np.zeros(shape=(self._batch_size,2), dtype=np.int32)
for b in range(self._batch_size):
batch[b,0] = char2id(self._text[self._cursor[b]])
batch[b,1] = char2id(self._text[self._cursor[b]+1])
self._cursor[b] = (self._cursor[b] +1) % self._text_size
return batch
def next(self):
"""Generate the next array of batches from the data. The array consists of
the last batch of the previous array, followed by num_unrollings new ones.
"""
batches = [self._last_batch]
for step in range(self._num_unrollings):
batches.append(self._next_batch())
self._last_batch = batches[-1]
return batches
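# Illustrative check (added): BatchGenerator places its batch_size cursors a
# fixed stride (text_size // batch_size) apart, so each batch column reads a
# different region of the text. E.g. for a text of length 8 and batch_size 4,
# the stride is 2 and the starting cursor positions are [0, 2, 4, 6].
_segment = len("abcdefgh") // 4
assert [_offset * _segment for _offset in range(4)] == [0, 2, 4, 6]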
def characters(batches):
"""Turn a 1-hot encoding or a probability distribution over the possible
characters back into its (most likely) character representation."""
batches=list(map(list, zip(*batches)))
s=list("")
for b in batches:
ss=""
for c in b:
ss+=id2char(int(c[0]))
#ss+=id2char(int(c[1]))
s.append(ss)
return s
def characters2(probabilities):
"""Turn a 1-hot encoding or a probability distribution over the possible
characters back into its (most likely) character representation."""
s=[id2char(np.floor_divide(c,27)) for c in np.argmax(probabilities, 1)]
s+=[id2char(c-27*np.floor_divide(c,27)) for c in np.argmax(probabilities, 1)]
return s
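# Illustrative check (added): this notebook packs a character bigram into one
# id = 27*first + second (hence vocabulary_size = 27*27), and characters2
# recovers the pair with floor-divide and the remainder. With char2id('a') == 1
# and char2id('b') == 2:
_pair = 27 * 1 + 2                        # packed id for ('a', 'b') -> 29
assert _pair // 27 == 1                   # first char id  ('a')
assert _pair - 27 * (_pair // 27) == 2    # second char id ('b')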
def batches2string(batches):
"""Convert a sequence of batches back into their (most likely) string
representation."""
s = [''] * batches[0].shape[0]
for b in batches:
s = [''.join(x) for x in zip(s, characters(b))]
return s
train_batches = BatchGenerator(train_text, batch_size, num_unrollings)
valid_batches = BatchGenerator(valid_text, 1, 1)
print((np.array(train_batches.next())).shape)
print(characters(train_batches.next()))
print(characters(train_batches.next()))
print(characters(valid_batches.next()))
print(characters(valid_batches.next()))
# ==========================
# OTHER EVALUATION FUNCTIONS
# ==========================
def logprob(predictions, labels):
"""Log-probability of the true labels in a predicted batch."""
predictions[predictions < 1e-10] = 1e-10
return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0]
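# Illustrative check (added): with a single one-hot label over the prediction
# [0.7, 0.2, 0.1], logprob reduces to -log(0.7) per example (about 0.357),
# since only the true class's probability contributes.
_pred = np.array([[0.7, 0.2, 0.1]])
_lab = np.array([[1.0, 0.0, 0.0]])
_lp = np.sum(np.multiply(_lab, -np.log(_pred))) / _lab.shape[0]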
def sample_distribution(distribution):
"""Sample one element from a distribution assumed to be an array of normalized
probabilities.
"""
r = random.uniform(0, 1)
s = 0
for i in range(len(distribution)):
s += distribution[i]
if s >= r:
return i
return len(distribution) - 1
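# Illustrative check (added): sample_distribution is inverse-CDF sampling; it
# walks the running sum of probabilities until it passes a uniform draw r.
# With r = 0.85 over [0.5, 0.3, 0.2], the running sums are 0.5, 0.8, 1.0,
# so index 2 is selected.
_cum, _idx = 0.0, None
for _i, _p in enumerate([0.5, 0.3, 0.2]):
    _cum += _p
    if _cum >= 0.85:
        _idx = _i
        break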
def sample(prediction):
"""Turn a (column) prediction into 1-hot encoded samples."""
p = np.zeros(shape=[1, vocabulary_size], dtype=float)
p[0, sample_distribution(prediction[0])] = 1.0
return p
def random_distribution():
"""Generate a random column of probabilities."""
b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size])
return b/np.sum(b, 1)[:,None]
num_nodes = 64
vocabulary_size=27*27
graph = tf.Graph()
with graph.as_default():
# Parameters:
# first parameter of each gate in 1 matrix:
i_all = tf.Variable(tf.truncated_normal([embedding_size, 4*num_nodes], -0.1, 0.1))
# second parameter of each gate in 1 matrix
o_all = tf.Variable(tf.truncated_normal([num_nodes, 4*num_nodes], -0.1, 0.1))
# Input gate: input, previous output, and bias.
ib = tf.Variable(tf.zeros([1, num_nodes]))
# Forget gate: input, previous output, and bias.
fb = tf.Variable(tf.zeros([1, num_nodes]))
# Memory cell: input, state and bias.
cb = tf.Variable(tf.zeros([1, num_nodes]))
# Output gate: input, previous output, and bias.
ob = tf.Variable(tf.zeros([1, num_nodes]))
# Variables saving state across unrollings.
saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)
saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)
# Classifier weights and biases.
#embeddings
embeddings = tf.Variable(tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size],-0.1,0.1))
b = tf.Variable(tf.zeros([vocabulary_size]))
# Definition of the cell computation.
def lstm_cell(i, o, state):
"""Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf
Note that in this formulation, we omit the various connections between the
previous state and the gates."""
input_mat=tf.matmul(i,i_all)
output_mat=tf.matmul(o,o_all)
input_gate = tf.sigmoid(input_mat[:,0:num_nodes] + output_mat[:,0:num_nodes] + ib)
forget_gate = tf.sigmoid(input_mat[:,num_nodes:2*num_nodes] + output_mat[:,num_nodes:2*num_nodes] + fb)
update = input_mat[:,2*num_nodes:3*num_nodes] + output_mat[:,2*num_nodes:3*num_nodes] + cb
state = forget_gate * state + input_gate * tf.tanh(update)
output_gate = tf.sigmoid(input_mat[:,3*num_nodes:] + output_mat[:,3*num_nodes:] + ob)
return output_gate * tf.tanh(state), state
# Input data.
train_data = list()
for _ in range(num_unrollings+1):
train_data.append(tf.placeholder(tf.int32, shape=[batch_size,2]))
train_inputs = train_data[:num_unrollings]
train_labels = train_data[1:] # labels are inputs shifted by one time step.
# Unrolled LSTM loop.
outputs = list()
output = saved_output
state = saved_state
for i in train_inputs:
i_concat=27*i[:,0]+i[:,1]
embedded_i=tf.nn.embedding_lookup(embeddings,i_concat)
output, state = lstm_cell(embedded_i, output, state)
outputs.append(output)
# State saving across unrollings.
with tf.control_dependencies([saved_output.assign(output),
saved_state.assign(state)]):
# Classifier.
outputs_concat=tf.concat(outputs,0)
#try to compute similarity here as well
logits=tf.nn.xw_plus_b(outputs_concat,w,b)
print((logits).shape)
#Compute one hot encodings
label_batch=tf.concat(train_labels,0)
label_batch=27*label_batch[:,0]+label_batch[:,1]
print(label_batch.shape)
sparse_labels = tf.reshape(label_batch, [-1, 1])
derived_size = tf.shape(label_batch)[0]
indices = tf.reshape(tf.range(0, derived_size, 1), [-1, 1])
print(indices.shape,'indices.shape')
concated = tf.concat([indices, sparse_labels],1)
outshape = tf.stack([derived_size, vocabulary_size])
labels = tf.sparse_to_dense(concated, outshape, 1.0, 0.0)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))
# Optimizer.
global_step = tf.Variable(0)
learning_rate = tf.train.exponential_decay(
10.0, global_step, 5000, 0.1, staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
gradients, v = zip(*optimizer.compute_gradients(loss))
gradients, _ = tf.clip_by_global_norm(gradients, 1.25)
optimizer = optimizer.apply_gradients(
zip(gradients, v), global_step=global_step)
# Predictions.
train_prediction = tf.nn.softmax(logits)
print(train_prediction.shape)
# Sampling and validation eval: batch 1, no unrolling.
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm
#valid_dataset=np.array([i for i in range(27)])
#valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
sample_input = tf.placeholder(tf.int32, shape=[1])
saved_sample_output = tf.Variable(tf.zeros([1, num_nodes]))
saved_sample_state = tf.Variable(tf.zeros([1, num_nodes]))
reset_sample_state = tf.group(
saved_sample_output.assign(tf.zeros([1, num_nodes])),
saved_sample_state.assign(tf.zeros([1, num_nodes])))
embedded_sample=tf.nn.embedding_lookup(embeddings,sample_input)
sample_output, sample_state = lstm_cell(embedded_sample, saved_sample_output, saved_sample_state)
with tf.control_dependencies([saved_sample_output.assign(sample_output),
saved_sample_state.assign(sample_state)]):
sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b))
#similarity = tf.matmul(sample_prediction, tf.transpose(normalized_embeddings))
print(sample_prediction.shape)
num_steps = 50001
summary_frequency = 500
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print('Initialized')
mean_loss = 0
for step in range(num_steps):
batches = train_batches.next()
feed_dict = dict()
for i in range(num_unrollings + 1):
feed_dict[train_data[i]] = batches[i]
#print((feed_dict[train_data[i]]).shape)
_, l, predictions, lr,train_lab = session.run([optimizer, loss, train_prediction, learning_rate,train_labels], feed_dict=feed_dict)
mean_loss += l
if step % summary_frequency == 0:
if step > 0:
mean_loss = mean_loss / summary_frequency
# The mean loss is an estimate of the loss over the last few batches.
print('Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr))
mean_loss = 0
#print(train_lab)
#print(labels.shape[0])
#print('Minibatch perplexity: ',np.exp(logprob(predictions, labels)))
if step % (summary_frequency * 2) == 0:
# Generate some samples.
print('=' * 80)
for _ in range(5):
feed = np.zeros(shape=(1,), dtype=np.int32)
feed[0,] =np.random.randint(0,729)
sentence=id2char(np.floor_divide(feed[0],27))
sentence+=id2char(feed[0]-27*np.floor_divide(feed[0],27))
reset_sample_state.run()
for _ in range(50):
prediction=sample_prediction.eval({sample_input:feed})
k = sample(prediction)
k=characters2(k)
#print(k)
feed = np.zeros(shape=(1,), dtype=np.int32)
feed[0,] = 27*char2id(k[0])+char2id(k[1])
sentence += k[0]
#sentence+=k[1]
#feed = np.zeros(shape=(1,), dtype=np.int32)
#feed[0,] = np.argmax(prediction)
#print(feed.shape)
#sentence += id2char(feed[0,])
print(sentence)
print('=' * 80)
# Measure validation set perplexity.
reset_sample_state.run()
valid_logprob = 0
#for _ in range(valid_size):
# b = valid_batches.next()
#predictions = sample_prediction.eval({sample_input: b[0]})
#valid_logprob = valid_logprob + logprob(predictions, b[1])
#print('Validation set perplexity: %.2f' % float(np.exp(valid_logprob / valid_size)))
```
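The inverse-CDF sampling used by `sample_distribution` above can be checked in isolation. Here is a minimal standalone sketch (synthetic distribution, not model output) that verifies the empirical frequencies roughly match the probabilities:

```python
import random

def sample_distribution(distribution):
    """Sample an index from a list of normalized probabilities (inverse CDF)."""
    r = random.uniform(0, 1)
    s = 0.0
    for i, p in enumerate(distribution):
        s += p
        if s >= r:
            return i
    return len(distribution) - 1

# Draw many samples from a skewed distribution and inspect the frequencies.
random.seed(0)
dist = [0.1, 0.2, 0.7]
counts = [0, 0, 0]
for _ in range(10000):
    counts[sample_distribution(dist)] += 1
print([c / 10000 for c in counts])  # each entry should be close to dist
```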
| github_jupyter |
## February and April 2020 precipitation anomalies
In this notebook, we analyze precipitation anomalies for February and April 2020, two months with strongly contrasting weather. We use the EOBS dataset.
### Import packages
```
##This is so variables get printed within jupyter
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
##import packages
import os
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
import cartopy
import cartopy.crs as ccrs
import matplotlib.ticker as mticker
os.chdir(os.path.abspath('../../')) # Change the working directory to UNSEEN-open
os.getcwd() #print the working directory
### Set plot font size
plt.rcParams['font.size'] = 10 ## change font size
```
### Load EOBS
I downloaded EOBS (from 1950 - 2019) and the most recent EOBS data (2020) [here](https://surfobs.climate.copernicus.eu/dataaccess/access_eobs.php). Note that you have to register as an E-OBS user.
The data has a daily timestep. I resample it to monthly averages in mm/day; I chose not to use the total monthly precipitation because of leap days.
```
EOBS = xr.open_dataset('../UK_example/EOBS/rr_ens_mean_0.25deg_reg_v20.0e.nc') ## open the data
EOBS = EOBS.resample(time='1m').mean() ## Monthly averages
# EOBS = EOBS.sel(time=EOBS['time.month'] == 2) ## Select only February
EOBS
```
Here I define the attributes that xarray uses when plotting:
```
EOBS['rr'].attrs = {'long_name': 'rainfall', ##Define the name
'units': 'mm/day', ## unit
'standard_name': 'thickness_of_rainfall_amount'} ## original name, not used
EOBS['rr'].mean('time').plot() ## and show the 1950-2019 average precipitation
```
The 2020 data file is separate and needs the same preprocessing:
```
EOBS2020 = xr.open_dataset('../UK_example/EOBS/rr_0.25deg_day_2020_grid_ensmean.nc.1') #open
EOBS2020 = EOBS2020.resample(time='1m').mean() #Monthly mean
EOBS2020['rr'].sel(time='2020-04').plot() #show map
EOBS2020 ## display dataset
```
### Plot the 2020 event
I calculate the anomaly (deviation from the mean in mm/d) and divide this by the standard deviation to obtain the standardized anomalies.
```
EOBS2020_anomaly = EOBS2020['rr'].groupby('time.month') - EOBS['rr'].groupby('time.month').mean('time')
EOBS2020_anomaly
EOBS2020_sd_anomaly = EOBS2020_anomaly.groupby('time.month') / EOBS['rr'].groupby('time.month').std('time')
EOBS2020_sd_anomaly.attrs = {
'long_name': 'Monthly precipitation standardized anomaly',
'units': '-'
}
EOBS2020_sd_anomaly
```
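The cell above standardizes per calendar month: subtract the climatological mean, then divide by the climatological standard deviation. The same computation for a single grid cell can be sketched in plain NumPy, using synthetic numbers rather than EOBS data:

```python
import numpy as np

# Synthetic February precipitation climatology for one grid cell (mm/day), 1950-2019
rng = np.random.default_rng(42)
clim = rng.gamma(shape=2.0, scale=1.5, size=70)

feb_2020 = 6.0  # hypothetical observed monthly mean for February 2020
anomaly = feb_2020 - clim.mean()        # deviation from the mean, in mm/day
sd_anomaly = anomaly / clim.std()       # standardized anomaly, unitless
print(f"anomaly = {anomaly:.2f} mm/day, standardized = {sd_anomaly:.2f}")
```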
I select February and April (tips on how to select this are appreciated)
```
EOBS2020_sd_anomaly
# EOBS2020_sd_anomaly.sel(time = ['2020-02','2020-04']) ## Dont know how to select this by label?
EOBS2020_sd_anomaly[[1,3],:,:] ## Dont know how to select this by label?
```
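One way to do this label-based selection is a month mask: in xarray the same idea should work as `EOBS2020_sd_anomaly.sel(time=EOBS2020_sd_anomaly['time.month'].isin([2, 4]))` (an assumption, not tested on the EOBS files). The underlying month-mask idea, sketched with a plain pandas `DatetimeIndex` of the month-end stamps that `resample(time='1m')` produces:

```python
import pandas as pd

# Month-end timestamps like those produced by resample(time='1m')
times = pd.DatetimeIndex(["2020-01-31", "2020-02-29", "2020-03-31",
                          "2020-04-30", "2020-05-31"])
mask = times.month.isin([2, 4])  # boolean mask selecting February and April
print(times[mask])
```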
And plot using cartopy!
```
EOBS_plots = EOBS2020_sd_anomaly[[1, 3], :, :].plot(
transform=ccrs.PlateCarree(),
robust=True,
extend = 'both',
col='time',
cmap=plt.cm.twilight_shifted_r,
subplot_kws={'projection': ccrs.EuroPP()})
for ax in EOBS_plots.axes.flat:
ax.add_feature(cartopy.feature.BORDERS, linestyle=':')
ax.coastlines(resolution='50m')
gl = ax.gridlines(crs=ccrs.PlateCarree(),
draw_labels=False,
linewidth=1,
color='gray',
alpha=0.5,
linestyle='--')
# plt.savefig('graphs/February_April_2020_precipAnomaly.png', dpi=300)
```
| github_jupyter |
# Collaboration Patterns By Year (International, Domestic, Internal)
Using the count capability of the API, Dimensions allows you to quickly identify international, domestic, and internal collaboration.
This notebook shows how to quickly identify international, domestic, and internal collaboration using the [Organizations data source](https://docs.dimensions.ai/dsl/datasource-organizations.html) and the [Publications data source](https://docs.dimensions.ai/dsl/datasource-publications.html) available via the [Dimensions Analytics API](https://docs.dimensions.ai/dsl/).
## Prerequisites
Please install the latest versions of these libraries to run this notebook.
```
!pip install dimcli plotly -U --quiet
#
# load libraries
import dimcli
from dimcli.utils import *
import json, sys, time
import pandas as pd
import plotly.express as px # plotly>=4.8.1
if not 'google.colab' in sys.modules:
# make js dependencies local / needed by html exports
from plotly.offline import init_notebook_mode
init_notebook_mode(connected=True)
print("==\nLogging in..")
# https://digital-science.github.io/dimcli/getting-started.html#authentication
ENDPOINT = "https://app.dimensions.ai"
if 'google.colab' in sys.modules:
import getpass
KEY = getpass.getpass(prompt='API Key: ')
dimcli.login(key=KEY, endpoint=ENDPOINT)
else:
KEY = ""
dimcli.login(key=KEY, endpoint=ENDPOINT)
dsl = dimcli.Dsl()
```
## 1. Lookup the University that you are interested in
```
dsl.query("""
search organizations for "melbourne" return organizations
""").as_dataframe()
institution = "grid.1008.9"
```
## 2. Publications output by year
```
allpubs = dsl.query(f"""
search publications
where research_orgs.id = "{institution}"
and type="article"
and year > 2010
return year
""").as_dataframe()
allpubs.columns = ['year', 'pubs']
px.bar(allpubs, x="year", y="pubs")
```
## 3. International publications
```
international = dsl.query(f"""
search publications
where research_orgs.id = "{institution}"
and type="article"
and count(research_org_countries) > 1
and year > 2010
return year
""").as_dataframe()
international.columns = ['year', 'international_count']
px.bar(international, x="year", y="international_count")
```
## 4. Domestic
```
domestic = dsl.query(f"""
search publications
where research_orgs.id = "{institution}"
and type="article"
and count(research_org_countries) = 1
and year > 2010
return year
""").as_dataframe()
domestic.columns = ['year', 'domestic_count']
px.bar(domestic, x="year", y="domestic_count")
```
## 5. Internal
```
internal = dsl.query(f"""
search publications
where research_orgs.id = "{institution}"
and type="article"
and count(research_orgs) = 1
and year > 2010
return year
""").as_dataframe()
internal.columns = ['year', 'internal_count']
px.bar(internal, x="year", y="internal_count")
```
## 6. Joining up All metrics together
```
jdf = allpubs.set_index('year'). \
join(international.set_index('year')). \
join(domestic.set_index('year')). \
join(internal.set_index('year'))
jdf
px.bar(jdf, title="University of Melbourne: publications collaboration")
```
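Once the counts are joined, the absolute numbers can also be turned into collaboration shares by dividing each count column by the yearly total. A small sketch with made-up counts (not real Dimensions data; the column names follow the cells above):

```python
import pandas as pd

# Toy publication counts per year (hypothetical values, not API output)
jdf = pd.DataFrame({
    "pubs": [100, 120],
    "international_count": [40, 60],
    "domestic_count": [45, 45],
    "internal_count": [15, 15],
}, index=pd.Index([2019, 2020], name="year"))

# Divide each collaboration column by the total number of publications
shares = jdf[["international_count", "domestic_count", "internal_count"]].div(jdf["pubs"], axis=0)
print(shares.round(2))
```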
## 7. How does this compare to Australia?
```
auallpubs = dsl.query("""
search publications
where research_org_countries.name= "Australia"
and type="article"
and year > 2010
return year
""").as_dataframe()
auallpubs.columns = ['year', 'all_count']
auintpubs = dsl.query("""
search publications
where research_org_countries.name= "Australia"
and type="article"
and year > 2010
and count(research_org_countries) > 1
return year
""").as_dataframe()
auintpubs.columns = ['year', 'all_int_count']
audompubs = dsl.query("""
search publications
where research_org_countries.name= "Australia"
and type="article"
and year > 2010
and count(research_org_countries) = 1
return year
""").as_dataframe()
audompubs.columns = ['year', 'all_dom_count']
auinternalpubs = dsl.query("""
search publications
where
research_org_countries.name= "Australia"
and count(research_orgs) = 1
and type="article"
and year > 2010
return year
""").as_dataframe()
auinternalpubs.columns = ['year', 'all_internal_count']
audf = auallpubs.set_index('year'). \
join(auintpubs.set_index('year')). \
join(audompubs.set_index('year')). \
join(auinternalpubs.set_index('year')). \
sort_values(by=['year'])
px.bar(audf, title="Australia: publications collaboration")
```
## 8. How does this compare to a different Institution (University of Toronto)?
```
institution = "grid.17063.33"
allpubs = dsl.query(f"""
search publications
where research_orgs.id = "{institution}"
and type="article"
and year > 2010
return year
""").as_dataframe()
allpubs.columns = ['year', 'pubs']
international = dsl.query(f"""
search publications
where research_orgs.id = "{institution}"
and type="article"
and count(research_org_countries) > 1
and year > 2010
return year
""").as_dataframe()
international.columns = ['year', 'international_count']
domestic = dsl.query(f"""
search publications
where research_orgs.id = "{institution}"
and type="article"
and count(research_org_countries) = 1
and year > 2010
return year
""").as_dataframe()
domestic.columns = ['year', 'domestic_count']
internal = dsl.query(f"""
search publications
where research_orgs.id = "{institution}"
and type="article"
and count(research_orgs) = 1
and year > 2010
return year
""").as_dataframe()
internal.columns = ['year', 'internal_count']
jdf = allpubs.set_index('year'). \
join(international.set_index('year')). \
join(domestic.set_index('year')). \
join(internal.set_index('year'))
px.bar(jdf, title="Univ. of Toronto: publications collaboration")
```
---
## Want to learn more?
Check out the [Dimensions API Lab](https://api-lab.dimensions.ai/) website, which contains many tutorials and reusable Jupyter notebooks for scholarly data analytics.
| github_jupyter |
```
import warnings
warnings.filterwarnings("ignore")
import sys
import itertools
from keras.layers import Input, Dense, Reshape, Flatten
from keras import layers, initializers
from keras.models import Model, load_model
import keras.backend as K
import numpy as np
from seqtools import SequenceTools as ST
from gfp_gp import SequenceGP
from util import AA, AA_IDX
from util import build_vae
from sklearn.model_selection import train_test_split, ShuffleSplit
from keras.callbacks import EarlyStopping
import matplotlib.pyplot as plt
import pandas as pd
from gan import WGAN
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C
import scipy.stats
from scipy.stats import norm
from scipy.optimize import minimize
from keras.utils.generic_utils import get_custom_objects
from util import one_hot_encode_aa, partition_data, get_balaji_predictions, get_samples
from util import convert_idx_array_to_aas, build_pred_vae_model, get_experimental_X_y
from util import get_gfp_X_y_aa
from losses import neg_log_likelihood
import json
plt.rcParams['figure.dpi'] = 300
class color:
PURPLE = '\033[95m'
CYAN = '\033[96m'
DARKCYAN = '\033[36m'
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
END = '\033[0m'
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
def contain_tf_gpu_mem_usage() :
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
set_session(sess)
contain_tf_gpu_mem_usage()
#Load GFP training dataset
it = 0
TRAIN_SIZE = 5000
train_size_str = "%ik" % (TRAIN_SIZE/1000)
num_models = [1, 5, 20][it]
RANDOM_STATE = it + 1
X_train, y_train, gt_train = get_experimental_X_y(random_state=RANDOM_STATE, train_size=TRAIN_SIZE)
#Print the 50th, 80th, 95th and 100th percentile of oracle scores
print(np.percentile(y_train, 50))
print(np.percentile(y_train, 80))
print(np.percentile(y_train, 95))
print(np.percentile(y_train, 100))
def build_model(M):
x = Input(shape=(M, 20,))
y = Flatten()(x)
y = Dense(50, activation='elu')(y)
y = Dense(2)(y)
model = Model(inputs=x, outputs=y)
return model
def evaluate_ground_truth(X_aa, ground_truth, save_file=None):
y_gt = ground_truth.predict(X_aa, print_every=100000)[:, 0]
if save_file is not None:
np.save(save_file, y_gt)
def train_and_save_oracles(X_train, y_train, n=10, suffix='', batch_size=100):
for i in range(n):
model = build_model(X_train.shape[1])
model.compile(optimizer='adam',
loss=neg_log_likelihood,
)
early_stop = EarlyStopping(monitor='val_loss',
min_delta=0,
patience=5,
verbose=1)
model.fit(X_train, y_train,
epochs=100,
batch_size=batch_size,
validation_split=0.1,
callbacks=[early_stop],
verbose=2)
model.save("models/oracle_%i%s.h5" % (i, suffix))
import editdistance
def compute_edit_distance(seqs, opt_len=None) :
shuffle_index = np.arange(len(seqs))
shuffle_index = shuffle_index[::-1]
seqs_shuffled = [seqs[shuffle_index[i]] for i in range(len(seqs))]
edit_distances = np.ravel([float(editdistance.eval(seq_1, seq_2)) for seq_1, seq_2 in zip(seqs, seqs_shuffled)])
if opt_len is not None :
edit_distances /= opt_len
return edit_distances
def weighted_ml_opt(X_train, oracles, ground_truth, vae_0, weights_type='dbas',
LD=20, iters=20, samples=500, homoscedastic=False, homo_y_var=0.1,
quantile=0.95, verbose=False, alpha=1, train_gt_evals=None,
cutoff=1e-6, it_epochs=10, enc1_units=50):
assert weights_type in ['cbas', 'dbas','rwr', 'cem-pi', 'fbvae']
L = X_train.shape[1]
vae = build_vae(latent_dim=LD,
n_tokens=20, seq_length=L,
enc1_units=enc1_units)
traj = np.zeros((iters, 7))
oracle_samples = np.zeros((iters, samples))
gt_samples = np.zeros((iters, samples))
edit_distance_samples = np.zeros((iters, samples))
oracle_max_seq = None
oracle_max = -np.inf
gt_of_oracle_max = -np.inf
y_star = -np.inf
# FOR REVIEW:
all_seqs = pd.DataFrame(0, index=range(int((iters-1)*samples)), columns=['seq', 'val'])
l_ = 0
for t in range(iters):
### Take Samples ###
zt = np.random.randn(samples, LD)
if t > 0:
Xt_p = vae.decoder_.predict(zt)
Xt = get_samples(Xt_p)
else:
Xt = X_train
### Evaluate ground truth and oracle ###
yt, yt_var = get_balaji_predictions(oracles, Xt)
if homoscedastic:
yt_var = np.ones_like(yt) * homo_y_var
Xt_aa = np.argmax(Xt, axis=-1)
if t == 0 and train_gt_evals is not None:
yt_gt = train_gt_evals
else:
yt_gt = ground_truth.predict(Xt_aa, print_every=1000000)[:, 0]
### Calculate weights for different schemes ###
if t > 0:
if weights_type == 'cbas':
log_pxt = np.sum(np.log(Xt_p) * Xt, axis=(1, 2))
X0_p = vae_0.decoder_.predict(zt)
log_px0 = np.sum(np.log(X0_p) * Xt, axis=(1, 2))
w1 = np.exp(log_px0-log_pxt)
y_star_1 = np.percentile(yt, quantile*100)
if y_star_1 > y_star:
y_star = y_star_1
w2= scipy.stats.norm.sf(y_star, loc=yt, scale=np.sqrt(yt_var))
weights = w1*w2
elif weights_type == 'cem-pi':
pi = scipy.stats.norm.sf(max_train_gt, loc=yt, scale=np.sqrt(yt_var))
pi_thresh = np.percentile(pi, quantile*100)
weights = (pi > pi_thresh).astype(int)
elif weights_type == 'dbas':
y_star_1 = np.percentile(yt, quantile*100)
if y_star_1 > y_star:
y_star = y_star_1
weights = scipy.stats.norm.sf(y_star, loc=yt, scale=np.sqrt(yt_var))
elif weights_type == 'rwr':
weights = np.exp(alpha*yt)
weights /= np.sum(weights)
else:
weights = np.ones(yt.shape[0])
max_train_gt = np.max(yt_gt)
yt_max_idx = np.argmax(yt)
yt_max = yt[yt_max_idx]
if yt_max > oracle_max:
oracle_max = yt_max
try:
oracle_max_seq = convert_idx_array_to_aas(Xt_aa[yt_max_idx:yt_max_idx+1])[0]
except IndexError:
print(Xt_aa[yt_max_idx-1:yt_max_idx])
gt_of_oracle_max = yt_gt[yt_max_idx]
### Record and print results ##
if t == 0:
rand_idx = np.random.randint(0, len(yt), samples)
oracle_samples[t, :] = yt[rand_idx]
gt_samples[t, :] = yt_gt[rand_idx]
edit_distance_samples[t, :] = compute_edit_distance(convert_idx_array_to_aas(Xt_aa[rand_idx, ...]))
if t > 0:
oracle_samples[t, :] = yt
gt_samples[t, :] = yt_gt
edit_distance_samples[t, :] = compute_edit_distance(convert_idx_array_to_aas(Xt_aa))
traj[t, 0] = np.max(yt_gt)
traj[t, 1] = np.mean(yt_gt)
traj[t, 2] = np.std(yt_gt)
traj[t, 3] = np.max(yt)
traj[t, 4] = np.mean(yt)
traj[t, 5] = np.std(yt)
traj[t, 6] = np.mean(yt_var)
if verbose:
print(weights_type.upper(), t, traj[t, 0], color.BOLD + str(traj[t, 1]) + color.END,
traj[t, 2], traj[t, 3], color.BOLD + str(traj[t, 4]) + color.END, traj[t, 5], traj[t, 6], np.median(edit_distance_samples[t, :]))
### Train model ###
if t == 0:
vae.encoder_.set_weights(vae_0.encoder_.get_weights())
vae.decoder_.set_weights(vae_0.decoder_.get_weights())
vae.vae_.set_weights(vae_0.vae_.get_weights())
else:
cutoff_idx = np.where(weights < cutoff)
Xt = np.delete(Xt, cutoff_idx, axis=0)
yt = np.delete(yt, cutoff_idx, axis=0)
weights = np.delete(weights, cutoff_idx, axis=0)
vae.fit([Xt], [Xt, np.zeros(Xt.shape[0])],
epochs=it_epochs,
batch_size=10,
shuffle=False,
sample_weight=[weights, weights],
verbose=0)
max_dict = {'oracle_max' : oracle_max,
'oracle_max_seq': oracle_max_seq,
'gt_of_oracle_max': gt_of_oracle_max}
return traj, oracle_samples, gt_samples, edit_distance_samples, max_dict
def fb_opt(X_train, oracles, ground_truth, vae_0, weights_type='fbvae',
LD=20, iters=20, samples=500,
quantile=0.8, verbose=False, train_gt_evals=None,
it_epochs=10, enc1_units=50):
assert weights_type in ['fbvae']
L = X_train.shape[1]
vae = build_vae(latent_dim=LD,
n_tokens=20, seq_length=L,
enc1_units=enc1_units)
traj = np.zeros((iters, 7))
oracle_samples = np.zeros((iters, samples))
gt_samples = np.zeros((iters, samples))
edit_distance_samples = np.zeros((iters, samples))
oracle_max_seq = None
oracle_max = -np.inf
gt_of_oracle_max = -np.inf
y_star = - np.inf
for t in range(iters):
### Take Samples and evaluate ground truth and oracle ##
zt = np.random.randn(samples, LD)
if t > 0:
Xt_sample_p = vae.decoder_.predict(zt)
Xt_sample = get_samples(Xt_sample_p)
yt_sample, _ = get_balaji_predictions(oracles, Xt_sample)
Xt_aa_sample = np.argmax(Xt_sample, axis=-1)
yt_gt_sample = ground_truth.predict(Xt_aa_sample, print_every=1000000)[:, 0]
else:
Xt = X_train
yt, _ = get_balaji_predictions(oracles, Xt)
Xt_aa = np.argmax(Xt, axis=-1)
fb_thresh = np.percentile(yt, quantile*100)
if train_gt_evals is not None:
yt_gt = train_gt_evals
else:
yt_gt = ground_truth.predict(Xt_aa, print_every=1000000)[:, 0]
### Calculate threshold ###
if t > 0:
threshold_idx = np.where(yt_sample >= fb_thresh)[0]
n_top = len(threshold_idx)
sample_arrs = [Xt_sample, yt_sample, yt_gt_sample, Xt_aa_sample]
full_arrs = [Xt, yt, yt_gt, Xt_aa]
for l in range(len(full_arrs)):
sample_arr = sample_arrs[l]
full_arr = full_arrs[l]
sample_top = sample_arr[threshold_idx]
full_arr = np.concatenate([sample_top, full_arr])
full_arr = np.delete(full_arr, range(full_arr.shape[0]-n_top, full_arr.shape[0]), axis=0)
full_arrs[l] = full_arr
Xt, yt, yt_gt, Xt_aa = full_arrs
yt_max_idx = np.argmax(yt)
yt_max = yt[yt_max_idx]
if yt_max > oracle_max:
oracle_max = yt_max
try:
oracle_max_seq = convert_idx_array_to_aas(Xt_aa[yt_max_idx:yt_max_idx+1])[0]
except IndexError:
print(Xt_aa[yt_max_idx-1:yt_max_idx])
gt_of_oracle_max = yt_gt[yt_max_idx]
### Record and print results ##
rand_idx = np.random.randint(0, len(yt), samples)
oracle_samples[t, :] = yt[rand_idx]
gt_samples[t, :] = yt_gt[rand_idx]
edit_distance_samples[t, :] = compute_edit_distance(convert_idx_array_to_aas(Xt_aa[rand_idx, ...]))
traj[t, 0] = np.max(yt_gt)
traj[t, 1] = np.mean(yt_gt)
traj[t, 2] = np.std(yt_gt)
traj[t, 3] = np.max(yt)
traj[t, 4] = np.mean(yt)
traj[t, 5] = np.std(yt)
if t > 0:
traj[t, 6] = n_top
else:
traj[t, 6] = 0
if verbose:
print(weights_type.upper(), t, traj[t, 0], color.BOLD + str(traj[t, 1]) + color.END,
traj[t, 2], traj[t, 3], color.BOLD + str(traj[t, 4]) + color.END, traj[t, 5], traj[t, 6], np.median(edit_distance_samples[t, :]))
### Train model ###
if t == 0:
vae.encoder_.set_weights(vae_0.encoder_.get_weights())
vae.decoder_.set_weights(vae_0.decoder_.get_weights())
vae.vae_.set_weights(vae_0.vae_.get_weights())
else:
vae.fit([Xt], [Xt, np.zeros(Xt.shape[0])],
epochs=1,
batch_size=10,
shuffle=False,
verbose=0)
max_dict = {'oracle_max' : oracle_max,
'oracle_max_seq': oracle_max_seq,
'gt_of_oracle_max': gt_of_oracle_max}
return traj, oracle_samples, gt_samples, edit_distance_samples, max_dict
def train_experimental_oracles():
TRAIN_SIZE = 5000
train_size_str = "%ik" % (TRAIN_SIZE/1000)
i = 1
num_models = [1, 5, 20]
for i in range(len(num_models)):
RANDOM_STATE = i+1
nm = num_models[i]
X_train, y_train, _ = get_experimental_X_y(random_state=RANDOM_STATE, train_size=TRAIN_SIZE)
suffix = '_%s_%i_%i' % (train_size_str, nm, RANDOM_STATE)
train_and_save_oracles(X_train, y_train, batch_size=10, n=nm, suffix=suffix)
def train_experimental_vaes(i_list=[0, 2]):
TRAIN_SIZE = 5000
train_size_str = "%ik" % (TRAIN_SIZE/1000)
suffix = '_%s' % train_size_str
for i in i_list:
RANDOM_STATE = i + 1
X_train, _, _ = get_experimental_X_y(random_state=RANDOM_STATE, train_size=TRAIN_SIZE)
vae_0 = build_vae(latent_dim=20,
n_tokens=20,
seq_length=X_train.shape[1],
enc1_units=50)
vae_0.fit([X_train], [X_train, np.zeros(X_train.shape[0])],
epochs=100,
batch_size=10,
verbose=2)
vae_0.encoder_.save_weights("models/vae_0_encoder_weights%s_%i.h5"% (suffix, RANDOM_STATE))
vae_0.decoder_.save_weights("models/vae_0_decoder_weights%s_%i.h5"% (suffix, RANDOM_STATE))
vae_0.vae_.save_weights("models/vae_0_vae_weights%s_%i.h5"% (suffix, RANDOM_STATE))
def run_experimental_weighted_ml(it, repeat_start=0, repeats=3):
assert it in [0, 1, 2]
TRAIN_SIZE = 5000
train_size_str = "%ik" % (TRAIN_SIZE/1000)
num_models = [1, 5, 20][it]
RANDOM_STATE = it + 1
X_train, y_train, gt_train = get_experimental_X_y(random_state=RANDOM_STATE, train_size=TRAIN_SIZE)
vae_suffix = '_%s_%i' % (train_size_str, RANDOM_STATE)
oracle_suffix = '_%s_%i_%i' % (train_size_str, num_models, RANDOM_STATE)
vae_0 = build_vae(latent_dim=20,
n_tokens=20,
seq_length=X_train.shape[1],
enc1_units=50)
vae_0.encoder_.load_weights("models/vae_0_encoder_weights%s.h5" % vae_suffix)
vae_0.decoder_.load_weights("models/vae_0_decoder_weights%s.h5"% vae_suffix)
vae_0.vae_.load_weights("models/vae_0_vae_weights%s.h5"% vae_suffix)
ground_truth = SequenceGP(load=True, load_prefix="data/gfp_gp")
loss = neg_log_likelihood
get_custom_objects().update({"neg_log_likelihood": loss})
oracles = [build_model(X_train.shape[1]) for i in range(num_models)]
for i in range(num_models) :
oracles[i].load_weights("models/oracle_%i%s.h5" % (i, oracle_suffix))
test_kwargs = [
{'weights_type':'cbas', 'quantile': 1},
{'weights_type':'rwr', 'alpha': 20},
{'weights_type':'dbas', 'quantile': 0.95},
{'weights_type':'cem-pi', 'quantile': 0.8},
{'weights_type': 'fbvae', 'quantile': 0.8}
]
base_kwargs = {
'homoscedastic': False,
'homo_y_var': 0.01,
'train_gt_evals':gt_train,
'samples':100,
'cutoff':1e-6,
'it_epochs':10,
'verbose':True,
'LD': 20,
'enc1_units':50,
'iters': 50
}
if num_models==1:
base_kwargs['homoscedastic'] = True
base_kwargs['homo_y_var'] = np.mean((get_balaji_predictions(oracles, X_train)[0] - y_train)**2)
for k in range(repeat_start, repeats):
for j in range(len(test_kwargs)):
test_name = test_kwargs[j]['weights_type']
suffix = "_%s_%i_%i_w_edit_distances" % (train_size_str, RANDOM_STATE, k)
if test_name == 'fbvae':
if base_kwargs['iters'] > 100:
suffix += '_long'
print(suffix)
kwargs = {}
kwargs.update(test_kwargs[j])
kwargs.update(base_kwargs)
[kwargs.pop(k) for k in ['homoscedastic', 'homo_y_var', 'cutoff', 'it_epochs']]
test_traj, test_oracle_samples, test_gt_samples, test_edit_distance_samples, test_max = fb_opt(np.copy(X_train), oracles, ground_truth, vae_0, **kwargs)
else:
if base_kwargs['iters'] > 100:
suffix += '_long'
kwargs = {}
kwargs.update(test_kwargs[j])
kwargs.update(base_kwargs)
test_traj, test_oracle_samples, test_gt_samples, test_edit_distance_samples, test_max = weighted_ml_opt(np.copy(X_train), oracles, ground_truth, vae_0, **kwargs)
np.save('results/%s_traj%s.npy' %(test_name, suffix), test_traj)
np.save('results/%s_oracle_samples%s.npy' % (test_name, suffix), test_oracle_samples)
np.save('results/%s_gt_samples%s.npy'%(test_name, suffix), test_gt_samples )
np.save('results/%s_edit_distance_samples%s.npy'%(test_name, suffix), test_edit_distance_samples )
with open('results/%s_max%s.json'% (test_name, suffix), 'w') as outfile:
json.dump(test_max, outfile)
train_experimental_oracles()
train_experimental_vaes()
run_experimental_weighted_ml(0, repeat_start=0, repeats=1)
run_experimental_weighted_ml(1, repeat_start=0, repeats=1)
run_experimental_weighted_ml(2, repeat_start=0, repeats=1)
run_experimental_weighted_ml(0, repeat_start=1, repeats=3)
run_experimental_weighted_ml(1, repeat_start=1, repeats=3)
run_experimental_weighted_ml(2, repeat_start=1, repeats=3)
```
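For reference, the CbAS weights computed in `weighted_ml_opt` are the product of a density ratio w1 = p0(x)/pt(x) and a survival probability w2 = P(y >= y*). A toy numeric sketch with hand-picked values (not data from the experiments), using a hand-rolled normal survival function equivalent to `scipy.stats.norm.sf`:

```python
import math

def norm_sf(x, loc, scale):
    """Normal survival function P(Y >= x), same as scipy.stats.norm.sf."""
    return 0.5 * math.erfc((x - loc) / (scale * math.sqrt(2.0)))

# Toy oracle predictions for three candidate sequences (assumed values)
yt = [1.0, 2.0, 3.0]             # predicted means
yt_sd = [0.5, 0.5, 0.5]          # predicted standard deviations
log_px0 = [-10.0, -12.0, -15.0]  # log-density under the original prior VAE
log_pxt = [-10.0, -11.0, -12.0]  # log-density under the current search model
y_star = 2.5                     # quantile threshold of the oracle scores

weights = []
for mu, sd, l0, lt in zip(yt, yt_sd, log_px0, log_pxt):
    w1 = math.exp(l0 - lt)        # density ratio keeps samples near the prior
    w2 = norm_sf(y_star, mu, sd)  # probability the true score exceeds y_star
    weights.append(w1 * w2)
print([round(w, 4) for w in weights])
```

Note how the middle candidate gets the largest weight: it balances a decent chance of exceeding y* against staying plausible under the prior.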
| github_jupyter |
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Datasets/Terrain/srtm_landforms.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/Terrain/srtm_landforms.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Datasets/Terrain/srtm_landforms.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/Terrain/srtm_landforms.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
```
# %%capture
# !pip install earthengine-api
# !pip install geehydro
```
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()`
if you are running this notebook for the first time or if you are getting an authentication error.
```
# ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
dataset = ee.Image('CSP/ERGo/1_0/Global/SRTM_landforms')
landforms = dataset.select('constant')
landformsVis = {
'min': 11.0,
'max': 42.0,
'palette': [
'141414', '383838', '808080', 'EBEB8F', 'F7D311', 'AA0000', 'D89382',
'DDC9C9', 'DCCDCE', '1C6330', '68AA63', 'B5C98E', 'E1F0E5', 'a975ba',
'6f198c'
],
}
Map.setCenter(-105.58, 40.5498, 11)
Map.addLayer(landforms, landformsVis, 'Landforms')
```
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
```
from bokeh.io import output_notebook, show, reset_output
import numpy as np
output_notebook()
from IPython.display import IFrame
IFrame('https://demo.bokehplots.com/apps/sliders', width=900, height=500)
```
### Basic scatterplot
```
from bokeh.io import output_notebook, show
from bokeh.plotting import figure
# create a new plot with default tools, using figure
p = figure(plot_width=400, plot_height=400)
# add a circle renderer with a size, color, and alpha
p.circle([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], size=15, line_color="navy", fill_color="orange", fill_alpha=0.5)
show(p) # show the results
```
### Interactive visualization using sliders
```
from bokeh.layouts import row, column
from bokeh.models import CustomJS, ColumnDataSource, Slider
import matplotlib.pyplot as plt
x = [x*0.005 for x in range(0, 201)]
output_notebook()
source = ColumnDataSource(data=dict(x=x, y=x))
plot = figure(plot_width=400, plot_height=400)
plot.scatter('x', 'y', source=source, line_width=3, line_alpha=0.6)
slider = Slider(start=0.1, end=6, value=1, step=.1, title="power")
update_curve = CustomJS(args=dict(source=source, slider=slider), code="""
var data = source.get('data');
var f = slider.value;
x = data['x']
y = data['y']
for (i = 0; i < x.length; i++) {
y[i] = Math.pow(x[i], f)
}
source.change.emit();
""")
slider.js_on_change('value', update_curve)
show(row(slider, plot))
#scatterplot using sliders
x = [x*0.005 for x in range(0, 21)]
output_notebook()
source = ColumnDataSource(data=dict(x=x, y=x))
plot = figure(plot_width=400, plot_height=400)
plot.scatter('x', 'y', source=source, line_width=3, line_alpha=0.6)
slider = Slider(start=0.1, end=6, value=1, step=.1, title="power")
update_curve = CustomJS(args=dict(source=source, slider=slider), code="""
var data = source.get('data');
var f = slider.value;
x = data['x']
y = data['y']
for (i = 0; i < x.length; i++) {
y[i] = Math.pow(x[i], f)
}
source.change.emit();
""")
slider.js_on_change('value', update_curve)
print(source.data['y'])
show(row(slider, plot))
#Making equivalent of diffusion
Arr = np.random.rand(2,100)
source = ColumnDataSource(data=dict(x=Arr[0,], y=Arr[1,]))
plot = figure(plot_width=400, plot_height=400)
plot.scatter('x', 'y', source=source, line_width=3, line_alpha=0.6)
slider = Slider(start=1, end=8, value=1, step=1, title="Diffusion_steps")
slider2 = Slider(start=1, end=8, value=1, step=1, title="Anti_Diffusion_steps")
update_curve = CustomJS(args=dict(source=source, slider=slider), code="""
var data = source.get('data');
var f = slider.value;
x = data['x']
y = data['y']
for (i = 0; i < x.length; i++) {
x[i] = Math.pow(x[i], f)
y[i] = Math.pow(y[i], f)
}
source.change.emit();
""")
update_curve2 = CustomJS(args=dict(source=source, slider=slider2), code="""
var data = source.get('data');
var f = slider.value;
x = data['x']
y = data['y']
for (i = 0; i < x.length; i++) {
x[i] = Math.pow(x[i], 1/f)
y[i] = Math.pow(y[i], 1/f)
}
source.change.emit();
""")
slider.js_on_change('value', update_curve)
slider2.js_on_change('value', update_curve2)
show(row(column(slider,slider2), plot))
from bokeh.models import TapTool, CustomJS, ColumnDataSource
callback = CustomJS(code="alert('hello world')")
tap = TapTool(callback=callback)
p = figure(plot_width=600, plot_height=300, tools=[tap])
p.circle(x=[1, 2, 3, 4, 5], y=[2, 5, 8, 2, 7], size=20)
show(p)
from bokeh.models import ColumnDataSource, OpenURL, TapTool
from bokeh.plotting import figure, output_file, show
output_file("openurl.html")
p = figure(plot_width=400, plot_height=400,
tools="tap", title="Click the Dots")
source = ColumnDataSource(data=dict(
x=[1, 2, 3, 4, 5],
y=[2, 5, 8, 2, 7],
color=["navy", "orange", "olive", "firebrick", "gold"]
))
p.circle('x', 'y', color='color', size=20, source=source)
url = "http://www.colors.commutercreative.com/@color/"
taptool = p.select(type=TapTool)
taptool.callback = OpenURL(url=url)
show(p)
from bokeh.models import ColumnDataSource, TapTool, DataRange1d, Plot, LinearAxis, Grid, HoverTool
from bokeh.plotting import figure, output_file, show
from bokeh.models.glyphs import HBar
p = figure(plot_width=400, plot_height=400,
tools="tap", title="Click the Dots")
source = ColumnDataSource(data=dict(
x=[1, 2, 3, 4, 5],
y=[2, 5, 8, 2, 7],
color=["navy", "orange", "olive", "firebrick", "gold"]
))
p.circle('x', 'y', color='color', size=20, source=source)
source2 = ColumnDataSource(data=dict(
x=[1,2],
y=[1,2]))
callback = CustomJS(args=dict(source2=source2), code="""
var data = source2.get('data');
var geom = cb_data['geometries'];
data['x'] = [geom[0].x+1,geom[0].x-1]
data['y'] = [geom[0].y+1,geom[0].y-1]
source2.trigger('change');
""")
def callback2(source2 = source2):
data = source2.get('data')
geom = cb_obj.get('geometries')
data['x'] = [geom['x']+1,geom['x']-1]
data['y'] = [geom['y']+1,geom['y']-1]
source2.trigger('change')
taptool = p.select(type=TapTool)
taptool.callback = CustomJS.from_py_func(callback2);
xdr = DataRange1d()
ydr = DataRange1d()
p2 = figure(plot_width=400, plot_height=400)
p2.vbar(x=source2.data['x'], width=0.5, bottom=0,
top=source2.data['y'], color="firebrick")
#glyph = HBar(source2.data['x'], source2.data['y'], left=0, height=0.5, fill_color="#b3de69")
#p2.add_glyph(source2, glyph)
#p2.add_glyph(source, glyph)
show(row(p,p2))
# update()  # NOTE: update() is never defined in this notebook, so this call is left commented out
source = ColumnDataSource(data=dict(
x=[1, 2, 3, 4, 5],
y=[2, 5, 8, 2, 7],
color=["navy", "orange", "olive", "firebrick", "gold"]
))
source2.data['x']
```
```
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from astropy.time import Time
def convert_to_ap_Time(df, key):
print(key)
df[key] = pd.to_datetime(df[key])
df[key] = Time([t1.astype(str) for t1 in df[key].values], format="isot")
return df
def convert_times_to_datetime(df):
columns = ["Gun Time", "Chip Time", "TOD", "Beat the Bridge", "Beat the Bridge.1"]
for key in columns:
df = convert_to_ap_Time(df, key)
df = convert_Time_to_seconds(df, key)
return df
def convert_Time_to_seconds(df, key):
t0 = Time("2017-05-04T00:00:00.000", format="isot")
df["sub" + key] = df[key] - t0
df["sub" + key] = [t.sec for t in df["sub" + key].values]
return df
def find_astronomers(df):
astronomers = ("Robert FIRTH", "Stephen BROWETT", "Mathew SMITH", "Sadie JONES")
astro_df = df[df["Name"].isin((astronomers))]
return astro_df
def plot_hist_with_astronomers(df, astro_df, key):
rob_time = astro_df[key][158]/60.
mat_time = astro_df[key][737]/60.
steve_time = astro_df[key][1302]/60.
sadie_time = astro_df[key][576]/60.
mean_time = df[key].mean()/60
median_time = df[key].median()/60
plt.hist(df[key]/60., bins = 100)
plt.plot([rob_time, rob_time], [0, 70], lw = 2, label = "Rob")
plt.plot([mat_time, mat_time], [0, 70], lw = 2, label = "Mat")
plt.plot([steve_time, steve_time], [0, 70], lw = 2, label = "Steve")
plt.plot([sadie_time, sadie_time], [0, 70], lw = 2, label = "Sadie")
plt.plot([mean_time, mean_time], [0, 70], lw = 2, color = "Black", ls = ":", label = "Mean")
plt.plot([median_time, median_time], [0, 70], lw = 2, color = "Black", ls = "--", label = "Median")
plt.xlabel(key.replace("sub", "") + " Minutes")
plt.legend()
results_path = "/Users/berto/Code/zoidberg/ABPSoton10k/data/Results10k.csv"
df = pd.read_csv(results_path)
# df = df.drop(df.index[len(df)-10:])
df = df.drop(df.loc[df["Gun Time"] == "DNF"].index)
df = df.drop(df.loc[df["Gun Time"] == "QRY"].index)
df = df.drop(df.loc[df["Beat the Bridge"] == "99:99:99"].index)
df.columns
df = convert_times_to_datetime(df)
astro_df = find_astronomers(df)
astro_df
# key = "subGun Time"
key = "subChip Time"
rob_time = astro_df[key][158]/60.
mat_time = astro_df[key][737]/60.
steve_time = astro_df[key][1302]/60.
sadie_time = astro_df[key][576]/60.
mean_time = df[key].mean()/60
median_time = df[key].median()/60
plt.hist(df[key]/60., bins = 100)
plt.plot([rob_time, rob_time], [0, 70], lw = 2, label = "Rob")
plt.plot([mat_time, mat_time], [0, 70], lw = 2, label = "Mat")
plt.plot([steve_time, steve_time], [0, 70], lw = 2, label = "Steve")
plt.plot([sadie_time, sadie_time], [0, 70], lw = 2, label = "Sadie")
plt.plot([mean_time, mean_time], [0, 70], lw = 2, color = "Black", ls = ":", label = "Mean")
plt.plot([median_time, median_time], [0, 70], lw = 2, color = "Black", ls = "--", label = "Median")
plt.xlabel(key.replace("sub", "") + " Minutes")
plt.legend()
plot_hist_with_astronomers(df=df, astro_df=astro_df, key="subBeat the Bridge")
```
## Chip Time vs Bridge Time
```
keyx = "subChip Time"
keyy = "subBeat the Bridge"
corr_co = np.corrcoef(df[keyx]/60., df[keyy]/60.)
plt.scatter(df[keyx]/60., df[keyy]/60.)
plt.xlabel(keyx.replace("sub", "") + " Minutes")
plt.ylabel(keyy.replace("sub", "") + " Minutes")
print(corr_co[1,0])
```
## Time vs Bib Number
```
keyx = "subChip Time"
keyy = "Bib No"
corr_co = np.corrcoef(df[keyx]/60., df[keyy])
plt.scatter(df[keyx]/60., df[keyy])
plt.xlabel(keyx.replace("sub", "") + " Minutes")
plt.ylabel(keyy.replace("sub", ""))
print(corr_co[1,0])
# plt.scatter(df["Pos"], df["subChip Time"])
# plt.scatter(df["subChip Time"], df["subBeat the Bridge"])
plt.scatter(df["Pos"], df["G/Pos"])
# print(df.groupby("Gender"))
plt.scatter((df["subGun Time"] - df["subChip Time"])/60., df["subGun Time"]/60.)
# plt.scatter(df["subChip Time"]/60., df["Bib No"])
# df.
# df.columns
# fig = plt.figure(figsize=[8, 4])
# fig.subplots_adjust(left = 0.09, bottom = 0.13, top = 0.99,
# right = 0.99, hspace=0, wspace = 0)
# ax1 = fig.add_subplot(111)
# ax1.scatter(df[df["Club"] == "NaN"]["subChip Time"]/60., df[df["Club"] == "NaN"]["subBeat the Bridge"]/60., color = "Orange")
# ax1.scatter(df[df["Club"] != "NaN"]["subChip Time"]/60., df[df["Club"] != "NaN"]["subBeat the Bridge"]/60., color = "Blue")
clubs = df["Club"].unique()
clubs = [clubs[i] for i in np.arange(len(clubs)) if i != 1]
keyx = "subChip Time"
keyy = "subBeat the Bridge"
corr_co = np.corrcoef(df[keyx][df["Club"].isin(clubs)]/60., df[keyy][df["Club"].isin(clubs)]/60.)
plt.scatter(df[keyx][df["Club"].isin(clubs)]/60., df[keyy][df["Club"].isin(clubs)]/60., label = "clubbed")
# plt.scatter(df[keyx][df["Club"].isin(np.invert(clubs))]/60., df[keyy][df["Club"].isin(np.invert(clubs))]/60.)
keyx = "subChip Time"
keyy = "subBeat the Bridge"
corr_co = np.corrcoef(df[keyx]/60., df[keyy]/60.)
plt.scatter(df[keyx]/60., df[keyy]/60., label = "unclubbed", zorder = -9)
plt.xlabel(keyx.replace("sub", "") + " Minutes")
plt.ylabel(keyy.replace("sub", "") + " Minutes")
plt.legend()
plt.hist(df[keyx][df["Club"].isin(clubs)]/60, label = "clubbed", density = True, alpha = 0.7)
plt.hist(df[keyx]/60, label = "unclubbed", zorder = -99, density = True, alpha = 0.7)
plt.scatter((df["subGun Time"][df["Club"].isin(clubs)] - df["subChip Time"][df["Club"].isin(clubs)])/60., df["subGun Time"][df["Club"].isin(clubs)]/60.)
plt.scatter((df["subGun Time"] - df["subChip Time"])/60., df["subGun Time"]/60., zorder = -99)
print(df[keyx].mean()/60.)
print(df[keyx][df["Club"].isin(clubs)].mean()/60.)
df[["Club", "Name", "subChip Time"]][df["Club"].isin(clubs)]
# convert_to_ap_Time(df)
t0 = Time("2017-04-26T00:00:00.000", format="isot")
t1 = df["Gun Time"].values[0]
t1
t1 - t0
col = df["Gun Time"] - t0
x = col[0]
x.sec
col.sec
```
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.datasets import load_digits, load_iris
from sklearn.model_selection import train_test_split
from pca import pca as MyPCA
```
# Load Digit Dataset
```
digits = load_digits()
def draw_digits(X, y):
fig = plt.figure(1, figsize=(8, 8))
plt.scatter(X[:, 0], X[:, 1],
c=y, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('Spectral', 10))
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar()
plt.show();
```
# sklearn PCA
```
pca = PCA(n_components=2, random_state=17).fit(digits.data)
data_pca = pca.transform(digits.data)
pca.explained_variance_ratio_, pca.explained_variance_, pca.singular_values_, pca.components_
data_pca
draw_digits(data_pca, digits.target)
```
# Our Implementation
```
pca1 = MyPCA(n_components=2, solver='svd')
pca1.fit(digits.data)
data_pca1 = pca1.transform(digits.data)
pca1.explained_variance_ratio_, pca1.explained_variance_, pca1.singular_values_, pca1.components_
data_pca1
draw_digits(data_pca1, digits.target)
```
### eig solver
```
pca_eig = MyPCA(n_components=2, solver='eig')
pca_eig.fit(digits.data)
data_eig = pca_eig.transform(digits.data)
pca_eig.explained_variance_ratio_, pca_eig.explained_variance_, pca_eig.singular_values_, pca_eig.components_
data_eig
draw_digits(data_eig, digits.target)
```
# Iris Dataset
Let's try to plot 3 components after PCA.<br>
https://scikit-learn.org/stable/auto_examples/decomposition/plot_pca_iris.html#sphx-glr-auto-examples-decomposition-plot-pca-iris-py
```
from mpl_toolkits.mplot3d import Axes3D
def plot_components(X, y):
fig = plt.figure(1, figsize=(12, 8))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
for name, label in [('Setosa', 0), ('Versicolour', 1), ('Virginica', 2)]:
ax.text3D(X[y == label, 0].mean(),
X[y == label, 1].mean() + 1.5,
X[y == label, 2].mean(), name,
horizontalalignment='center',
bbox=dict(alpha=.5, edgecolor='w', facecolor='w'))
# Reorder the labels to have colors matching the cluster results
y = np.choose(y, [1, 2, 0]).astype(float)
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=y, cmap=plt.cm.nipy_spectral,
edgecolor='k')
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
plt.show()
iris = load_iris()
X, y = iris.data, iris.target
```
# sklearn
```
pca_3d = PCA(n_components=3, random_state=17).fit(X)
X_3d = pca_3d.transform(X)
plot_components(X_3d, y)
```
# Ours: solver='svd'
```
pca_3d_svd = MyPCA(n_components=3)
pca_3d_svd.fit(X)
X_3d_svd = pca_3d_svd.transform(X)
plot_components(X_3d_svd, y)
```
# Ours: solver='eig' with fit_transform
```
pca_3d_eig = MyPCA(n_components=3, solver='eig')
X_3d_eig = pca_3d_eig.fit_transform(X)
plot_components(X_3d_eig, y)
```
```
import util
import jax
import jax.numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import numpy as base_np
from epiweeks import Week, Year
start = '2020-03-15'
forecast_start = '2020-04-19'
num_weeks = 8
data = util.load_state_data()
places = sorted(list(data.keys()))
#places = ['AK', 'AL']
allQuantiles = [0.01,0.025]+list(np.arange(0.05,0.95+0.05,0.05)) + [0.975,0.99]
forecast_date = pd.to_datetime('2020-04-19')
currentEpiWeek = Week.fromdate(forecast_date) - 1
forecast = {'quantile':[], 'value':[], 'type':[], 'location':[], 'target':[]}
print(currentEpiWeek)
for place in places:
prior_samples, mcmc_samples, post_pred_samples = util.load_samples(place, path='out')
forecast_samples = post_pred_samples['z_future']
t = pd.date_range(start=forecast_start, periods=forecast_samples.shape[1], freq='D')
weekly_df = pd.DataFrame(index=t, data=np.transpose(forecast_samples)).resample("1w",label='right').last()
weekly_df[weekly_df<0.] = 0.
for time, samples in weekly_df.iterrows():
for q in allQuantiles:
deathPrediction = base_np.percentile(samples,q*100)
forecast["quantile"].append("{:.3f}".format(q))
forecast["value"].append(deathPrediction)
forecast["type"].append("quantile")
forecast["location"].append(place)
horizon_date = Week.fromdate(time)
week_ahead = horizon_date.week - currentEpiWeek.week
forecast["target"].append("{:d} wk ahead cum death".format(week_ahead))
currentEpiWeek_datetime = currentEpiWeek.startdate()
forecast["forecast_date"] = "{:4d}-{:02d}-{:02d}".format(currentEpiWeek_datetime.year,currentEpiWeek_datetime.month,currentEpiWeek_datetime.day)
if q==0.50:
forecast["quantile"].append("NA")
forecast["value"].append(deathPrediction)
forecast["type"].append("point")
forecast["location"].append(place)
forecast["target"].append("{:d} wk ahead cum death".format(week_ahead))
forecast["forecast_date"] = "{:4d}-{:02d}-{:02d}".format(currentEpiWeek_datetime.year,currentEpiWeek_datetime.month,currentEpiWeek_datetime.day)
#base_np.quantile(hosp,axis=1,q=allQuantiles)
forecast = pd.DataFrame(forecast)
forecast.loc[forecast.type=="point"]
fips_codes = pd.read_csv('/Users/gcgibson/covid19-forecast-hub/template/state_fips_codes.csv')
df_truth = forecast.merge(fips_codes, left_on='location', right_on='state', how='left')
df_truth["state_code"] = df_truth["state_code"].astype(int)
df_truth = df_truth[["quantile", "value", "type", "state_code","target","forecast_date"]]
df_truth = df_truth.rename(columns={"state_code": "location"})
import datetime
df_truth['location'] = df_truth['location'].apply(lambda x: '{0:0>2}'.format(x))
#df_truth['forecast_date'] = datetime.datetime(2020, 4, 19)
df_truth.to_csv(f'out/sub.csv', float_format="%.0f")
df_truth
```
# Estimating the biomass of terrestrial arthropods
To estimate the biomass of terrestrial arthropods, we rely on two parallel methods: one based on average biomass densities of arthropods extrapolated to the global ice-free land surface, and one based on estimates of the average carbon content of a characteristic arthropod and the total number of terrestrial arthropods.
## Average biomass densities method
We collected values from the literature on the biomass densities of arthropods per unit area. We assume, based on [Stork et al.](http://dx.doi.org/10.1007/978-94-009-1685-2_1), that most of the biomass is located in the soil, in litter, or in the canopy of trees. We thus estimate a mean biomass density of arthropods in soil, litter, and canopies, sum those biomass densities, and apply the total across the entire ice-free land surface.
### Litter arthropod biomass
We compiled a list of values from several different habitats. Most of the measurements are from forests and savannas. For some of the older studies, we did not have access to the original data, but to a summary of the data made by two main studies: [Gist & Crossley](http://dx.doi.org/10.2307/2424109) and [Brockie & Moeed](http://dx.doi.org/10.1007/BF00377108). Here is a sample of the data from Gist & Crossley:
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gmean
import sys
sys.path.insert(0, '../../statistics_helper/')
from CI_helper import *
pd.options.display.float_format = '{:,.1f}'.format
# Load global stocks data
gc_data = pd.read_excel('terrestrial_arthropods_data.xlsx','Gist & Crossley',skiprows=1)
gc_data.head()
```
Here is a sample from Brockie & Moeed:
```
bm_data = pd.read_excel('terrestrial_arthropods_data.xlsx','Brockie & Moeed',skiprows=1)
bm_data.head()
```
We calculate the sum of biomass of all the groups of arthropods in each study to provide an estimate for the total biomass density of arthropods in litter:
```
gc_study = gc_data.groupby('Study').sum()
bm_study = bm_data.groupby('Study').sum()
print('The estimate from Brockie & Moeed:')
bm_study
print('The estimate from Gist & Crossley:')
gc_study
```
In cases where data are conflicting between the two studies, we calculate the mean. We merge the data from the papers to generate a list of estimates of the total biomass density of arthropods:
```
# Concat the data from the two studies
conc = pd.concat([gc_study,bm_study])
conc_mean = conc.groupby(conc.index).mean()
conc_mean
```
We convert the dry-weight and wet-weight estimates into biomass density in g C $m^{-2}$, assuming 70% water content and 50% carbon in dry mass:
```
# Fill places with no dry weight estimate with 30% of the wet weight estimate
conc_mean['Dry weight [g m^-2]'].fillna(conc_mean['Wet weight [g m^-2]']*0.3,inplace=True)
# Calculate carbon biomass as 50% of dry weight
conc_mean['Biomass density [g C m^-2]'] = conc_mean['Dry weight [g m^-2]']/2
conc_mean['Biomass density [g C m^-2]']
```
We calculate the geometric mean of the estimates from the different studies as our best estimate of the biomass density of litter arthropods.
```
litter_biomass_density = gmean(conc_mean.iloc[0:5,3])
print('Our best estimate for the biomass density of arthropods in litter is ≈%.0f g C m^-2' %litter_biomass_density)
```
### Soil arthropod biomass
As our source for estimating the biomass of soil arthropods, we use these data collected from the literature, which are detailed below:
```
# Load additional data
soil_data = pd.read_excel('terrestrial_arthropods_data.xlsx','Soil',index_col='Reference')
soil_data
```
We calculate the geometric mean of the estimate for the biomass density of arthropods in soils:
```
# Calculate the geometric mean of the estimates of the biomass density of soil arthropods
soil_biomass_density = gmean(soil_data['Biomass density [g C m^-2]'])
print('Our best estimate for the biomass density of arthropods in soils is ≈%.0f g C m^-2' %soil_biomass_density)
```
If we sum the biomass density of soil and litter arthropods, we arrive at an estimate of ≈2 g C $m^{-2}$, which is in line with the data from Kitazawa et al. of 1-2 g C $m^{-2}$.
### Canopy arthropod biomass
Data on the biomass density of canopy arthropods is much less abundant. We extracted from the literature the following values:
```
# Load the data on the biomass density of canopy arthropods
canopy_data = pd.read_excel('terrestrial_arthropods_data.xlsx', 'Canopy',index_col='Reference')
canopy_data
```
We calculate the geometric mean of the estimates for the biomass density of arthropods in canopies:
```
# Calculate the geometric mean of the estimates of biomass densitiy of canopy arthropods
canopy_biomass_density = gmean(canopy_data['Biomass density [g C m^-2]'])
print('Our best estimate for the biomass density of arthropods in canopies is ≈%.1f g C m^-2' %canopy_biomass_density)
```
To generate our best estimate for the biomass of arthropods using estimates of biomass densities, we sum the estimates for the biomass density of arthropods in soils and in canopies, and apply this density over the entire ice-free land surface of $1.3×10^{14} \: m^2$:
```
# Sum the biomass densities of arthropods in litter, soils, and canopies
total_density = litter_biomass_density+soil_biomass_density+canopy_biomass_density
# Apply the average biomass density across the entire ice-free land surface
method1_estimate = total_density*1.3e14
print('Our best estimate for the biomass of terrestrial arthropods using average biomass densities is ≈%.1f Gt C' %(method1_estimate/1e15))
```
## Average carbon content method
In this method, in order to estimate the total biomass of arthropods, we calculate the carbon content of a characteristic arthropod, and multiply this carbon content by an estimate for the total number of arthropods.
We rely on data from Gist & Crossley, which detail both the total number of arthropods per unit area and the total biomass of arthropods per unit area for several studies. From these data we can calculate the characteristic carbon content of a single arthropod, assuming 50% carbon in dry mass:
```
pd.options.display.float_format = '{:,.1e}'.format
# Calculate the carbon content of a single arthropod by dividing the dry weight by 2 (assuming 50% carbon in
# dry weight) and dividing the result by the total number of individuals
gc_study['Carbon content [g C per individual]'] = gc_study['Dry weight [g m^-2]']/2/gc_study['Density of individuals [N m^-2]']
gc_study
```
We combine the data from these studies with data from additional sources detailed below:
```
# Load additional data sources
other_carbon_content_data = pd.read_excel('terrestrial_arthropods_data.xlsx', 'Carbon content',index_col='Reference')
other_carbon_content_data
```
We calculate the geometric mean of the estimates from the different studies and use it as our best estimate for the carbon content of a characteristic arthropod:
```
# Calculate the geometric mean of the estimates from the different studies on the average carbon content of a single arthropod.
average_carbon_content = gmean(pd.concat([other_carbon_content_data,gc_study])['Carbon content [g C per individual]'])
print('Our best estimate for the carbon content of a characteristic arthropod is %.1e g C' % average_carbon_content)
```
To estimate the total biomass of arthropods using the characteristic carbon content method, we multiply our best estimate of the carbon content of a single arthropod by an estimate of the total number of arthropods made by [Williams](http://dx.doi.org/10.1086/282115). Williams estimated a total of ~$10^{18}$ individual insects in soils. We assume this estimate of the total number of insects is close to the total number of arthropods (noting that in this estimate Williams also included collembola, which back in 1960 were considered insects and are usually very numerous because of their small size). To estimate the total biomass of arthropods, we multiply the carbon content of a single arthropod by the estimate for the total number of arthropods:
```
# Total number of insects estimated by Williams
tot_num_arthropods = 1e18
# Calculate the total biomass of arthropods
method2_estimate = average_carbon_content*tot_num_arthropods
print('Our best estimate for the biomass of terrestrial arthropods using the average carbon content method is ≈%.1f Gt C' %(method2_estimate/1e15))
```
Our best estimate for the biomass of arthropods is the geometric mean of the estimates from the two methods:
```
# Calculate the geometric mean of the estimates using the two methods
best_estimate = gmean([method1_estimate,method2_estimate])
print('Our best estimate for the biomass of terrestrial arthropods is ≈%.1f Gt C' %(best_estimate/1e15))
```
# Uncertainty analysis
To assess the uncertainty associated with the estimate of the biomass of terrestrial arthropods, we compile a collection of the different sources of uncertainty, and combine them to project the total uncertainty. We survey the interstudy uncertainty for estimates within each method, the total uncertainty of each method and the uncertainty of the geometric mean of the values from the two methods.
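The confidence-interval computations below use `geo_CI_calc` and the propagation helpers imported from the repository's `statistics_helper` package, whose source is not shown in this notebook. As a rough sketch of the idea (an assumption about the approach, not the package's actual code), a multiplicative 95% confidence interval around a geometric mean can be derived from the standard error of the log-transformed values:

```python
import numpy as np

def geo_ci_fold(values):
    """Fold-factor f of a 95% CI around the geometric mean,
    i.e. the interval [gmean / f, gmean * f], using a normal
    approximation on the log-transformed values."""
    logs = np.log(np.asarray(values, dtype=float))
    sem = logs.std(ddof=1) / np.sqrt(logs.size)
    return float(np.exp(1.96 * sem))

# Identical values have no spread, so the fold-factor is exactly 1
print(geo_ci_fold([2.0, 2.0, 2.0]))  # → 1.0
```

A fold-factor of, say, 3 then means the estimate is uncertain to within a factor of three in either direction.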
## Average biomass densities method
We calculate the 95% confidence interval for the geometric mean of the biomass densities reported for soil and canopy arthropods:
```
litter_CI = geo_CI_calc(conc_mean['Biomass density [g C m^-2]'])
soil_CI = geo_CI_calc(soil_data['Biomass density [g C m^-2]'])
canopy_CI = geo_CI_calc(canopy_data['Biomass density [g C m^-2]'])
print('The 95 percent confidence interval for the average biomass density of litter arthropods is ≈%.1f-fold' %litter_CI)
print('The 95 percent confidence interval for the average biomass density of soil arthropods is ≈%.1f-fold' %soil_CI)
print('The 95 percent confidence interval for the average biomass density of canopy arthropods is ≈%.1f-fold' %canopy_CI)
```
To estimate the uncertainty of the global biomass estimate using the average biomass density method, we propagate the uncertainties of the litter, soil, and canopy biomass densities:
```
method1_CI = CI_sum_prop(estimates=np.array([litter_biomass_density,soil_biomass_density,canopy_biomass_density]),mul_CIs=np.array([litter_CI,soil_CI,canopy_CI]))
print('The 95 percent confidence interval of the biomass of arthropods using the biomass densities method is ≈%.1f-fold' %method1_CI)
```
## Average carbon content method
As a measure of the uncertainty of the estimate of the total biomass of arthropods using the average carbon content method, we calculate the 95% confidence interval of the geometric mean of the estimates from different studies of the carbon content of a single arthropod:
```
carbon_content_CI = geo_CI_calc(pd.concat([other_carbon_content_data,gc_study])['Carbon content [g C per individual]'])
print('The 95 percent confidence interval of the carbon content of a single arthropod is ≈%.1f-fold' %carbon_content_CI)
```
We combine this uncertainty in the average carbon content of a single arthropod with the roughly one-order-of-magnitude uncertainty that Williams reports for the total number of insects. This provides us with a measure of the uncertainty of the estimate of the biomass of arthropods using the average carbon content method.
```
# The uncertainty of the total number of insects from Williams
tot_num_arthropods_CI = 10
# Combine the uncertainties of the average carbon content of a single arthropod and the uncertainty of
# the total number of arthropods
method2_CI = CI_prod_prop(np.array([carbon_content_CI,tot_num_arthropods_CI]))
print('The 95 percent confidence interval of the biomass of arthropods using the average carbon content method is ≈%.1f-fold' %method2_CI)
```
## Inter-method uncertainty
We calculate the 95% confidence interval of the geometric mean of the estimates of the biomass of arthropods using the average biomass density and the average carbon content methods:
```
inter_CI = geo_CI_calc(np.array([method1_estimate,method2_estimate]))
print('The inter-method uncertainty of the geometric mean of the estimates of the biomass of arthropods is ≈%.1f-fold' % inter_CI)
```
As our best projection for the uncertainty associated with the estimate of the biomass of terrestrial arthropods, we take the highest uncertainty among the collection of uncertainties we generate, which is the ≈15-fold uncertainty of the average carbon content method.
```
mul_CI = np.max([inter_CI,method1_CI,method2_CI])
print('Our best projection for the uncertainty associated with the estimate of the biomass of terrestrial arthropods is ≈%.1f-fold' %mul_CI)
```
## The biomass of termites
As we state in the Supplementary Information, there are some groups of terrestrial arthropods for which better estimates are available. An example is the biomass of termites. We use the data in [Sanderson](http://dx.doi.org/10.1029/96GB01893) to estimate the global biomass of termites:
```
# Load termite data
termite_data = pd.read_excel('terrestrial_arthropods_data.xlsx', 'Sanderson', skiprows=1, index_col=0)
# Multiply biomass density by biome area and sum over biomes
termite_biomass = (termite_data['Area [m^2]']* termite_data['Biomass density [g wet weight m^-2]']).sum()
# Calculate carbon mass assuming carbon is 15% of wet weight
termite_biomass *= 0.15
print('The estimate of the total biomass of termites based on Sanderson is ≈%.2f Gt C' %(termite_biomass/1e15))
```
# Pandas Exercise
```
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(0)
import pandas as pd
def df_info(df: pd.DataFrame):
return df.head(n=20).style
```
## Cars Auction Dataset
| Feature | Type | Description |
|--------------|---------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Price | Integer | The sale price of the vehicle in the ad |
| Years | Integer | The vehicle registration year |
| Brand | String | The brand of car |
| Model | String | The model of the vehicle |
| Color | String | The color of the vehicle |
| State/City | String | The location in which the car is available for purchase |
| Mileage | Float | Miles traveled by the vehicle |
| Title Status | String | Binary classification: clean title vehicles vs. salvage insurance |
| Condition | String | Time remaining in the listing |
```
df = pd.read_csv("../data/USA_cars_datasets.csv")
print(df.columns)
df.head()
```
## Exercise 1
- Get the counts for the US states
## Exercise 2
- Get all cars from the state of new mexico
## Exercise 3
- Compute the mean mileage of all cars from new york
## Exercise 4
- Remove all entries where the year is below 2019
## Exercise 5
- Replace all color values by the first character of the color name
E.g.: 'blue' => 'b'
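One possible set of solutions, sketched against a toy frame; the column names `state`, `year`, `mileage`, and `color` are assumed to match the CSV (check `df.columns` above):

```python
import pandas as pd

# Toy stand-in for the cars DataFrame; column names are assumed
df = pd.DataFrame({
    "state": ["new york", "new mexico", "new york", "texas"],
    "year": [2018, 2020, 2019, 2020],
    "mileage": [30000.0, 5000.0, 12000.0, 800.0],
    "color": ["blue", "red", "white", "black"],
})

state_counts = df["state"].value_counts()                      # Exercise 1
nm_cars = df[df["state"] == "new mexico"]                      # Exercise 2
ny_mean = df.loc[df["state"] == "new york", "mileage"].mean()  # Exercise 3
df_recent = df[df["year"] >= 2019]                             # Exercise 4
df["color"] = df["color"].str[0]                               # Exercise 5
```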
## inference in simple model using synthetic data
Population size 10^6, inference window 2×4 = 8 days; to be compared with the analogous ``-win5`` notebook.
```
%env OMP_NUM_THREADS=1
%matplotlib inline
import numpy as np
import os
import pickle
import pprint
import time
import pyross
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
#from matplotlib import rc; rc('text', usetex=True)
import synth_fns
```
(cell 3 was removed to hide local file info)
### main settings
```
## for dataFiles : needs a fresh value in every notebook
fileRoot = 'dataSynthInfTest-pop1e6-win2'
## total population
popN = 1e6
## tau-leaping param, take this negative to force gillespie
## or set a small value for high-accuracy tau-leap (eg 1e-4 or 1e-5)
leapEps = -1
## do we use small tolerances for the likelihood computations? (use False for debug etc)
isHighAccuracy = True
# absolute tolerance for logp for MAP
inf_atol = 1.0
## prior mean of beta, divided by true value (set to 1.0 for the simplest case)
betaPriorOffset = 0.8
betaPriorLogNorm = False
## mcmc
mcSamples = 5000
nProcMCMC = 2 # None ## take None to use default but large numbers are not efficient in this example
trajSeed = 18
infSeed = 21
mcSeed = infSeed+2
loadTraj = False
saveMC = True
```
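The `leapEps` switch above selects between exact Gillespie simulation and approximate tau-leaping inside `synth_fns`. As a reminder of what the exact algorithm does, here is a minimal Gillespie sketch for a single-reaction pure-death process (an illustration only, not pyross's implementation):

```python
import numpy as np

def gillespie_death(n0, rate, t_max, rng):
    """Exact stochastic simulation of a pure-death process N -> N-1."""
    t, n = 0.0, n0
    times, counts = [t], [n]
    while n > 0 and t < t_max:
        total_rate = rate * n                    # propensity of the single reaction
        t += rng.exponential(1.0 / total_rate)   # waiting time to the next event
        if t >= t_max:
            break
        n -= 1                                   # fire the reaction
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)

rng = np.random.default_rng(0)
times, counts = gillespie_death(100, rate=1.0, t_max=5.0, rng=rng)
```

Tau-leaping trades this event-by-event exactness for speed by firing a Poisson-distributed batch of reactions per fixed time step.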
### model
```
model_dict = synth_fns.get_model(popN)
model_spec = model_dict['mod']
contactMatrix = model_dict['CM']
parameters_true = model_dict['params']
cohortsM = model_dict['cohortsM']
Ni = model_dict['cohortsPop']
```
#### more settings
```
## total trajectory time (bare units)
Tf_bare = 20
## total inf time
Tf_inf_bare = 2
## inference period starts when the total deaths reach this amount (as a fraction)
fracDeaths = 2e-3 # int(N*200/1e5)
## hack to get higher-frequency data
## how many data points per "timestep" (in original units)
fineData = 4
## this assumes that all parameters are rates !!
for key in parameters_true:
    #print(key,parameters_true[key])
    parameters_true[key] /= fineData
Tf = Tf_bare * fineData;
Nf = Tf+1
Tf_inference = Tf_inf_bare * fineData
Nf_inference = Tf_inference+1
```
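Dividing every rate by `fineData` works because the number of time points is multiplied by the same factor, leaving every product rate × time invariant. A one-line check of that bookkeeping:

```python
import numpy as np

fineData = 4
rate_bare, T_bare = 0.5, 20       # a rate and duration in the original units
rate_fine = rate_bare / fineData  # every rate divided by fineData ...
T_fine = T_bare * fineData        # ... while the number of steps is multiplied by it

# the physical decay factor exp(-rate * T) is unchanged by the rescaling
decay_bare = np.exp(-rate_bare * T_bare)
decay_fine = np.exp(-rate_fine * T_fine)
```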
### plotting helper functions
```
def plotTraj(M,data_array,Nf_start,Tf_inference,fineData):
    fig = plt.figure(num=None, figsize=(6, 4), dpi=80, facecolor='w', edgecolor='k')
    #plt.rc('text', usetex=True)
    plt.rc('font', family='serif', size=12)
    t = np.linspace(0, Tf/fineData, Nf)
    # plt.plot(t, np.sum(data_array[:, :M], axis=1), '-o', label='S', lw=4)
    plt.plot(t, np.sum(data_array[:, M:2*M], axis=1), '-o', label='Exposed', lw=2)
    plt.plot(t, np.sum(data_array[:, 2*M:3*M], axis=1), '-o', label='Infected', lw=2)
    plt.plot(t, np.sum(data_array[:, 3*M:4*M], axis=1), '-o', label='Deaths', lw=2)
    #plt.plot(t, N-np.sum(data_array[:, 0:4*M], axis=1), '-o', label='Rec', lw=2)
    plt.axvspan(Nf_start/fineData, (Nf_start+Tf_inference)/fineData,alpha=0.3, color='dodgerblue')
    plt.legend()
    plt.show()
    fig,axs = plt.subplots(1,2, figsize=(12, 5), dpi=80, facecolor='w', edgecolor='k')
    ax = axs[0]
    ax.plot(t[1:],np.diff(np.sum(data_array[:, 3*M:4*M], axis=1)),'o-',label='death increments', lw=1)
    ax.legend(loc='upper right') ; # plt.show()
    ax = axs[1]
    ax.plot(t,np.sum(data_array[:, 3*M:4*M], axis=1),'o-',label='deaths',ms=3)
    ax.legend() ;
    plt.show()
def plotMAP(res,data_array,M,N,estimator,Nf_start,Tf_inference,fineData):
    print('**beta(bare units)',res['params_dict']['beta']*fineData)
    print('**logLik',res['log_likelihood'],'true was',logpTrue)
    print('\n')
    print(res)
    fig,axs = plt.subplots(1,3, figsize=(15, 7), dpi=80, facecolor='w', edgecolor='k')
    plt.subplots_adjust(wspace=0.3)
    #plt.rc('text', usetex=True)
    plt.rc('font', family='serif', size=12)
    t = np.linspace(0, Tf/fineData, Nf)
    ax = axs[0]
    #plt.plot(t, np.sum(data_array[:, :M], axis=1), '-o', label='S', lw=4)
    ax.plot(t, np.sum(data_array[:, M:2*M], axis=1), 'o', label='Exposed', lw=2)
    ax.plot(t, np.sum(data_array[:, 2*M:3*M], axis=1), 'o', label='Infected', lw=2)
    ax.plot(t, np.sum(data_array[:, 3*M:4*M], axis=1), 'o', label='Deaths', lw=2)
    #plt.plot(t, N-np.sum(data_array[:, 0:4*M], axis=1), '-o', label='Rec', lw=2)
    tt = np.linspace(Nf_start, Tf, Nf-Nf_start,)/fineData
    xm = estimator.integrate(res['x0'], Nf_start, Tf, Nf-Nf_start, dense_output=False)
    #plt.plot(tt, np.sum(xm[:, :M], axis=1), '-x', label='S-MAP', lw=2, ms=3)
    ax.plot(tt, np.sum(xm[:, M:2*M], axis=1), '-x', color='C0',label='E-MAP', lw=2, ms=3)
    ax.plot(tt, np.sum(xm[:, 2*M:3*M], axis=1), '-x', color='C1',label='I-MAP', lw=2, ms=3)
    ax.plot(tt, np.sum(xm[:, 3*M:4*M], axis=1), '-x', color='C2',label='D-MAP', lw=2, ms=3)
    #plt.plot(tt, N-np.sum(xm[:, :4*M], axis=1), '-o', label='R-MAP', lw=2)
    ax.axvspan(Nf_start/fineData, (Nf_start+Tf_inference)/fineData,alpha=0.3, color='dodgerblue')
    ax.legend()
    ax = axs[1]
    ax.plot(t[1:], np.diff(np.sum(data_array[:, 3*M:4*M], axis=1)), '-o', label='death incs', lw=2)
    ax.plot(tt[1:], np.diff(np.sum(xm[:, 3*M:4*M], axis=1)), '-x', label='MAP', lw=2, ms=3)
    ax.axvspan(Nf_start/fineData, (Nf_start+Tf_inference)/fineData,alpha=0.3, color='dodgerblue')
    ax.legend()
    ax = axs[2]
    ax.plot(t, np.sum(data_array[:, :M], axis=1), '-o', label='Sus', lw=1.5, ms=3)
    #plt.plot(t, np.sum(data_array[:, M:2*M], axis=1), '-o', label='Exposed', lw=2)
    #plt.plot(t, np.sum(data_array[:, 2*M:3*M], axis=1), '-o', label='Infected', lw=2)
    #plt.plot(t, np.sum(data_array[:, 3*M:4*M], axis=1), '-o', label='Deaths', lw=2)
    ax.plot(t, N-np.sum(data_array[:, 0:4*M], axis=1), '-o', label='Rec', lw=1.5, ms=3)
    #infResult = res
    tt = np.linspace(Nf_start, Tf, Nf-Nf_start,)/fineData
    xm = estimator.integrate(res['x0'], Nf_start, Tf, Nf-Nf_start, dense_output=False)
    ax.plot(tt, np.sum(xm[:, :M], axis=1), '-x', label='S-MAP', lw=2, ms=3)
    #plt.plot(tt, np.sum(xm[:, M:2*M], axis=1), '-x', label='E-MAP', lw=2, ms=3)
    #plt.plot(tt, np.sum(xm[:, 2*M:3*M], axis=1), '-x', label='I-MAP', lw=2, ms=3)
    #plt.plot(tt, np.sum(xm[:, 3*M:4*M], axis=1), '-x', label='D-MAP', lw=2, ms=3)
    ax.plot(tt, N-np.sum(xm[:, :4*M], axis=1), '-x', label='R-MAP', lw=1.5, ms=3)
    ax.axvspan(Nf_start/fineData, (Nf_start+Tf_inference)/fineData,alpha=0.3, color='dodgerblue')
    ax.legend()
    plt.show()
def plotMCtrace(selected_dims, sampler, numTrace=None):
    # Plot the trace for these dimensions:
    plot_dim = len(selected_dims)
    plt.rcParams.update({'font.size': 14})
    fig, axes = plt.subplots(plot_dim, figsize=(12, plot_dim), sharex=True)
    samples = sampler.get_chain()
    if numTrace is None : numTrace = np.shape(samples)[1]  ## default: all walkers
    for ii,dd in enumerate(selected_dims):
        ax = axes[ii]
        ax.plot(samples[:, :numTrace , dd], "k", alpha=0.3)
        ax.set_xlim(0, len(samples))
    axes[-1].set_xlabel("step number");
    plt.show(fig)
    plt.close()
def plotPosteriors(estimator,obsData, fltrDeath, Tf_inference,param_priors, init_priors,contactMatrix,
                   infResult,parameters_true,trueInit) :
    ## used for prior pdfs
    (likFun,priFun,dimFlat) = pyross.evidence.latent_get_parameters(estimator,
                                  obsData, fltrDeath, Tf_inference,
                                  param_priors, init_priors,
                                  contactMatrix,
                                  #intervention_fun=interventionFn,
                                  tangent=False,
                              )
    xVals = np.linspace(parameters_true['beta']*0.5,parameters_true['beta']*1.5,100)
    betas = [ rr['params_dict']['beta'] for rr in result_mcmc ]
    plt.hist(betas,density=True,color='lightblue',label='posterior')
    yVal=2
    plt.plot([infResult['params_dict']['beta']],[2*yVal],'bs',label='MAP',ms=10)
    plt.plot([parameters_true['beta']],[yVal],'ro',label='true',ms=10)
    ## this is a bit complicated, it just finds the prior for beta from the infResult
    var='beta'
    jj = infResult['param_keys'].index(var)
    xInd = infResult['param_guess_range'][jj]
    #print(jj,xInd)
    pVals = []
    for xx in xVals :
        flatP = np.zeros( dimFlat )
        flatP[xInd] = xx
        pdfAll = np.exp( priFun.logpdf(flatP) )
        pVals.append( pdfAll[xInd] )
    plt.plot(xVals,pVals,color='darkgreen',label='prior')
    plt.xlabel(var)
    plt.ylabel('pdf')
    plt.legend()
    labs=['init S','init E','init I']
    nPanel=3
    fig,axs = plt.subplots(1,nPanel,figsize=(14,4))
    for ii in range(nPanel) :
        ax = axs[ii]
        yVal=1.0/popN
        xs = [ rr['x0'][ii] for rr in result_mcmc ]
        ax.hist(xs,color='lightblue',density=True)
        ax.plot([infResult['x0'][ii]],yVal,'bs',label='MAP')
        ax.plot([trueInit[ii]],yVal,'ro',label='true')
        ## axis ranges
        xMin = np.min(xs)*0.8
        xMax = np.max(xs)*1.2
        xVals = np.linspace(xMin,xMax,100)
        ## this ID is a negative number because the init params are at the end of the 'flat' param array
        paramID = ii-nPanel
        pVals = []
        for xx in xVals :
            flatP = np.zeros( dimFlat )
            flatP[paramID] = xx
            pdfAll = np.exp( priFun.logpdf(flatP) )
            pVals.append( pdfAll[paramID] )
        ax.plot(xVals,pVals,color='darkgreen',label='prior')
        #plt.xlabel(var)
        ax.set_xlabel(labs[ii])
        ax.set_ylabel('pdf')
        ax.yaxis.set_ticklabels([])
    plt.show()
```
### synthetic data
```
if loadTraj :
    ipFile = fileRoot+'-stochTraj.npy'
    syntheticData = np.load(ipFile)
    print('loading trajectory from',ipFile)
else :
    ticTime = time.time()
    syntheticData = synth_fns.make_stochastic_traj(Tf,Nf,trajSeed,model_dict,leapEps)
    tocTime = time.time() - ticTime
    print('traj generation time',tocTime,'secs')
    np.save(fileRoot+'-stochTraj.npy',syntheticData)
Nf_start = synth_fns.get_start_time(syntheticData, popN, fracDeaths)
print('inf starts at timePoint',Nf_start)
plotTraj(cohortsM,syntheticData,Nf_start,Tf_inference,fineData)
```
### basic inference (estimator) setup
(including computation of likelihood for the true parameters)
```
[estimator,fltrDeath,obsData,trueInit] = synth_fns.get_estimator(isHighAccuracy,model_dict,syntheticData, popN, Nf_start, Nf_inference,)
## compute log-likelihood of true params
logpTrue = -estimator.minus_logp_red(parameters_true, trueInit, obsData, fltrDeath, Tf_inference,
contactMatrix, tangent=False)
print('**logLikTrue',logpTrue,'\n')
print('death data\n',obsData,'length',np.size(obsData),Nf_inference)
```
### priors
```
[param_priors,init_priors] = synth_fns.get_priors(model_dict,betaPriorOffset,betaPriorLogNorm,fracDeaths,estimator)
print('Prior Params:',param_priors)
print('Prior Inits:')
pprint.pprint(init_priors)
print('trueBeta',parameters_true['beta'])
print('trueInit',trueInit)
```
### inference (MAP)
```
infResult = synth_fns.do_inf(estimator, obsData, fltrDeath, syntheticData,
popN, Tf_inference, infSeed, param_priors,init_priors, model_dict, inf_atol)
#pprint.pprint(infResult)
print('MAP likelihood',infResult['log_likelihood'],'true',logpTrue)
print('MAP beta',infResult['params_dict']['beta'],'true',parameters_true['beta'])
```
### plot MAP trajectory
```
plotMAP(infResult,syntheticData,cohortsM,popN,estimator,Nf_start,Tf_inference,fineData)
```
#### slice of likelihood
(note this is not the posterior, hence MAP is not exactly at the peak)
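That is because the MAP maximizes the log-posterior, i.e. log-likelihood plus log-prior, so along a pure likelihood slice the MAP generally sits slightly off the peak:

```latex
\hat{\theta}_{\mathrm{MAP}}
  = \arg\max_{\theta}\,\bigl[\log p(\mathrm{data}\mid\theta) + \log p(\theta)\bigr]
  \neq \arg\max_{\theta}\,\log p(\mathrm{data}\mid\theta)
  \quad \text{(in general)}
```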
```
## range for beta (relative to MAP)
rangeParam = 0.1
[bVals,likVals] = synth_fns.sliceLikelihood(rangeParam,infResult,
estimator,obsData,fltrDeath,contactMatrix,Tf_inference)
#print('logLiks',likVals,logp)
plt.plot(bVals , likVals, 'o-')
plt.plot(infResult['params_dict']['beta'],infResult['log_likelihood'],'s',ms=6)
plt.show()
```
### MCMC
```
sampler = synth_fns.do_mcmc(mcSamples, nProcMCMC, estimator, Tf_inference, infResult,
obsData, fltrDeath, param_priors, init_priors,
model_dict,infSeed)
plotMCtrace([0,2,3], sampler)
result_mcmc = synth_fns.load_mcmc_result(estimator, obsData, fltrDeath, sampler, param_priors, init_priors, model_dict)
print('result shape',np.shape(result_mcmc))
print('last sample\n',result_mcmc[-1])
```
#### save the result
```
if saveMC :
    opFile = fileRoot + "-mcmc.pik"
    print('opf',opFile)
    with open(opFile, 'wb') as f:
        pickle.dump([infResult,result_mcmc],f)
```
#### estimate MCMC autocorrelation
```
# these are the estimated autocorrelation times for the sampler
# (it would like runs ~50 times longer than this...)
pp = sampler.get_log_prob()
nSampleTot = np.shape(pp)[0]
#print('correl',sampler.get_autocorr_time(discard=int(nSampleTot/3)))
print('nSampleTot',nSampleTot)
```
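emcee's `get_autocorr_time` estimates the integrated autocorrelation time of each parameter. A rough single-chain version of that estimate (a sketch with a naive truncation rule, not emcee's windowed estimator) is:

```python
import numpy as np

def integrated_autocorr_time(chain, max_lag=None):
    """Rough integrated autocorrelation time of a 1-D MCMC chain."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    n = len(x)
    if max_lag is None:
        max_lag = n // 4
    var = np.dot(x, x) / n
    tau = 1.0
    for lag in range(1, max_lag):
        rho = np.dot(x[:-lag], x[lag:]) / (n * var)  # autocorrelation at this lag
        if rho <= 0:        # truncate the sum at the first non-positive estimate
            break
        tau += 2.0 * rho
    return tau

# An AR(1) chain with coefficient phi has tau = (1 + phi) / (1 - phi)
rng = np.random.default_rng(1)
phi = 0.9
x = np.zeros(20000)
for i in range(1, len(x)):
    x[i] = phi * x[i - 1] + rng.normal()
tau_hat = integrated_autocorr_time(x)   # theory: (1 + 0.9) / (1 - 0.9) = 19
```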
#### plot posterior distributions
```
plotPosteriors(estimator,obsData, fltrDeath, Tf_inference,param_priors, init_priors,contactMatrix,
infResult,parameters_true,trueInit)
```
### analyse posterior for beta
```
betas = [ rr['params_dict']['beta'] for rr in result_mcmc ]
postMeanBeta = np.mean(betas)
postStdBeta = np.std(betas)
postCIBeta = [ np.percentile(betas,2.5) , np.percentile(betas,97.5)]
print("beta: true {b:.5f} MAP {m:.5f}".format(b=parameters_true['beta'],m=infResult['params_dict']['beta']))
print("post: mean {m:.5f} std {s:.5f} CI95: {l:.5f} {u:.5f}".format(m=postMeanBeta,
s=postStdBeta,
l=postCIBeta[0],u=postCIBeta[1]))
```
### posterior correlations for initial conditions
```
sis = np.array( [ rr['x0'][0] for rr in result_mcmc ] )/popN
eis = np.array( [ rr['x0'][1] for rr in result_mcmc ] )/popN
iis = np.array( [ rr['x0'][2] for rr in result_mcmc ] )/popN
betas = [ rr['params_dict']['beta'] for rr in result_mcmc ]
fig,axs = plt.subplots(1,3,figsize=(15,4))
plt.subplots_adjust(wspace=0.35)
ax = axs[0]
ax.plot(eis,iis,'o',ms=2)
ax.set_xlabel('E0')
ax.set_ylabel('I0')
ax = axs[1]
ax.plot(1-eis-iis-sis,sis,'o',ms=2)
ax.set_ylabel('S0')
ax.set_xlabel('R0')
ax = axs[2]
ax.plot(1-eis-iis-sis,betas,'o',ms=2)
ax.set_ylabel('beta')
ax.set_xlabel('R0')
plt.show()
def forecast(result_mcmc, nsamples, Nf_start, Tf_inference, Nf_inference, estimator, obs, fltr, contactMatrix):
    trajs = []
    #x = (data_array[Nf_start:Nf_start+Nf_inference])
    #obs=np.einsum('ij,kj->ki', fltr, x)
    # this should pick up the right number of traj, equally spaced
    totSamples = len(result_mcmc)
    skip = int(totSamples/nsamples)
    modulo = totSamples % skip
    #print(modulo,skip)
    for sample_res in result_mcmc[modulo::skip]:
        endpoints = estimator.sample_endpoints(obs, fltr, Tf_inference, sample_res, 1, contactMatrix=contactMatrix)
        xm = estimator.integrate(endpoints[0], Nf_start+Tf_inference, Tf, Nf-Tf_inference-Nf_start, dense_output=False)
        trajs.append(xm)
    return trajs
def plot_forecast(allTraj, data_array, nsamples, Tf,Nf, Nf_start, Tf_inference, Nf_inference, M,
                  estimator, obs, contactMatrix):
    #x = (data_array[Tf_start:Tf_start+Nf_inference]).astype('float')
    #obs=np.einsum('ij,kj->ki', fltr, x)
    #samples = estimator.sample_endpoints(obs, fltr, Tf_inference, res, nsamples, contactMatrix=contactMatrix)
    time_points = np.linspace(0, Tf, Nf)
    fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')
    plt.rcParams.update({'font.size': 22})
    #for x_start in samples:
    for traj in allTraj:
        #xm = estimator.integrate(x_start, Tf_start+Tf_inference, Tf, Nf-Tf_inference-Tf_start, dense_output=False)
        # plt.plot(time_points[Tf_inference+Tf_start:], np.sum(xm[:, M:2*M], axis=1), color='grey', alpha=0.1)
        # plt.plot(time_points[Tf_inference+Tf_start:], np.sum(xm[:, 2*M:3*M], axis=1), color='grey', alpha=0.1)
        incDeaths = np.diff( np.sum(traj[:, 3*M:4*M], axis=1) )
        plt.plot(time_points[1+Tf_inference+Nf_start:], incDeaths, color='grey', alpha=0.2)
    # plt.plot(time_points, np.sum(data_array[:, M:2*M], axis=1), label='True E')
    # plt.plot(time_points, np.sum(data_array[:, 2*M:3*M], axis=1), label='True I')
    incDeathsObs = np.diff( np.sum(data_array[:, 3*M:4*M], axis=1) )
    plt.plot(time_points[1:],incDeathsObs, 'ko', label='True D')
    plt.axvspan(Nf_start, Tf_inference+Nf_start,
                label='Used for inference',
                alpha=0.3, color='dodgerblue')
    plt.xlim([0, Tf])
    plt.legend()
    plt.show()
nsamples = 40
foreTraj = forecast(result_mcmc, nsamples, Nf_start, Tf_inference, Nf_inference,
estimator, obsData, fltrDeath, contactMatrix)
print(len(foreTraj))
foreTraj = np.array( foreTraj )
np.save(fileRoot+'-foreTraj.npy',foreTraj)
plot_forecast(foreTraj, syntheticData, nsamples, Tf,Nf, Nf_start, Tf_inference, Nf_inference, cohortsM,
estimator, obsData, contactMatrix)
print(Nf_inference)
print(len(result_mcmc))
```
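The `[modulo::skip]` slice used in `forecast` above selects roughly `nsamples` equally spaced entries from the chain, dropping the remainder off the front; checked in isolation:

```python
# Stand-in chain to check the equally spaced subsampling used in forecast()
result_mcmc = list(range(100))
nsamples = 7
skip = int(len(result_mcmc) / nsamples)   # 14
modulo = len(result_mcmc) % skip          # 2: remainder dropped from the front
picked = result_mcmc[modulo::skip]
```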
```
'''
Notebook to specifically study correlations between ELG targets and Galactic foregrounds
Much of this made possible and copied from script shared by Anand Raichoor
Run in Python 3; install pymangle, fitsio, healpy locally: pip install --user fitsio; pip install --user healpy; git clone https://github.com/esheldon/pymangle...
'''
import fitsio
import numpy as np
#from desitarget.io import read_targets_in_hp, read_targets_in_box, read_targets_in_cap
import astropy.io.fits as fits
import glob
import os
import healpy as hp
from matplotlib import pyplot as plt
#Some information is in the pixelized map
#get nside and nest from the header
pixfn = '/project/projectdirs/desi/target/catalogs/dr8/0.31.1/pixweight/pixweight-dr8-0.31.1.fits'
hdr = fits.getheader(pixfn,1)
nside,nest = hdr['HPXNSIDE'],hdr['HPXNEST']
print(nest)
print(fits.open(pixfn)[1].columns.names)
hpq = fitsio.read(pixfn)
#get MC efficiency
mcf = fitsio.read(os.getenv('SCRATCH')+'/ELGMCeffHSCHP.fits')
mmc = np.mean(mcf['EFF'])
mcl = np.zeros(12*nside*nside)
for i in range(0,len(mcf)):
    pix = mcf['HPXPIXEL'][i]
    mcl[pix] = mcf['EFF'][i]/mmc
#ELGs were saved here
elgf = os.getenv('SCRATCH')+'/ELGtargetinfo.fits'
#for healpix
def radec2thphi(ra,dec):
    return (-dec+90.)*np.pi/180.,ra*np.pi/180.
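# The conversion above returns healpy's (theta, phi) in radians: theta is the
# colatitude (dec = +90 deg -> 0, dec = -90 deg -> pi) and phi is RA in radians.
# Standalone sanity check (restating the formula locally; illustrative only):
def _radec2thphi_check(ra, dec):
    return (-dec + 90.) * np.pi / 180., ra * np.pi / 180.
assert abs(_radec2thphi_check(0., 90.)[0]) < 1e-12           # north celestial pole
assert abs(_radec2thphi_check(0., -90.)[0] - np.pi) < 1e-12  # south celestial pole
assert abs(_radec2thphi_check(180., 0.)[1] - np.pi) < 1e-12  # phi = RA in radians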
#read in ELGs, put them into healpix
felg = fitsio.read(elgf)
dth,dphi = radec2thphi(felg['RA'],felg['DEC'])
dpix = hp.ang2pix(nside,dth,dphi,nest)
lelg = len(felg)
print(lelg)
#full random file is available, easy to read some limited number; take 1.5x ELG to start with
rall = fitsio.read('/project/projectdirs/desi/target/catalogs/dr8/0.31.0/randomsall/randoms-inside-dr8-0.31.0-all.fits',rows=np.arange(int(1.5*lelg)))
rall_header = fitsio.read_header('/project/projectdirs/desi/target/catalogs/dr8/0.31.0/randomsall/randoms-inside-dr8-0.31.0-all.fits',ext=1)
#cut randoms to ELG footprint
keep = (rall['NOBS_G']>0) & (rall['NOBS_R']>0) & (rall['NOBS_Z']>0)
print(len(rall[keep]))
elgbits = [1,5,6,7,11,12,13]
keepelg = keep
for bit in elgbits:
    keepelg &= ((rall['MASKBITS'] & 2**bit)==0)
print(len(rall[keepelg]))
relg = rall[keepelg]
print(rall_header)
#write out randoms
#fitsio.write(os.getenv('SCRATCH')+'/ELGrandoms.fits',relg,overwrite=True)
#put randoms into healpix
rth,rphi = radec2thphi(relg['RA'],relg['DEC'])
rpix = hp.ang2pix(nside,rth,rphi,nest=nest)
#let's define split into bmzls, DECaLS North, DECaLS South (Anand has tools to make distinct DES region as well)
#one function to do directly, the other just for the indices
print(np.unique(felg['PHOTSYS']))
#bmzls = b'N' #if in desi environment
bmzls = 'N' #if in Python 3; why the difference? Maybe version of fitsio?
def splitcat(cat):
    NN = cat['PHOTSYS'] == bmzls
    d1 = (cat['PHOTSYS'] != bmzls) & (cat['RA'] < 300) & (cat['RA'] > 100) & (cat['DEC'] > -20)
    d2 = (d1==0) & (NN ==0) & (cat['DEC'] > -30)
    return cat[NN],cat[d1],cat[d2]
def splitcat_ind(cat):
    NN = cat['PHOTSYS'] == bmzls
    d1 = (cat['PHOTSYS'] != bmzls) & (cat['RA'] < 300) & (cat['RA'] > 100) & (cat['DEC'] > -20)
    d2 = (d1==0) & (NN ==0) & (cat['DEC'] > -30)
    return NN,d1,d2
#indices for split
dbml,ddnl,ddsl = splitcat_ind(felg)
rbml,rdnl,rdsl = splitcat_ind(relg)
print(len(felg[dbml]),len(felg[ddnl]),len(felg[ddsl]))
#put into full sky maps (probably not necessary but easier to keep straight down the line)
pixlrbm = np.zeros(12*nside*nside)
pixlgbm = np.zeros(12*nside*nside)
pixlrdn = np.zeros(12*nside*nside)
pixlgdn = np.zeros(12*nside*nside)
pixlrds = np.zeros(12*nside*nside)
pixlgds = np.zeros(12*nside*nside)
for pix in rpix[rbml]:
    pixlrbm[pix] += 1.
print('randoms done')
for pix in dpix[dbml]:
    pixlgbm[pix] += 1.
for pix in rpix[rdnl]:
    pixlrdn[pix] += 1.
print('randoms done')
for pix in dpix[ddnl]:
    pixlgdn[pix] += 1.
for pix in rpix[rdsl]:
    pixlrds[pix] += 1.
print('randoms done')
for pix in dpix[ddsl]:
    pixlgds[pix] += 1.
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*hpq['STARDENS']+b)
print(len(pixlgds))
def plotvshp(r1,d1,sys,rng,gdzm=20,ebvm=0.15,useMCeff=True,correctstar=False,title='',effac=1.,south=True):
    w = hpq['GALDEPTH_Z'] > gdzm
    w &= hpq['EBV'] < ebvm
    if useMCeff:
        w &= mcl > 0
    if sys != 'gdc' and sys != 'rdc' and sys != 'zdc':
        sm = hpq[w][sys]
    else:
        if sys == 'gdc':
            print('g depth, extinction corrected')
            sm = hpq[w]['GALDEPTH_G']*np.exp(-3.214*hpq[w]['EBV'])
        if sys == 'rdc':
            sm = hpq[w]['GALDEPTH_R']*np.exp(-2.165*hpq[w]['EBV'])
        if sys == 'zdc':
            sm = hpq[w]['GALDEPTH_Z']*np.exp(-1.211*hpq[w]['EBV'])
    ds = np.ones(len(d1))
    if correctstar:
        ds = ws
    dmc = np.ones(len(d1))
    if useMCeff:
        dmc = mcl**effac
    hd1 = np.histogram(sm,weights=d1[w]*ds[w]/dmc[w],range=rng)
    hdnoc = np.histogram(sm,weights=d1[w],bins=hd1[1],range=rng)
    #print(hd1)
    hr1 = np.histogram(sm,weights=r1[w],bins=hd1[1],range=rng)
    #print(hr1)
    xl = []
    for i in range(0,len(hd1[0])):
        xl.append((hd1[1][i]+hd1[1][i+1])/2.)
    plt.errorbar(xl,hd1[0]/hr1[0]/(sum(d1[w]*ds[w]/dmc[w])/sum(r1[w])),np.sqrt(hd1[0])/hr1[0]/(lelg/len(relg)),fmt='ko')
    if useMCeff:
        plt.plot(xl,hdnoc[0]/hr1[0]/(sum(d1[w])/sum(r1[w])),'k--')
    print(hd1[0]/hr1[0]/(sum(d1[w]*ds[w]/dmc[w])/sum(r1[w])))
    #plt.title(str(mp)+reg)
    plt.plot(xl,np.ones(len(xl)),'k:')
    plt.ylabel('relative density')
    plt.xlabel(sys)
    plt.ylim(0.7,1.3)
    plt.title(title)
    plt.show()
title = 'DECaLS South'
effac=2.
plotvshp(pixlrds,pixlgds,'STARDENS',(0,0.5e4),title=title,effac=effac)
plotvshp(pixlrds,pixlgds,'PSFSIZE_G',(.9,2.5),title=title,effac=effac)
plotvshp(pixlrds,pixlgds,'PSFSIZE_R',(.8,2.5),title=title,effac=effac)
plotvshp(pixlrds,pixlgds,'PSFSIZE_Z',(.8,2.5),title=title,effac=effac)
plotvshp(pixlrds,pixlgds,'EBV',(0,0.15),title=title,effac=effac)
plotvshp(pixlrds,pixlgds,'gdc',(0,3000),title=title,effac=effac)
plotvshp(pixlrds,pixlgds,'rdc',(0,1000),title=title,effac=effac)
plotvshp(pixlrds,pixlgds,'zdc',(20,200),title=title,effac=effac)
title = 'DECaLS North'
effac=2.
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*hpq['STARDENS']+b)
cs = True
plotvshp(pixlrdn,pixlgdn,'STARDENS',(0,0.5e4),title=title,effac=effac,correctstar=False)
plotvshp(pixlrdn,pixlgdn,'PSFSIZE_G',(.8,2.5),title=title,effac=effac,correctstar=False)
plotvshp(pixlrdn,pixlgdn,'PSFSIZE_R',(.8,2.5),title=title,effac=effac)
plotvshp(pixlrdn,pixlgdn,'PSFSIZE_Z',(.8,2.),title=title,effac=effac)
plotvshp(pixlrdn,pixlgdn,'EBV',(0,0.15),title=title,effac=effac,correctstar=cs)
plotvshp(pixlrdn,pixlgdn,'gdc',(0,3000),title=title,effac=effac,correctstar=cs)
plotvshp(pixlrdn,pixlgdn,'rdc',(0,1000),title=title,effac=effac,correctstar=cs)
plotvshp(pixlrdn,pixlgdn,'zdc',(20,200),title=title,effac=effac,correctstar=cs)
title = 'BASS/MZLS'
effac=1.
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*hpq['STARDENS']+b)
cs = True
plotvshp(pixlrbm,pixlgbm,'STARDENS',(0,0.5e4),title=title,effac=effac,correctstar=cs)
plotvshp(pixlrbm,pixlgbm,'PSFSIZE_G',(.8,2.5),title=title,effac=effac,correctstar=False)
plotvshp(pixlrbm,pixlgbm,'PSFSIZE_R',(.8,2.5),title=title,effac=effac)
plotvshp(pixlrbm,pixlgbm,'PSFSIZE_Z',(.8,2.),title=title,effac=effac)
plotvshp(pixlrbm,pixlgbm,'EBV',(0,0.15),title=title,effac=effac,correctstar=cs)
plotvshp(pixlrbm,pixlgbm,'gdc',(0,2000),title=title,effac=effac,correctstar=cs)
plotvshp(pixlrbm,pixlgbm,'rdc',(0,1000),title=title,effac=effac,correctstar=cs)
plotvshp(pixlrbm,pixlgbm,'zdc',(20,200),title=title,effac=effac,correctstar=cs)
'''
Below here, directly use data/randoms
'''
#Open files with grids for efficiency and define function to interpolate them (to be improved)
grids = np.loadtxt(os.getenv('SCRATCH')+'/ELGeffgridsouth.dat').transpose()
#grids[3] = grids[3]
gridn = np.loadtxt(os.getenv('SCRATCH')+'/ELGeffgridnorth.dat').transpose()
#print(np.mean(gridn[3]))
#gridn[3] = gridn[3]/np.mean(gridn[3])
def interpeff(gsig,rsig,zsig,south=True):
    md = 0
    xg = 0.15
    #if gsig > xg:
    #    gsig = .99*xg
    xr = 0.15
    #if rsig > xr:
    #    rsig = 0.99*xr
    xz = 0.4
    #if zsig > xz:
    #    zsig = 0.99*xz
    ngp = 30
    if south:
        grid = grids
    else:
        grid = gridn
    i = (ngp*gsig/(xg-md)).astype(int)
    j = (ngp*rsig/(xr-md)).astype(int)
    k = (ngp*zsig/(xz-md)).astype(int)
    ind = (i*ngp**2.+j*ngp+k).astype(int)
    #print(i,j,k,ind)
    #print(grid[0][ind],grid[1][ind],grid[2][ind])
    #print(grid[0][ind-1],grid[1][ind-1],grid[2][ind-1])
    #print(grid[0][ind+1],grid[1][ind+1],grid[2][ind+1])
    return grid[3][ind]
#print(interpeff([0.0],[0.0],[0.0],south=False))
#print(interpeff(0.0,0.0,0.0,south=True))
#print(0.1/.4)
#print(0.4/30.)
#grid[2][0]
#Get depth values that match those used for efficiency grids
depth_keyword="PSFDEPTH"
R_G=3.214 # http://legacysurvey.org/dr8/catalogs/#galactic-extinction-coefficients
R_R=2.165
R_Z=1.211
gsigmad=1./np.sqrt(felg[depth_keyword+"_G"])
rsigmad=1./np.sqrt(felg[depth_keyword+"_R"])
zsigmad=1./np.sqrt(felg[depth_keyword+"_Z"])
gsig = gsigmad*10**(0.4*R_G*felg["EBV"])
w = gsig >= 0.15
gsig[w] = 0.99*0.15
rsig = rsigmad*10**(0.4*R_R*felg["EBV"])
w = rsig >= 0.15
rsig[w] = 0.99*0.15
zsig = zsigmad*10**(0.4*R_Z*felg["EBV"])
w = zsig >= 0.4
zsig[w] = 0.99*0.4
print(min(gsig),max(gsig))
effsouthl = interpeff(gsig,rsig,zsig,south=True)
effnorthl = interpeff(gsig,rsig,zsig,south=False)
plt.hist(effnorthl,bins=100)
plt.show()
effbm = effnorthl[dbml]
print(np.mean(effbm))
effbm = effbm/np.mean(effbm)
plt.hist(effbm,bins=100)
plt.show()
effdn = effsouthl[ddnl]
print(np.mean(effdn))
effdn = effdn/np.mean(effdn)
plt.hist(effdn,bins=100)
plt.show()
#plt.scatter(felg[dbml]['RA'],felg[dbml]['DEC'],c=effbm)
#plt.colorbar()
#plt.show()
effds = effsouthl[ddsl]
print(np.mean(effds))
effds = effds/np.mean(effds)
plt.hist(effds,bins=100)
plt.show()
stardensg = np.zeros(len(felg))
print(len(felg),len(dpix))
for i in range(0,len(dpix)):
    if i%1000000==0 : print(i)
    pix = dpix[i]
    stardensg[i] = hpq['STARDENS'][pix]
stardensr = np.zeros(len(relg))
print(len(relg),len(rpix))
for i in range(0,len(rpix)):
    if i%1000000==0 : print(i)
    pix = rpix[i]
    stardensr[i] = hpq['STARDENS'][pix]
#bmzls
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[dbml]+b)
hg1 = np.histogram(felg[dbml]['GALDEPTH_G']*np.exp(-3.214*felg[dbml]['EBV']),weights=1./effbm*ws,range=(0,2000))
hr1 = np.histogram(relg[rbml]['GALDEPTH_G']*np.exp(-3.214*relg[rbml]['EBV']),bins=hg1[1])
#no correction
hgn1 = np.histogram(felg[dbml]['GALDEPTH_G']*np.exp(-3.214*felg[dbml]['EBV']),bins=hg1[1])
hrn1 = np.histogram(relg[rbml]['GALDEPTH_G']*np.exp(-3.214*relg[rbml]['EBV']),bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*stardensg[ddnl]+b)
hg2 = np.histogram(felg[ddnl]['GALDEPTH_G']*np.exp(-3.214*felg[ddnl]['EBV']),weights=1./effdn**2.*ws,range=(0,3000))
hr2 = np.histogram(relg[rdnl]['GALDEPTH_G']*np.exp(-3.214*relg[rdnl]['EBV']),bins=hg2[1])
hgn2 = np.histogram(felg[ddnl]['GALDEPTH_G']*np.exp(-3.214*felg[ddnl]['EBV']),bins=hg2[1])
hrn2 = np.histogram(relg[rdnl]['GALDEPTH_G']*np.exp(-3.214*relg[rdnl]['EBV']),bins=hg2[1])
#DECaLS S
#no strong relation with stellar density
hg3 = np.histogram(felg[ddsl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsl]['EBV']),weights=1./effds**2.,range=(0,2000))
hr3 = np.histogram(relg[rdsl]['GALDEPTH_G']*np.exp(-3.214*relg[rdsl]['EBV']),bins=hg3[1])
hgn3 = np.histogram(felg[ddsl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsl]['EBV']),bins=hg3[1])
hrn3 = np.histogram(relg[rdsl]['GALDEPTH_G']*np.exp(-3.214*relg[rdsl]['EBV']),bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
    xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
    xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
    xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hrn1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hrn2[0]/norm2,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hrn3[0]/norm3,'b:')
plt.ylim(.7,1.3)
plt.xlabel('GALDEPTH_G*MWTRANS')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
#bmzls
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[dbml]+b)
hg1 = np.histogram(felg[dbml]['GALDEPTH_R']*np.exp(-1.*R_R*felg[dbml]['EBV']),weights=1./effbm*ws,range=(0,500))
hr1 = np.histogram(relg[rbml]['GALDEPTH_R']*np.exp(-1.*R_R*relg[rbml]['EBV']),bins=hg1[1])
hgn1 = np.histogram(felg[dbml]['GALDEPTH_R']*np.exp(-1.*R_R*felg[dbml]['EBV']),bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*stardensg[ddnl]+b)
hg2 = np.histogram(felg[ddnl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddnl]['EBV']),weights=1./effdn**2.*ws,range=(0,1000))
hgn2 = np.histogram(felg[ddnl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddnl]['EBV']),bins=hg2[1])
hr2 = np.histogram(relg[rdnl]['GALDEPTH_R']*np.exp(-1.*R_R*relg[rdnl]['EBV']),bins=hg2[1])
#DECaLS S
hg3 = np.histogram(felg[ddsl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddsl]['EBV']),weights=1./effds**2.,range=(0,1000))
hgn3 = np.histogram(felg[ddsl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddsl]['EBV']),bins=hg3[1])
hr3 = np.histogram(relg[rdsl]['GALDEPTH_R']*np.exp(-1.*R_R*relg[rdsl]['EBV']),bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
    xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
    xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
    xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hr3[0]/norm3,'b:')
plt.ylim(.7,1.3)
plt.xlabel('GALDEPTH_R*MWTRANS')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
#bmzls
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[dbml]+b)
hg1 = np.histogram(felg[dbml]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[dbml]['EBV']),weights=1./effbm*ws,range=(0,200))
hgn1 = np.histogram(felg[dbml]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[dbml]['EBV']),bins=hg1[1])
hr1 = np.histogram(relg[rbml]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rbml]['EBV']),bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*stardensg[ddnl]+b)
hg2 = np.histogram(felg[ddnl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddnl]['EBV']),weights=1./effdn**2.*ws,range=(0,200))
hgn2 = np.histogram(felg[ddnl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddnl]['EBV']),bins=hg2[1])
hr2 = np.histogram(relg[rdnl]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rdnl]['EBV']),bins=hg2[1])
#DECaLS S
hg3 = np.histogram(felg[ddsl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsl]['EBV']),weights=1./effds**2.,range=(0,200))
hgn3 = np.histogram(felg[ddsl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsl]['EBV']),bins=hg3[1])
hr3 = np.histogram(relg[rdsl]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rdsl]['EBV']),bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
    xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
    xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
    xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hr3[0]/norm3,'b:')
plt.ylim(.7,1.3)
plt.xlabel('GALDEPTH_Z*MWTRANS')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
#bmzls
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[dbml]+b)
hg1 = np.histogram(stardensg[dbml],weights=1./effbm,range=(0,5000))
hgn1 = np.histogram(stardensg[dbml],bins=hg1[1])
hr1 = np.histogram(stardensr[rbml],bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*stardensg[ddnl]+b)
hg2 = np.histogram(stardensg[ddnl],weights=1./effdn**2.,range=(0,5000))
hgn2 = np.histogram(stardensg[ddnl],bins=hg2[1])
hr2 = np.histogram(stardensr[rdnl],bins=hg2[1])
#DECaLS S
hg3 = np.histogram(stardensg[ddsl],weights=1./effds**2.,range=(0,5000))
hgn3 = np.histogram(stardensg[ddsl],bins=hg3[1])
hr3 = np.histogram(stardensr[rdsl],bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hr3[0]/norm3,'b:')
plt.ylim(.7,1.3)
plt.xlabel('Stellar Density')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.title('dashed is before MC correction, points are after')
plt.show()
#bmzls
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[dbml]+b)
hg1 = np.histogram(felg[dbml]['EBV'],weights=1./effbm*ws,range=(0,0.15))
hgn1 = np.histogram(felg[dbml]['EBV'],bins=hg1[1])
hr1 = np.histogram(relg[rbml]['EBV'],bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*stardensg[ddnl]+b)
hg2 = np.histogram(felg[ddnl]['EBV'],weights=1./effdn**2.*ws,range=(0,0.15))
hgn2 = np.histogram(felg[ddnl]['EBV'],bins=hg2[1])
hr2 = np.histogram(relg[rdnl]['EBV'],bins=hg2[1])
#DECaLS S
hg3 = np.histogram(felg[ddsl]['EBV'],weights=1./effds**2.,range=(0,0.15))
hgn3 = np.histogram(felg[ddsl]['EBV'],bins=hg3[1])
hr3 = np.histogram(relg[rdsl]['EBV'],bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hr3[0]/norm3,'b:')
plt.ylim(.7,1.3)
plt.xlabel('E(B-V)')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
nh1 = fits.open('NHI_HPX.fits.gz')[1].data['NHI']
#make data column
thphi = radec2thphi(felg['RA'],felg['DEC'])
r = hp.Rotator(coord=['C','G'],deg=False)
thphiG = r(thphi[0],thphi[1])
pixhg = hp.ang2pix(1024,thphiG[0],thphiG[1])
h1g = np.zeros(len(felg))
for i in range(0,len(pixhg)):
h1g[i] = np.log(nh1[pixhg[i]])
if i%1000000==0 : print(i)
#make random column
thphi = radec2thphi(relg['RA'],relg['DEC'])
r = hp.Rotator(coord=['C','G'],deg=False)
thphiG = r(thphi[0],thphi[1])
pixhg = hp.ang2pix(1024,thphiG[0],thphiG[1])
h1r = np.zeros(len(relg))
for i in range(0,len(pixhg)):
h1r[i] = np.log(nh1[pixhg[i]])
if i%1000000==0 : print(i)
#bmzls
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[dbml]+b)
hg1 = np.histogram(h1g[dbml],weights=1./effbm*ws)
hgn1 = np.histogram(h1g[dbml],bins=hg1[1])
hr1 = np.histogram(h1r[rbml],bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*stardensg[ddnl]+b)
hg2 = np.histogram(h1g[ddnl],weights=1./effdn**2.*ws)
hgn2 = np.histogram(h1g[ddnl],bins=hg2[1])
hr2 = np.histogram(h1r[rdnl],bins=hg2[1])
#DECaLS S
hg3 = np.histogram(h1g[ddsl],weights=1./effds**2.)
hgn3 = np.histogram(h1g[ddsl],bins=hg3[1])
hr3 = np.histogram(h1r[rdsl],bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hr3[0]/norm3,'b:')
plt.ylim(.7,1.3)
plt.xlabel('ln(HI)')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
a = np.random.rand(len(relg))
w = a < 0.01
plt.plot(h1r[w],relg[w]['EBV'],'.k')
plt.show()
a,b = np.histogram(h1r,weights=relg['EBV'])
c,d = np.histogram(h1r,bins=b)
print(a)
print(c)
plt.plot(0.008*np.exp(np.array(xl3)-45.5),(a/c))
plt.plot(a/c,a/c,'--')
plt.show()
dhg = felg['EBV']-0.008*np.exp(h1g-45.5)
dhr = relg['EBV']-0.008*np.exp(h1r-45.5)
#bmzls
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[dbml]+b)
hg1 = np.histogram(dhg[dbml],weights=1./effbm*ws,range=(-0.1,.15))
hgn1 = np.histogram(dhg[dbml],bins=hg1[1])
hr1 = np.histogram(dhr[rbml],bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*stardensg[ddnl]+b)
hg2 = np.histogram(dhg[ddnl],weights=1./effdn**2.*ws,range=(-0.1,.15))
hgn2 = np.histogram(dhg[ddnl],bins=hg2[1])
hr2 = np.histogram(dhr[rdnl],bins=hg2[1])
#DECaLS S
hg3 = np.histogram(dhg[ddsl],weights=1./effds**2.,range=(-0.1,.15))
hgn3 = np.histogram(dhg[ddsl],bins=hg3[1])
hr3 = np.histogram(dhr[rdsl],bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hr3[0]/norm3,'b:')
plt.ylim(.7,1.3)
plt.xlabel('diff HI EBV')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
plt.scatter(relg[w]['RA'],relg[w]['DEC'],c=dhr[w],s=.1,vmax=0.04,vmin=-0.04)
plt.colorbar()
plt.show()
wr = abs(dhr) > 0.02
wg = abs(dhg) > 0.02
print(len(relg[wr])/len(relg))
print(len(felg[wg])/len(felg))
#bmzls
w1g = ~wg & dbml
w1r = ~wr & rbml
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[w1g]+b)
wsn = 1./(slp*stardensg[dbml]+b)
effbmw = effnorthl[w1g]
hg1 = np.histogram(felg[w1g]['EBV'],weights=1./effbmw*ws,range=(0,0.15))
hgn1 = np.histogram(felg[dbml]['EBV'],bins=hg1[1],weights=1./effbm*wsn)
hrn1 = np.histogram(relg[rbml]['EBV'],bins=hg1[1])
hr1 = np.histogram(relg[w1r]['EBV'],bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
w1g = ~wg & ddnl
w1r = ~wr & rdnl
ws = 1./(slp*stardensg[w1g]+b)
wsn = 1./(slp*stardensg[ddnl]+b)
effdnw = effsouthl[w1g]
hg2 = np.histogram(felg[w1g]['EBV'],weights=1./effdnw**2.*ws,range=(0,0.15))
hgn2 = np.histogram(felg[ddnl]['EBV'],bins=hg2[1],weights=1./effdn**2.*wsn)
hrn2 = np.histogram(relg[rdnl]['EBV'],bins=hg2[1])
hr2 = np.histogram(relg[w1r]['EBV'],bins=hg2[1])
#DECaLS S
w1g = ~wg & ddsl
w1r = ~wr & rdsl
effdsw = effsouthl[w1g]
hg3 = np.histogram(felg[w1g]['EBV'],weights=1./effdsw**2.,range=(0,0.15))
hgn3 = np.histogram(felg[ddsl]['EBV'],bins=hg3[1],weights=1./effds**2.)
hrn3 = np.histogram(relg[rdsl]['EBV'],bins=hg3[1])
hr3 = np.histogram(relg[w1r]['EBV'],bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
norm1n = sum(hgn1[0])/sum(hrn1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hrn1[0]/norm1n,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
norm2n = sum(hgn2[0])/sum(hrn2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hrn2[0]/norm2n,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
norm3n = sum(hgn3[0])/sum(hrn3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hrn3[0]/norm3n,'b:')
plt.ylim(.7,1.3)
plt.xlabel('E(B-V)')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.title(r'dashed is before masking |$\Delta$|E(B-V)$>0.02$, points are after')
plt.show()
def plotvsstar(d1,r1,reg='',fmt='ko'):
w1 = d1
#w1 &= felg['MORPHTYPE'] == mp
#w1 &= d1['EBV'] < 0.15 #mask applied to (e)BOSS
#mr = r1['EBV'] < 0.15
hd1 = np.histogram(stardensg[w1],range=(0,5000))
#print(hd1)
hr1 = np.histogram(stardensr[r1],bins=hd1[1])
#print(hr1)
xl = []
for i in range(0,len(hd1[0])):
xl.append((hd1[1][i]+hd1[1][i+1])/2.)
plt.errorbar(xl,hd1[0]/hr1[0],np.sqrt(hd1[0])/hr1[0],fmt=fmt)
#plt.title(str(mp)+reg)
#plt.ylabel('relative density')
#plt.xlabel('stellar density')
#plt.show()
morphl = np.unique(felg['MORPHTYPE'])
print(morphl)
for mp in morphl:
msel = felg['MORPHTYPE'] == mp
tsel = ddsl & msel
tseln = ddnl & msel
print(mp)
print(len(felg[tsel])/len(felg[ddsl]),len(felg[tseln])/len(felg[ddnl]))
plotvsstar(tsel,rdsl,'DECaLS South')
plotvsstar(tseln,rdnl,'DECaLS North',fmt='rd')
# plt.title(str(mp)+reg)
plt.ylabel('relative density')
plt.xlabel('stellar density')
plt.legend(['DECaLS SGC','DECaLS NGC'])
plt.title('selecting type '+mp)
plt.show()
'''
Divide DECaLS S into DES and non-DES
'''
import pymangle
desply ='/global/cscratch1/sd/raichoor/desits/des.ply'
mng = pymangle.mangle.Mangle(desply)
polyidd = mng.polyid(felg['RA'],felg['DEC'])
isdesd = polyidd != -1
polyidr = mng.polyid(relg['RA'],relg['DEC'])
isdesr = polyidr != -1
ddsdl = ddsl & isdesd
ddsndl = ddsl & ~isdesd
rdsdl = rdsl & isdesr
rdsndl = rdsl & ~isdesr
#DECaLS SGC DES
hg1 = np.histogram(stardensg[ddsdl],weights=1./effsouthl[ddsdl]**2.,range=(0,5000))
#hg1 = np.histogram(stardensg[ddsdl],range=(0,5000))
hgn1 = np.histogram(stardensg[ddsdl],bins=hg1[1])
hr1 = np.histogram(stardensr[rdsdl],bins=hg1[1])
#DECaLS SGC not DES
hg2 = np.histogram(stardensg[ddsndl],weights=1./effsouthl[ddsndl]**2.,range=(0,5000))
#hg2 = np.histogram(stardensg[ddsndl],range=(0,5000))
hgn2 = np.histogram(stardensg[ddsndl],bins=hg2[1])
hr2 = np.histogram(stardensr[rdsndl],bins=hg2[1])
xl1 = []
xl2 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
#plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
#plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
plt.ylim(.7,1.3)
plt.xlabel('Stellar Density')
plt.ylabel('relative density')
plt.legend((['DES','SGC, not DES']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
#plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
'''
g-band depth
'''
#DECaLS SGC DES
hg1 = np.histogram(felg[ddsdl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsdl]['EBV']),weights=1./effsouthl[ddsdl]**2.,range=(0,2000))
#hg1 = np.histogram(stardensg[ddsdl],range=(0,5000))
hgn1 = np.histogram(felg[ddsdl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsdl]['EBV']),bins=hg1[1])
hr1 = np.histogram(relg[rdsdl]['GALDEPTH_G']*np.exp(-3.214*relg[rdsdl]['EBV']),bins=hg1[1])
#DECaLS SGC not DES
hg2 = np.histogram(felg[ddsndl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsndl]['EBV']),weights=1./effsouthl[ddsndl]**2.,range=(0,2000))
#hg2 = np.histogram(stardensg[ddsndl],range=(0,5000))
hgn2 = np.histogram(felg[ddsndl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsndl]['EBV']),bins=hg2[1])
hr2 = np.histogram(relg[rdsndl]['GALDEPTH_G']*np.exp(-3.214*relg[rdsndl]['EBV']),bins=hg2[1])
xl1 = []
xl2 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
#plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
#plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
plt.ylim(.7,1.3)
plt.xlabel('GALDEPTH_G*MWTRANS')
plt.ylabel('relative density')
plt.legend((['DES','SGC, not DES']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
#plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
'''
r-band depth
'''
#DECaLS SGC DES
hg1 = np.histogram(felg[ddsdl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddsdl]['EBV']),weights=1./effsouthl[ddsdl]**2.,range=(0,2000))
#hg1 = np.histogram(stardensg[ddsdl],range=(0,5000))
hgn1 = np.histogram(felg[ddsdl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddsdl]['EBV']),bins=hg1[1])
hr1 = np.histogram(relg[rdsdl]['GALDEPTH_R']*np.exp(-1.*R_R*relg[rdsdl]['EBV']),bins=hg1[1])
#DECaLS SGC not DES
hg2 = np.histogram(felg[ddsndl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddsndl]['EBV']),weights=1./effsouthl[ddsndl]**2.,range=(0,2000))
#hg2 = np.histogram(stardensg[ddsndl],range=(0,5000))
hgn2 = np.histogram(felg[ddsndl]['GALDEPTH_R']*np.exp(-1.*R_R*felg[ddsndl]['EBV']),bins=hg2[1])
hr2 = np.histogram(relg[rdsndl]['GALDEPTH_R']*np.exp(-1.*R_R*relg[rdsndl]['EBV']),bins=hg2[1])
xl1 = []
xl2 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
#plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
#plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
plt.ylim(.7,1.3)
plt.xlabel('GALDEPTH_R*MWTRANS')
plt.ylabel('relative density')
plt.legend((['DES','SGC, not DES']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
#plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
'''
z-band depth
'''
#DECaLS SGC DES
hg1 = np.histogram(felg[ddsdl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsdl]['EBV']),weights=1./effsouthl[ddsdl]**2.,range=(0,500))
#hg1 = np.histogram(stardensg[ddsdl],range=(0,5000))
hgn1 = np.histogram(felg[ddsdl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsdl]['EBV']),bins=hg1[1])
hr1 = np.histogram(relg[rdsdl]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rdsdl]['EBV']),bins=hg1[1])
#DECaLS SGC not DES
hg2 = np.histogram(felg[ddsndl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsndl]['EBV']),weights=1./effsouthl[ddsndl]**2.,range=(0,500))
#hg2 = np.histogram(stardensg[ddsndl],range=(0,5000))
hgn2 = np.histogram(felg[ddsndl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsndl]['EBV']),bins=hg2[1])
hr2 = np.histogram(relg[rdsndl]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rdsndl]['EBV']),bins=hg2[1])
xl1 = []
xl2 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
#plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
#plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
plt.ylim(.7,1.3)
plt.xlabel('GALDEPTH_Z*MWTRANS')
plt.ylabel('relative density')
plt.legend((['DES','SGC, not DES']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
#plt.title('dashed is before MC+stellar density correction, points are after')
plt.show()
'''
The above results didn't quite work at low depth; here we check what happens when the SNR requirements are ignored in the MC.
The results are gone, but they basically showed that removing the SNR requirements makes things worse.
'''
grids = np.loadtxt(os.getenv('SCRATCH')+'/ELGeffnosnrgridsouth.dat').transpose()
#grids[3] = grids[3]
gridn = np.loadtxt(os.getenv('SCRATCH')+'/ELGeffnosnrgridnorth.dat').transpose()
effsouthlno = interpeff(gsig,rsig,zsig,south=True)
effnorthlno = interpeff(gsig,rsig,zsig,south=False)
effbmno = effnorthlno[dbml]
print(np.mean(effbmno))
effbmno = effbmno/np.mean(effbmno)
plt.hist(effbmno,bins=100)
plt.show()
effdnno = effsouthlno[ddnl]
print(np.mean(effdnno))
effdnno = effdnno/np.mean(effdnno)
plt.hist(effdnno,bins=100)
plt.show()
#plt.scatter(felg[dbml]['RA'],felg[dbml]['DEC'],c=effbm)
#plt.colorbar()
#plt.show()
effdsno = effsouthlno[ddsl]
print(np.mean(effdsno))
effdsno = effdsno/np.mean(effdsno)
plt.hist(effdsno,bins=100)
plt.show()
#bmzls
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[dbml]+b)
hg1 = np.histogram(felg[dbml]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[dbml]['EBV']),weights=1./effbmno*ws,range=(0,200))
hgn1 = np.histogram(felg[dbml]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[dbml]['EBV']),bins=hg1[1])
hr1 = np.histogram(relg[rbml]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rbml]['EBV']),bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*stardensg[ddnl]+b)
hg2 = np.histogram(felg[ddnl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddnl]['EBV']),weights=1./effdnno**2.*ws,range=(0,200))
hgn2 = np.histogram(felg[ddnl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddnl]['EBV']),bins=hg2[1])
hr2 = np.histogram(relg[rdnl]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rdnl]['EBV']),bins=hg2[1])
#DECaLS S
hg3 = np.histogram(felg[ddsl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsl]['EBV']),weights=1./effdsno**2.,range=(0,200))
hgn3 = np.histogram(felg[ddsl]['GALDEPTH_Z']*np.exp(-1.*R_Z*felg[ddsl]['EBV']),bins=hg3[1])
hr3 = np.histogram(relg[rdsl]['GALDEPTH_Z']*np.exp(-1.*R_Z*relg[rdsl]['EBV']),bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hr1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hr2[0]/norm2,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hr3[0]/norm3,'b:')
plt.ylim(.7,1.3)
plt.xlabel('GALDEPTH_Z*MWTRANS')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.show()
#bmzls
slp = -0.2/4000.
b = 1.1
ws = 1./(slp*stardensg[dbml]+b)
hg1 = np.histogram(felg[dbml]['GALDEPTH_G']*np.exp(-3.214*felg[dbml]['EBV']),weights=1./effbmno*ws,range=(0,2000))
hr1 = np.histogram(relg[rbml]['GALDEPTH_G']*np.exp(-3.214*relg[rbml]['EBV']),bins=hg1[1])
#no correction
hgn1 = np.histogram(felg[dbml]['GALDEPTH_G']*np.exp(-3.214*felg[dbml]['EBV']),bins=hg1[1])
hrn1 = np.histogram(relg[rbml]['GALDEPTH_G']*np.exp(-3.214*relg[rbml]['EBV']),bins=hg1[1])
#DECaLS N
slp = -0.35/4000.
b = 1.1
ws = 1./(slp*stardensg[ddnl]+b)
hg2 = np.histogram(felg[ddnl]['GALDEPTH_G']*np.exp(-3.214*felg[ddnl]['EBV']),weights=1./effdnno**2.*ws,range=(0,3000))
hr2 = np.histogram(relg[rdnl]['GALDEPTH_G']*np.exp(-3.214*relg[rdnl]['EBV']),bins=hg2[1])
hgn2 = np.histogram(felg[ddnl]['GALDEPTH_G']*np.exp(-3.214*felg[ddnl]['EBV']),bins=hg2[1])
hrn2 = np.histogram(relg[rdnl]['GALDEPTH_G']*np.exp(-3.214*relg[rdnl]['EBV']),bins=hg2[1])
#DECaLS S
#no strong relation with stellar density
hg3 = np.histogram(felg[ddsl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsl]['EBV']),weights=1./effdsno**2.,range=(0,2000))
hr3 = np.histogram(relg[rdsl]['GALDEPTH_G']*np.exp(-3.214*relg[rdsl]['EBV']),bins=hg3[1])
hgn3 = np.histogram(felg[ddsl]['GALDEPTH_G']*np.exp(-3.214*felg[ddsl]['EBV']),bins=hg3[1])
hrn3 = np.histogram(relg[rdsl]['GALDEPTH_G']*np.exp(-3.214*relg[rdsl]['EBV']),bins=hg3[1])
xl1 = []
xl2 = []
xl3 = []
for i in range(0,len(hg1[0])):
xl1.append((hg1[1][i]+hg1[1][i+1])/2.)
xl2.append((hg2[1][i]+hg2[1][i+1])/2.)
xl3.append((hg3[1][i]+hg3[1][i+1])/2.)
norm1 = sum(hg1[0])/sum(hr1[0])
plt.errorbar(xl1,hg1[0]/hr1[0]/norm1,np.sqrt(hg1[0])/hr1[0],fmt='ko')
plt.plot(xl1,hgn1[0]/hrn1[0]/norm1,'k:')
norm2 = sum(hg2[0])/sum(hr2[0])
plt.errorbar(xl2,hg2[0]/hr2[0]/norm2,np.sqrt(hg2[0])/hr2[0],fmt='rd')
plt.plot(xl2,hgn2[0]/hrn2[0]/norm2,'r:')
norm3 = sum(hg3[0])/sum(hr3[0])
plt.errorbar(xl3,hg3[0]/hr3[0]/norm3,np.sqrt(hg3[0])/hr3[0],fmt='b^')
plt.plot(xl3,hgn3[0]/hrn3[0]/norm3,'b:')
plt.ylim(.7,1.3)
plt.xlabel('GALDEPTH_G*MWTRANS')
plt.ylabel('relative density')
plt.legend((['bmzls','DECaLS N','DECaLS S']))
plt.plot(xl2,np.ones(len(xl2)),'k--')
plt.show()
```
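The cells above repeat a single histogram-ratio pattern many times: a weighted galaxy histogram divided by the corresponding random histogram, normalized so the mean ratio is one, with error bars `sqrt(hg)/hr`. A minimal helper sketch of that pattern — the function name and signature are illustrative, not part of the original notebook:

```
import numpy as np

def rel_density(vals_gal, vals_ran, weights=None, bins=10, vrange=None):
    """Return bin centers, the normalized galaxy/random ratio, and the
    error bars used above (sqrt of the galaxy counts over random counts)."""
    hg, edges = np.histogram(vals_gal, bins=bins, range=vrange, weights=weights)
    hr, _ = np.histogram(vals_ran, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    norm = hg.sum() / hr.sum()  # normalize the mean ratio to 1
    return centers, hg / hr / norm, np.sqrt(hg) / hr
```

With such a helper, each point set above could be drawn with a single call, e.g. `plt.errorbar(*rel_density(felg[dbml]['EBV'], relg[rbml]['EBV'], weights=1./effbm*ws, vrange=(0, 0.15)), fmt='ko')`.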
# Swish-based classifier with data augmentation and stochastic weight averaging
- Swish activation, 4 layers, 100 neurons per layer
- Data is augmented via phi rotations, and transverse and longitudinal flips
- Model uses a running average of previous weights
- Validation score uses an ensemble of 10 models weighted by loss
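The "running average of previous weights" (stochastic weight averaging) can be sketched as a cumulative mean of the weight tensors collected after each post-activation epoch. This is an illustrative sketch only, not the notebook's actual `swaStart`/`swaRenewal` implementation:

```
import numpy as np

def swa_update(swa_weights, new_weights, n_averaged):
    """Fold one more epoch's weights into the SWA running average.

    swa_weights: list of arrays (or None before SWA starts)
    new_weights: list of arrays from the current epoch
    n_averaged:  number of epochs already averaged
    """
    if swa_weights is None:
        # First SWA epoch: the average is just the current weights.
        return [np.array(w, dtype=float) for w in new_weights], 1
    updated = [(swa * n_averaged + np.asarray(w)) / (n_averaged + 1)
               for swa, w in zip(swa_weights, new_weights)]
    return updated, n_averaged + 1
```

At evaluation time the averaged weights replace the last-epoch weights, which is what suppresses the epoch-to-epoch fluctuations seen below.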
### Import modules
```
%matplotlib inline
from __future__ import division
import sys
import os
sys.path.append('../')
from Modules.Basics import *
from Modules.Class_Basics import *
```
## Options
```
with open(dirLoc + 'features.pkl', 'rb') as fin:
classTrainFeatures = pickle.load(fin)
nSplits = 10
patience = 50
maxEpochs = 200
ensembleSize = 10
ensembleMode = 'loss'
compileArgs = {'loss':'binary_crossentropy', 'optimizer':'adam'}
trainParams = {'epochs' : 1, 'batch_size' : 256, 'verbose' : 0}
modelParams = {'version':'modelSwish', 'nIn':len(classTrainFeatures), 'compileArgs':compileArgs, 'mode':'classifier'}
print ("\nTraining on", len(classTrainFeatures), "features:", [var for var in classTrainFeatures])
```
## Import data
```
with open(dirLoc + 'inputPipe.pkl', 'rb') as fin:
inputPipe = pickle.load(fin)
trainData = RotationReflectionBatch(classTrainFeatures, h5py.File(dirLoc + 'train.hdf5', "r+"),
inputPipe=inputPipe, augRotMult=16)
```
## Determine LR
```
lrFinder = batchLRFind(trainData, getModel, modelParams, trainParams,
lrBounds=[1e-5,1e-1], trainOnWeights=True, verbose=0)
```
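`batchLRFind` presumably implements a learning-rate range test: sweep the LR geometrically between the two bounds, training one mini-batch per step and recording the loss. A generic sketch of the sweep itself (the name `lr_schedule` is illustrative, not the module's API):

```
import numpy as np

def lr_schedule(lr_min, lr_max, n_steps):
    """Geometric LR sweep for a range test: lr_min -> lr_max over n_steps (>= 2)."""
    return lr_min * (lr_max / lr_min) ** (np.arange(n_steps) / (n_steps - 1))
```

One would then plot the smoothed loss against LR and pick a value roughly an order of magnitude below the loss minimum, which is how the `lr: 2e-3` used below would typically be chosen.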
## Train classifier
```
results, histories = batchTrainClassifier(trainData, nSplits, getModel,
{**modelParams, 'compileArgs':{**compileArgs, 'lr':2e-3}},
trainParams, trainOnWeights=True, maxEpochs=maxEpochs,
swaStart=125, swaRenewal=-1,
patience=patience, verbose=1, amsSize=250000)
```
Once SWA is activated at epoch 125, we find that the validation loss drops rapidly and then plateaus, with the statistical fluctuations strongly suppressed.
Compared to 5_Model_Data_Augmentation, the metrics are mostly the same, except for the AMS, which moves from 3.98 to 4.04.
## Construct ensemble
```
with open('train_weights/resultsFile.pkl', 'rb') as fin:
results = pickle.load(fin)
ensemble, weights = assembleEnsemble(results, ensembleSize, ensembleMode, compileArgs)
```
## Response on validation data with TTA
```
valData = RotationReflectionBatch(classTrainFeatures, h5py.File(dirLoc + 'val.hdf5', "r+"), inputPipe=inputPipe,
rotate = True, reflect = True, augRotMult=8)
batchEnsemblePredict(ensemble, weights, valData, ensembleSize=ensembleSize, verbose=1)
print('Testing ROC AUC: unweighted {}, weighted {}'.format(roc_auc_score(getFeature('targets', valData.source), getFeature('pred', valData.source)),
roc_auc_score(getFeature('targets', valData.source), getFeature('pred', valData.source), sample_weight=getFeature('weights', valData.source))))
amsScanSlow(convertToDF(valData.source))
%%time
bootstrapMeanAMS(convertToDF(valData.source), N=512)
```
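Test-time augmentation (TTA) here means predicting on rotated/reflected copies of each event and averaging the predictions, on the assumption that the augmentations leave the label invariant. A minimal sketch with a stand-in model (hypothetical names, not the `RotationReflectionBatch` API):

```
import numpy as np

def tta_predict(model, x, aug_fns):
    """Average model predictions over a set of augmentation functions."""
    preds = [model(fn(x)) for fn in aug_fns]
    return np.mean(preds, axis=0)
```

Averaging over augmentations reduces the variance of each prediction at the cost of running the model once per augmented copy.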
In the validation metrics we also find an improvement over 5_Model_Data_Augmentation: the overall AMS moves from 3.97 to 3.99, and the AMS corresponding to the mean cut increases from 3.91 to 3.97.
# Test scoring
```
testData = RotationReflectionBatch(classTrainFeatures, h5py.File(dirLoc + 'testing.hdf5', "r+"), inputPipe=inputPipe,
rotate = True, reflect = True, augRotMult=8)
%%time
batchEnsemblePredict(ensemble, weights, testData, ensembleSize=ensembleSize, verbose=1)
scoreTestOD(testData.source, 0.9606163307325915)
```
Unfortunately, applying the cut to the test data shows an improvement in the public score (3.65->3.68) but a large decrease in the private score (3.82->3.79).
# Save/Load
```
name = "weights/Swish_SWA-125"
saveEnsemble(name, ensemble, weights, compileArgs, overwrite=1)
ensemble, weights, compileArgs, _, _ = loadEnsemble(name)
```
```
import csv
import numpy as np
import os
import pandas as pd
import scipy.interpolate
import sklearn.metrics
import sys
sys.path.append("../src")
import localmodule
if sys.version_info[0] < 3:
from StringIO import StringIO
else:
from io import StringIO
from matplotlib import pyplot as plt
%matplotlib inline
# Define constants.
dataset_name = localmodule.get_dataset_name()
models_dir = localmodule.get_models_dir()
units = localmodule.get_units()
n_units = len(units)
n_trials = 10
import tqdm
model_names = [
"icassp-convnet", "icassp-convnet_aug-all-but-noise", "icassp-convnet_aug-all",
"pcen-convnet", "pcen-convnet_aug-all-but-noise", "pcen-convnet_aug-all",
"icassp-ntt-convnet", "icassp-ntt-convnet_aug-all-but-noise", "icassp-ntt-convnet_aug-all",
"pcen-ntt-convnet", "pcen-ntt-convnet_aug-all-but-noise", "pcen-ntt-convnet_aug-all",
"icassp-add-convnet", "icassp-add-convnet_aug-all-but-noise", "icassp-add-convnet_aug-all",
"pcen-add-convnet", "pcen-add-convnet_aug-all-but-noise", "pcen-add-convnet_aug-all",
]
n_models = len(model_names)
fold_accs = []
for fold_id in range(6):
model_accs = {}
for model_name in tqdm.tqdm(model_names):
val_accs = []
for trial_id in range(10):
model_dir = os.path.join(models_dir, model_name)
test_unit_str = units[fold_id]
test_unit_dir = os.path.join(model_dir, test_unit_str)
trial_str = "trial-" + str(trial_id)
trial_dir = os.path.join(test_unit_dir, trial_str)
val_unit_strs = localmodule.fold_units()[fold_id][2]
val_tn = 0
val_tp = 0
val_fn = 0
val_fp = 0
for val_unit_str in val_unit_strs:
predictions_name = "_".join([
dataset_name,
model_name,
"test-" + test_unit_str,
trial_str,
"predict-" + val_unit_str,
"clip-predictions.csv"
])
prediction_path = os.path.join(
trial_dir, predictions_name)
# Load prediction.
try:
with open(prediction_path, 'r') as f:
reader = csv.reader(f)
rows = list(reader)
rows = [",".join(row) for row in rows]
rows = rows[1:]
rows = "\n".join(rows)
# Parse rows with correct header.
df = pd.read_csv(StringIO(rows),
names=[
"Dataset",
"Test unit",
"Prediction unit",
"Timestamp",
"Center Freq (Hz)",
"Augmentation",
"Key",
"Ground truth",
"Predicted probability"])
y_pred = np.array(df["Predicted probability"])
y_pred = (y_pred > 0.5).astype('int')
# Load ground truth.
y_true = np.array(df["Ground truth"])
# Compute confusion matrix.
tn, fp, fn, tp = sklearn.metrics.confusion_matrix(
y_true, y_pred).ravel()
val_tn = val_tn + tn
val_fp = val_fp + fp
val_fn = val_fn + fn
val_tp = val_tp + tp
except:
val_tn = -np.inf
val_tp = -np.inf
val_fn = -np.inf
val_fp = -np.inf
if val_tn < 0:
val_acc = 0.0
else:
val_acc =\
100 * (val_tn+val_tp) /\
(val_tn+val_tp+val_fn+val_fp)
val_accs.append(val_acc)
# Remove the models that did not train (accuracy close to 50%, i.e. chance)
val_accs = [v for v in val_accs if v > 65.0]
model_accs[model_name] = val_accs
fold_accs.append(model_accs)
fold_accs
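# Illustrative check (not part of the original analysis) of the pooled
# accuracy formula used above: accuracy = 100*(TN+TP)/(TN+TP+FN+FP).
_tn, _fp, _fn, _tp = 40, 5, 5, 50
assert 100 * (_tn + _tp) / (_tn + _tp + _fn + _fp) == 90.0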
fold_id = 0
#model_accs = np.stack(list(fold_accs[fold_id].values()))[:,:]
#plt.boxplot(model_accs.T);
model_names = [
"icassp-convnet", "icassp-convnet_aug-all-but-noise", "icassp-convnet_aug-all",
"pcen-convnet", "pcen-convnet_aug-all-but-noise", "pcen-convnet_aug-all",
"icassp-ntt-convnet", "icassp-ntt-convnet_aug-all-but-noise", "icassp-ntt-convnet_aug-all",
"pcen-ntt-convnet", "pcen-ntt-convnet_aug-all-but-noise", "pcen-ntt-convnet_aug-all",
"icassp-add-convnet", "icassp-add-convnet_aug-all-but-noise", "icassp-add-convnet_aug-all",
"pcen-add-convnet", "pcen-add-convnet_aug-all-but-noise", "pcen-add-convnet_aug-all",
]
errs = 100 - np.stack([np.median(x) for x in list(fold_accs[fold_id].values())])
xmax = np.ceil(np.max(errs)) + 2.5
fig = plt.figure(figsize=(xmax/2, 4), frameon=False)
plt.plot(errs[0], [0], 'o', color='blue');
plt.plot(errs[1], [1], 'o', color='blue');
plt.plot(errs[2], [2], 'o', color='blue');
plt.plot(errs[3], [0], 'o', color='orange');
plt.plot(errs[4], [1], 'o', color='orange');
plt.plot(errs[5], [2], 'o', color='orange');
plt.text(-0.5, 1, 'no context\nadaptation',
horizontalalignment='center',
verticalalignment='center',
rotation=90, wrap=True)
#plt.text(max(errs[0], errs[3]) + 1, 0, 'none');
#plt.text(max(errs[1], errs[4]) + 1, 1, 'geometrical');
#plt.text(max(errs[2], errs[5]) + 1, 2, 'adaptive');
plt.plot(errs[6], [4], 'o', color='blue');
plt.plot(errs[7], [5], 'o', color='blue');
plt.plot(errs[8], [6], 'o', color='blue');
plt.plot(errs[9], [4], 'o', color='orange');
plt.plot(errs[10], [5], 'o', color='orange');
plt.plot(errs[11], [6], 'o', color='orange');
plt.text(-0.5, 5, 'mixture\nof experts',
horizontalalignment='center',
verticalalignment='center',
rotation=90, wrap=True)
#plt.text(max(errs[6], errs[9]) + 1, 4, 'none');
#plt.text(max(errs[7], errs[10]) + 1, 5, 'geometrical');
#plt.text(max(errs[8], errs[11]) + 1, 6, 'adaptive');
plt.plot(errs[12], [8], 'o', color='blue');
plt.plot(errs[13], [9], 'o', color='blue');
plt.plot(errs[14], [10], 'o', color='blue');
plt.plot(errs[15], [8], 'o', color='orange');
plt.plot(errs[16], [9], 'o', color='orange');
plt.plot(errs[17], [10], 'o', color='orange');
plt.text(-0.5, 9, 'adaptive\nthreshold',
horizontalalignment='center',
verticalalignment='center',
rotation=90, wrap=True)
#plt.text(max(errs[12], errs[15]) + 1, 8, 'none');
#plt.text(max(errs[13], errs[16]) + 1, 9, 'geometrical');
#plt.text(max(errs[14], errs[17]) + 1, 10, 'adaptive');
plt.plot([0, xmax], [3, 3], '--', color=[0.75, 0.75, 0.75], linewidth=1.0, alpha=0.5)
plt.plot([0, xmax], [7, 7], '--', color=[0.75, 0.75, 0.75], linewidth=1.0, alpha=0.5)
plt.xlim([0.0, xmax])
plt.ylim([10.5, -0.5])
ax = fig.gca()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.get_yaxis().set_ticks([])
fig.gca().set_xticks(range(0, int(xmax)+1, 1));
fig.gca().xaxis.grid(linestyle='--', alpha=0.5)
plt.xlabel("Average miss rate (%)")
#plt.savefig("spl_bv-70k-benchmark_fold-" + units[fold_id] + ".eps")
model_names = [
# "icassp-convnet", "icassp-convnet_aug-all-but-noise",
# "icassp-ntt-convnet", "icassp-ntt-convnet_aug-all-but-noise",
# "icassp-add-convnet", "icassp-add-convnet_aug-all-but-noise",
# "pcen-convnet", "pcen-convnet_aug-all-but-noise",
# "pcen-ntt-convnet", "pcen-ntt-convnet_aug-all-but-noise",
# "pcen-add-convnet", "pcen-add-convnet_aug-all-but-noise",
]
model_names = [
"icassp-convnet", "icassp-ntt-convnet", "icassp-add-convnet",
"icassp-convnet_aug-all-but-noise", "icassp-ntt-convnet_aug-all-but-noise", "icassp-add-convnet_aug-all-but-noise",
"pcen-convnet", "pcen-ntt-convnet", "pcen-add-convnet",
"pcen-convnet_aug-all-but-noise", "pcen-ntt-convnet_aug-all-but-noise", "pcen-add-convnet_aug-all-but-noise"
]
plt.gca().invert_yaxis()
colors = [
"#CB0003", # RED
"#E67300", # ORANGE
"#990099", # PURPLE
"#0000B2", # BLUE
"#009900", # GREEN
# '#008888', # TURQUOISE
# '#888800', # KAKI
'#555555', # GREY
]
xticks = np.array([1.0, 1.5, 2, 2.5, 3, 4, 5, 6, 8, 10, 12, 16, 20])
#xticks = np.array(range(1, 20))
plt.xticks(np.log2(xticks))
xtick_strs = []
for xtick in xticks:
if np.abs(xtick - int(xtick)) == 0:
xtick_strs.append("{:2d}".format(int(xtick)))
else:
xtick_strs.append("{:1.1f}".format(xtick))
print(xtick_strs)
plt.gca().set_xticklabels(xtick_strs, family="serif")
plt.xlim([np.log2(xticks[0]), np.log2(22.0)])
errs = np.zeros((len(model_names), 6))
for fold_id in range(6):
errs[:, fold_id] =\
np.log2(100 - np.array([np.median(fold_accs[fold_id][name]) for name in model_names]))
#ys = [1, 2, 4, 5, 7, 8, 11, 12, 14, 15, 17, 18]
ys = [1, 2, 3, 5, 6, 7, 10, 11, 12, 14, 15, 16]
for i in range(len(model_names)):
plt.plot(errs[i, fold_id], ys[i], 'o', color=colors[fold_id]);
ytick_dict = {
"icassp-convnet": " logmelspec ",
"icassp-convnet_aug-all-but-noise": "GDA ➡ logmelspec ",
##
"icassp-ntt-convnet": " logmelspec ➡ MoE",
"icassp-ntt-convnet_aug-all-but-noise": "GDA ➡ logmelspec ➡ MoE",
##
"icassp-add-convnet": " logmelspec ➡ AT ",
"icassp-add-convnet_aug-all-but-noise": "GDA ➡ logmelspec ➡ AT ",
###
###
"pcen-convnet": " PCEN ",
"pcen-convnet_aug-all-but-noise": "GDA ➡ PCEN ",
##
"pcen-ntt-convnet": " PCEN ➡ MoE",
"pcen-ntt-convnet_aug-all-but-noise": "GDA ➡ PCEN ➡ MoE",
##
"pcen-add-convnet": " PCEN ➡ AT ",
"pcen-add-convnet_aug-all-but-noise": "GDA ➡ PCEN ➡ AT ",
}
plt.yticks(ys)
plt.gca().set_yticklabels([ytick_dict[m] for m in model_names], family="monospace")
plt.xlabel("Per-fold validation error rate (%)", family="serif")
plt.gca().spines['left'].set_visible(False)
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.gca().grid(linestyle="--")
plt.savefig('fig_per-fold-validation.svg', bbox_inches="tight")
np.sum(pareto > 0, axis=1)
n_val_trials = 1
model_names = [
"icassp-convnet", "icassp-convnet_aug-all-but-noise", "icassp-convnet_aug-all",
"icassp-ntt-convnet", "icassp-ntt-convnet_aug-all-but-noise", "icassp-ntt-convnet_aug-all",
"pcen-convnet", "pcen-convnet_aug-all-but-noise", "pcen-convnet_aug-all",
"icassp-add-convnet", "icassp-add-convnet_aug-all-but-noise", "icassp-add-convnet_aug-all",
"pcen-add-convnet", "pcen-add-convnet_aug-all-but-noise", "pcen-add-convnet_aug-all",
"pcen-ntt-convnet_aug-all-but-noise", "pcen-ntt-convnet_aug-all",
"pcen-addntt-convnet_aug-all-but-noise",
]
n_models = len(model_names)
model_val_accs = {}
model_test_accs = {}
# Loop over models.
for model_id, model_name in enumerate(model_names):
model_dir = os.path.join(models_dir, model_name)
model_val_accs[model_name] = np.zeros((6,))
model_test_accs[model_name] = np.zeros((6,))
for test_unit_id in range(6):
# TRIAL SELECTION
test_unit_str = units[test_unit_id]
test_unit_dir = os.path.join(model_dir, test_unit_str)
val_accs = []
for trial_id in range(n_trials):
trial_str = "trial-" + str(trial_id)
trial_dir = os.path.join(test_unit_dir, trial_str)
history_name = "_".join([
dataset_name,
model_name,
test_unit_str,
trial_str,
"history.csv"
])
history_path = os.path.join(
trial_dir, history_name)
try:
history_df = pd.read_csv(history_path)
val_acc = max(history_df["Validation accuracy (%)"])
except Exception:  # missing or malformed history file: count this trial as failed
val_acc = 0.0
val_accs.append(val_acc)
val_accs = np.array(val_accs)
trial_id = np.argmax(val_accs)
# VALIDATION SET EVALUATION
trial_str = "trial-" + str(trial_id)
trial_dir = os.path.join(test_unit_dir, trial_str)
fns, fps, tns, tps = [], [], [], []
validation_units = localmodule.fold_units()[test_unit_id][2]
for val_unit_str in validation_units:
predictions_name = "_".join([
dataset_name,
model_name,
"test-" + test_unit_str,
trial_str,
"predict-" + val_unit_str,
"clip-predictions.csv"
])
prediction_path = os.path.join(
trial_dir, predictions_name)
# Load prediction.
with open(prediction_path, 'r') as f:
reader = csv.reader(f)
rows = list(reader)
rows = [",".join(row) for row in rows]
rows = rows[1:]
rows = "\n".join(rows)
# Parse rows with correct header.
df = pd.read_csv(StringIO(rows),
names=[
"Dataset",
"Test unit",
"Prediction unit",
"Timestamp",
"Center Freq (Hz)",
"Augmentation",
"Key",
"Ground truth",
"Predicted probability"])
y_pred = np.array(df["Predicted probability"])
y_pred = (y_pred > 0.5).astype('int')
# Load ground truth.
y_true = np.array(df["Ground truth"])
# Compute confusion matrix.
tn, fp, fn, tp = sklearn.metrics.confusion_matrix(
y_true, y_pred).ravel()
tns.append(tn)
fps.append(fp)
fns.append(fn)
tps.append(tp)
tn = sum(tns)
tp = sum(tps)
fn = sum(fns)
fp = sum(fps)
val_acc = 100 * (tn+tp) / (tn+tp+fn+fp)
model_val_accs[model_name][test_unit_id] = val_acc
# TEST SET EVALUATION
trial_dir = os.path.join(
test_unit_dir, trial_str)
predictions_name = "_".join([
dataset_name,
model_name,
"test-" + test_unit_str,
trial_str,
"predict-" + test_unit_str,
"clip-predictions.csv"
])
prediction_path = os.path.join(
trial_dir, predictions_name)
# Load prediction.
with open(prediction_path, 'r') as f:
reader = csv.reader(f)
rows = list(reader)
rows = [",".join(row) for row in rows]
rows = rows[1:]
rows = "\n".join(rows)
# Parse rows with correct header.
df = pd.read_csv(StringIO(rows),
names=[
"Dataset",
"Test unit",
"Prediction unit",
"Timestamp",
"Center Freq (Hz)",
"Augmentation",
"Key",
"Ground truth",
"Predicted probability"])
y_pred = np.array(df["Predicted probability"])
y_pred = (y_pred > 0.5).astype('int')
# Load ground truth.
y_true = np.array(df["Ground truth"])
# Compute confusion matrix.
tn, fp, fn, tp = sklearn.metrics.confusion_matrix(
y_true, y_pred).ravel()
test_acc = 100 * (tn+tp) / (tn+tp+fn+fp)
model_test_accs[model_name][test_unit_id] = test_acc
model_names
model_diagrams = {
"icassp-convnet": " melspec -> log ",
"icassp-convnet_aug-all-but-noise": " geom -> melspec -> log ",
"icassp-convnet_aug-all": "(noise + geom) -> melspec -> log ",
"icassp-ntt-convnet": " melspec -> log -> NTT ",
"icassp-ntt-convnet_aug-all-but-noise": " geom -> melspec -> log -> NTT ",
"icassp-ntt-convnet_aug-all": "(noise + geom) -> melspec -> log -> NTT ",
"pcen-convnet": " melspec -> PCEN ",
"pcen_convnet_aug-all-but-noise": " geom -> melspec -> PCEN ",
"pcen-convnet_aug-all": "(noise + geom) -> melspec -> PCEN ",
"icassp-add-convnet": " melspec -> log -> CONCAT",
"icassp-add-convnet_aug-all-but-noise": " geom -> melspec -> log -> CONCAT",
"icassp-add-convent_aug-all": "(noise + geom) -> melspec -> log -> CONCAT",
"pcen-add-convnet": " melspec -> PCEN -> CONCAT",
"pcen-add-convnet_aug-all-but-noise": " geom -> melspec -> PCEN -> CONCAT",
"pcen-add-convnet_aug-all": "(noise + geom) -> melspec -> PCEN -> CONCAT",
"pcen-ntt-convnet_aug-all-but-noise": " geom -> melspec -> PCEN -> NTT ",
"pcen-ntt-convnet_aug-all": "(noise + geom) -> melspec -> PCEN -> NTT ",
"pcen-addntt-convnet_aug-all": "(noise + geom) -> melspec -> PCEN -> AFFINE"}
plt.figure(figsize=(9, 6))
plt.rcdefaults()
fig, ax = plt.subplots()
plt.boxplot(np.stack(model_val_accs.values()).T, 0, 'rs', 0)
#plt.ylim((-5.0, 1.0))
plt.setp(ax.get_yticklabels(), family="serif")
ax.set_yticklabels(model_names)
plt.gca().invert_yaxis()
ax.set_xlabel('Accuracy (%)')
ax.set_title('BirdVox-70k validation set')
plt.show()
plt.figure(figsize=(9, 6))
plt.rcdefaults()
fig, ax = plt.subplots()
plt.boxplot(np.stack(model_test_accs.values()).T, 0, 'rs', 0)
#plt.ylim((-5.0, 1.0))
plt.setp(ax.get_yticklabels(), family="serif")
ax.set_yticklabels(model_names)
plt.gca().invert_yaxis()
ax.set_xlabel('Accuracy (%)')
ax.set_title('BirdVox-70k test set')
plt.show()
model_test_accs
ablation_reference_name = "pcen-add-convnet_aug-all-but-noise"
ablation_names = [x for x in list(model_val_accs.keys()) if x not in
["icassp-add-convnet_aug-all",
ablation_reference_name,
"icassp-ntt-convnet",
"pcen-addntt-convnet_aug-all-but-noise"]]
ablation_names = list(reversed(ablation_names))
ytick_dict = {
"icassp-convnet": " logmelspec ",
"icassp-convnet_aug-all-but-noise": "GDA -> logmelspec ",
"icassp-convnet_aug-all": "ADA -> logmelspec ",
##
"icassp-ntt-convnet": " logmelspec -> MoE",
"icassp-ntt-convnet_aug-all-but-noise": "GDA -> logmelspec -> MoE",
"icassp-ntt-convnet_aug-all": "ADA -> logmelspec -> MoE",
##
"icassp-add-convnet": " logmelspec -> AT ",
"icassp-add-convnet_aug-all-but-noise": "GDA -> logmelspec -> AT ",
"icassp-add-convnet_aug-all": "ADA -> logmelspec -> AT ",
###
###
"pcen-convnet": " PCEN ",
"pcen-convnet_aug-all-but-noise": "GDA -> PCEN ",
"pcen-convnet_aug-all": "ADA -> PCEN ",
##
"pcen-ntt-convnet": " PCEN -> MoE",
"pcen-ntt-convnet_aug-all-but-noise": "GDA -> PCEN -> MoE",
"pcen-ntt-convnet_aug-all": "GDA -> PCEN -> MoE",
##
"pcen-add-convnet": " PCEN -> AT ",
"pcen-add-convnet_aug-all-but-noise": "GDA -> PCEN -> AT ",
"pcen-add-convnet_aug-all": "ADA -> PCEN -> AT ",
###
"pcen-addntt-convnet_aug-all-but-noise":"GDA -> PCEN -> AT + MoE ",
}
reference_val_accs = model_val_accs[ablation_reference_name]
ablation_val_accs = [
100 * (reference_val_accs - model_val_accs[name]) / (100 - reference_val_accs)
for name in ablation_names]
ablation_names = list(reversed([ablation_names[i] for i in np.argsort(np.median(ablation_val_accs,axis=1))]))
ablation_val_accs = list(reversed([ablation_val_accs[i] for i in np.argsort(np.median(ablation_val_accs,axis=1))]))
ablation_val_accs = np.array(ablation_val_accs)
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(8, 6))
plt.grid(linestyle="--")
plt.axvline(0.0, linestyle="--", color="#009900")
plt.plot([0.0], [1+len(ablation_val_accs)], 'd',
color="#009900", markersize=10.0)
colors = [
"#CB0003", # RED
"#E67300", # ORANGE
"#990099", # PURPLE
"#0000B2", # BLUE
"#009900", # GREEN
# '#008888', # TURQUOISE
# '#888800', # KAKI
'#555555', # GREY
]
plt.boxplot(ablation_val_accs.T, 0, 'rs', 0,
whis=100000, patch_artist=True, boxprops={"facecolor": "w"})
for i, color in enumerate(colors):
plt.plot(np.array(ablation_val_accs[:,i]),
range(1, 1+len(ablation_val_accs[:,i])), 'o', color=color)
fig.canvas.draw()
plt.setp(ax.get_yticklabels(), family="serif")
#ax.set_yticklabels([
# "adaptive threshold\nreplaced by\n mixture of experts",
# "no data augmentation",
# "addition of noise\nto frontend but not to\nauxiliary features",
# "no context adaptation",
# "PCEN\nreplaced by\nlog-mel frontend",
# "state of the art [X]"])
ax.set_yticks(range(1, 2+len(ablation_val_accs)))
ax.set_yticklabels([ytick_dict[x] for x in
(ablation_names + [ablation_reference_name])], family="monospace")
plt.gca().invert_xaxis()
plt.gca().invert_yaxis()
ax.set_xlabel('Relative difference in validation miss rate (%)', family="serif")
plt.ylim([0.5, 1.5+len(ablation_names)])
plt.show()
reference_test_accs = model_test_accs[ablation_reference_name]
print(reference_test_accs)
baseline_test_accs = model_test_accs["icassp-convnet_aug-all"]
print(baseline_test_accs)
plt.savefig('fig_exhaustive-per-fold-validation.eps', bbox_inches="tight")
plt.savefig('fig_exhaustive-per-fold-validation.png', bbox_inches="tight", dpi=1000)
%matplotlib inline
ablation_reference_name = "pcen-add-convnet_aug-all-but-noise"
#ablation_names = [x for x in list(model_val_accs.keys()) if x not in
# ["icassp-add-convnet_aug-all",
# ablation_reference_name,
# "icassp-ntt-convnet",
# "pcen-addntt-convnet_aug-all-but-noise"]]
ablation_names = [
"pcen-ntt-convnet_aug-all-but-noise",
"pcen-add-convnet",
"pcen-add-convnet_aug-all",
"pcen-convnet_aug-all-but-noise",
"icassp-convnet_aug-all-but-noise",
"icassp-convnet_aug-all"
]
ablation_names = list(reversed(ablation_names))
ytick_dict = {
"icassp-convnet": " logmelspec ",
"icassp-convnet_aug-all-but-noise": "GDA -> logmelspec ",
"icassp-convnet_aug-all": "ADA -> logmelspec ",
##
"icassp-ntt-convnet": " logmelspec -> MoE",
"icassp-ntt-convnet_aug-all-but-noise": "GDA -> logmelspec -> MoE",
"icassp-ntt-convnet_aug-all": "ADA -> logmelspec -> MoE",
##
"icassp-add-convnet": " logmelspec -> AT ",
"icassp-add-convnet_aug-all-but-noise": "GDA -> logmelspec -> AT ",
"icassp-add-convnet_aug-all": "ADA -> logmelspec -> AT ",
###
###
"pcen-convnet": " PCEN ",
"pcen-convnet_aug-all-but-noise": "GDA -> PCEN ",
"pcen-convnet_aug-all": "ADA -> PCEN ",
##
"pcen-ntt-convnet": " PCEN -> MoE",
"pcen-ntt-convnet_aug-all-but-noise": "GDA -> PCEN -> MoE",
"pcen-ntt-convnet_aug-all": "GDA -> PCEN -> MoE",
##
"pcen-add-convnet": " PCEN -> AT ",
"pcen-add-convnet_aug-all-but-noise": "GDA -> PCEN -> AT ",
"pcen-add-convnet_aug-all": "ADA -> PCEN -> AT ",
###
"pcen-addntt-convnet_aug-all-but-noise":"GDA -> PCEN -> AT + MoE ",
}
reference_val_accs = model_val_accs[ablation_reference_name]
ablation_val_accs = [
100 * (reference_val_accs - model_val_accs[name]) / (100 - reference_val_accs)
for name in ablation_names]
ablation_names = list(reversed([ablation_names[i] for i in np.argsort(np.median(ablation_val_accs,axis=1))]))
ablation_val_accs = list(reversed([ablation_val_accs[i] for i in np.argsort(np.median(ablation_val_accs,axis=1))]))
ablation_val_accs = np.array(ablation_val_accs)
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(7, 4))
plt.grid(linestyle="--")
plt.axvline(0.0, linestyle="--", color="#009900")
plt.plot([0.0], [1+len(ablation_val_accs)], 'd',
color="#009900", markersize=10.0)
colors = [
"#CB0003", # RED
"#E67300", # ORANGE
"#990099", # PURPLE
"#0000B2", # BLUE
"#009900", # GREEN
# '#008888', # TURQUOISE
# '#888800', # KAKI
'#555555', # GREY
]
for i, color in enumerate(colors):
plt.plot(np.array(ablation_val_accs[:,i]),
range(1, 1+len(ablation_val_accs[:,i])), 'o', color=color)
fig.canvas.draw()
plt.boxplot(ablation_val_accs.T, 0, 'rs', 0,
whis=100000)
plt.setp(ax.get_yticklabels(), family="serif")
ax.set_yticklabels(reversed([
"BirdVoxDetect",
"adaptive threshold\nreplaced by\n mixture of experts",
"no data augmentation",
"addition of noise\nto frontend but not to\nauxiliary features",
"no context adaptation",
"PCEN\nreplaced by\nlog-mel frontend",
"previous state of the art [57]"]))
ax.set_yticks(range(1, 2+len(ablation_val_accs)))
#ax.set_yticklabels([ytick_dict[x] for x in
# (ablation_names + [ablation_reference_name])], family="monospace")
plt.gca().invert_xaxis()
plt.gca().invert_yaxis()
ax.set_xlabel('Relative difference in validation miss rate (%)', family="serif")
plt.ylim([0.5, 1.5+len(ablation_names)])
#plt.show()
reference_test_accs = model_test_accs[ablation_reference_name]
print(reference_test_accs)
baseline_test_accs = model_test_accs["icassp-convnet_aug-all"]
print(baseline_test_accs)
plt.savefig('fig_ablation-study.eps', bbox_inches="tight")
plt.savefig('fig_ablation-study.svg', bbox_inches="tight")
plt.savefig('fig_ablation-study.png', bbox_inches="tight", dpi=1000)
n_trials = 10
report = {}
for model_name in model_names:
model_dir = os.path.join(models_dir, model_name)
# Initialize dictionaries
model_report = {
"validation": {},
"test_cv-acc_th=0.5": {}
}
# Initialize matrix of validation accuracies.
val_accs = np.zeros((n_units, n_trials))
val_tps = np.zeros((n_units, n_trials))
val_tns = np.zeros((n_units, n_trials))
val_fps = np.zeros((n_units, n_trials))
val_fns = np.zeros((n_units, n_trials))
test_accs = np.zeros((n_units, n_trials))
test_tps = np.zeros((n_units, n_trials))
test_tns = np.zeros((n_units, n_trials))
test_fps = np.zeros((n_units, n_trials))
test_fns = np.zeros((n_units, n_trials))
# Loop over test units.
for test_unit_id, test_unit_str in enumerate(units):
# Define directory for test unit.
test_unit_dir = os.path.join(model_dir, test_unit_str)
# Retrieve fold such that unit_str is in the test set.
folds = localmodule.fold_units()
fold = [f for f in folds if test_unit_str in f[0]][0]
test_units = fold[0]
validation_units = fold[2]
# Loop over trials.
for trial_id in range(n_trials):
# Define directory for trial.
trial_str = "trial-" + str(trial_id)
trial_dir = os.path.join(test_unit_dir, trial_str)
# Initialize.
break_switch = False
val_fn = 0
val_fp = 0
val_tn = 0
val_tp = 0
# Loop over validation units.
for val_unit_str in validation_units:
predictions_name = "_".join([
dataset_name,
model_name,
"test-" + test_unit_str,
"trial-" + str(trial_id),
"predict-" + val_unit_str,
"clip-predictions.csv"
])
prediction_path = os.path.join(
trial_dir, predictions_name)
# Load prediction.
csv_file = pd.read_csv(prediction_path)
# Parse prediction.
if model_name == "icassp-convnet_aug-all":
y_pred = np.array(csv_file["Predicted probability"])
y_true = np.array(csv_file["Ground truth"])
elif model_name == "pcen-add-convnet_aug-all-but-noise":
with open(prediction_path, 'r') as f:
reader = csv.reader(f)
rows = list(reader)
rows = [",".join(row) for row in rows]
rows = rows[1:]
rows = "\n".join(rows)
# Parse rows with correct header.
df = pd.read_csv(StringIO(rows),
names=[
"Dataset",
"Test unit",
"Prediction unit",
"Timestamp",
"Center Freq (Hz)",
"Augmentation",
"Key",
"Ground truth",
"Predicted probability"])
y_pred = np.array(df["Predicted probability"])
y_true = np.array(df["Ground truth"])
# Threshold.
y_pred = (y_pred > 0.5).astype('int')
# Check that CSV file is not corrupted.
if len(y_pred) == 0:
break_switch = True
break
# Compute confusion matrix.
tn, fp, fn, tp = sklearn.metrics.confusion_matrix(
y_true, y_pred).ravel()
val_fn = val_fn + fn
val_fp = val_fp + fp
val_tn = val_tn + tn
val_tp = val_tp + tp
if not break_switch:
val_acc = (val_tn+val_tp) / (val_fn+val_fp+val_tn+val_tp)
else:
val_fn = 0
val_fp = 0
val_tn = 0
val_tp = 0
val_acc = 0.0
val_fns[test_unit_id, trial_id] = val_fn
val_fps[test_unit_id, trial_id] = val_fp
val_tns[test_unit_id, trial_id] = val_tn
val_tps[test_unit_id, trial_id] = val_tp
val_accs[test_unit_id, trial_id] = val_acc
# Initialize.
predictions_name = "_".join([
dataset_name,
model_name,
"test-" + test_unit_str,
"trial-" + str(trial_id),
"predict-" + test_unit_str,
"clip-predictions.csv"
])
prediction_path = os.path.join(
trial_dir, predictions_name)
with open(prediction_path, 'r') as f:
reader = csv.reader(f)
rows = list(reader)
rows = [",".join(row) for row in rows]
rows = rows[1:]
rows = "\n".join(rows)
# Parse rows with correct header.
df = pd.read_csv(StringIO(rows),
names=[
"Dataset",
"Test unit",
"Prediction unit",
"Timestamp",
"Center Freq (Hz)",
"Augmentation",
"Key",
"Ground truth",
"Predicted probability"])
y_pred = np.array(df["Predicted probability"])
y_pred = (y_pred > 0.5).astype('int')
y_true = np.array(df["Ground truth"])
# Check that CSV file is not corrupted.
if len(y_pred) == 0:
test_tn, test_fp, test_fn, test_tp = 0, 0, 0, 0
test_acc = 0.0
else:
# Load ground truth.
y_true = np.array(df["Ground truth"])
# Compute confusion matrix.
test_tn, test_fp, test_fn, test_tp =\
sklearn.metrics.confusion_matrix(
y_true, y_pred).ravel()
test_acc = (test_tn+test_tp) / (test_fn+test_fp+test_tn+test_tp)
test_fns[test_unit_id, trial_id] = test_fn
test_fps[test_unit_id, trial_id] = test_fp
test_tns[test_unit_id, trial_id] = test_tn
test_tps[test_unit_id, trial_id] = test_tp
test_accs[test_unit_id, trial_id] = test_acc
model_report["validation"]["FN"] = test_fn
model_report["validation"]["FP"] = test_fp
model_report["validation"]["TN"] = test_tn
model_report["validation"]["TP"] = test_tp
model_report["validation"]["accuracy"] = val_accs
best_trials = np.argsort(model_report["validation"]["accuracy"], axis=1)
model_report["validation"]["best_trials"] = best_trials
model_report["test_cv-acc_th=0.5"]["FN"] = test_fns
model_report["test_cv-acc_th=0.5"]["FP"] = test_fps
model_report["test_cv-acc_th=0.5"]["TN"] = test_tns
model_report["test_cv-acc_th=0.5"]["TP"] = test_tps
model_report["test_cv-acc_th=0.5"]["accuracy"] = test_accs
cv_accs = []
for eval_trial_id in range(5):
cv_fn = 0
cv_fp = 0
cv_tn = 0
cv_tp = 0
for test_unit_id, test_unit_str in enumerate(units):
best_trials = model_report["validation"]["best_trials"]
unit_best_trials = best_trials[test_unit_id, -5:]
unit_best_trials = sorted(unit_best_trials)
trial_id = unit_best_trials[eval_trial_id]
cv_fn = cv_fn + model_report["test_cv-acc_th=0.5"]["FN"][test_unit_id, trial_id]
cv_fp = cv_fp + model_report["test_cv-acc_th=0.5"]["FP"][test_unit_id, trial_id]
cv_tn = cv_tn + model_report["test_cv-acc_th=0.5"]["TN"][test_unit_id, trial_id]
cv_tp = cv_tp + model_report["test_cv-acc_th=0.5"]["TP"][test_unit_id, trial_id]
cv_acc = (cv_tn+cv_tp) / (cv_tn+cv_tp+cv_fn+cv_fp)
cv_accs.append(cv_acc)
model_report["test_cv-acc_th=0.5"]["global_acc"] = np.array(cv_accs)
report[model_name] = model_report
print(model_name, ": acc = {:5.2f}% ± {:3.1f}".format(
100*np.mean(report[model_name]['test_cv-acc_th=0.5']['global_acc']),
100*np.std(report[model_name]['test_cv-acc_th=0.5']['global_acc'])))
#print(report['icassp-convnet_aug-all']['test_cv-acc_th=0.5']['global_acc'])
#print(report['pcen-add-convnet_aug-all-but-noise']['test_cv-acc_th=0.5']['global_acc'])
list(report.keys())
icassp_accs = report['icassp-convnet_aug-all']['test_cv-acc_th=0.5']['global_acc']
print("ICASSP 2018: acc = {:5.2f}% ± {:3.1f}".format(100*np.mean(icassp_accs), 100*np.std(icassp_accs)))
spl_accs = report['pcen-add-convnet_aug-all-but-noise']['test_cv-acc_th=0.5']['global_acc']
print("SPL 2018: acc = {:5.2f}% ± {:3.1f}".format(100*np.mean(spl_accs), 100*np.std(spl_accs)))
n_trials = 5
model_name = "skm-cv"
model_dir = os.path.join(models_dir, model_name)
skm_fns = np.zeros((n_trials, n_units))
skm_fps = np.zeros((n_trials, n_units))
skm_tns = np.zeros((n_trials, n_units))
skm_tps = np.zeros((n_trials, n_units))
# Loop over trials.
for trial_id in range(n_trials):
# Loop over units.
for test_unit_id, test_unit_str in enumerate(units):
# Define path to predictions.
unit_dir = os.path.join(model_dir, test_unit_str)
trial_str = "trial-" + str(5 + trial_id)
trial_dir = os.path.join(unit_dir, trial_str)
predictions_name = "_".join([
dataset_name,
"skm-proba",
"test-" + test_unit_str,
trial_str,
"predict-" + test_unit_str,
"clip-predictions.csv"
])
predictions_path = os.path.join(trial_dir, predictions_name)
# Remove header, which has too few columns (hack).
with open(predictions_path, 'r') as f:
reader = csv.reader(f)
rows = list(reader)
rows = [",".join(row) for row in rows]
rows = rows[1:]
rows = "\n".join(rows)
# Parse rows with correct header.
df = pd.read_csv(StringIO(rows),
names=[
"Dataset",
"Test unit",
"Prediction unit",
"Timestamp",
"Center Freq (Hz)",
"Augmentation",
"Key",
"Ground truth",
"Predicted probability"])
# Extract y_pred and y_true.
y_pred = np.array((df["Predicted probability"] > 0.5)).astype("int")
y_true = np.array(df["Ground truth"])
# Compute confusion matrix.
test_tn, test_fp, test_fn, test_tp =\
sklearn.metrics.confusion_matrix(
y_true, y_pred).ravel()
skm_fns[trial_id, test_unit_id] = test_fn
skm_fps[trial_id, test_unit_id] = test_fp
skm_tns[trial_id, test_unit_id] = test_tn
skm_tps[trial_id, test_unit_id] = test_tp
total_skm_fns = np.sum(skm_fns[:, 1:], axis=1)
total_skm_fps = np.sum(skm_fps[:, 1:], axis=1)
total_skm_tns = np.sum(skm_tns[:, 1:], axis=1)
total_skm_tps = np.sum(skm_tps[:, 1:], axis=1)
total_skm_accs = (total_skm_tns+total_skm_tps) / (total_skm_fns+total_skm_fps+total_skm_tns+total_skm_tps)
print("SKM: acc = {:5.2f}% ± {:3.1f}".format(100*np.mean(total_skm_accs), 100*np.std(total_skm_accs)))
xticks = np.array([2.0, 5.0, 10.0, 20.0, 50.0])
lms_snr_accs = np.repeat([0.652], 5)
pcen_snr_accs = np.repeat([0.809], 5)
skm_accs = total_skm_accs
fig, ax = plt.subplots(figsize=(10, 3))
plt.rcdefaults()
plt.boxplot(np.log2(np.array([
100*(1-lms_snr_accs),
100*(1-pcen_snr_accs),
100*(1-skm_accs),
100*(1-icassp_accs),
100*(1-spl_accs)]).T), 0, 'rs', 0,
whis=100000, patch_artist=True, boxprops={"facecolor": "w"});
plt.xlim(np.log2(np.array([2.0, 50.0])))
plt.xticks(np.log2(xticks))
plt.gca().set_xticklabels([100 - x for x in xticks])
plt.setp(ax.get_yticklabels(), family="serif")
ax.set_yticklabels(["logmelspec-SNR", "PCEN-SNR", "PCA-SKM-CNN", "logmelspec-CNN", "BirdVoxDetect"],
family="serif")
plt.gca().invert_yaxis()
plt.gca().invert_xaxis()
plt.xlabel("Test accuracy (%)", family="serif")
plt.gca().yaxis.grid(color='k', linestyle='--', linewidth=1.0, alpha=0.25, which="major")
plt.gca().xaxis.grid(color='k', linestyle='--', linewidth=1.0, alpha=0.25, which="major")
plt.savefig('fig_per-fold-test.eps', bbox_inches="tight")
np.min(icassp_accs), np.max(icassp_accs)
np.min(spl_accs), np.max(spl_accs)
icassp_fold_accs = report['icassp-convnet_aug-all']['validation']["accuracy"]
spl_fold_accs = report['pcen-add-convnet_aug-all-but-noise']['validation']["accuracy"]
print(np.mean(np.max(icassp_fold_accs, axis=1)), np.mean(np.max(spl_fold_accs, axis=1)))
```
# Skip-gram Word2Vec
In this notebook, I'll lead you through using PyTorch to implement the [Word2Vec algorithm](https://en.wikipedia.org/wiki/Word2vec) using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
## Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
* A really good [conceptual overview](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/) of Word2Vec from Chris McCormick
* [First Word2Vec paper](https://arxiv.org/pdf/1301.3781.pdf) from Mikolov et al.
* [Neural Information Processing Systems paper](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) with improvements for Word2Vec, also from Mikolov et al.
---
## Word embeddings
When you're dealing with words in text, you end up with tens of thousands of word classes to analyze; one for each word in a vocabulary. Trying to one-hot encode these words is massively inefficient because most values in a one-hot vector will be set to zero. So, the matrix multiplication between a one-hot input vector and the first hidden layer will result in mostly zero-valued hidden outputs.
To solve this problem and greatly increase the efficiency of our networks, we use what are called **embeddings**. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
<img src='assets/lookup_matrix.png' width=50%>
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an **embedding lookup** and the number of hidden units is the **embedding dimension**.
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix.
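To make the lookup-versus-multiplication equivalence concrete, here is a minimal NumPy sketch; the vocabulary size and embedding dimension are made up for illustration:

```python
import numpy as np

# Hypothetical weight matrix: 5-word vocabulary, 3-dimensional embeddings.
rng = np.random.default_rng(0)
embed_weights = rng.standard_normal((5, 3))

word_idx = 2                 # integer encoding of some word
one_hot = np.zeros(5)
one_hot[word_idx] = 1.0

# Multiplying a one-hot vector by the weight matrix...
via_matmul = one_hot @ embed_weights
# ...selects exactly the corresponding row, i.e. an embedding lookup.
via_lookup = embed_weights[word_idx]

print(np.allclose(via_matmul, via_lookup))  # True
```

This is why frameworks expose the lookup directly (e.g. `nn.Embedding` in PyTorch) instead of materializing one-hot vectors.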
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called **Word2Vec** uses the embedding layer to find vector representations of words that contain semantic meaning.
---
## Word2Vec
The Word2Vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words.
<img src="assets/context_drink.png" width=40%>
Words that show up in similar **contexts**, such as "coffee", "tea", and "water" will have vectors near each other. Different words will be further away from one another, and relationships can be represented by distance in vector space.
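Cosine similarity is the usual way to measure this nearness; here is a sketch with made-up vectors (real values would come from a trained embedding matrix):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Made-up embedding vectors, for illustration only.
coffee = np.array([0.9, 0.1, 0.3])
tea    = np.array([0.8, 0.2, 0.4])
carpet = np.array([-0.5, 0.9, -0.2])

# Words from similar contexts should score near 1; unrelated words much lower.
print(cosine_similarity(coffee, tea))     # close to 1
print(cosine_similarity(coffee, carpet))  # low (here, negative)
```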
There are two architectures for implementing Word2Vec:
>* CBOW (Continuous Bag-Of-Words) and
* Skip-gram
<img src="assets/word2vec_architectures.png" width=60%>
In this implementation, we'll be using the **skip-gram architecture** with **negative sampling** because it performs better than CBOW and trains faster with negative sampling. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
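As a sketch of what the negative-sampling objective computes (from the second Mikolov et al. paper above), here is a NumPy version for a single (center, context) pair; the vectors are toy stand-ins for the embedding rows the network will learn:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_loss(center_vec, context_vec, negative_vecs):
    """Loss for one (center, context) pair with K sampled negatives.

    Minimizing it pulls the true context vector toward the center vector
    and pushes the sampled noise vectors away from it.
    """
    pos_term = -np.log(sigmoid(context_vec @ center_vec))
    neg_term = -np.sum(np.log(sigmoid(-(negative_vecs @ center_vec))))
    return pos_term + neg_term

rng = np.random.default_rng(1)
v_center = rng.standard_normal(10)                    # center word vector
u_context = v_center + 0.1 * rng.standard_normal(10)  # a nearby "true" context
u_negatives = rng.standard_normal((5, 10))            # K=5 sampled noise words

# A context vector aligned with the center yields a smaller loss than an
# anti-aligned one, which is exactly the signal training exploits.
print(neg_sampling_loss(v_center, u_context, u_negatives))
print(neg_sampling_loss(v_center, -u_context, u_negatives))
```

Only K noise words (instead of the full vocabulary) contribute to each update, which is what makes skip-gram with negative sampling fast to train.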
---
## Loading Data
Next, we'll ask you to load in the data and place it in the `data` directory:
1. Load the [text8 dataset](https://s3.amazonaws.com/video.udacity-data.com/topher/2018/October/5bbe6499_text8/text8.zip); a file of cleaned up *Wikipedia article text* from Matt Mahoney.
2. Place that data in the `data` folder in the home directory.
3. Then you can extract it and delete the zip archive to save storage space.
After following these steps, you should have one file in your data directory: `data/text8`.
```
# read in the extracted text file
with open('data/text8') as f:
text = f.read()
# print out the first 100 characters
print(text[:100])
```
## Pre-processing
Here I'm fixing up the text to make training easier. This comes from the `utils.py` file. The `preprocess` function does a few things:
>* It converts any punctuation into tokens, so a period is changed to ` <PERIOD> `. In this data set, there aren't any periods, but it will help in other NLP problems.
* It removes all words that show up five or *fewer* times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations.
* It returns a list of words in the text.
This may take a few seconds to run, since our text file is quite large. If you want to write your own functions for this stuff, go for it!
```
import utils
# get list of words
words = utils.preprocess(text)
print(words[:30])
# print some stats about this word data
print("Total words in text: {}".format(len(words)))
print("Unique words: {}".format(len(set(words)))) # `set` removes any duplicate words
```
### Dictionaries
Next, I'm creating two dictionaries to convert words to integers and back again (integers to words). This is again done with a function in the `utils.py` file. `create_lookup_tables` takes in a list of words in a text and returns two dictionaries.
>* The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1, and so on.
Once we have our dictionaries, the words are converted to integers and stored in the list `int_words`.
```
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
print(int_words[:30])
```
## Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
> Implement subsampling for the words in `int_words`. That is, go through `int_words` and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to `train_words`.
```
from collections import Counter
import random
import numpy as np
threshold = 1e-5
word_counts = Counter(int_words)
#print(list(word_counts.items())[0]) # dictionary of int_words, how many times they appear
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
# discard some frequent words, according to the subsampling equation
# create a new list of words for training
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
print(train_words[:30])
```
## Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to define a surrounding _context_ and grab all the words in a window around that word, with size $C$.
From [Mikolov et al.](https://arxiv.org/pdf/1301.3781.pdf):
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $[ 1: C ]$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
> **Exercise:** Implement a function `get_target` that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
Say, we have an input and we're interested in the idx=2 token, `741`:
```
[5233, 58, 741, 10571, 27349, 0, 15067, 58112, 3580, 58, 10712]
```
For `R=2`, `get_target` should return a list of four values:
```
[5233, 58, 10571, 27349]
```
```
def get_target(words, idx, window_size=5):
    ''' Get a list of words in a window around an index. '''
    R = np.random.randint(1, window_size+1)
    start = idx - R if (idx - R) > 0 else 0
    stop = idx + R
    target_words = words[start:idx] + words[idx+1:stop+1]
    return list(target_words)
# test your code!
# run this cell multiple times to check for random window selection
int_text = [i for i in range(10)]
print('Input: ', int_text)
idx=5 # word index of interest
target = get_target(int_text, idx=idx, window_size=5)
print('Target: ', target) # you should get some indices around the idx
```
### Generating Batches
Here's a generator function that returns batches of input and target data for our model, using the `get_target` function from above. It grabs `batch_size` words from a words list; then, for each of those words, it gets the target words in the surrounding window.
```
def get_batches(words, batch_size, window_size=5):
    ''' Create a generator of word batches as a tuple (inputs, targets) '''
    n_batches = len(words)//batch_size
    # only full batches
    words = words[:n_batches*batch_size]
    for idx in range(0, len(words), batch_size):
        x, y = [], []
        batch = words[idx:idx+batch_size]
        for ii in range(len(batch)):
            batch_x = batch[ii]
            batch_y = get_target(batch, ii, window_size)
            y.extend(batch_y)
            x.extend([batch_x]*len(batch_y))
        yield x, y
int_text = [i for i in range(20)]
x,y = next(get_batches(int_text, batch_size=4, window_size=5))
print('x\n', x)
print('y\n', y)
```
---
## Validation
Here, I'm creating a function that will help us observe our model as it learns. We're going to choose a few common words and a few uncommon words. Then, we'll print out the words closest to them, using cosine similarity:
<img src="assets/two_vectors.png" width=30%>
$$
\mathrm{similarity} = \cos(\theta) = \frac{\vec{a} \cdot \vec{b}}{|\vec{a}||\vec{b}|}
$$
We can encode the validation words as vectors $\vec{a}$ using the embedding table, then calculate the similarity with each word vector $\vec{b}$ in the embedding table. With the similarities, we can print out the validation words and words in our embedding table semantically similar to those words. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
```
def cosine_similarity(embedding, valid_size=16, valid_window=100, device='cpu'):
    """ Returns the cosine similarity of validation words with words in the embedding matrix.
        Here, embedding should be a PyTorch embedding module.
    """
    # Here we're calculating the cosine similarity between some random words and
    # our embedding vectors. With the similarities, we can look at what words are
    # close to our random words.
    # sim = (a . b) / |a||b|
    embed_vectors = embedding.weight
    # magnitude of embedding vectors, |b|
    magnitudes = embed_vectors.pow(2).sum(dim=1).sqrt().unsqueeze(0)
    # pick N words from our ranges (0, window) and (1000, 1000+window); lower id implies more frequent
    valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
    valid_examples = np.append(valid_examples,
                               random.sample(range(1000, 1000+valid_window), valid_size//2))
    valid_examples = torch.LongTensor(valid_examples).to(device)
    valid_vectors = embedding(valid_examples)
    similarities = torch.mm(valid_vectors, embed_vectors.t())/magnitudes
    return valid_examples, similarities
```
---
# SkipGram model
Define and train the SkipGram model.
> You'll need to define an [embedding layer](https://pytorch.org/docs/stable/nn.html#embedding) and a final, softmax output layer.
An Embedding layer takes in a number of inputs, importantly:
* **num_embeddings** – the size of the dictionary of embeddings, or how many rows you'll want in the embedding weight matrix
* **embedding_dim** – the size of each embedding vector; the embedding dimension
Below is an approximate diagram of the general structure of our network.
<img src="assets/skip_gram_arch.png" width=60%>
>* The input words are passed in as batches of input word tokens.
* This will go into a hidden layer of linear units (our embedding layer).
* Then, finally into a softmax output layer.
We'll use the softmax layer to make a prediction about the context words by sampling, as usual.
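Before adding negative sampling below, the plain architecture described above can be sketched as a tiny PyTorch module. This is an illustration only — the class name `SkipGram` and the toy sizes are assumptions, not part of this notebook's solution:

```python
import torch
from torch import nn

class SkipGram(nn.Module):
    """Embedding layer followed by a full softmax over the vocabulary."""
    def __init__(self, n_vocab, n_embed):
        super().__init__()
        self.embed = nn.Embedding(n_vocab, n_embed)  # hidden layer of linear units
        self.fc = nn.Linear(n_embed, n_vocab)        # scores for every word in the vocabulary
        self.log_softmax = nn.LogSoftmax(dim=1)

    def forward(self, x):
        x = self.embed(x)       # (batch,) -> (batch, n_embed)
        scores = self.fc(x)     # (batch, n_vocab)
        return self.log_softmax(scores)

# toy usage: 3 input word tokens, vocabulary of 10 words
sg = SkipGram(n_vocab=10, n_embed=4)
log_ps = sg(torch.LongTensor([1, 2, 3]))
print(log_ps.shape)
```

Each row of `log_ps` holds one log-probability per vocabulary word, which is exactly the full-softmax output that negative sampling will let us avoid computing.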
---
## Negative Sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct example, but only a small number of incorrect, or noise, examples. This is called ["negative sampling"](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf).
There are two modifications we need to make. First, since we're not taking the softmax output over all the words, we're really only concerned with one output word at a time. Similar to how we use an embedding table to map the input word to the hidden layer, we can now use another embedding table to map the hidden layer to the output word. Now we have two embedding layers, one for input words and one for output words. Secondly, we use a modified loss function where we only care about the true example and a small subset of noise examples.
$$
- \large \log{\sigma\left(u_{w_O}\hspace{0.001em}^\top v_{w_I}\right)} -
\sum_i^N \mathbb{E}_{w_i \sim P_n(w)}\log{\sigma\left(-u_{w_i}\hspace{0.001em}^\top v_{w_I}\right)}
$$
This is a little complicated so I'll go through it bit by bit. $u_{w_O}\hspace{0.001em}^\top$ is the embedding vector for our "output" target word (transposed, that's the $^\top$ symbol) and $v_{w_I}$ is the embedding vector for the "input" word. Then the first term
$$\large \log{\sigma\left(u_{w_O}\hspace{0.001em}^\top v_{w_I}\right)}$$
says we take the log-sigmoid of the inner product of the output word vector and the input word vector. Now the second term, let's first look at
$$\large \sum_i^N \mathbb{E}_{w_i \sim P_n(w)}$$
This means we're going to take a sum over words $w_i$ drawn from a noise distribution $w_i \sim P_n(w)$. The noise distribution is basically our vocabulary of words that aren't in the context of our input word. In effect, we can randomly sample words from our vocabulary to get these words. $P_n(w)$ is an arbitrary probability distribution though, which means we get to decide how to weight the words that we're sampling. This could be a uniform distribution, where we sample all words with equal probability. Or it could be according to the frequency that each word shows up in our text corpus, the unigram distribution $U(w)$. The authors found the best distribution to be $U(w)^{3/4}$, empirically.
Finally, in
$$\large \log{\sigma\left(-u_{w_i}\hspace{0.001em}^\top v_{w_I}\right)},$$
we take the log-sigmoid of the negated inner product of a noise vector with the input vector.
<img src="assets/neg_sampling_loss.png" width=50%>
To give you an intuition for what we're doing here, remember that the sigmoid function returns a probability between 0 and 1. The first term in the loss pushes the probability that our network will predict the correct word $w_O$ towards 1. In the second term, since we are negating the sigmoid input, we're pushing the probabilities of the noise words towards 0.
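To make the noise distribution $U(w)^{3/4}$ concrete before the model code, here is a toy NumPy illustration with made-up word counts; note how the renormalized distribution shifts probability mass toward the rarer words:

```python
import numpy as np

# toy unigram counts for a 4-word vocabulary (assumed values)
counts = np.array([10.0, 5.0, 3.0, 2.0])
unigram = counts / counts.sum()

# raise to the 3/4 power and renormalize, as suggested by Mikolov et al.
noise_p = unigram**0.75 / np.sum(unigram**0.75)

print(unigram)
print(noise_p)
```

The rarest word's share grows relative to the most frequent word's, so noise samples are less dominated by very common words.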
```
import torch
from torch import nn
import torch.optim as optim
class SkipGramNeg(nn.Module):
    def __init__(self, n_vocab, n_embed, noise_dist=None):
        super().__init__()
        self.n_vocab = n_vocab
        self.n_embed = n_embed
        self.noise_dist = noise_dist
        # define embedding layers for input and output words
        self.in_embed = nn.Embedding(n_vocab, n_embed)
        self.out_embed = nn.Embedding(n_vocab, n_embed)
        # Initialize both embedding tables with a uniform distribution
        self.in_embed.weight.data.uniform_(-1, 1)
        self.out_embed.weight.data.uniform_(-1, 1)

    def forward_input(self, input_words):
        # return input vector embeddings
        input_vector = self.in_embed(input_words)
        return input_vector

    def forward_output(self, output_words):
        # return output vector embeddings
        output_vector = self.out_embed(output_words)
        return output_vector

    def forward_noise(self, batch_size, n_samples):
        """ Generate noise vectors with shape (batch_size, n_samples, n_embed)"""
        if self.noise_dist is None:
            # Sample words uniformly
            noise_dist = torch.ones(self.n_vocab)
        else:
            noise_dist = self.noise_dist
        # Sample words from our noise distribution
        noise_words = torch.multinomial(noise_dist,
                                        batch_size * n_samples,
                                        replacement=True)
        device = "cuda" if self.out_embed.weight.is_cuda else "cpu"
        noise_words = noise_words.to(device)
        # get the noise embeddings and reshape them so that
        # they have dims (batch_size, n_samples, n_embed)
        noise_vector = self.out_embed(noise_words)
        noise_vector = noise_vector.view(batch_size, n_samples, self.n_embed)
        return noise_vector


class NegativeSamplingLoss(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, input_vectors, output_vectors, noise_vectors):
        batch_size, embed_size = input_vectors.shape
        # Input vectors should be a batch of column vectors
        input_vectors = input_vectors.view(batch_size, embed_size, 1)
        # Output vectors should be a batch of row vectors
        output_vectors = output_vectors.view(batch_size, 1, embed_size)
        # bmm = batch matrix multiplication
        # correct log-sigmoid loss
        out_loss = torch.bmm(output_vectors, input_vectors).sigmoid().log()
        out_loss = out_loss.squeeze()
        # incorrect log-sigmoid loss
        noise_loss = torch.bmm(noise_vectors.neg(), input_vectors).sigmoid().log()
        noise_loss = noise_loss.squeeze().sum(1)  # sum the losses over the sample of noise vectors
        # negate and sum correct and noisy log-sigmoid losses
        # return average batch loss
        return -(out_loss + noise_loss).mean()
```
### Training
Below is our training loop, and I recommend that you train on GPU, if available.
```
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Get our noise distribution
# Using word frequencies calculated earlier in the notebook
word_freqs = np.array(sorted(freqs.values(), reverse=True))
unigram_dist = word_freqs/word_freqs.sum()
noise_dist = torch.from_numpy(unigram_dist**(0.75)/np.sum(unigram_dist**(0.75)))
# instantiating the model
embedding_dim = 300
model = SkipGramNeg(len(vocab_to_int), embedding_dim, noise_dist=noise_dist).to(device)
# using the loss that we defined
criterion = NegativeSamplingLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
print_every = 1500
steps = 0
epochs = 3
# train for some number of epochs
for e in range(epochs):

    # get our input, target batches
    for input_words, target_words in get_batches(train_words, 512):
        steps += 1
        inputs, targets = torch.LongTensor(input_words), torch.LongTensor(target_words)
        inputs, targets = inputs.to(device), targets.to(device)

        # input, output, and noise vectors
        input_vectors = model.forward_input(inputs)
        output_vectors = model.forward_output(targets)
        noise_vectors = model.forward_noise(inputs.shape[0], 5)

        # negative sampling loss
        loss = criterion(input_vectors, output_vectors, noise_vectors)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # loss stats
        if steps % print_every == 0:
            print("Epoch: {}/{}".format(e+1, epochs))
            print("Loss: ", loss.item())  # avg batch loss at this point in training

            valid_examples, valid_similarities = cosine_similarity(model.in_embed, device=device)
            _, closest_idxs = valid_similarities.topk(6)

            valid_examples, closest_idxs = valid_examples.to('cpu'), closest_idxs.to('cpu')
            for ii, valid_idx in enumerate(valid_examples):
                closest_words = [int_to_vocab[idx.item()] for idx in closest_idxs[ii]][1:]
                print(int_to_vocab[valid_idx.item()] + " | " + ', '.join(closest_words))
            print("...\n")
```
## Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out [this post from Christopher Olah](http://colah.github.io/posts/2014-10-Visualizing-MNIST/) to learn more about T-SNE and other ways to visualize high-dimensional data.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
# getting embeddings from the embedding layer of our model, by name
embeddings = model.in_embed.weight.to('cpu').data.numpy()
viz_words = 380
tsne = TSNE()
embed_tsne = tsne.fit_transform(embeddings[:viz_words, :])
fig, ax = plt.subplots(figsize=(16, 16))
for idx in range(viz_words):
    plt.scatter(*embed_tsne[idx, :], color='steelblue')
    plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
```
# Problem Statement
Customer churn and engagement have become top issues for most banks, as it costs significantly more to acquire new customers than to retain existing ones. It is therefore of utmost importance for a bank to retain its customers.
We have data from MeBank (name changed) on 7,124 customers. The data-set contains a dependent variable “Exited” and various independent variables.
Based on the data, build a model to predict whether a customer will exit the bank. Split the data into train and test sets (70:30), build the model on the train set, and evaluate it on the test set. Secondly, provide recommendations to the bank so that it can retain customers who are on the verge of exiting.
# Data Dictionary
<b>CustomerID</b> - Bank ID of the Customer
<b>Surname</b> - Customer’s Surname
<b>CreditScore</b> - Current Credit score of the customer
<b>Geography</b> - Current country of the customer
<b>Gender</b> - Customer’s Gender
<b>Age</b> - Customer’s Age
<b>Tenure</b> - Customer’s duration association with bank in years
<b>Balance</b> - Current balance in the bank account.
<b>Num of Dependents</b> - Number of dependents
<b>Has Crcard</b> - 1 denotes customer has a credit card and 0 denotes customer does not have a credit card
<b>Is Active Member</b> - 1 denotes customer is an active member and 0 denotes customer is not an active member
<b>Estimated Salary</b> - Customer’s approx. salary
<b>Exited</b> - 1 denotes customer has exited the bank and 0 denotes otherwise
### Load library and import data
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPClassifier
churn=pd.read_csv("Churn_Modelling.csv")
```
### Inspect the data
```
churn.head()
churn.info()
```
The Age and Balance variables hold numeric data but their data type is object; it appears some special character is present in these columns.
Also, there are missing values for some variables.
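As a side note, a compact way to handle such stray characters (a sketch with made-up values, assuming pandas) is `pd.to_numeric` with `errors='coerce'`, which maps any non-numeric token such as `?` to `NaN` in one step:

```python
import pandas as pd

# toy object-typed column with a stray '?' and a genuine missing value
s = pd.Series(["100", "?", "250.5", None])
cleaned = pd.to_numeric(s, errors="coerce")  # '?' and None both become NaN

print(cleaned)
print(cleaned.isnull().sum())
```

After this, the column is numeric and the usual `fillna` imputation applies directly, without a separate replace-then-cast step.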
# EDA
### Removing unwanted variables
```
# remove the unwanted identifier variables (names as per the data dictionary) and check the first 10 rows
churn = churn.drop(columns=['CustomerID', 'Surname'], errors='ignore')
churn.head(10)
```
Checking dimensions after removing unwanted variables,
### Summary
```
churn.describe(include="all")
churn.shape
```
### Proportion of observations in Target classes
```
# Get the proportion of observations in each target class
churn['Exited'].value_counts(normalize=True)
```
### Checking for Missing values
```
# Are there any missing values ?
churn.isnull().sum()
```
There are some missing values
### Checking for inconsistencies in Balance and Age variable
```
churn.Balance.sort_values()
```
There are 3 cases where '?' is present, and 3 cases with missing values, in the Balance variable.
The summary above also confirms the count of missing values.
To confirm the count of '?', we run value_counts():
```
churn.Balance.value_counts()
churn[churn.Balance=="?"]
```
This confirms there are 3 cases containing '?'.
```
churn.Age.value_counts().sort_values()
```
There is 1 case where '?' is present.
### Replacing ? as Nan in Age and Balance variable
```
churn['Age'] = churn['Age'].replace('?', np.nan)
churn['Balance'] = churn['Balance'].replace('?', np.nan)
```
Verifying count of missing values for Age and Balance variable below:
```
churn.Balance.isnull().sum()
churn.Age.isnull().sum()
```
### Imputing missing values
```
import seaborn as sns
sns.boxplot(churn['Credit Score'])
```
As outliers are present in "Credit Score", we impute its null values with the median.
```
sns.boxplot(churn['Tenure'])
sns.boxplot(churn['Estimated Salary'])
```
Substituting the mean value for all other numeric variables
```
# median for 'Credit Score' (outliers present), mean for the rest
churn['Credit Score'] = churn['Credit Score'].fillna(churn['Credit Score'].median())
for column in ['Tenure', 'Estimated Salary']:
    mean = churn[column].mean()
    churn[column] = churn[column].fillna(mean)
churn.isnull().sum()
```
### Converting Object data type into Categorical
```
for column in ['Geography', 'Gender', 'Has CrCard', 'Is Active Member']:
    if churn[column].dtype == 'object':
        churn[column] = pd.Categorical(churn[column]).codes
churn.head()
churn.info()
```
### Substituting the mode value for all categorical variables
```
for column in ['Geography', 'Gender', 'Has CrCard', 'Is Active Member']:
    mode = churn[column].mode()
    churn[column] = churn[column].fillna(mode[0])
churn.isnull().sum()
```
Age and Balance are still not addressed. Getting the modal value
```
churn['Balance'].mode()
churn['Age'].mode()
```
Replacing nan with modal values,
```
churn['Balance']=churn['Balance'].fillna(3000)
churn['Age']=churn['Age'].fillna(37)
churn.isnull().sum()
```
There are no more missing values.
```
churn.info()
```
Age and Balance are still of object type, and have to be converted.
### Converting Age and Balance to numeric variables
```
churn['Age']=churn['Age'].astype(str).astype(int)
churn['Balance']=churn['Balance'].astype(str).astype(float)
```
### Checking for Duplicates
```
# Are there any duplicates ?
dups = churn.duplicated()
print('Number of duplicate rows = %d' % (dups.sum()))
churn[dups]
```
There are no Duplicates
### Checking for Outliers
```
plt.figure(figsize=(15,15))
churn[['Age','Balance','Credit Score', 'Tenure', 'Estimated Salary']].boxplot(vert=0)
```
A very small number of outliers is present; this is not significant, as it will not affect the ANN predictions much.
### Checking pairwise distribution of the continuous variables
```
import seaborn as sns
sns.pairplot(churn[['Age','Balance','Credit Score', 'Tenure', 'Estimated Salary']])
```
### Checking for Correlations
```
# construct heatmap with only continuous variables
plt.figure(figsize=(10,8))
sns.set(font_scale=1.2)
sns.heatmap(churn[['Age','Balance','Credit Score', 'Tenure', 'Estimated Salary']].corr(), annot=True)
```
There is hardly any correlation between the variables
### Train Test Split
```
from sklearn.model_selection import train_test_split
# Extract x and y
x = churn.drop('Exited', axis=1)
y = churn['Exited']
# split data into 70% training and 30% test data
# (random_state chosen arbitrarily, for reproducibility)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.30, random_state=1)
# Checking dimensions on the train and test data
print('x_train: ',x_train.shape)
print('x_test: ',x_test.shape)
print('y_train: ',y_train.shape)
print('y_test: ',y_test.shape)
```
### Scaling the variables
```
from sklearn.preprocessing import StandardScaler
# Initialize an object for StandardScaler
sc = StandardScaler()
# Scale the training data
x_train = sc.fit_transform(x_train)
x_train
# Apply the transformation on the test data
x_test = sc.transform(x_test)
x_test
```
### Building Neural Network Model
```
clf = MLPClassifier(hidden_layer_sizes=100, max_iter=5000,
                    solver='sgd', verbose=True, random_state=21, tol=0.01)
# Fit the model on the training data
clf.fit(x_train, y_train)
```
### Predicting training data
```
# use the model to predict the training data
y_pred = clf.predict(x_train)
```
### Evaluating model performance on training data
```
from sklearn.metrics import confusion_matrix,classification_report
confusion_matrix(y_train,y_pred)
print(classification_report(y_train, y_pred))
# AUC and ROC for the training data
# predict probabilities
probs = clf.predict_proba(x_train)
# keep probabilities for the positive outcome only
probs = probs[:, 1]
# calculate AUC
from sklearn.metrics import roc_auc_score
auc = roc_auc_score(y_train, probs)
print('AUC: %.3f' % auc)
# calculate roc curve
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_train, probs)
plt.plot([0, 1], [0, 1], linestyle='--')
# plot the roc curve for the model
plt.plot(fpr, tpr, marker='.')
# show the plot
plt.show()
```
### Predicting Test Data and comparing model performance
```
y_pred = clf.predict(x_test)
confusion_matrix(y_test, y_pred)
print(classification_report(y_test, y_pred))
# AUC and ROC for the test data
# predict probabilities
probs = clf.predict_proba(x_test)
# keep probabilities for the positive outcome only
probs = probs[:, 1]
# calculate AUC
auc = roc_auc_score(y_test, probs)
print('AUC: %.3f' % auc)
# calculate roc curve
fpr, tpr, thresholds = roc_curve(y_test, probs)
plt.plot([0, 1], [0, 1], linestyle='--')
# plot the roc curve for the model
plt.plot(fpr, tpr, marker='.')
# show the plot
plt.show()
```
### Model Tuning through Grid Search
**The code below may take a long time to run. These values can be used instead: {'hidden_layer_sizes': 500, 'max_iter': 5000, 'solver': 'adam', 'tol': 0.01}**
```
from sklearn.model_selection import GridSearchCV
param_grid = {
    'hidden_layer_sizes': [100, 200, 300, 500],
    'max_iter': [5000, 2500, 7000, 6000],
    'solver': ['sgd', 'adam'],
    'tol': [0.01],
}
nncl = MLPClassifier(random_state=1)
grid_search = GridSearchCV(estimator = nncl, param_grid = param_grid, cv = 10)
grid_search.fit(x_train, y_train)
grid_search.best_params_
best_grid = grid_search.best_estimator_
best_grid
ytrain_predict = best_grid.predict(x_train)
ytest_predict = best_grid.predict(x_test)
confusion_matrix(y_train,ytrain_predict)
# Accuracy of Train data
print(classification_report(y_train,ytrain_predict))
#from sklearn.metrics import roc_curve,roc_auc_score
rf_fpr, rf_tpr,_=roc_curve(y_train,best_grid.predict_proba(x_train)[:,1])
plt.plot(rf_fpr,rf_tpr, marker='x', label='NN')
plt.plot(np.arange(0,1.1,0.1),np.arange(0,1.1,0.1))
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC')
plt.show()
print('Area under Curve is', roc_auc_score(y_train,best_grid.predict_proba(x_train)[:,1]))
confusion_matrix(y_test,ytest_predict)
# Accuracy of Test data
print(classification_report(y_test,ytest_predict))
#from sklearn.metrics import roc_curve,roc_auc_score
rf_fpr, rf_tpr,_=roc_curve(y_test,best_grid.predict_proba(x_test)[:,1])
plt.plot(rf_fpr,rf_tpr, marker='x', label='NN')
plt.plot(np.arange(0,1.1,0.1),np.arange(0,1.1,0.1))
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC')
plt.show()
print('Area under Curve is', roc_auc_score(y_test,best_grid.predict_proba(x_test)[:,1]))
best_grid.score
```
## Conclusion
AUC on the training data is 86% and on the test data 84%. The precision and recall metrics are also very similar between the training and test sets, which indicates that neither overfitting nor underfitting has occurred.
The best_grid model improves on the initial clf model, whose sensitivity was much lower.
The overall model performance is good enough to start predicting whether a new customer will churn or not.
# **Lab Session : Feature extraction II**
Author: Vanessa Gómez Verdejo (http://vanessa.webs.tsc.uc3m.es/)
Updated: 27/02/2017 (working with sklearn 0.18.1)
In this lab session we are going to work with some of the kernelized extensions of most well-known feature extraction techniques: PCA, PLS and CCA.
As in the previous notebook, to analyze the discriminatory capability of the extracted features, let's use a linear SVM as classifier and use its final accuracy over the test data to evaluate the goodness of the different feature extraction methods.
To implement the different approaches we will rely on the [Scikit-Learn](http://scikit-learn.org/stable/) python toolbox.
#### ** During this lab we will cover: **
#### *Part 2: Non linear feature selection*
##### * Part 2.1: Kernel extensions of PCA*
##### * Part 2.2: Analyzing the influence of the kernel parameter*
##### * Part 2.3: Kernel MVA approaches*
As you progress in this notebook, you will have to complete some exercises. Each exercise includes an explanation of what is expected, followed by code cells where one or several lines will have written down `<FILL IN>`. The cell that needs to be modified will have `# TODO: Replace <FILL IN> with appropriate code` on its first line. Once the `<FILL IN>` sections are updated and the code can be run; below this cell, you will find the test cell (beginning with the line `# TEST CELL`) and you can run it to verify the correctness of your solution.
```
%matplotlib inline
```
## *Part 2: Non linear feature selection*
#### ** 2.0: Creating toy problem **
The following code lets you generate a bidimensional problem consisting of three circles of data with different radii, each one associated with a different class.
As expected from the geometry of the problem, the classification boundary is not linear, so we will be able to analyze the advantages of using non-linear feature extraction techniques to transform the input space into a new space where a linear classifier can provide an accurate solution.
```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_circles
import matplotlib.pyplot as plt
np.random.seed(0)
X, Y = make_circles(n_samples=400, factor=.6, noise=.1)
X_c2 = 0.1*np.random.randn(200,2)
Y_c2 = 2*np.ones((200,))
X= np.vstack([X,X_c2])
Y= np.hstack([Y,Y_c2])
plt.figure()
plt.title("Original space")
reds = Y == 0
blues = Y == 1
green = Y == 2
plt.plot(X[reds, 0], X[reds, 1], "ro")
plt.plot(X[blues, 0], X[blues, 1], "bo")
plt.plot(X[green, 0], X[green, 1], "go")
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.show()
# split into a training and testing set
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25)
# Normalizing the data
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Binarize the labels for supervised feature extraction methods
set_classes = np.unique(Y)
Y_train_bin = label_binarize(Y_train, classes=set_classes)
Y_test_bin = label_binarize(Y_test, classes=set_classes)
```
### ** Part 2.1: Kernel PCA**
To extend the previous PCA feature extraction approach to its non-linear version, we can use the [KernelPCA()](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.KernelPCA.html#sklearn.decomposition.KernelPCA) function.
Let's start this section by computing the different kernel matrices that we need to train and evaluate the different feature extraction methods. For this exercise, we are going to consider a Radial Basis Function (RBF) kernel, where each element of the kernel matrix is given by $k(x_i,x_j) = \exp (- \gamma \|x_i -x_j\|^2)$.
To analyze the advantages of the non linear feature extraction, let's compare it with its linear version. So, let's start computing both linear and kernelized versions of PCA. Complete the following code to obtain the variables (P_train, P_test) and (P_train_k, P_test_k) which have to contain, respectively, the projected data of the linear PCA and the KPCA.
To start to work, compute a maximum of two new projected features and fix gamma (the kernel parameter) to 1.
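Before using `KernelPCA`, it can help to see the kernel matrix computed explicitly. The sketch below (toy points, assumed values) checks a hand-written RBF matrix against scikit-learn's `rbf_kernel`:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

X_demo = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])  # three toy points
gamma = 1.0

# explicit computation: k(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)
sq_dists = np.sum((X_demo[:, None, :] - X_demo[None, :, :])**2, axis=-1)
K = np.exp(-gamma * sq_dists)

print(K)
print(np.allclose(K, rbf_kernel(X_demo, gamma=gamma)))
```

Note that the diagonal is always 1 (each point is at distance zero from itself) and the matrix is symmetric, both defining properties of a valid kernel matrix.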
```
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
from sklearn.decomposition import PCA, KernelPCA
N_feat_max=2
# linear PCA
pca = PCA(n_components=N_feat_max)
pca.fit(X_train, Y_train)
P_train = pca.transform(X_train)
P_test = pca.transform(X_test)
# KPCA
pca_K = KernelPCA(n_components=N_feat_max, kernel="rbf", gamma=1)
pca_K.fit(X_train, Y_train)
P_train_k = pca_K.transform(X_train)
P_test_k =pca_K.transform(X_test)
print('PCA and KPCA projections successfully computed')
```
Now, let's evaluate the discriminatory capability of the projected data (both linear and kernelized ones) feeding with them a linear SVM and measuring its accuracy over the test data. Complete the following to code to return in variables acc_test_lin and acc_test_kernel the SVM test accuracy using either the linear PCA projected data or the KPCA ones.
```
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Define SVM classifier
from sklearn import svm
clf = svm.SVC(kernel='linear')
# Train it using linear PCA projections and evaluate it
clf.fit(P_train, Y_train)
acc_test_lin = clf.score(P_test, Y_test)
# Train it using KPCA projections and evaluate it
clf.fit(P_train_k, Y_train)
acc_test_kernel = clf.score(P_test_k, Y_test)
print("The test accuracy using linear PCA projections is %2.2f%%" %(100*acc_test_lin))
print("The test accuracy using KPCA projections is %2.2f%%" %(100*acc_test_kernel))
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
# TEST Training and test data generation
Test.assertEquals(np.round(acc_test_lin,4), 0.2400, 'incorrect result: test accuracy using linear PCA projections is incorrect')
Test.assertEquals(np.round(acc_test_kernel,4), 0.9533, 'incorrect result: test accuracy using KPCA projections is incorrect')
```
Finally, let's analyze the transformation capabilities of KPCA vs. linear PCA by plotting the resulting projected data for both the training and test sets.
Just run the following cells to obtain the desired representation.
```
def plot_projected_data(data, label):
    """Plot the desired sample data, assigning different colors according to their categories.
    Only the first two dimensions of the data are plotted and only three different categories are considered.
    Args:
        data: data set to be plotted (number of data points x dimensions).
        label: target vector indicating the category of each data point.
    """
    reds = label == 0
    blues = label == 1
    green = label == 2

    plt.plot(data[reds, 0], data[reds, 1], "ro")
    plt.plot(data[blues, 0], data[blues, 1], "bo")
    plt.plot(data[green, 0], data[green, 1], "go")
    plt.xlabel("$x_1$")
    plt.ylabel("$x_2$")
plt.figure(figsize=(8, 8))
plt.subplot(2,2,1)
plt.title("Projected space of linear PCA for training data")
plot_projected_data(P_train, Y_train)
plt.subplot(2,2,2)
plt.title("Projected space of KPCA for training data")
plot_projected_data(P_train_k, Y_train)
plt.subplot(2,2,3)
plt.title("Projected space of linear PCA for test data")
plot_projected_data(P_test, Y_test)
plt.subplot(2,2,4)
plt.title("Projected space of KPCA for test data")
plot_projected_data(P_test_k, Y_test)
plt.show()
```
Go back to the first cell, modify the kernel parameter (for instance, set gamma to 10 or 100) and run the code again. What is happening? Why?
### ** Part 2.2: Analyzing the influence of the kernel parameter**
In the case of working with RBF kernel, the kernel width selection can be critical:
* If the gamma value is too high, the width of the RBF is reduced (tending towards a delta function) and, therefore, the interaction between the training data points is null. We project each data point over itself and assign it a dual variable such that the best possible projection (for classification purposes) of the training data is obtained, causing overfitting problems.
* If the gamma value is close to zero, the RBF width increases and the kernel behavior tends to be similar to a linear kernel. In this case, the non-linear properties are lost.
Therefore, in this kind of application, the value of the kernel width can be critical, and it is advisable to select it by cross-validation.
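As an aside, the same kind of search can also be written with scikit-learn's `Pipeline` and `GridSearchCV`, which cross-validates the KPCA gamma and the SVM jointly. This is a sketch on a toy dataset; the grid values are illustrative, not the ones used in this lab:

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# toy two-class circles problem
X_toy, Y_toy = make_circles(n_samples=200, factor=0.5, noise=0.1, random_state=0)

# chain KPCA and the linear SVM so gamma is tuned end-to-end by cross-validation
pipe = Pipeline([("kpca", KernelPCA(n_components=2, kernel="rbf")),
                 ("svm", SVC(kernel="linear"))])
grid = GridSearchCV(pipe, {"kpca__gamma": [0.1, 1, 10]}, cv=3)
grid.fit(X_toy, Y_toy)

print(grid.best_params_["kpca__gamma"], grid.best_score_)
```

The manual validation loop below makes the same logic explicit, which is the point of this exercise.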
This part of the lab session aims to adjust the gamma parameter by a validation process. So, we will start by creating a validation partition of the training data.
```
## Redefine the data partitions: creating a validation partition
# split training data into a training and validation set
X_train2, X_val, Y_train2, Y_val = train_test_split(X_train, Y_train, test_size=0.33)
# Normalizing the data
scaler = StandardScaler()
X_train2 = scaler.fit_transform(X_train2)
X_val = scaler.transform(X_val)
X_test = scaler.transform(X_test)
# Binarize the training labels for supervised feature extraction methods
set_classes = np.unique(Y)
Y_train_bin2 = label_binarize(Y_train2, classes=set_classes)
```
Now let's evaluate the KPCA performance when different values of gamma are used. Complete the code below so that, for each gamma value, you:
* Train the KPCA and obtain the projections for the training, validation and test data.
* Obtain the accuracies of a linear SVM over the validation and test partitions.
Once you have the validation and test accuracies for each gamma value, obtain the optimum gamma value (i.e., the gamma value which provides the maximum validation accuracy) and its corresponding test accuracy.
```
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
from sklearn.decomposition import KernelPCA
from sklearn import svm
np.random.seed(0)
# Defining parameters
N_feat_max = 2
rang_g = [0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50 , 100, 500, 1000]
# Variables to save validation and test accuracies
acc_val = []
acc_test = []
# Loop to explore gamma values
for g_value in rang_g:
    print('Evaluating with gamma ' + str(g_value))
    # 1. Train KPCA and project the data
    pca_K = KernelPCA(n_components=N_feat_max, kernel="rbf", gamma=g_value)
    pca_K.fit(X_train2)
    P_train_k = pca_K.transform(X_train2)
    P_val_k = pca_K.transform(X_val)
    P_test_k = pca_K.transform(X_test)
    # 2. Evaluate the projection performance
    clf = svm.SVC(kernel='linear')
    clf.fit(P_train_k, Y_train2)
    acc_val.append(clf.score(P_val_k, Y_val))
    acc_test.append(clf.score(P_test_k, Y_test))
# Find the optimum value of gamma (maximum validation accuracy) and its corresponding test accuracy
pos_max = np.argmax(acc_val)
g_opt = rang_g[pos_max]
acc_test_opt = acc_test[pos_max]
print('Optimum value of gamma: ' + str(g_opt))
print('Test accuracy: ' + str(acc_test_opt))
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
# TEST Training and test data generation
Test.assertEquals(g_opt, 1, 'incorrect result: validated gamma value is incorrect')
Test.assertEquals(np.round(acc_test_opt,4), 0.9467, 'incorrect result: validated test accuracy is incorrect')
```
Finally, just run the next cell to train the final model with the selected gamma value and plot the projected data.
```
# Train KPCA and project the data
pca_K = KernelPCA(n_components=N_feat_max, kernel="rbf", gamma=g_opt)
pca_K.fit(X_train2)
P_train_k = pca_K.transform(X_train2)
P_val_k = pca_K.transform(X_val)
P_test_k = pca_K.transform(X_test)
# Plot the projected data
plt.figure(figsize=(15, 5))
plt.subplot(1,3,1)
plt.title("Projected space of KPCA: train data")
plot_projected_data(P_train_k, Y_train2)
plt.subplot(1,3,2)
plt.title("Projected space of KPCA: validation data")
plot_projected_data(P_val_k, Y_val)
plt.subplot(1,3,3)
plt.title("Projected space of KPCA: test data")
plot_projected_data(P_test_k, Y_test)
plt.show()
```
### ** Part 2.3: Kernel MVA approaches**
Until now we have only used the KPCA approach, because it is the only non-linear feature extraction method included in Scikit-Learn.
However, comparing the linear and kernel versions of the MVA approaches shows that any linear MVA approach can be extended to a kernelized version. We can reuse the same methods reviewed for the linear approaches in a non-linear fashion by calling them with the training kernel matrix instead of the training data; the method then learns the dual variables instead of the eigenvectors.
The following table relates both approaches:
| | Linear | Kernel |
|------ |---------------------------|----------------------------|
|Input data | ${\bf X}$ | ${\bf K}$ |
|Variables to compute (fit) |Eigenvectors (${\bf U}$) |Dual variables (${\bf A}$) |
|Projection vectors | ${\bf U}$ |${\bf U}=\Phi^T {\bf A}$ (cannot be computed) |
|Project data (transform) |${\bf X}' = {\bf U}^T {\bf X}^T$|${\bf X}' ={\bf A}^T \Phi \Phi^T = {\bf A}^T {\bf K}$|
** Computing and centering the kernel matrices **
Let's start this section by computing the kernel matrices that we need to train and evaluate the different feature extraction methods. For this exercise, we are going to consider a Radial Basis Function (RBF) kernel, where each element of the kernel matrix is given by $k(x_i,x_j) = \exp (- \gamma \|x_i -x_j\|^2)$.
In particular, we need to compute two kernel matrices:
* Training kernel matrix (K_tr), where the RBF is computed pairwise over the training data. The resulting matrix has dimension $N_{tr} \times N_{tr}$, $N_{tr}$ being the number of training samples.
* Test kernel matrix (K_test), where the RBF is computed between training and test samples, i.e., in the RBF expression $x_i$ belongs to the test data whereas $x_j$ belongs to the training data. The resulting matrix has dimension $N_{test} \times N_{tr}$, $N_{test}$ and $N_{tr}$ being the number of test and training samples, respectively.
Use the [rbf_kernel( )](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.rbf_kernel.html) function to compute the K_tr and K_test kernel matrices. Fix the kernel width value (gamma) to 1.
```
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Computing the kernel matrix
from sklearn.metrics.pairwise import rbf_kernel
g_value = 1
# Compute the kernel matrix (use the X_train matrix, before dividing it in validation and training data)
K_tr = rbf_kernel(X_train, X_train, gamma=g_value)
K_test = rbf_kernel(X_test, X_train, gamma=g_value)
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
# TEST Training and test data generation
Test.assertEquals(K_tr.shape, (450,450), 'incorrect result: dimensions of training kernel matrix are incorrect')
Test.assertEquals(K_test.shape, (150,450), 'incorrect result: dimensions of test kernel matrix are incorrect')
```
After computing these kernel matrices, they have to be centered (in the same way that we remove the mean when working in the input space). For this purpose, the next cell provides the function center_K(). Use it to remove the mean of both the K_tr and K_test matrices.
```
def center_K(K):
"""Center a kernel matrix K, i.e., removes the data mean in the feature space.
Args:
K: kernel matrix
"""
size_1, size_2 = K.shape
D1 = K.sum(axis=0)/size_1
D2 = K.sum(axis=1)/size_2
E = D2.sum(axis=0)/size_1
K_n = K + np.tile(E,[size_1,size_2]) - np.tile(D1,[size_1,1]) - np.tile(D2,[size_2,1]).T
return K_n
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Center the kernel matrix
K_tr_c = center_K(K_tr)
K_test_c = center_K(K_test)
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
# TEST Training and test data generation
Test.assertEquals(np.round(K_tr_c[0][0],2), 0.55, 'incorrect result: centered training kernel matrix is incorrect')
Test.assertEquals(np.round(K_test_c[0][0],2), -0.24, 'incorrect result: centered test kernel matrix is incorrect')
```
** Alternative KPCA formulation **
Complete the following code lines to obtain a KPCA implementation using the linear PCA function with the kernel matrix as input data. Then compare its result with that of the KPCA function.
```
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
from sklearn.decomposition import PCA
from sklearn.decomposition import KernelPCA
from sklearn import svm
# Defining parameters
N_feat_max = 2
## PCA method (to complete)
# 1. Train PCA with the kernel matrix and project the data
pca_K2 = PCA(n_components=N_feat_max)
pca_K2.fit(K_tr_c, Y_train)
P_train_k2 = pca_K2.transform(K_tr_c)
P_test_k2 = pca_K2.transform(K_test_c)
# 2. Evaluate the projection performance
clf = svm.SVC(kernel='linear')
clf.fit(P_train_k2, Y_train)
print('Test accuracy with PCA with a kernel matrix as input: ' + str(clf.score(P_test_k2, Y_test)))
## KPCA method (for comparison purposes)
# 1. Train KPCA and project the data
# Use the same gamma value here as in the rbf_kernel computation above (gamma=1),
# since KernelPCA's rbf kernel uses the same parameterization as rbf_kernel
pca_K = KernelPCA(n_components=N_feat_max, kernel="rbf", gamma=1)
pca_K.fit(X_train)
P_train_k = pca_K.transform(X_train)
P_test_k = pca_K.transform(X_test)
# 2. Evaluate the projection performance
clf = svm.SVC(kernel='linear')
clf.fit(P_train_k, Y_train)
print('Test accuracy with KPCA: ' + str(clf.score(P_test_k, Y_test)))
```
** Alternative KPLS and KCCA formulations **
Use the PLS and CCA methods with the kernel matrix to obtain non-linear (kernelized) supervised feature extractors.
```
###########################################################
# KCCA
###########################################################
from lib.mva import mva
# Defining parameters
N_feat_max = 2
## KCCA method (to complete)
# 1. Train CCA with the kernel matrix and project the data
CCA = mva('CCA', N_feat_max)
CCA.fit(K_tr_c, Y_train,reg=1e-2)
P_train_k2 = CCA.transform(K_tr_c)
P_test_k2 = CCA.transform(K_test_c)
# 2. Evaluate the projection performance
clf = svm.SVC(kernel='linear')
clf.fit(P_train_k2, Y_train)
print('Test accuracy with KCCA (CCA with a kernel matrix as input): ' + str(clf.score(P_test_k2, Y_test)))
###########################################################
# KPLS
###########################################################
from sklearn.cross_decomposition import PLSSVD
# Defining parameters
N_feat_max = 2
## KPLS method (to complete)
# 1. Train PLS with the kernel matrix and project the data
pls = PLSSVD(n_components=N_feat_max)
pls.fit(K_tr_c, Y_train_bin)
P_train_k2 = pls.transform(K_tr_c)
P_test_k2 = pls.transform(K_test_c)
# 2. Evaluate the projection performance
clf = svm.SVC(kernel='linear')
clf.fit(P_train_k2, Y_train)
print('Test accuracy with KPLS (PLS with a kernel matrix as input): ' + str(clf.score(P_test_k2, Y_test)))
```
Practical 1: Sentiment Detection of Movie Reviews
========================================
This practical concerns sentiment detection of movie reviews.
In [this file](https://gist.githubusercontent.com/bastings/d47423301cca214e3930061a5a75e177/raw/5113687382919e22b1f09ce71a8fecd1687a5760/reviews.json) (80MB) you will find 1000 positive and 1000 negative **movie reviews**.
Each review is a **document** and consists of one or more sentences.
To prepare yourself for this practical, you should
have a look at a few of these texts to understand the difficulties of
the task (how might one go about classifying the texts?); you will write
code that decides whether a random unseen movie review is positive or
negative.
Please make sure you have read the following paper:
> Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan
(2002).
[Thumbs up? Sentiment Classification using Machine Learning
Techniques](https://dl.acm.org/citation.cfm?id=1118704). EMNLP.
Bo Pang et al. were the "inventors" of the movie review sentiment
classification task, and the above paper was one of the first papers on
the topic. The first version of your sentiment classifier will do
something similar to Bo Pang’s system. If you have questions about it,
we should resolve them in our first demonstrated practical.
**Advice**
Please read through the entire practical and familiarise
yourself with all requirements before you start coding or otherwise
solving the tasks. Writing clean and concise code can make the difference
between solving the assignment in a matter of hours, and taking days to
run all experiments.
**Environment**
All code should be written in **Python 3**.
If you use Colab, check if you have that version with `Runtime -> Change runtime type` in the top menu.
> If you want to work in your own computer, then download this notebook through `File -> Download .ipynb`.
The easiest way to
install Python is through downloading
[Anaconda](https://www.anaconda.com/download).
After installation, you can start the notebook by typing `jupyter notebook filename.ipynb`.
You can also use an IDE
such as [PyCharm](https://www.jetbrains.com/pycharm/download/) to make
coding and debugging easier. It is good practice to create a [virtual
environment](https://docs.python.org/3/tutorial/venv.html) for this
project, so that any Python packages don’t interfere with other
projects.
#### Learning Python 3
If you are new to Python 3, you may want to check out a few of these resources:
- https://learnxinyminutes.com/docs/python3/
- https://www.learnpython.org/
- https://docs.python.org/3/tutorial/
Loading the Data
-------------------------------------------------------------
```
# download sentiment lexicon
!wget https://gist.githubusercontent.com/bastings/d6f99dcb6c82231b94b013031356ba05/raw/f80a0281eba8621b122012c89c8b5e2200b39fd6/sent_lexicon
# download review data
!wget https://gist.githubusercontent.com/bastings/d47423301cca214e3930061a5a75e177/raw/5113687382919e22b1f09ce71a8fecd1687a5760/reviews.json
import math
import os
import sys
from subprocess import call
from nltk import FreqDist
from nltk.util import ngrams
from nltk.stem.porter import PorterStemmer
import sklearn as sk
#from google.colab import drive
import pickle
import json
from collections import Counter
import requests
import matplotlib.pyplot as plt
import numpy as np
# load reviews into memory
# file structure:
# [
# {"cv": integer, "sentiment": str, "content": list}
# {"cv": integer, "sentiment": str, "content": list}
# ..
# ]
# where `content` is a list of sentences,
# with a sentence being a list of (token, pos_tag) pairs.
# For documentation on POS-tags, see
# https://catalog.ldc.upenn.edu/docs/LDC99T42/tagguid1.pdf
with open("reviews.json", mode="r", encoding="utf-8") as f:
reviews = json.load(f)
print(len(reviews))
def print_sentence_with_pos(s):
print(" ".join("%s/%s" % (token, pos_tag) for token, pos_tag in s))
for i, r in enumerate(reviews):
print(r["cv"], r["sentiment"], len(r["content"])) # cv, sentiment, num sents
print_sentence_with_pos(r["content"][0])
if i == 4:
break
c = Counter()
for review in reviews:
for sentence in review["content"]:
for token, pos_tag in sentence:
c[token.lower()] += 1
print("#types", len(c))
print("Most common tokens:")
for token, count in c.most_common(25):
print("%10s : %8d" % (token, count))
```
Symbolic approach – sentiment lexicon (2pts)
---------------------------------------------------------------------
**How** could one automatically classify movie reviews according to their
sentiment?
If we had access to a **sentiment lexicon**, then there are ways to solve
the problem without using Machine Learning. One might simply look up
every open-class word in the lexicon, and compute a binary score
$S_{binary}$ by counting how many words match either a positive, or a
negative word entry in the sentiment lexicon $SLex$.
$$S_{binary}(w_1w_2...w_n) = \sum_{i = 1}^{n}\text{sgn}(SLex\big[w_i\big])$$
**Threshold.** On average there are more positive than negative words per review (~7.13 more positive than negative per review). To take this bias into account, you should use a threshold of **8** (roughly the bias itself), making it harder to classify a review as positive.
$$
\text{classify}(S_{binary}(w_1w_2...w_n)) = \bigg\{\begin{array}{ll}
\text{positive} & \text{if } S_{binary}(w_1w_2...w_n) > threshold\\
\text{negative} & \text{else }
\end{array}
$$
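As a tiny worked example of $S_{binary}$ (using a hypothetical mini-lexicon, not the `sent_lexicon` file you will load below):

```python
# Hypothetical mini-lexicon: +1 positive, -1 negative
toy_slex = {"brilliant": 1, "enjoyable": 1, "dull": -1, "bland": -1}

def s_binary(tokens, slex):
    # Words missing from the lexicon contribute 0
    return sum(slex.get(w, 0) for w in tokens)

review = "a brilliant and enjoyable film despite a dull opening".split()
score = s_binary(review, toy_slex)
print(score)  # 1 + 1 - 1 = 1; with the threshold of 8, this review is classified negative
```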
To implement this approach, you should use the sentiment
lexicon in `sent_lexicon`, which was taken from the
following work:
> Theresa Wilson, Janyce Wiebe, and Paul Hoffmann
(2005). [Recognizing Contextual Polarity in Phrase-Level Sentiment
Analysis](http://www.aclweb.org/anthology/H/H05/H05-1044.pdf). HLT-EMNLP.
#### (Q: 1.1) Implement this approach and report its classification accuracy. (1 pt)
##### This block loads the lexicon file and stores the sentiments and word types as dictionaries
```
#
# Given a line from the sentiment file
# ex. type=weaksubj len=1 word1=abandoned pos1=adj stemmed1=n priorpolarity=negative
# Returns a dictionary
# ex. {type: weaksubj, len: 1, word1: abandoned, pos1: adj, stemmed1: n, priorpolarity: negative}
#
def sentiment_line_to_dict(line):
dictionary = {}
words = line.split()
for word in words:
variable_assignment = word.split('=')
variable = variable_assignment[0]
value = variable_assignment[1]
dictionary[variable] = value
return dictionary
#
# Adds the word with the sentiment to the dictionary.
# If the word is already in the dictionary and the sentiments are conflicting,
# the sentiment will be set to 0 (neutral).
#
def add_sentiment_to_dict(sentiment_dict, word, sentiment):
if word in sentiment_dict.keys():
if not sentiment_dict[word] == sentiment:
sentiment_dict[word] = 0
else:
sentiment_dict[word] = sentiment
return sentiment_dict
#
# Adds the word with the type to the dictionary.
# If the word is already in the dictionary and the types are conflicting,
# the type will be set to 2 (neutral).
#
def add_type_to_dict(type_dict, word, word_type):
if word in type_dict.keys():
if not type_dict[word] == word_type:
type_dict[word] = 2
else:
type_dict[word] = word_type
return type_dict
#
# Converts a sentiment string: "positive", "negative", "neutral" to 1, -1 and 0 respectively.
#
def sentiment_to_score(sentiment_as_string):
if sentiment_as_string == 'positive':
return 1
if sentiment_as_string == 'negative':
return -1
return 0
#
# Converts a type string: "strongsubj", "weaksubj" to 3 and 1, respectively.
#
def type_to_score(type_as_string):
if type_as_string == 'strongsubj':
return 3
if type_as_string == 'weaksubj':
return 1
return 1
#
# Parses the lexicon file and return 2 dictionaries:
# sentiment_dict with structure: { "love": 1, "hate": -1, "butter": 0 ... }
# 1 is positive, -1 is negative, 0 is neutral.
#
# type_dict with structure: { "love": 3, "hate": 1, "butter": 2 ... }
# 3 is a strong type, 1 is a weak type and 2 marks a word that had both types in the file.
#
def parse_lexicon_to_dicts():
with open("sent_lexicon", mode="r", encoding="utf-8") as file:
array_of_lines = file.readlines()
sentiment_dict = {}
type_dict = {}
for line in array_of_lines:
line_as_dict = sentiment_line_to_dict(line)
word = line_as_dict['word1']
word_type = type_to_score(line_as_dict['type'])
sentiment = sentiment_to_score(line_as_dict['priorpolarity'])
sentiment_dict = add_sentiment_to_dict(sentiment_dict, word, sentiment)
type_dict = add_type_to_dict(type_dict, word, word_type)
return sentiment_dict, type_dict
sentiment_dict, type_dict = parse_lexicon_to_dicts()
print("Loaded the file!")
```
##### This block contains the binary classification code
```
THRESHOLD = 8
#
# Given a review returns a list of all the words.
# note: all words are converted to lowercase
#
def get_words_of_review(review):
content = review['content']
words = []
for line in content:
for word_pair in line:
word = word_pair[0]
word_in_lowercase = word.lower()
words.append(word_in_lowercase)
return words
#
# Returns the binary (unaltered) score of the word;
# 1 for positive, -1 for negative, 0 for neutral or not found.
#
def get_binary_score_of_word(word):
try:
return sentiment_dict[word]
except KeyError:
# Word not in our dictionary.
return 0
#
# Given a review returns the real sentiment.
# -1 for negative, 1 for positive.
#
def get_real_sentiment_of_review(review):
if (review['sentiment'] == 'NEG'):
return -1
return 1
#
# Given a review returns 1 if it classifies it as positive, and -1 otherwise.
#
def binary_classify_review(review):
words = get_words_of_review(review)
score = 0
for word in words:
score += get_binary_score_of_word(word)
if score > THRESHOLD:
return 1
return -1
#
# Returns token_results which is a list of whether our predictions were correct: ['-', '+', '-', ...]
# And returns the accuracy as a percentage, e.g. 45 for 45% accuracy.
#
def binary_classify_all_reviews():
total = 0
correct = 0
token_results = []
for review in reviews:
prediction = binary_classify_review(review)
real_sentiment = get_real_sentiment_of_review(review)
if prediction == real_sentiment:
correct += 1
token_results.append('+')
else:
token_results.append('-')
total += 1
accuracy = correct / total * 100
return accuracy, token_results
binary_accuracy, binary_results = binary_classify_all_reviews()
print("Binary classification accuracy: {0:.2f}%".format(binary_accuracy))
```
If the sentiment lexicon also has information about the **magnitude** of
sentiment (e.g., *“excellent"* would have higher magnitude than
*“good"*), we could take a more fine-grained approach by adding up all
sentiment scores, and deciding the polarity of the movie review using
the sign of the weighted score $S_{weighted}$.
$$S_{weighted}(w_1w_2...w_n) = \sum_{i = 1}^{n}SLex\big[w_i\big]$$
Their lexicon also records two possible magnitudes of sentiment (*weak*
and *strong*), so you can implement both the binary and the weighted
solutions (please use a switch in your program). For the weighted
solution, you can choose the weights intuitively *once* before running
the experiment.
#### (Q: 1.2) Now incorporate magnitude information and report the classification accuracy. Don't forget to use the threshold. (1 pt)
```
#
# Returns the weighted score of the word;
# it multiplies the original score of the word with the type (strong 3, neutral 2, or weak 1).
#
def get_weighted_score_of_word(word):
try:
score = sentiment_dict[word]
word_type = type_dict[word]
return word_type * score
except KeyError:
# Word not in our dictionary.
return 0
#
# Given a review returns 1 if it classifies it as positive, and -1 otherwise.
#
def weighted_classify_review(review):
words = get_words_of_review(review)
score = 0
for word in words:
score += get_weighted_score_of_word(word)
if score > THRESHOLD:
return 1
return -1
#
# Returns token_results which is a list of whether our predictions were correct: ['-', '+', '-', ...]
# And returns the accuracy as a percentage, e.g. 45 for 45% accuracy.
#
def weighted_classify_all_reviews():
total = 0
correct = 0
token_results = []
for review in reviews:
prediction = weighted_classify_review(review)
real_sentiment = get_real_sentiment_of_review(review)
if prediction == real_sentiment:
correct += 1
token_results.append('+')
else:
token_results.append('-')
total += 1
accuracy = correct / total * 100
return accuracy, token_results
magnitude_accuracy, magnitude_results = weighted_classify_all_reviews()
print("Magnitude classification accuracy: {0:.2f}%".format(magnitude_accuracy))
```
#### Optional: make a barplot of the two results.
```
plt.bar("Binary", binary_accuracy)
plt.bar("Magnitude", magnitude_accuracy)
plt.ylabel("Accuracy in %")
plt.title("Accuracy of binary and magnitude classification")
plt.show()
```
Answering questions in statistically significant ways (1pt)
-------------------------------------------------------------
Does using the magnitude improve the results? Oftentimes, answering questions like this about the performance of
different signals and/or algorithms by simply looking at the output
numbers is not enough. When dealing with natural language or human
ratings, it’s safe to assume that there are infinitely many possible
instances that could be used for training and testing, of which the ones
we actually train and test on are a tiny sample. Thus, it is possible
that observed differences in the reported performance are really just
noise.
There exist statistical methods which can be used to check for
consistency (*statistical significance*) in the results, and one of the
simplest such tests is the **sign test**.
The sign test is based on the binomial distribution. Count all cases when System 1 is better than System 2, when System 2 is better than System 1, and when they are the same. Call these numbers $Plus$, $Minus$ and $Null$ respectively.
The sign test returns the probability that the null hypothesis is true.
This probability is called the $p$-value and it can be calculated for the two-sided sign test using the following formula (we multiply by two because this is a two-sided sign test and tests for the significance of differences in either direction):
$$2 \, \sum\limits_{i=0}^{k} \binom{N}{i} \, q^i \, (1-q)^{N-i}$$
where $$N = 2 \Big\lceil \frac{Null}{2}\Big\rceil + Plus + Minus$$ is the total
number of cases, and
$$k = \Big\lceil \frac{Null}{2}\Big\rceil + \min\{Plus,Minus\}$$ is the number of
cases with the less common sign.
In this experiment, $q = 0.5$. Here, we
treat ties by adding half a point to either side, rounding up to the
nearest integer if necessary.
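As an independent sanity check (the assignment still asks you to implement the test yourself), for $q = 0.5$ the formula above agrees with the two-sided binomial test in `scipy.stats.binomtest`. For example, with $N = 10$ informative cases and $k = 3$ cases with the less common sign:

```python
from scipy.stats import binomtest

# N = 10 informative cases, k = 3 cases with the less common sign
result = binomtest(k=3, n=10, p=0.5, alternative='two-sided')
print(result.pvalue)  # 0.34375, i.e. 2 * sum_{i=0}^{3} C(10, i) * 0.5^10
```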
#### (Q 2.1): Implement the sign test. Is the difference between the two symbolic systems significant? What is the p-value? (1 pt)
You should use the `comb` function from `scipy.special` and the `decimal` package for numerically stable addition in the final summation.
You can quickly verify the correctness of
your sign test code using a [free online
tool](https://www.graphpad.com/quickcalcs/binomial1.cfm).
```
from decimal import Decimal
from scipy.special import comb
def sign_test(results_1, results_2):
"""test for significance
results_1 is a list of classification results (+ for correct, - incorrect)
results_2 is a list of classification results (+ for correct, - incorrect)
"""
ties, plus, minus = 0, 0, 0
# "-" carries the error
for i in range(0, len(results_1)):
if results_1[i]==results_2[i]:
ties += 1
elif results_1[i]=="-":
plus += 1
elif results_2[i]=="-":
minus += 1
n = Decimal(2 * math.ceil(ties / 2.) + plus + minus)
k = Decimal(math.ceil(ties / 2.) + min(plus, minus))
summation = Decimal(0.0)
for i in range(0,int(k)+1):
summation += Decimal(comb(int(n), i, exact=True))
# use two-tailed version of test
summation *= 2
summation *= (Decimal(0.5)**Decimal(n))
print("the difference is",
"not significant" if summation >= 0.05 else "significant")
return summation
p_value = sign_test(binary_results, magnitude_results)
print("p_value =", p_value)
```
## Using the Sign test
**From now on, report all differences between systems using the
sign test.** You can think about a change that you apply to one system, as a
new system.
You should report statistical test
results in an appropriate form – if there are several different methods
(i.e., systems) to compare, tests can only be applied to pairs of them
at a time. This creates a triangular matrix of test results in the
general case. When reporting these pair-wise differences, you should
summarise trends to avoid redundancy.
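To illustrate the pair-wise comparisons, here is a self-contained sketch that applies the sign-test formula from the previous section to three hypothetical systems (the outcome lists below are made up purely for illustration; with your real systems you would pass in the result lists produced above):

```python
from itertools import combinations
from math import ceil, comb

def sign_test_p(results_1, results_2, q=0.5):
    """Two-sided sign test p-value, following the formula from the previous section."""
    plus = sum(a == '-' and b == '+' for a, b in zip(results_1, results_2))
    minus = sum(a == '+' and b == '-' for a, b in zip(results_1, results_2))
    null = len(results_1) - plus - minus
    n = 2 * ceil(null / 2) + plus + minus
    k = ceil(null / 2) + min(plus, minus)
    return min(1.0, 2 * sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k + 1)))

# Hypothetical per-review outcomes ('+' correct, '-' incorrect) for three systems
systems = {
    "binary":   ['+', '-', '-', '+', '-', '+', '+', '-'],
    "weighted": ['+', '+', '+', '+', '-', '+', '-', '-'],
    "nb":       ['-', '+', '+', '+', '+', '+', '-', '+'],
}

# Upper triangle of the pairwise comparison matrix
for name_a, name_b in combinations(systems, 2):
    p = sign_test_p(systems[name_a], systems[name_b])
    print("%8s vs %8s  p = %.3f" % (name_a, name_b, p))
```

With real result lists of 2000 reviews, the same loop produces the triangular matrix of p-values described above.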
Naive Bayes (8pt + 1pt bonus)
==========
Your second task is to program a simple Machine Learning approach that operates
on a simple Bag-of-Words (BoW) representation of the text data, as
described in Pang et al. (2002). In this approach, the only features we
will consider are the words in the text themselves, without bringing in
external sources of information. The BoW model is a popular way of
representing text information as vectors (or points in space), making it
easy to apply classical Machine Learning algorithms on NLP tasks.
However, the BoW representation is also very crude, since it discards
all information related to word order and grammatical structure in the
original text.
## Writing your own classifier
Write your own code to implement the Naive Bayes (NB) classifier. As
a reminder, the Naive Bayes classifier works according to the following
equation:
$$\hat{c} = \operatorname*{arg\,max}_{c \in C} P(c|\bar{f}) = \operatorname*{arg\,max}_{c \in C} P(c)\prod^n_{i=1} P(f_i|c)$$
where $C = \{ \text{POS}, \text{NEG} \}$ is the set of possible classes,
$\hat{c} \in C$ is the most probable class, and $\bar{f}$ is the feature
vector. Remember that we use the log of these probabilities when making
a prediction:
$$\hat{c} = \operatorname*{arg\,max}_{c \in C} \Big\{\log P(c) + \sum^n_{i=1} \log P(f_i|c)\Big\}$$
You can find more details about Naive Bayes in [Jurafsky &
Martin](https://web.stanford.edu/~jurafsky/slp3/). You can also look at
this helpful
[pseudo-code](https://nlp.stanford.edu/IR-book/html/htmledition/naive-bayes-text-classification-1.html).
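The log formulation matters numerically: multiplying hundreds of small per-word probabilities underflows 64-bit floats to zero, while the sum of logs stays representable. A small numeric sketch:

```python
import math

probs = [1e-5] * 200  # e.g. 200 word likelihoods of 1e-5 each

# Direct product underflows: 1e-1000 is far below the smallest double (~5e-324)
product = 1.0
for p in probs:
    product *= p
print(product)  # 0.0

# The log-space sum is perfectly representable
log_score = sum(math.log(p) for p in probs)
print(log_score)  # 200 * log(1e-5) ≈ -2302.585
```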
*Note: this section and the next aim to put you in a position to replicate
Pang et al.'s Naive Bayes results. However, the numerical results
will differ from theirs, as they used different data.*
**You must write the Naive Bayes training and prediction code from
scratch.** You will not be given credit for using off-the-shelf Machine
Learning libraries.
The data contains the text of the reviews, where each document consists
of the sentences in the review, the sentiment of the review and an index
(cv) that you will later use for cross-validation. You will find the
text has already been tokenised and POS-tagged for you. Your algorithm
should read in the text, **lowercase it**, and store the words and their
frequencies in an appropriate data structure that allows for easy
computation of the probabilities used in the Naive Bayes algorithm, and
then make predictions for new instances.
#### (Q3.1) Train your classifier on (positive and negative) reviews with cv-value 000-899, and test it on the remaining reviews cv900–cv999. Report results using simple classification accuracy as your evaluation metric. Your features are the word vocabulary. The value of a feature is the count of that feature (word) in the document. (2pts)
The following code block contains our BagOfWords class
```
#
# This class represents our bag of words. It stores the words in a dictionary in the following format:
# BOW = {
# 'cat': {
#     'POS': 3,          # 3 positive occurrences
#     'NEG': 1,          # 1 negative occurrence
#     'P_POS': 0.001     # probability of this word occurring in a positive review
#     'P_NEG': 0.00033   # probability of this word occurring in a negative review
# },
# 'dog': {
# etc..
# }
#
class BagOfWords:
def __init__(self, positive_prior):
self.positive_prior = positive_prior
self.total_positive_words = 0
self.total_negative_words = 0
self.bag_of_words = {}
#
# Adds a word to the BOW; if it is already in the BOW, it will increment the word's occurrence count.
#
def add_word(self, word, sentiment):
# Keep a count of total number of positive and negative words.
if sentiment == 'POS':
self.count_positive_word()
else:
self.count_negative_word()
# If the word is not yet in our bag of words:
# Initialize the word with 0 POS and 0 NEG occurrences.
if not word in self.bag_of_words.keys():
self.bag_of_words[word] = {}
self.bag_of_words[word]['POS'] = 0
self.bag_of_words[word]['NEG'] = 0
self.bag_of_words[word][sentiment] += 1
#
# Adds the P_POS and P_NEG to the BOW.
#
def add_probabilities(self, word, p_pos, p_neg):
if not word in self.bag_of_words.keys():
self.bag_of_words[word] = {}
self.bag_of_words[word]['POS'] = 0
self.bag_of_words[word]['NEG'] = 0
self.bag_of_words[word]['P_POS'] = p_pos
self.bag_of_words[word]['P_NEG'] = p_neg
#
# Increments the number of positive words it found by 1
#
def count_positive_word(self):
self.total_positive_words += 1
#
# Increments the number of negative words it found by 1
#
def count_negative_word(self):
self.total_negative_words += 1
#
# Returns the number of unique words in the BOW.
#
def get_n_unique_words(self):
return len(self.bag_of_words)
#
# Returns the words in the bag of words.
#
def get_words(self):
return self.bag_of_words
#
# Returns the number of occurrences of the word with the given sentiment (POS or NEG)
#
def count_occurences(self, word, sentiment):
try:
return self.bag_of_words[word][sentiment]
except KeyError:
return 0
#
# Returns the computed P_POS or P_NEG for the given word; if it is a new word, 0 is returned.
#
def get_probability(self, word, sentiment):
sentiment = "P_{}".format(sentiment)
try:
return self.bag_of_words[word][sentiment]
except KeyError:
return 0
```
The following code block contains our BayesClassifier class.
```
class BayesClassifier:
#
# use_smoothing: If True uses laplace smoothing with constant k=1
# use_stemming: If True uses stemming.
# n_grams: The number of features to use, e.g. if 2: both 1-grams and 2-grams are used as features.
# if 3: both 1-grams, 2-grams and 3-grams are used as features.
#
def __init__(self, use_smoothing=False, use_stemming=False, n_grams=1):
self.use_smoothing = use_smoothing
self.use_stemming = use_stemming
self.n_grams = n_grams
self.stemmer = PorterStemmer()
#
# Given a list of train indices and a list of test indices trains the classifier
# and returns the accuracy (number) and the results (list of + and -).
    # For question 3.2 we want to be able to indicate whether to use only the POS, only the NEG, or BOTH reviews of a CV index.
    # These are the lists train_indices_sentiment and test_indices_sentiment.
# train_and_classify([1, 2, 3], [4], ["BOTH", "POS", "POS"], ["NEG"])
# will train on [1-NEG, 1-POS, 2-POS, 3-POS] and test on [4-NEG]
#
def train_and_classify(self, train_indices, test_indices, train_indices_sentiment=[], test_indices_sentiment=[]):
bag_of_words = self.train(train_indices, train_indices_sentiment)
total = 0
correct = 0
results = []
for review in self.get_relevant_reviews(test_indices, test_indices_sentiment):
prediction = self.classify(bag_of_words, review)
true_label = review['sentiment']
if prediction == true_label:
correct += 1
results.append('+')
else:
results.append('-')
total += 1
accuracy = correct / total * 100
return accuracy, results
#
# Classifies a single review, returns POS or NEG.
#
def classify(self, bag_of_words, review):
score_positive = math.log(bag_of_words.positive_prior)
score_negative = math.log(1 - bag_of_words.positive_prior)
for word in self.get_words_of_review(review):
p_pos = bag_of_words.get_probability(word, 'POS')
p_neg = bag_of_words.get_probability(word, 'NEG')
if p_pos > 0:
score_positive += math.log(p_pos)
if p_neg > 0:
score_negative += math.log(p_neg)
# This word was not in the training set so the probability is 0!
if self.use_smoothing and (p_pos == 0 or p_neg == 0):
p_pos = 1 / (bag_of_words.total_positive_words + bag_of_words.get_n_unique_words())
p_neg = 1 / (bag_of_words.total_negative_words + bag_of_words.get_n_unique_words())
score_positive += math.log(p_pos)
score_negative += math.log(p_neg)
if (score_positive > score_negative):
return "POS"
else:
return "NEG"
#
    # Trains the classifier; creates a BOW with occurrences and probabilities.
#
def train(self, indices, indices_sentiment):
bag_of_words = self.create_bag_of_words(indices, indices_sentiment)
for word in bag_of_words.get_words():
positive_occurences = bag_of_words.count_occurences(word, "POS")
negative_occurences = bag_of_words.count_occurences(word, "NEG")
if self.use_smoothing:
probability_pos = (positive_occurences + 1) / (bag_of_words.total_positive_words + bag_of_words.get_n_unique_words())
probability_neg = (negative_occurences + 1) / (bag_of_words.total_negative_words + bag_of_words.get_n_unique_words())
else:
if bag_of_words.total_positive_words == 0:
probability_pos = 0
else:
probability_pos = positive_occurences / bag_of_words.total_positive_words
if bag_of_words.total_negative_words == 0:
probability_neg = 0
else:
probability_neg = negative_occurences / bag_of_words.total_negative_words
bag_of_words.add_probabilities(word, probability_pos, probability_neg)
return bag_of_words
#
    # Returns a bag-of-words object created from the given indices.
#
def create_bag_of_words(self, indices, indices_sentiment):
bag_of_words = BagOfWords(self.get_positive_prior(indices, indices_sentiment))
relevant_reviews = self.get_relevant_reviews(indices, indices_sentiment)
for review in relevant_reviews:
for word in self.get_words_of_review(review):
bag_of_words.add_word(word, review['sentiment'])
return bag_of_words
#
# Given the train indices, gets the positive prior. (positive reviews / total reviews)
#
def get_positive_prior(self, indices, indices_sentiment):
n_positive = 0
n_total = 0
for review in reviews:
if not review['cv'] in indices:
continue
if len(indices_sentiment) > 0:
if indices_sentiment[indices.index(review['cv'])] != "BOTH":
if indices_sentiment[indices.index(review['cv'])] != review['sentiment']:
continue
if review['sentiment'] == 'POS':
n_positive += 1
n_total += 1
return n_positive / n_total
#
# Returns a list of the relevant reviews.
# - Only reviews with the given indices
    # - If the corresponding entry in indices_sentiment is not "BOTH", only returns reviews of that sentiment.
#
def get_relevant_reviews(self, indices, indices_sentiment):
relevant_reviews = []
for review in reviews:
if not review['cv'] in indices:
continue
if len(indices_sentiment) > 0:
if indices_sentiment[indices.index(review['cv'])] != "BOTH":
if indices_sentiment[indices.index(review['cv'])] != review['sentiment']:
continue
relevant_reviews.append(review)
return relevant_reviews
def get_words_of_review(self, review):
words = []
for line in review['content']:
for word_pair in line:
word = word_pair[0].lower()
if self.use_stemming:
word = self.stemmer.stem(word)
words.append(word)
ngrams = []
for i in range(0, self.n_grams):
for j in range(0, len(words) - i):
ngram_word = ""
for k in range(0, i+1):
ngram_word += "{}\\".format(words[j+k])
ngrams.append(ngram_word)
return ngrams
bayes = BayesClassifier(False, False, 1)
train_indices = list(range(0,900))
test_indices = list(range(900, 1000))
simple_bayes_accuracy, simple_bayes_results = bayes.train_and_classify(train_indices, test_indices)
print("Simple (no smoothing) bayes accuracy {0:.2f}%".format(simple_bayes_accuracy))
```
#### (Bonus Questions) Would you consider accuracy to also be a good way to evaluate your classifier in a situation where 90% of your data instances are of positive movie reviews? (1pt)
You can simulate this scenario by keeping the positive reviews
data unchanged, but only using negative reviews cv000–cv089 for
training, and cv900–cv909 for testing. Calculate the classification
accuracy, and explain what changed.
```
# Question is very vague, but here is what we think the data should look like:
# TRAIN
# 0 - 899 : positive reviews
# 0 - 89 : negative reviews
#
# TEST
# 900 - 999 : positive reviews
# 900 - 909 : negative reviews
#
bayes = BayesClassifier(False, False, 1)
train_indices = list(range(0,900))
train_sentiment = []
# For review 0 - 89 say that we want BOTH the positive and negative reviews.
# For review 90 - 899 say that we want only the POS reviews.
for i in range(0, len(train_indices)):
if i < 90:
train_sentiment.append("BOTH")
else:
train_sentiment.append("POS")
test_indices = list(range(900, 1000))
test_sentiment = []
# For review 900 - 909 say that we want BOTH the positive and negative reviews.
# For review 910 - 999 say that we want only the POS reviews.
for i in range(0, len(test_indices)):
if i < 10:
test_sentiment.append("BOTH")
else:
test_sentiment.append("POS")
simple_negative_bayes_accuracy, simple_negative_bayes_results = bayes.train_and_classify(train_indices, test_indices, train_sentiment, test_sentiment)
print("Simple (no smoothing) bayes accuracy trained on 90% positive reviews {0:.2f}%".format(simple_negative_bayes_accuracy))
```
As you can see, the classifier now behaves badly, with only 9.09% accuracy: it predicts everything as a negative review. This is because it sees very few negative reviews in the training data, so each negative term counts roughly ten times more than each positive term. As a result, many negative words end up with a high probability, which explains why everything gets predicted as negative.
## Smoothing
The presence of words in the test dataset that
haven’t been seen during training can cause probabilities in the Naive
Bayes classifier to be $0$, thus making that particular test instance
undecidable. The standard way to mitigate this effect (as well as to
give more clout to rare words) is to use smoothing, in which the
probability fraction
$$\frac{\text{count}(w_i, c)}{\sum\limits_{w\in V} \text{count}(w, c)}$$ for a word
$w_i$ becomes
$$\frac{\text{count}(w_i, c) + \text{smoothing}(w_i)}{\sum\limits_{w\in V} \text{count}(w, c) + \sum\limits_{w \in V} \text{smoothing}(w)}$$
#### (Q3.2) Implement Laplace feature smoothing (1pt)
($smoothing(\cdot) = \kappa$, constant for all words) in your Naive
Bayes classifier’s code, and report the impact on performance.
Use $\kappa = 1$.
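As a quick sanity check on the formula, here is a tiny standalone illustration of add-one smoothing. The counts and vocabulary below are invented for illustration, not taken from the movie-review data:

```python
# Toy illustration of Laplace (add-kappa) smoothing with kappa = 1.
# Counts and vocabulary are made up for illustration only.
pos_counts = {"great": 3, "fun": 1}   # word counts in positive training docs
total_pos = sum(pos_counts.values())  # 4 positive word tokens in total
vocab_size = 3                        # e.g. {"great", "fun", "awful"}

def p_word_given_pos(word):
    # (count + 1) / (total + |V|), matching the smoothed fraction above
    return (pos_counts.get(word, 0) + 1) / (total_pos + vocab_size)

print(p_word_given_pos("great"))  # (3+1)/(4+3) = 4/7
print(p_word_given_pos("awful"))  # (0+1)/(4+3) = 1/7: unseen, but non-zero
```

Note how the unseen word "awful" now gets a small but non-zero probability instead of making the review undecidable.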
```
bayes = BayesClassifier(True, False, 1)
train_indices = list(range(0,900))
test_indices = list(range(900, 1000))
smoothing_bayes_accuracy, smoothing_bayes_results = bayes.train_and_classify(train_indices, test_indices)
print("Smoothed bayes accuracy {0:.2f}%".format(smoothing_bayes_accuracy))
```
#### (Q3.3) Is the difference between non-smoothed (Q3.1) and smoothed (Q3.2) statistically significant? (0.5pt)
```
p_value = sign_test(simple_bayes_results, smoothing_bayes_results)
print("p_value =", p_value)
```
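The `sign_test` helper is defined earlier in the notebook. For reference, here is a minimal sketch of what a two-sided sign test over the `+`/`-` result lists might look like; this is an assumption about its behavior (pairs where both systems agree are treated as ties and ignored), not the course's actual implementation:

```python
# Hypothetical sketch of a two-sided sign test over '+'/'-' result lists.
from math import comb

def sign_test_sketch(results_a, results_b):
    plus = sum(1 for a, b in zip(results_a, results_b) if a == '+' and b == '-')
    minus = sum(1 for a, b in zip(results_a, results_b) if a == '-' and b == '+')
    n = plus + minus
    if n == 0:
        return 1.0  # no disagreements at all
    k = min(plus, minus)
    # two-sided binomial tail probability with p = 0.5
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, p)

print(sign_test_sketch(['+'] * 10, ['-'] * 10))  # 0.001953125
```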
## Cross-validation
A serious danger in using Machine Learning on small datasets, with many
iterations of slightly different versions of the algorithms, is that we
end up with Type III errors, also called the “testing hypotheses
suggested by the data” errors. This type of error occurs when we make
repeated improvements to our classifiers by playing with features and
their processing, but we don’t get a fresh, never-before seen test
dataset every time. Thus, we risk developing a classifier that’s better
and better on our data, but worse and worse at generalizing to new,
never-before seen data.
A simple method to guard against Type III errors is to use
cross-validation. In N-fold cross-validation, we divide the data into N
distinct chunks / folds. Then, we repeat the experiment N times, each
time holding out one of the chunks for testing, training our classifier
on the remaining N - 1 data chunks, and reporting performance on the
held-out chunk. We can use different strategies for dividing the data:
- Consecutive splitting:
- cv000–cv099 = Split 1
- cv100–cv199 = Split 2
- etc.
- Round-robin splitting (mod 10):
- cv000, cv010, cv020, … = Split 1
- cv001, cv011, cv021, … = Split 2
- etc.
- Random sampling/splitting
- Not used here (but you may choose to split this way in a non-educational situation)
#### (Q3.4) Write the code to implement 10-fold cross-validation using round-robin splitting for your Naive Bayes classifier from Q3.2 and compute the 10 accuracies. Report the final performance, which is the average of the performances per fold. If all splits perform equally well, this is a good sign. (1pt)
```
#
# Returns test_indices, and train_indices according to the round robin split algorithm.
#
def round_robin_split_indices(n_split):
test_indices = []
train_indices = []
for i in range(0, 1000):
if i % 10 == n_split:
test_indices.append(i)
else:
train_indices.append(i)
return test_indices, train_indices
#
# Performs the kfold validation.
#
def do_kfold(use_smoothing=False, use_stemming=False, n_grams=1):
sum_accuracy = 0
accuracies = []
total_variance = 0
all_results = []
bayes = BayesClassifier(use_smoothing, use_stemming, n_grams)
for i in range(0, 10):
print("Progress {0:.0f}%".format(i / 10 * 100))
test_indices, train_indices = round_robin_split_indices(i)
accuracy, result = bayes.train_and_classify(train_indices, test_indices)
sum_accuracy += accuracy
accuracies.append(accuracy)
for r in result:
all_results.append(r)
avg_accuracy = sum_accuracy / 10
for i in range(0, 10):
sqred_error = (accuracies[i] - avg_accuracy)**2
total_variance += sqred_error
variance_accuracy = total_variance / 10
return avg_accuracy, variance_accuracy, all_results
smoothing_avg_accuracy, smoothing_variance, smoothing_results_kfold = do_kfold(True, False, 1)
print("10-fold validation average accuracy for 10 folds: {0:.2f}%".format(smoothing_avg_accuracy))
```
#### (Q3.5) Write code to calculate and report variance, in addition to the final performance. (1pt)
**Please report all future results using 10-fold cross-validation now
(unless told to use the held-out test set).**
```
print("10-fold validation variance: {0:.2f}".format(smoothing_variance))
```
## Features, overfitting, and the curse of dimensionality
In the Bag-of-Words model, ideally we would like each distinct word in
the text to be mapped to its own dimension in the output vector
representation. However, real world text is messy, and we need to decide
on what we consider to be a word. For example, is “`word`" different
from “`Word`", from “`word`”, or from “`words`"? Too strict a
definition, and the number of features explodes, while our algorithm
fails to learn anything generalisable. Too lax, and we risk destroying
our learning signal. In the following section, you will learn about
confronting the feature sparsity and the overfitting problems as they
occur in NLP classification tasks.
#### (Q3.6): A touch of linguistics (1pt)
Taking a step further, you can use stemming to
hash different inflections of a word to the same feature in the BoW
vector space. How does the performance of your classifier change when
you use stemming on your training and test datasets? Please use the [Porter stemming
algorithm](http://www.nltk.org/howto/stem.html) from NLTK.
Also, you should do cross validation and concatenate the predictions from all folds to compute the significance.
```
stemming_avg_accuracy, stemming_variance, stemming_results_kfold = do_kfold(True, True, 1)
print("10-fold stemming average accuracy {0:.2f}%".format(stemming_avg_accuracy))
```
#### (Q3.7): Is the difference between NB with smoothing and NB with smoothing+stemming significant? (0.5pt)
```
p_value = sign_test(stemming_results_kfold, smoothing_results_kfold)
print("p_value =", p_value)
```
#### Q3.8: What happens to the number of features (i.e., the size of the vocabulary) when using stemming as opposed to (Q3.2)? (0.5pt)
Give actual numbers. You can use the held-out training set to determine these.
```
bayes = BayesClassifier(False, False, 1)
bayes_stemming = BayesClassifier(False, True, 1)
train_indices = list(range(0, 900))
bow = bayes.train(train_indices, [])
bow_stemming = bayes_stemming.train(train_indices, [])
print("Number of words without stemming: {}".format(bow.get_n_unique_words()))
print("Number of words with stemming: {}".format(bow_stemming.get_n_unique_words()))
```
#### Q3.9: Putting some word order back in (0.5+0.5pt=1pt)
A simple way of retaining some of the word
order information when using bag-of-words representations is to add **n-grams** features.
Retrain your classifier from (Q3.4) using **unigrams+bigrams** and
**unigrams+bigrams+trigrams** as features, and report accuracy and statistical significances (in comparison to the experiment at (Q3.4) for all 10 folds, and between the new systems).
```
bigram_avg_accuracy, bigram_variance, bigram_results_kfold = do_kfold(True, False, 2)
trigram_avg_accuracy, trigram_variance, trigram_results_kfold = do_kfold(True, False, 3)
print("Unigram average accuracy {0:.2f}%".format(smoothing_avg_accuracy))
print("Bigram average accuracy {0:.2f}%".format(bigram_avg_accuracy))
print("Trigram average accuracy {0:.2f}%".format(trigram_avg_accuracy))
print("Improvement from unigrams to unigrams+bigrams")
p_value = sign_test(bigram_results_kfold, smoothing_results_kfold)
print("p_value =", p_value)
print("\n\nImprovement from unigrams+bigrams to unigrams+bigrams+trigrams")
p_value = sign_test(bigram_results_kfold, trigram_results_kfold)
print("p_value =", p_value)
```
#### Q3.10: How many features does the BoW model have to take into account now? (0.5pt)
How does this number compare (e.g., linear, square, cubed, exponential) to the number of features at (Q3.8)?
Use the held-out training set once again for this.
```
bayes_bigrams = BayesClassifier(False, False, 2)
bayes_trigrams = BayesClassifier(False, False, 3)
bayes_fourgrams = BayesClassifier(False, False, 4)
train_indices = list(range(0, 900))
bow_bigrams = bayes_bigrams.train(train_indices, [])
bow_trigrams = bayes_trigrams.train(train_indices, [])
print("Number of features with unigrams: {}".format(bow.get_n_unique_words()))
print("Number of features with bigrams: {}".format(bow_bigrams.get_n_unique_words()))
print("Number of features with trigrams: {}".format(bow_trigrams.get_n_unique_words()))
```
As you can see, the number of features grows from roughly 50,000 to 500,000 to 1,500,000: the feature count explodes as n increases.
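A rough standalone illustration of how the number of distinct features grows when all k-grams for k = 1..n are used, as the classifier above does (the token list below is made up; real review data behaves similarly in spirit):

```python
# Count distinct features when using all k-grams for k = 1..max_n.
def ngram_features(tokens, max_n):
    feats = set()
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            feats.add(tuple(tokens[i:i + n]))
    return feats

tokens = "the movie was not bad the acting was great".split()
for max_n in (1, 2, 3):
    print(max_n, len(ngram_features(tokens, max_n)))
```

Higher-order n-grams repeat far less often than single words, so almost every new n-gram is a new feature.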
# Support Vector Machines (4pts)
Though simple to understand, implement, and debug, one
major problem with the Naive Bayes classifier is that its performance
deteriorates (becomes skewed) when it is being used with features which
are not independent (i.e., are correlated). Another popular classifier
that doesn’t scale as well to big data, and is not as simple to debug as
Naive Bayes, but that doesn’t assume feature independence is the Support
Vector Machine (SVM) classifier.
You can find more details about SVMs in Chapter 7 of Bishop: Pattern Recognition and Machine Learning.
Other sources for learning SVM:
* http://web.mit.edu/zoya/www/SVM.pdf
* http://www.cs.columbia.edu/~kathy/cs4701/documents/jason_svm_tutorial.pdf
* https://pythonprogramming.net/support-vector-machine-intro-machine-learning-tutorial/
Use the scikit-learn implementation of
[SVM](http://scikit-learn.org/stable/modules/svm.html) with the default parameters.
#### (Q4.1): Train SVM and compare to Naive Bayes (2pt)
Train an SVM classifier (sklearn.svm.LinearSVC) using your features. Compare the
classification performance of the SVM classifier to that of the Naive
Bayes classifier from (Q3.4) and report the numbers.
Do cross validation and concatenate the predictions from all folds to compute the significance. Are the results significantly better?
```
from sklearn import preprocessing, model_selection, neighbors, svm
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC as SVC
from nltk.tokenize import TreebankWordTokenizer
#
## Returns test_indices and train_indices according to the round-robin split algorithm,
## adjusted because the features object has 2000 rows (1000 positive + 1000 negative), not 1000.
#
def round_robin_split_indices_features(n_split):
test_indices = []
train_indices = []
test_indices_features = []
for i in range(0, 1000):
if i % 10 == n_split:
test_indices.append(i)
else:
train_indices.append(i)
test_indices_features = test_indices + [x + 1000 for x in test_indices]
train_indices_features = train_indices + [x + 1000 for x in train_indices]
return train_indices_features, train_indices, test_indices_features, test_indices
# example: split 5 is testsplit at the moment
#train_features, train_ind, test_features, test_ind = round_robin_split_indices(5)
#
## vectorizes original text documents
#
def get_vectorized_corpus(reviews, indices, tags, without_closed):
corpus = []
for review in reviews:
if (review['cv'] not in indices):
continue
words = []
word_tag = []
for line in review['content']:
for word_pair in line:
word = word_pair[0].lower()
if(tags != False):
tag = word_pair[1]
if(without_closed != False):
if(tag.startswith(('V', 'N', 'RB', 'J'))):
word_tag = word + "_" + tag
else:
continue
else:
word_tag = word + "_" + tag
else:
word_tag = word
words.append(word_tag)
corpus.append(' '.join(map(str, words)))
count_vect = CountVectorizer(tokenizer = TreebankWordTokenizer().tokenize)
vectorized_features = count_vect.fit_transform(corpus).toarray()
return vectorized_features
#
## get original sentiment labels
#
def get_labels(reviews, indices):
labels = []
for review in reviews:
if (review['cv'] not in indices):
continue
else:
labels.append(review["sentiment"])
return labels
#
## for obtained indices, one svm is fitted and evaluated according to training and test documents
#
def train_and_classify_one_svm(features, train_indices_features, train_indices, test_indices_features, test_indices):
# training indices
train_features = features[train_indices_features]
train_labels = get_labels(reviews, train_indices)
# test indices
test_features = features[test_indices_features]
test_labels = get_labels(reviews, test_indices)
# linear SVM on training feature vector and labels
classifier = SVC()
classifier.fit(train_features, train_labels)
# label prediction for test_feature vector
label_prediction = classifier.predict(test_features)
# tracking results
total = 0
correct = 0
results = []
for i in range(0, len(test_labels)):
if label_prediction[i] == test_labels[i]:
correct += 1
results.append('+')
else:
results.append('-')
total += 1
accuracy = correct / total * 100
return accuracy, results
#
## Similar to the NB k-fold, but for the SVM (the only difference is the slightly different indices, due to the structure of the features object).
#
def do_kfold_svm(tags=False, without_closed=False):
# create features depending on type of tags [notags, alltags, withoutclosedform]
features = get_vectorized_corpus(reviews, list(range(0,1000)), tags, without_closed)
sum_accuracy = 0
accuracies = []
total_variance = 0
all_results = []
for i in range(0, 10):
print("Progress {0:.0f}%".format(i / 10 * 100))
train_features, train_ind, test_features, test_ind = round_robin_split_indices_features(i)
accuracy, results = train_and_classify_one_svm(features, train_features, train_ind, test_features, test_ind)
sum_accuracy += accuracy
accuracies.append(accuracy)
for r in results:
all_results.append(r)
avg_accuracy = sum_accuracy / 10
for i in range(0, 10):
sqred_error = (accuracies[i] - avg_accuracy)**2
total_variance += sqred_error
variance_accuracy = total_variance / 10
return avg_accuracy, variance_accuracy, all_results
# calculate svm kfold
svm_avg_accuracy, svm_variance, svm_results_kfold = do_kfold_svm()
print("10-fold average accuracy {0:.2f}%".format(svm_avg_accuracy))
print("10-fold accuracy variance {0:.2f}%".format(svm_variance))
# significant difference to Q3.4?
p_value = sign_test(smoothing_results_kfold, svm_results_kfold)
print("p_value =", p_value)
```
### More linguistics
Now add in part-of-speech features. You will find the
movie review dataset has already been POS-tagged for you. Try to
replicate what Pang et al. were doing:
#### (Q4.2) Replace your features with word+POS features, and report performance with the SVM. Does this help? Do cross validation and concatenate the predictions from all folds to compute the significance. Are the results significant? Why? (1pt)
```
# What changes when POS tags are taken into consideration as well?
svm_avg_accuracy_with_tags, svm_variance_with_tags, svm_results_kfold_with_tags = do_kfold_svm(tags=True)
print("10-fold average accuracy {0:.2f}%".format(svm_avg_accuracy_with_tags))
print("10-fold accuracy variance {0:.2f}%".format(svm_variance_with_tags))
# significant difference to Q3.4?
p_value = sign_test(svm_results_kfold, svm_results_kfold_with_tags)
print("p_value =", p_value)
```
#### (Q4.3) Discard all closed-class words from your data (keep only nouns (N*), verbs (V*), adjectives (J*) and adverbs (RB*)), and report performance. Does this help? Do cross validation and concatenate the predictions from all folds to compute the significance. Are the results significantly better than when we don't discard the closed-class words? Why? (1pt)
```
svm_avg_accuracy_without_closed, svm_variance_without_closed, svm_results_kfold_without_closed = do_kfold_svm(tags=True, without_closed=True)
print("10-fold average accuracy {0:.2f}%".format(svm_avg_accuracy_without_closed))
print("10-fold accuracy variance {0:.2f}%".format(svm_variance_without_closed))
# significant difference to all POS tags allowed?
p_value = sign_test(svm_results_kfold_with_tags, svm_results_kfold_without_closed)
print("p_value =", p_value)
```
# (Q5) Discussion (max. 500 words). (5pts)
> Based on your experiments, what are the effective features and techniques in sentiment analysis? What information do different features encode?
> Why is this important? What are the limitations of these features and techniques?
*Write your answer here in max. 500 words.*
Discussion:
effective features:
- Smoothing: rather than excluding words not seen in the training data, it gives them a non-zero probability, which leads to significantly better accuracy than Naive Bayes without smoothing.
- Stemming:
# Submission
```
# Write your names and student numbers here:
# Dirk Hoekstra #12283878
# Philipp Lintl #12152498
```
**That's it!**
- Check if you answered all questions fully and correctly.
- Download your completed notebook using `File -> Download .ipynb`
- Also save your notebook as a Github Gist. Get it by choosing `File -> Save as Github Gist`. Make sure that the gist has a secret link (not public).
- Check if your answers are all included in the file you submit (e.g. check the Github Gist URL)
- Submit your .ipynb file and link to the Github Gist via *Canvas*. One submission per group.
```
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras.optimizers import RMSprop
from keras.utils.data_utils import get_file
import numpy as np
import random
import sys
path = get_file('nietzsche.txt', origin='https://s3.amazonaws.com/text-datasets/nietzsche.txt')
text = open(path).read().lower()
print('corpus length:', len(text))
chars = sorted(list(set(text)))
print('total chars:', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
maxlen = 40 # split the text into chunks of this length
step = 3    # stride between successive chunks (they overlap)
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
    sentences.append(text[i: i + maxlen]) # input string of length 40
    next_chars.append(text[i + maxlen])   # the next character we want to predict
print('num sequences:', len(sentences))
len(sentences[0]), sentences[0]
next_chars[0]
print('Vectorization...')
# inputs are strings of length maxlen, so the maxlen dimension is needed
X = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)
# the output is a single character, so no maxlen dimension is needed
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
    for t, char in enumerate(sentence):
        X[i, t, char_indices[char]] = 1 # one-hot vector: True only at the target character
    y[i, char_indices[next_chars[i]]] = 1
print(X.shape, y.shape)
print(X[0][0])
print(y[0])
print('Build model...')
model = Sequential()
# The LSTM input shape is (batch size, sequence length, input dim); batch size is omitted.
# Changing maxlen does not change the parameter count (weights are shared across time steps).
# 128 is the dimension of the internal state and of the output (they are the same).
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
# A fully-connected layer on top of the 128-dim output produces the character distribution.
model.add(Dense(len(chars))) # output
model.add(Activation('softmax'))
model.summary()
optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
# This means 200285 sequences of length 40, where each time step is a 57-dim vector.
print(X.shape, y.shape)
def sample(preds, temperature=1.0):
    preds = np.asarray(preds).astype('float64')
    # temperature reshapes the distribution: <1 sharpens it, >1 flattens it
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    # return the index drawn by the multinomial sample (a one-hot vector)
    return np.argmax(probas)
for iteration in range(1, 60):
    print()
    print('-' * 50)
    print('Iteration', iteration)
    # train on the sequence data for one epoch
    model.fit(X, y, batch_size=128, epochs=1)
    # generate the continuation of a random 40-character seed from the training text
    start_index = random.randint(0, len(text) - maxlen - 1)
    # sampling temperature (the original Keras example loops over several values)
    diversity = 0.5
    generated = ''
    sentence = text[start_index: start_index + maxlen]
    generated += sentence
    print('----- Generating with seed: "' + sentence + '"')
    sys.stdout.write(generated)
    # generate 400 characters
    # (the LSTM layer itself is stateless between predict() calls here;
    # context is carried by the sliding 40-character window)
    for i in range(400):
        x = np.zeros((1, maxlen, len(chars)))
        # one-hot encode sentence; it is shifted forward each step as
        # newly generated characters are appended
        for t, char in enumerate(sentence):
            x[0, t, char_indices[char]] = 1.0
        # output distribution over the 57 characters,
        # given an input of shape (sequence length=40, dim=57)
        preds = model.predict(x, verbose=0)[0]
        # sample from the distribution instead of taking the argmax
        next_index = sample(preds, diversity)
        next_char = indices_char[next_index]
        generated += next_char
        # keep the input at length 40: append the new character and drop the first;
        # this sentence becomes the next input
        sentence = sentence[1:] + next_char
        sys.stdout.write(next_char)
        sys.stdout.flush()
```
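To see what the `temperature` parameter in `sample` actually does, here is a small self-contained demonstration on a made-up three-character distribution:

```python
import numpy as np

# Reweight a probability distribution by temperature, as sample() does
# before drawing: t < 1 sharpens the distribution, t > 1 flattens it.
def reweight(preds, temperature):
    logp = np.log(np.asarray(preds, dtype='float64')) / temperature
    expp = np.exp(logp)
    return expp / expp.sum()

preds = [0.6, 0.3, 0.1]
print(reweight(preds, 1.0))  # unchanged
print(reweight(preds, 0.2))  # nearly all mass on the most likely character
print(reweight(preds, 5.0))  # much closer to uniform
```

Low temperatures make the generated text more conservative and repetitive; high temperatures make it more surprising but less coherent.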
- GF Securities, "Deep Learning: Intraday Trading Strategies for Stock Index Futures"
- "宽客人生" (A Quant's Life)
- "Active Portfolio Management"
-------------------------------------------------------
Quant research reports are mostly produced to satisfy clients and are of little use for real trading.
A strategy's parameters relative to the market change constantly.
Only a strategy plus the corresponding parameter adjustment is complete.
The strategy itself also requires strong discretionary adjustment. ---------- Zhou Jie
A static strategy is not a master key; it tells you little about the details, and the money is made entirely in the details.
No immutable trading system exists that can make you money forever.
-------------------------------------------------------
**The quant framework:**
Pricing: BSM option pricing, fundamentals-based equity pricing, ...
Factors: rooted in CAPM theory; information is extracted by linearly regressing signal primitives
Products: most commonly FOF and MOM
Arbitrage: a family of strategies built on cointegration
Fixed income: trading the currency, FX, and bond markets based on yield-derived concepts
High frequency: built on validating market microstructure
Deep learning is not a standalone strategy system but a research method.
----------------------------------------------------------------------
---> Theory first
---> Reject raw information primitives
---> Continuity & convergence
---> Market incompleteness
---> Predictive vs. merely representational
---> How to falsify a strategy
---> Strategy traps
* Quantitative trading is not just numeric output from a computer's perspective; in fact, it is rooted in theory
* Discovering and using information forms the concrete framework of trading as a whole
* An information primitive is a piece of market information that cannot be subdivided any further
* Information primitives are continuous and convergent. When mining factors, continuity means a factor is continuously effective (or continuously ineffective) over a period of time; this continuity need not be linear
* Convergence means that scaling a factor up or down in a single direction must reach an extremum somewhere. The effect need not be monotone, but an extremum must exist: increase the factor a little and the effect improves, increase it more and it improves again, but past some value the effect disappears, i.e., an extremum appears and the factor converges. You must never see the factor growing without bound while the return grows without bound; such a factor is certainly wrong
* All effective information primitives are continuous and convergent
* The market is incomplete
* Truly effective factors are secrets; they must not be something the whole market already knows. An amusing example: predicting the day's return from the day's first candlestick (this factor is certainly ineffective)
* When the weather gets hot, more girls on the street wear skirts, but you cannot predict tomorrow's temperature from the number of girls in skirts: temperature and the number of skirt-wearers are related only representationally, not predictively. Distinguish the two strictly
* Monotonicity of a function: when the argument of f(x) increases (or decreases) over an interval of its domain and the value f(x) also increases (or decreases), the function is monotone on that interval
* Continuity of a function: y = f(x) is continuous when a small change in x induces only a small change in y; the graph of a continuous function in Cartesian coordinates is an unbroken curve
-----------------------------------------------------------------------------------------
**Methodology for quantitative trading research**
Everything follows a playbook.
Methodology means your work steps and requirements; think of it as an operating manual that tells you, within any framework, what to do at every moment.
Once the methodology is clear, you know what to do at all times; but no book or course on the market teaches quant research methodology, so everyone has to distill their own.
Putting the methodology first in your research will save you many detours.
----------------------------------------------------------------------------------------------------------------------------------------------------------
**Arbitrage**
Intrinsic arbitrage
* Spot-futures (cash-and-carry) arbitrage
* Calendar-spread arbitrage
* Cross-market arbitrage
* Supply-chain arbitrage
Related arbitrage
* Cross-product arbitrage
* Derivatives arbitrage
1. Compute the historical linear price relationship between cu (copper) and ni (nickel)
2. Choose a significance level, usually **α** = 5%
3. Assume the residuals are normally distributed and compute a confidence interval
4. Use the bounds of the confidence interval as the entry thresholds
5. Open a short position when the price breaks above the upper bound, and a long position when it breaks below the lower bound
Mean reversion
Extreme-value capture
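A minimal sketch of steps 1-5 on synthetic data (the cu/ni series below are generated, not real market prices; ±1.96 standard deviations corresponds to the 5% significance level under the normality assumption):

```python
import numpy as np

# Synthetic, cointegrated-by-construction price series standing in for ni and cu.
rng = np.random.default_rng(0)
ni = 100 + np.cumsum(rng.normal(0, 1, 500))
cu = 2.0 * ni + rng.normal(0, 2, 500)

# 1. historical linear relationship cu ~ a * ni + b
a, b = np.polyfit(ni, cu, 1)
resid = cu - (a * ni + b)

# 2-4. alpha = 5% -> +/- 1.96 standard deviations of the residual as entry thresholds
upper = resid.mean() + 1.96 * resid.std()
lower = resid.mean() - 1.96 * resid.std()

# 5. short the spread above the upper bound, long below the lower bound
signal = np.where(resid > upper, -1, np.where(resid < lower, 1, 0))
print("bars with an open signal:", int((signal != 0).sum()))
```

Under normal residuals roughly 5% of the bars should fall outside the bands, which is exactly where the mean-reversion trade is opened.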
---------------------------------------------------------------------
Quants have no take-profit and no stop-loss.
A strategy must be a strictly closed loop: is the current loss within the range the model allows for?
Take-profit and stop-loss belong to discretionary trading; at the quantitative level they are a false proposition.
Before placing an order, you must know the probability of winning, the probability of losing, and the overall expectation.
Only when the probabilities of up and down moves are computed beforehand does it count as quantitative; this can be a probability or a full distribution.
--------------------------------------------
**Mathematics to study:**
* Stochastic process analysis
* Time series
* AI
Form your own philosophical foundation.
-------------------------------------------------------------
**Black and white swans**
Any risk you can observe is a white swan.
Negative oil prices were a black swan before they ever occurred; afterwards they no longer are.
Aliens invading Earth tomorrow: that is a black swan.
Black swans should be handled at the risk-control level, not at the strategy level.
-------------------------------------------------------------
## Passing Messages to Processes
As with threads, a common use pattern for multiple processes is to divide a job up among several workers to run in parallel. Effective use of multiple processes usually requires some communication between them, so that work can be divided and results can be aggregated. A simple way to communicate between processes with multiprocessing is to use a Queue to pass messages back and forth. **Any object that can be serialized with pickle can pass through a Queue.**
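Since picklability is the requirement, a quick way to check whether a value can cross a `Queue` is to try `pickle.dumps` on it directly. This is a small illustrative check, not part of the original example:

```python
import pickle

# Anything pickle can serialize can cross a multiprocessing.Queue.
payload = {'name': 'Fancy Dan', 'scores': [1, 2, 3]}
roundtrip = pickle.loads(pickle.dumps(payload))
print(roundtrip == payload)  # plain data round-trips fine

# Objects pickle cannot serialize (e.g. a lambda) cannot be queued.
try:
    pickle.dumps(lambda x: x)
    err = None
except Exception as e:
    err = type(e).__name__
print(err)  # pickling a lambda raises an error
```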
```
import multiprocessing
class MyFancyClass:
def __init__(self, name):
self.name = name
def do_something(self):
proc_name = multiprocessing.current_process().name
print('Doing something fancy in {} for {}!'.format(
proc_name, self.name))
def worker(q):
obj = q.get()
obj.do_something()
if __name__ == '__main__':
queue = multiprocessing.Queue()
p = multiprocessing.Process(target=worker, args=(queue,))
p.start()
queue.put(MyFancyClass('Fancy Dan'))
# Wait for the worker to finish
queue.close()
queue.join_thread()
p.join()
```
A more complex example shows how to manage several workers consuming data from a JoinableQueue and passing results back to the parent process. The poison pill technique is used to stop the workers. After setting up the real tasks, the main program adds one “stop” value per worker to the job queue. When a worker encounters the special value, it breaks out of its processing loop. The main process uses the task queue’s join() method to wait for all of the tasks to finish before processing the results.
```
import multiprocessing
import time


class Consumer(multiprocessing.Process):

    def __init__(self, task_queue, result_queue):
        multiprocessing.Process.__init__(self)
        self.task_queue = task_queue
        self.result_queue = result_queue

    def run(self):
        proc_name = self.name
        while True:
            next_task = self.task_queue.get()
            if next_task is None:
                # Poison pill means shutdown
                print('{}: Exiting'.format(proc_name))
                self.task_queue.task_done()
                break
            print('{}: {}'.format(proc_name, next_task))
            answer = next_task()
            self.task_queue.task_done()
            self.result_queue.put(answer)


class Task:

    def __init__(self, a, b):
        self.a = a
        self.b = b

    def __call__(self):
        time.sleep(0.1)  # pretend to take time to do the work
        return '{self.a} * {self.b} = {product}'.format(
            self=self, product=self.a * self.b)

    def __str__(self):
        return '{self.a} * {self.b}'.format(self=self)


if __name__ == '__main__':
    # Establish communication queues
    tasks = multiprocessing.JoinableQueue()
    results = multiprocessing.Queue()

    # Start consumers
    num_consumers = multiprocessing.cpu_count() * 2
    print('Creating {} consumers'.format(num_consumers))
    consumers = [
        Consumer(tasks, results)
        for i in range(num_consumers)
    ]
    for w in consumers:
        w.start()

    # Enqueue jobs
    num_jobs = 10
    for i in range(num_jobs):
        tasks.put(Task(i, i))

    # Add a poison pill for each consumer
    for i in range(num_consumers):
        tasks.put(None)

    # Wait for all of the tasks to finish
    tasks.join()

    # Start printing results
    while num_jobs:
        result = results.get()
        print('Result:', result)
        num_jobs -= 1
```
## Signaling between Processes
The Event class provides a simple way to communicate state information between processes. An event can be toggled between set and unset states. Users of the event object can wait for it to change from unset to set, using an optional timeout value.
```
import multiprocessing
import time


def wait_for_event(e):
    """Wait for the event to be set before doing anything"""
    print('wait_for_event: starting')
    e.wait()
    print('wait_for_event: e.is_set()->', e.is_set())


def wait_for_event_timeout(e, t):
    """Wait t seconds and then timeout"""
    print('wait_for_event_timeout: starting')
    e.wait(t)
    print('wait_for_event_timeout: e.is_set()->', e.is_set())


if __name__ == '__main__':
    e = multiprocessing.Event()

    w1 = multiprocessing.Process(
        name='block_1',
        target=wait_for_event,
        args=(e,),
    )
    w1.start()

    # A second blocking waiter: a single set() wakes both of them
    w1b = multiprocessing.Process(
        name='block_2',
        target=wait_for_event,
        args=(e,),
    )
    w1b.start()

    w2 = multiprocessing.Process(
        name='nonblock',
        target=wait_for_event_timeout,
        args=(e, 2),
    )
    w2.start()

    print('main: waiting before calling Event.set()')
    time.sleep(3)
    e.set()
    print('main: event is set')
```
* When wait() times out it returns without an error. The caller is responsible for checking the state of the event using is_set().
* A single `event.set()` call wakes every process that is waiting on that event.
## Controlling Access to Resources
In situations when a single resource needs to be shared between multiple processes, a Lock can be used to avoid conflicting accesses.
```
import multiprocessing
import sys


def worker_with(lock, stream):
    with lock:
        stream.write('Lock acquired via with\n')


def worker_no_with(lock, stream):
    lock.acquire()
    try:
        stream.write('Lock acquired directly\n')
    finally:
        lock.release()


if __name__ == '__main__':
    lock = multiprocessing.Lock()
    w = multiprocessing.Process(
        target=worker_with,
        args=(lock, sys.stdout),
    )
    nw = multiprocessing.Process(
        target=worker_no_with,
        args=(lock, sys.stdout),
    )

    w.start()
    nw.start()

    w.join()
    nw.join()
```
## Synchronizing Operations
### Condition
Condition objects can be used to synchronize parts of a workflow so that some run in parallel but others run sequentially, even if they are in separate processes.
```
import multiprocessing
import time


def stage_1(cond):
    """perform first stage of work,
    then notify stage_2 to continue
    """
    name = multiprocessing.current_process().name
    print('Starting', name)
    with cond:
        print('{} done and ready for stage 2'.format(name))
        cond.notify_all()


def stage_2(cond):
    """wait for the condition telling us stage_1 is done"""
    name = multiprocessing.current_process().name
    print('Starting', name)
    with cond:
        cond.wait()
        print('{} running'.format(name))


if __name__ == '__main__':
    condition = multiprocessing.Condition()
    s1 = multiprocessing.Process(name='s1',
                                 target=stage_1,
                                 args=(condition,))
    s2_clients = [
        multiprocessing.Process(
            name='stage_2[{}]'.format(i),
            target=stage_2,
            args=(condition,),
        )
        for i in range(1, 3)
    ]

    for c in s2_clients:
        c.start()
        time.sleep(1)
    s1.start()

    s1.join()
    for c in s2_clients:
        c.join()
```
In this example, two processes run the second stage of a job in parallel, but only after the first stage is done.
## Controlling Concurrent Access to Resources
Sometimes it is useful to allow more than one worker access to a resource at a time, while still limiting the overall number. For example, a connection pool might support a fixed number of simultaneous connections, or a network application might support a fixed number of concurrent downloads. A Semaphore is one way to manage those connections.
```
import random
import multiprocessing
import time


class ActivePool:

    def __init__(self):
        super(ActivePool, self).__init__()
        self.mgr = multiprocessing.Manager()
        self.active = self.mgr.list()
        self.lock = multiprocessing.Lock()

    def makeActive(self, name):
        with self.lock:
            self.active.append(name)

    def makeInactive(self, name):
        with self.lock:
            self.active.remove(name)

    def __str__(self):
        with self.lock:
            return str(self.active)


def worker(s, pool):
    name = multiprocessing.current_process().name
    with s:
        pool.makeActive(name)
        print('Activating {} now running {}'.format(
            name, pool))
        time.sleep(random.random())
        pool.makeInactive(name)


if __name__ == '__main__':
    pool = ActivePool()
    s = multiprocessing.Semaphore(3)
    jobs = [
        multiprocessing.Process(
            target=worker,
            name=str(i),
            args=(s, pool),
        )
        for i in range(10)
    ]

    for j in jobs:
        j.start()

    while True:
        alive = 0
        for j in jobs:
            if j.is_alive():
                alive += 1
                j.join(timeout=0.1)
                print('Now running {}'.format(pool))
        if alive == 0:
            # all done
            break
```
## Managing Shared State
In the previous example, the list of active processes is maintained centrally in the ActivePool instance via a special type of list object created by a Manager. The Manager is responsible for coordinating shared information state between all of its users.
```
import multiprocessing
import pprint


def worker(d, key, value):
    d[key] = value


if __name__ == '__main__':
    mgr = multiprocessing.Manager()
    d = mgr.dict()
    jobs = [
        multiprocessing.Process(
            target=worker,
            args=(d, i, i * 2),
        )
        for i in range(10)
    ]
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()
    print('Results:', d)
```
By creating the dictionary through the manager, it is shared and updates are seen in all processes. Lists are also supported.
## Shared Namespaces
In addition to dictionaries and lists, a Manager can create a shared Namespace.
```
import multiprocessing


def producer(ns, event):
    ns.value = 'This is the value'
    event.set()


def consumer(ns, event):
    try:
        print('Before event: {}'.format(ns.value))
    except Exception as err:
        print('Before event, error:', str(err))
    event.wait()
    print('After event:', ns.value)


if __name__ == '__main__':
    mgr = multiprocessing.Manager()
    namespace = mgr.Namespace()
    event = multiprocessing.Event()
    p = multiprocessing.Process(
        target=producer,
        args=(namespace, event),
    )
    c = multiprocessing.Process(
        target=consumer,
        args=(namespace, event),
    )

    c.start()
    p.start()

    c.join()
    p.join()
```
Any named value added to the Namespace is visible to all of the clients that receive the Namespace instance.
**It is important to know that updates to the contents of mutable values in the namespace are not propagated automatically.**
```
import multiprocessing


def producer(ns, event):
    # DOES NOT UPDATE GLOBAL VALUE!
    ns.my_list.append('This is the value')
    event.set()


def consumer(ns, event):
    print('Before event:', ns.my_list)
    event.wait()
    print('After event :', ns.my_list)


if __name__ == '__main__':
    mgr = multiprocessing.Manager()
    namespace = mgr.Namespace()
    namespace.my_list = []

    event = multiprocessing.Event()
    p = multiprocessing.Process(
        target=producer,
        args=(namespace, event),
    )
    c = multiprocessing.Process(
        target=consumer,
        args=(namespace, event),
    )

    c.start()
    p.start()

    c.join()
    p.join()
```
## Process Pools
The Pool class can be used to manage a fixed number of workers for simple cases where the work to be done can be broken up and distributed between workers independently. The return values from the jobs are collected and returned as a list. The pool arguments include the number of processes and a function to run when starting the task process (invoked once per child).
```
import multiprocessing


def do_calculation(data):
    return data * 2


def start_process():
    print('Starting', multiprocessing.current_process().name)


if __name__ == '__main__':
    inputs = list(range(10))
    print('Input   :', inputs)

    builtin_outputs = map(do_calculation, inputs)
    print('Built-in:', [i for i in builtin_outputs])

    pool_size = multiprocessing.cpu_count() * 2
    pool = multiprocessing.Pool(
        processes=pool_size,
        initializer=start_process,
    )
    pool_outputs = pool.map(do_calculation, inputs)
    pool.close()  # no more tasks
    pool.join()   # wrap up current tasks

    print('Pool    :', pool_outputs)
```
By default, Pool creates a fixed number of worker processes and passes jobs to them until there are no more jobs. Setting the maxtasksperchild parameter tells the pool to restart a worker process after it has finished a few tasks, preventing long-running workers from consuming ever more system resources.
```
import multiprocessing


def do_calculation(data):
    return data * 2


def start_process():
    print('Starting', multiprocessing.current_process().name)


if __name__ == '__main__':
    inputs = list(range(10))
    print('Input   :', inputs)

    builtin_outputs = map(do_calculation, inputs)
    print('Built-in:', list(builtin_outputs))

    pool_size = multiprocessing.cpu_count() * 2
    pool = multiprocessing.Pool(
        processes=pool_size,
        initializer=start_process,
        maxtasksperchild=2,
    )
    pool_outputs = pool.map(do_calculation, inputs)
    pool.close()  # no more tasks
    pool.join()   # wrap up current tasks

    print('Pool    :', pool_outputs)
```
The pool restarts the workers when they have completed their allotted tasks, even if there is no more work. Here the pool creates twice as many workers as there are CPU cores (possibly more workers than the 10 tasks), and each worker completes at most two tasks before it is replaced with a fresh process.
# Random Forest Project
For this project we will be exploring publicly available data from [LendingClub.com](https://www.lendingclub.com). Lending Club connects people who need money (borrowers) with people who have money (investors). Hopefully, as an investor you would want to invest in people who showed a profile of having a high probability of paying you back. We will try to create a model that will help predict this.
Lending club had a [very interesting year in 2016](https://en.wikipedia.org/wiki/Lending_Club#2016), so let's check out some of their data and keep the context in mind. This data is from before they even went public.
We will use lending data from 2007-2010 and be trying to classify and predict whether or not the borrower paid back their loan in full.
Here are what the columns represent:
* credit.policy: 1 if the customer meets the credit underwriting criteria of LendingClub.com, and 0 otherwise.
* purpose: The purpose of the loan (takes values "credit_card", "debt_consolidation", "educational", "major_purchase", "small_business", and "all_other").
* int.rate: The interest rate of the loan, as a proportion (a rate of 11% would be stored as 0.11). Borrowers judged by LendingClub.com to be more risky are assigned higher interest rates.
* installment: The monthly installments owed by the borrower if the loan is funded.
* log.annual.inc: The natural log of the self-reported annual income of the borrower.
* dti: The debt-to-income ratio of the borrower (amount of debt divided by annual income).
* fico: The FICO credit score of the borrower.
* days.with.cr.line: The number of days the borrower has had a credit line.
* revol.bal: The borrower's revolving balance (amount unpaid at the end of the credit card billing cycle).
* revol.util: The borrower's revolving line utilization rate (the amount of the credit line used relative to total credit available).
* inq.last.6mths: The borrower's number of inquiries by creditors in the last 6 months.
* delinq.2yrs: The number of times the borrower had been 30+ days past due on a payment in the past 2 years.
* pub.rec: The borrower's number of derogatory public records (bankruptcy filings, tax liens, or judgments).
# Import Libraries
**Import the usual libraries for pandas and plotting.**
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
## Get the Data
** Use pandas to read loan_data.csv as a dataframe called loans.**
```
loans = pd.read_csv('loan_data.csv')
```
** Check out the info(), head(), and describe() methods on loans.**
```
loans.info()
loans.describe()
loans.head()
```
# Exploratory Data Analysis
** Create a histogram of two FICO distributions on top of each other, one for each credit.policy outcome.**
```
plt.figure(figsize=(10,6))
loans[loans['credit.policy']==1]['fico'].hist(alpha=0.5,bins=30,color='blue',label='Credit.Policy=1')
loans[loans['credit.policy']==0]['fico'].hist(alpha=0.5,bins=30,color='red',label='Credit.Policy=0')
plt.legend()
plt.xlabel('FICO')
```
** Create a similar figure, except this time select by the not.fully.paid column.**
```
plt.figure(figsize=(10,6))
loans[loans['not.fully.paid']==1]['fico'].hist(alpha=0.5,bins=30,color='blue',label='not.fully.paid=1')
loans[loans['not.fully.paid']==0]['fico'].hist(alpha=0.5,bins=30,color='red',label='not.fully.paid=0')
plt.legend()
plt.xlabel('FICO')
```
** Create a countplot using seaborn showing the counts of loans by purpose, with the color hue defined by not.fully.paid. **
```
sns.countplot(x='purpose',data=loans,hue='not.fully.paid')
```
** Let's see the trend between FICO score and interest rate.**
```
sns.jointplot(x='fico',y='int.rate',data=loans,color='purple')
```
** Create the following lmplots to see if the trend differed between not.fully.paid and credit.policy.**
```
sns.lmplot(x='fico',y='int.rate',data=loans,hue='credit.policy',col='not.fully.paid')
```
# Setting up the Data
Let's get ready to set up our data for our Random Forest Classification Model!
**Check loans.info() again.**
```
loans.info()
```
## Categorical Features
Notice that the **purpose** column is categorical.
**Create a list of 1 element containing the string 'purpose'. Call this list cat_feats.**
```
cat_feats = ['purpose']
```
**Now use pd.get_dummies(loans,columns=cat_feats,drop_first=True) to create a fixed larger dataframe that has new feature columns with dummy variables. Set this dataframe as final_data.**
```
final_data = pd.get_dummies(loans,columns=cat_feats,drop_first=True)
final_data.info()
```
## Train Test Split
Now it's time to split our data into a training set and a testing set!
```
from sklearn.model_selection import train_test_split
X = final_data.drop('not.fully.paid',axis=1)
y = final_data['not.fully.paid']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
```
## Training a Decision Tree Model
Let's start by training a single decision tree first!
** Import DecisionTreeClassifier**
```
from sklearn.tree import DecisionTreeClassifier
```
**Create an instance of DecisionTreeClassifier() called dtree and fit it to the training data.**
```
dtree = DecisionTreeClassifier()
dtree.fit(X_train,y_train)
```
## Predictions and Evaluation of Decision Tree
**Create predictions from the test set and create a classification report and a confusion matrix.**
```
pred = dtree.predict(X_test)
from sklearn.metrics import classification_report,confusion_matrix
print(classification_report(y_test,pred))
print(confusion_matrix(y_test,pred))
```
## Training the Random Forest model
**Create an instance of the RandomForestClassifier class and fit it to our training data from the previous step.**
```
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=600)
rfc.fit(X_train,y_train)
```
## Predictions and Evaluation
```
pred = rfc.predict(X_test)
```
**Now create a classification report from the results.**
```
print(classification_report(y_test,pred))
```
**Show the Confusion Matrix for the predictions.**
```
print(confusion_matrix(y_test,pred))
```
**What performed better the random forest or the decision tree?**
```
# The RandomForestClassifier showed better results, but they are still not good enough, so further feature engineering is needed
```
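One common follow-up is to inspect which features the forest relied on via `feature_importances_`. A self-contained sketch on synthetic data (the column names and data here are stand-ins, not the notebook's loans dataset):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data: only 'fico' actually drives the label
rng = np.random.default_rng(42)
X = pd.DataFrame({
    'fico': rng.integers(600, 850, 500),
    'int.rate': rng.random(500),
    'dti': rng.random(500) * 30,
})
y = (X['fico'] > 725).astype(int)

rfc = RandomForestClassifier(n_estimators=100, random_state=42)
rfc.fit(X, y)

# Importances are normalized to sum to 1; 'fico' should rank highest here
importances = pd.Series(rfc.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))
```

On the real loans data, ranking `rfc.feature_importances_` the same way is a cheap first step toward the feature engineering suggested above.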
# Chainer MNIST Model Deployment
* Wrap a Chainer MNIST python model for use as a prediction microservice in seldon-core
* Run locally on Docker to test
* Deploy on seldon-core running on minikube
## Dependencies
* [Helm](https://github.com/kubernetes/helm)
* [Minikube](https://github.com/kubernetes/minikube)
* [S2I](https://github.com/openshift/source-to-image)
```bash
pip install seldon-core
pip install chainer==6.2.0
```
## Train locally
```
#!/usr/bin/env python
import argparse

import chainer
import chainer.functions as F
import chainer.links as L
from chainer import training
from chainer.training import extensions
import chainerx


# Network definition
class MLP(chainer.Chain):

    def __init__(self, n_units, n_out):
        super(MLP, self).__init__()
        with self.init_scope():
            # the size of the inputs to each layer will be inferred
            self.l1 = L.Linear(None, n_units)  # n_in -> n_units
            self.l2 = L.Linear(None, n_units)  # n_units -> n_units
            self.l3 = L.Linear(None, n_out)  # n_units -> n_out

    def forward(self, x):
        h1 = F.relu(self.l1(x))
        h2 = F.relu(self.l2(h1))
        return self.l3(h2)


def main():
    parser = argparse.ArgumentParser(description='Chainer example: MNIST')
    parser.add_argument('--batchsize', '-b', type=int, default=100,
                        help='Number of images in each mini-batch')
    parser.add_argument('--epoch', '-e', type=int, default=20,
                        help='Number of sweeps over the dataset to train')
    parser.add_argument('--frequency', '-f', type=int, default=-1,
                        help='Frequency of taking a snapshot')
    parser.add_argument('--device', '-d', type=str, default='-1',
                        help='Device specifier. Either ChainerX device '
                        'specifier or an integer. If non-negative integer, '
                        'CuPy arrays with specified device id are used. If '
                        'negative integer, NumPy arrays are used')
    parser.add_argument('--out', '-o', default='result',
                        help='Directory to output the result')
    parser.add_argument('--resume', '-r', type=str,
                        help='Resume the training from snapshot')
    parser.add_argument('--unit', '-u', type=int, default=1000,
                        help='Number of units')
    parser.add_argument('--noplot', dest='plot', action='store_false',
                        help='Disable PlotReport extension')
    group = parser.add_argument_group('deprecated arguments')
    group.add_argument('--gpu', '-g', dest='device',
                       type=int, nargs='?', const=0,
                       help='GPU ID (negative value indicates CPU)')
    args = parser.parse_args(args=[])

    device = chainer.get_device(args.device)

    print('Device: {}'.format(device))
    print('# unit: {}'.format(args.unit))
    print('# Minibatch-size: {}'.format(args.batchsize))
    print('# epoch: {}'.format(args.epoch))
    print('')

    # Set up a neural network to train
    # Classifier reports softmax cross entropy loss and accuracy at every
    # iteration, which will be used by the PrintReport extension below.
    model = L.Classifier(MLP(args.unit, 10))
    model.to_device(device)
    device.use()

    # Setup an optimizer
    optimizer = chainer.optimizers.Adam()
    optimizer.setup(model)

    # Load the MNIST dataset
    train, test = chainer.datasets.get_mnist()

    train_iter = chainer.iterators.SerialIterator(train, args.batchsize)
    test_iter = chainer.iterators.SerialIterator(test, args.batchsize,
                                                 repeat=False, shuffle=False)

    # Set up a trainer
    updater = training.updaters.StandardUpdater(
        train_iter, optimizer, device=device)
    trainer = training.Trainer(updater, (args.epoch, 'epoch'), out=args.out)

    # Evaluate the model with the test dataset for each epoch
    trainer.extend(extensions.Evaluator(test_iter, model, device=device))

    # Dump a computational graph from 'loss' variable at the first iteration
    # The "main" refers to the target link of the "main" optimizer.
    # TODO(niboshi): Temporarily disabled for chainerx. Fix it.
    if device.xp is not chainerx:
        trainer.extend(extensions.DumpGraph('main/loss'))

    # Take a snapshot for each specified epoch
    frequency = args.epoch if args.frequency == -1 else max(1, args.frequency)
    trainer.extend(extensions.snapshot(), trigger=(frequency, 'epoch'))

    # Write a log of evaluation statistics for each epoch
    trainer.extend(extensions.LogReport())

    # Save two plot images to the result dir
    if args.plot and extensions.PlotReport.available():
        trainer.extend(
            extensions.PlotReport(['main/loss', 'validation/main/loss'],
                                  'epoch', file_name='loss.png'))
        trainer.extend(
            extensions.PlotReport(
                ['main/accuracy', 'validation/main/accuracy'],
                'epoch', file_name='accuracy.png'))

    # Print selected entries of the log to stdout
    # Here "main" refers to the target link of the "main" optimizer again, and
    # "validation" refers to the default name of the Evaluator extension.
    # Entries other than 'epoch' are reported by the Classifier link, called by
    # either the updater or the evaluator.
    trainer.extend(extensions.PrintReport(
        ['epoch', 'main/loss', 'validation/main/loss',
         'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))

    # Print a progress bar to stdout
    trainer.extend(extensions.ProgressBar())

    if args.resume is not None:
        # Resume from a snapshot
        chainer.serializers.load_npz(args.resume, trainer)

    # Run the training
    trainer.run()


if __name__ == '__main__':
    main()
```
Wrap model using s2i
```
!s2i build . seldonio/seldon-core-s2i-python3:1.3.0-dev chainer-mnist:0.1
!docker run --name "mnist_predictor" -d --rm -p 5000:5000 chainer-mnist:0.1
```
Send some random features that conform to the contract
```
!seldon-core-tester contract.json 0.0.0.0 5000 -p
!docker rm mnist_predictor --force
```
# Test using Minikube
**Due to a [minikube/s2i issue](https://github.com/SeldonIO/seldon-core/issues/253) you will need [s2i >= 1.1.13](https://github.com/openshift/source-to-image/releases/tag/v1.1.13)**
```
!minikube start --memory 4096
```
## Setup Seldon Core
Use the setup notebook to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Setup-Cluster) with [Ambassador Ingress](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Ambassador) and [Install Seldon Core](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Install-Seldon-Core). Instructions [also online](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html).
```
!eval $(minikube docker-env) && s2i build . seldonio/seldon-core-s2i-python3:1.3.0-dev chainer-mnist:0.1
!kubectl create -f chainer_mnist_deployment.json
!kubectl rollout status deploy/chainer-mnist-deployment-chainer-mnist-predictor-76478b2
!seldon-core-api-tester contract.json `minikube ip` `kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'` \
seldon-deployment-example --namespace default -p
!minikube delete
```
# An Introduction to SageMaker LDA
***Finding topics in synthetic document data using Spectral LDA algorithms.***
---
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Training](#Training)
1. [Inference](#Inference)
1. [Epilogue](#Epilogue)
# Introduction
***
Amazon SageMaker LDA is an unsupervised learning algorithm that attempts to describe a set of observations as a mixture of distinct categories. Latent Dirichlet Allocation (LDA) is most commonly used to discover a user-specified number of topics shared by documents within a text corpus. Here each observation is a document, the features are the presence (or occurrence count) of each word, and the categories are the topics. Since the method is unsupervised, the topics are not specified up front, and are not guaranteed to align with how a human may naturally categorize documents. The topics are learned as a probability distribution over the words that occur in each document. Each document, in turn, is described as a mixture of topics.
In this notebook we will use the Amazon SageMaker LDA algorithm to train an LDA model on some example synthetic data. We will then use this model to classify (perform inference on) the data. The main goals of this notebook are to,
* learn how to obtain and store data for use in Amazon SageMaker,
* create an AWS SageMaker training job on a data set to produce an LDA model,
* use the LDA model to perform inference with an Amazon SageMaker endpoint.
The following are ***not*** goals of this notebook:
* understand the LDA model,
* understand how the Amazon SageMaker LDA algorithm works,
* interpret the meaning of the inference output
If you would like to know more about these things take a minute to run this notebook and then check out the SageMaker LDA Documentation and the **LDA-Science.ipynb** notebook.
```
%matplotlib inline
import os, re
import boto3
import matplotlib.pyplot as plt
import numpy as np
np.set_printoptions(precision=3, suppress=True)
# some helpful utility functions are defined in the Python module
# "generate_example_data" located in the same directory as this
# notebook
from generate_example_data import generate_griffiths_data, plot_lda, match_estimated_topics
# accessing the SageMaker Python SDK
import sagemaker
from sagemaker.amazon.common import RecordSerializer
from sagemaker.serializers import CSVSerializer
from sagemaker.deserializers import JSONDeserializer
```
# Setup
***
*This notebook was created and tested on an ml.m4.xlarge notebook instance.*
Before we do anything at all, we need data! We also need to setup our AWS credentials so that AWS SageMaker can store and access data. In this section we will do four things:
1. [Setup AWS Credentials](#SetupAWSCredentials)
1. [Obtain Example Dataset](#ObtainExampleDataset)
1. [Inspect Example Data](#InspectExampleData)
1. [Store Data on S3](#StoreDataonS3)
## Setup AWS Credentials
We first need to specify some AWS credentials; specifically data locations and access roles. This is the only cell of this notebook that you will need to edit. In particular, we need the following data:
* `bucket` - An S3 bucket accessible by this account.
* Used to store input training data and model data output.
* Should be within the same region as this notebook instance, training, and hosting.
* `prefix` - The location in the bucket where this notebook's input and output data will be stored. (The default value is sufficient.)
* `role` - The IAM Role ARN used to give training and hosting access to your data.
* See documentation on how to create these.
* The script below will try to determine an appropriate Role ARN.
```
from sagemaker import get_execution_role
session = sagemaker.Session()
role = get_execution_role()
bucket = session.default_bucket()
prefix = "sagemaker/DEMO-lda-introduction"
print("Training input/output will be stored in {}/{}".format(bucket, prefix))
print("\nIAM Role: {}".format(role))
```
## Obtain Example Data
We generate some example synthetic document data. For the purposes of this notebook we will omit the details of this process. All we need to know is that each piece of data, commonly called a *"document"*, is a vector of integers representing *"word counts"* within the document. In this particular example there are a total of 25 words in the *"vocabulary"*.
$$
\underbrace{w}_{\text{document}} = \overbrace{\big[ w_1, w_2, \ldots, w_V \big] }^{\text{word counts}},
\quad
V = \text{vocabulary size}
$$
These data are based on that used by Griffiths and Steyvers in their paper [Finding Scientific Topics](http://psiexp.ss.uci.edu/research/papers/sciencetopics.pdf). For more information, see the **LDA-Science.ipynb** notebook.
```
print("Generating example data...")
num_documents = 6000
num_topics = 5
known_alpha, known_beta, documents, topic_mixtures = generate_griffiths_data(
    num_documents=num_documents, num_topics=num_topics
)
vocabulary_size = len(documents[0])
# separate the generated data into training and tests subsets
num_documents_training = int(0.9 * num_documents)
num_documents_test = num_documents - num_documents_training
documents_training = documents[:num_documents_training]
documents_test = documents[num_documents_training:]
topic_mixtures_training = topic_mixtures[:num_documents_training]
topic_mixtures_test = topic_mixtures[num_documents_training:]
print("documents_training.shape = {}".format(documents_training.shape))
print("documents_test.shape = {}".format(documents_test.shape))
```
## Inspect Example Data
*What does the example data actually look like?* Below we print an example document as well as its corresponding known *topic-mixture*. A topic-mixture serves as the "label" in the LDA model. It describes the ratio of topics from which the words in the document are found.
For example, if the topic mixture of an input document $\mathbf{w}$ is,
$$\theta = \left[ 0.3, 0.2, 0, 0.5, 0 \right]$$
then $\mathbf{w}$ is 30% generated from the first topic, 20% from the second topic, and 50% from the fourth topic. For more information see **How LDA Works** in the SageMaker documentation as well as the **LDA-Science.ipynb** notebook.
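As a small numeric illustration (with a made-up topic-word matrix, not the notebook's data), a document's expected word distribution is the mixture-weighted combination of the topic distributions:

```python
import numpy as np

# Hypothetical example: 5 topics over a 4-word vocabulary.
# Each row of beta is one topic's distribution over the words.
beta = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.10, 0.70, 0.10, 0.10],
    [0.10, 0.10, 0.70, 0.10],
    [0.10, 0.10, 0.10, 0.70],
    [0.25, 0.25, 0.25, 0.25],
])
theta = np.array([0.3, 0.2, 0.0, 0.5, 0.0])  # topic mixture of one document

# Weighted sum of topic rows; still a valid probability distribution
word_dist = theta @ beta
print(word_dist)  # [0.28 0.22 0.1  0.4 ]
```

Because `theta` and each row of `beta` sum to one, `word_dist` sums to one as well.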
Below, we compute the topic mixtures for the first few training documents. As we can see, each document is a vector of word counts from the 25-word vocabulary and its topic-mixture is a probability distribution across the five topics used to generate the sample dataset.
```
print("First training document =\n{}".format(documents[0]))
print("\nVocabulary size = {}".format(vocabulary_size))
print("Known topic mixture of first document =\n{}".format(topic_mixtures_training[0]))
print("\nNumber of topics = {}".format(num_topics))
print("Sum of elements = {}".format(topic_mixtures_training[0].sum()))
```
Later, when we perform inference on the training data set we will compare the inferred topic mixture to this known one.
---
Human beings are visual creatures, so it might be helpful to come up with a visual representation of these documents. In the below plots, each pixel of a document represents a word. The greyscale intensity is a measure of how frequently that word occurs. Below we plot the first few documents of the training set reshaped into 5x5 pixel grids.
```
%matplotlib inline
fig = plot_lda(documents_training, nrows=3, ncols=4, cmap="gray_r", with_colorbar=True)
fig.suptitle("Example Document Word Counts")
fig.set_dpi(160)
```
## Store Data on S3
A SageMaker training job needs access to training data stored in an S3 bucket. Although training can accept data of various formats we convert the documents MXNet RecordIO Protobuf format before uploading to the S3 bucket defined at the beginning of this notebook. We do so by making use of the SageMaker Python SDK utility `RecordSerializer`.
```
# convert documents_training to Protobuf RecordIO format
recordio_protobuf_serializer = RecordSerializer()
fbuffer = recordio_protobuf_serializer.serialize(documents_training)
# upload to S3 in bucket/prefix/train
fname = "lda.data"
s3_object = os.path.join(prefix, "train", fname)
boto3.Session().resource("s3").Bucket(bucket).Object(s3_object).upload_fileobj(fbuffer)
s3_train_data = "s3://{}/{}".format(bucket, s3_object)
print("Uploaded data to S3: {}".format(s3_train_data))
```
# Training
***
Once the data is preprocessed and available in a recommended format the next step is to train our model on the data. There are number of parameters required by SageMaker LDA configuring the model and defining the computational environment in which training will take place.
First, we specify a Docker container containing the SageMaker LDA algorithm. For your convenience, a region-specific container is automatically chosen for you to minimize cross-region data communication. Information about the locations of each SageMaker algorithm is available in the documentation.
```
from sagemaker.amazon.amazon_estimator import get_image_uri
# select the algorithm container based on this notebook's current location
region_name = boto3.Session().region_name
container = get_image_uri(region_name, "lda")
print("Using SageMaker LDA container: {} ({})".format(container, region_name))
```
Particular to a SageMaker LDA training job are the following hyperparameters:
* **`num_topics`** - The number of topics or categories in the LDA model.
* Usually, this is not known a priori.
* In this example, however, we know that the data is generated by five topics.
* **`feature_dim`** - The size of the *"vocabulary"*, in LDA parlance.
* In this example, this is equal to 25.
* **`mini_batch_size`** - The number of input training documents.
* **`alpha0`** - *(optional)* a measure of how "mixed" the topic mixtures are.
* When `alpha0` is small, the data tends to be represented by one or only a few topics.
* When `alpha0` is large, the data tends to be an even combination of several or many topics.
* The default value is `alpha0 = 1.0`.
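To build some intuition for `alpha0`, here is a quick illustration (separate from the SageMaker API) that samples topic mixtures from a symmetric Dirichlet distribution, assuming `alpha0` is spread evenly across the five topics:

```python
import numpy as np

rng = np.random.default_rng(0)

# With 5 topics, split alpha0 evenly across the per-topic Dirichlet parameters:
# a small alpha0 yields peaked (one-topic) mixtures, a large alpha0 yields
# nearly even mixtures.
peaked_mixture = rng.dirichlet([0.1 / 5] * 5)   # alpha0 = 0.1
even_mixture = rng.dirichlet([100.0 / 5] * 5)   # alpha0 = 100
print(peaked_mixture.max(), even_mixture.max())
```

The maximum component of the small-`alpha0` sample is much closer to 1, reflecting a mixture dominated by a single topic.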
In addition to these LDA model hyperparameters, we provide additional parameters defining things like the EC2 instance type on which training will run, the S3 bucket containing the data, and the AWS access role. Note that:
* Recommended instance type: `ml.c4`
* Current limitations:
* SageMaker LDA *training* can only run on a single instance.
* SageMaker LDA does not take advantage of GPU hardware.
* (The Amazon AI Algorithms team is working hard to provide these capabilities in a future release!)
```
# specify general training job information
lda = sagemaker.estimator.Estimator(
container,
role,
output_path="s3://{}/{}/output".format(bucket, prefix),
train_instance_count=1,
train_instance_type="ml.c4.2xlarge",
sagemaker_session=session,
)
# set algorithm-specific hyperparameters
lda.set_hyperparameters(
num_topics=num_topics,
feature_dim=vocabulary_size,
mini_batch_size=num_documents_training,
alpha0=1.0,
)
# run the training job on input data stored in S3
lda.fit({"train": s3_train_data})
```
If you see the message
> `===== Job Complete =====`
at the bottom of the output logs, that means training successfully completed and the output LDA model was stored in the specified output path. You can also view information about, and the status of, a training job using the AWS SageMaker console. Just click on the "Jobs" tab and select the training job matching the training job name, below:
```
print("Training job name: {}".format(lda.latest_training_job.job_name))
```
# Inference
***
A trained model does nothing on its own. We now want to use the model we computed to perform inference on data. For this example, that means predicting the topic mixture representing a given document.
We create an inference endpoint using the SageMaker Python SDK `deploy()` function from the job we defined above. We specify the instance type where inference is computed as well as an initial number of instances to spin up.
```
lda_inference = lda.deploy(
initial_instance_count=1,
instance_type="ml.m4.xlarge", # LDA inference may work better at scale on ml.c4 instances
)
```
Congratulations! You now have a functioning SageMaker LDA inference endpoint. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console and selecting the endpoint matching the endpoint name, below:
```
print("Endpoint name: {}".format(lda_inference.endpoint_name))
```
With this realtime endpoint at our fingertips we can finally perform inference on our training and test data.
We can pass a variety of data formats to our inference endpoint. In this example we will demonstrate passing CSV-formatted data. Other available formats are JSON-formatted, JSON-sparse-formatted, and RecordIO Protobuf. We make use of the SageMaker Python SDK utilities `CSVSerializer` and `JSONDeserializer` when configuring the inference endpoint.
```
lda_inference.serializer = CSVSerializer()
lda_inference.deserializer = JSONDeserializer()
```
We pass some test documents to the inference endpoint. Note that the serializer and deserializer will automatically take care of the data type conversion from NumPy ndarrays.
```
results = lda_inference.predict(documents_test[:12])
print(results)
```
It may be hard to see, but the output of the SageMaker LDA inference endpoint is a Python dictionary with the following format.
```
{
'predictions': [
{'topic_mixture': [ ... ] },
{'topic_mixture': [ ... ] },
{'topic_mixture': [ ... ] },
...
]
}
```
We extract the topic mixtures, themselves, corresponding to each of the input documents.
```
computed_topic_mixtures = np.array(
[prediction["topic_mixture"] for prediction in results["predictions"]]
)
print(computed_topic_mixtures)
```
If you decide to compare these results to the known topic mixtures generated in the [Obtain Example Data](#ObtainExampleData) Section keep in mind that SageMaker LDA discovers topics in no particular order. That is, the approximate topic mixtures computed above may be permutations of the known topic mixtures corresponding to the same documents.
```
print(topic_mixtures_test[0]) # known test topic mixture
print(computed_topic_mixtures[0]) # computed topic mixture (topics permuted)
```
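One way to handle this ambiguity, sketched here with hypothetical three-topic mixtures rather than the notebook's actual output, is to search over topic permutations for the alignment that best matches the known mixture:

```python
import itertools

# Hypothetical 3-topic example: the computed mixture is a permuted copy
# of the known one.
known = [0.7, 0.2, 0.1]
computed = [0.1, 0.7, 0.2]

def best_permutation(known, computed):
    """Return the topic ordering minimizing the squared error to `known`."""
    return min(
        itertools.permutations(range(len(known))),
        key=lambda p: sum((k - computed[j]) ** 2 for k, j in zip(known, p)),
    )

perm = best_permutation(known, computed)
print([computed[j] for j in perm])  # [0.7, 0.2, 0.1]
```

For larger numbers of topics, an exhaustive search over permutations becomes infeasible and an assignment solver would be preferable, but the idea is the same.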
## Stop / Close the Endpoint
Finally, we should delete the endpoint before we close the notebook.
To do so execute the cell below. Alternately, you can navigate to the "Endpoints" tab in the SageMaker console, select the endpoint with the name stored in the variable `endpoint_name`, and select "Delete" from the "Actions" dropdown menu.
```
sagemaker.Session().delete_endpoint(lda_inference.endpoint_name)
```
# Epilogue
---
In this notebook we,
* generated some example LDA documents and their corresponding topic-mixtures,
* trained a SageMaker LDA model on a training set of documents,
* created an inference endpoint,
* used the endpoint to infer the topic mixtures of a test input.
There are several things to keep in mind when applying SageMaker LDA to real-world data such as a corpus of text documents. Note that input documents to the algorithm, both in training and inference, need to be vectors of integers representing word counts. Each index corresponds to a word in the corpus vocabulary. Therefore, one first needs to "tokenize" the corpus, mapping each word of the vocabulary to an integer index.
$$
\text{"cat"} \mapsto 0, \; \text{"dog"} \mapsto 1, \; \text{"bird"} \mapsto 2, \ldots
$$
Each text document then needs to be converted to a "bag-of-words" format document.
$$
w = \text{"cat bird bird bird cat"} \quad \longmapsto \quad w = [2, 0, 3, 0, \ldots, 0]
$$
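A minimal sketch of this preprocessing, using the toy three-word vocabulary from the example above:

```python
# Toy vocabulary mapping words to integer indices, as in the example above
vocabulary = {"cat": 0, "dog": 1, "bird": 2}

def to_bag_of_words(text, vocabulary):
    """Convert a whitespace-tokenized document to a word-count vector."""
    counts = [0] * len(vocabulary)
    for word in text.split():
        counts[vocabulary[word]] += 1
    return counts

print(to_bag_of_words("cat bird bird bird cat", vocabulary))  # [2, 0, 3]
```

A real pipeline would also handle punctuation, casing, and out-of-vocabulary words, which this sketch ignores.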
Also note that many real-world applications have large vocabulary sizes. It may be necessary to represent the input documents in sparse format. Finally, the use of stemming and lemmatization in data preprocessing provides several benefits. Doing so can improve training and inference compute time since it reduces the effective vocabulary size. More importantly, though, it can improve the quality of learned topic-word probability matrices and inferred topic mixtures. For example, the words *"parliament"*, *"parliaments"*, *"parliamentary"*, *"parliament's"*, and *"parliamentarians"* are all essentially the same word, *"parliament"*, but with different inflections. For the purposes of detecting topics, such as a *"politics"* or *"governments"* topic, the inclusion of all five does not add much additional value, as they all essentially describe the same feature.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from functions import *
%matplotlib inline
```
To find the materials that make up the clusters, we choose **not to run a fit on every spectrum within a given cluster**; instead, **we average over all the spectra present in each cluster and fit on the resulting mean.**
We therefore import the centroids of the various clusters, which are returned automatically by the k-means algorithm used for the clustering.
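The idea of averaging the spectra within each cluster can be sketched with toy numbers (k-means returns these centroids automatically, so this is only illustrative):

```python
import numpy as np

# Toy example: three spectra (rows) assigned by k-means to two clusters
spectra = np.array([[1.0, 2.0], [3.0, 4.0], [10.0, 10.0]])
labels = np.array([0, 0, 1])

# The centroid of each cluster is the mean of its member spectra
centroids = np.array([spectra[labels == k].mean(axis=0) for k in np.unique(labels)])
print(centroids)  # [[ 2.  3.] [10. 10.]]
```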
## Importing the data
```
# import the cluster labels
labels=np.loadtxt("../data/processed/CLUSTERING_labels.txt")
# import the centroids
data = pd.read_csv("../data/processed/CLUSTERING_data_centres.csv")
data.drop(labels='Unnamed: 0',inplace=True,axis=1)
pure_material_names,pure_materials = import_pure_spectra('../data/raw/Database Raman/BANK_LIST.dat','../data/raw/Database Raman/')
```
## Interpolation
**The sampling frequencies of the pure spectra differ from one another and from those used to sample the experimental spectra.** Before we can run a fit, we first have to interpolate the pure spectra onto the grid of the experimental spectra. After the interpolation, the frequencies of the pure spectra will match those of the experimental spectra.
```
pure_materials_interpoled=pd.DataFrame(data.wn.copy())
for temp in pure_material_names:
pure_materials_interpoled=pure_materials_interpoled.join(pd.DataFrame(np.interp(data.wn, pure_materials[temp+'_wn'] ,pure_materials[temp+'_I']),columns=[temp]))
```
After interpolating the data, we normalize the pure spectra.
```
# Normalization
for i in pure_material_names:
pure_materials_interpoled[i]=pure_materials_interpoled[i]/np.trapz(abs(pure_materials_interpoled[i].dropna()), x=pure_materials_interpoled.wn)
```
## Fit
To fit the pure spectra to the data, we reason as follows:
- we can view the unknown centroid spectrum $C$ as a linear combination, with non-negative coefficients, of all the pure spectra $P_{i}$.
- In our model the pure spectra $P_{i} \in \mathbb{R}^n$, where $n$ is the dimensionality of the intensity vector of the spectra.
- $C = \sum \alpha_{i}P_{i} + P_{0}$, where $P_{0}$ is a constant offset.
Fortunately, a recent release of scikit-learn (published right around the time we worked on this project) introduced the "positive" parameter in LinearRegression, which restricts the fit to non-negative coefficients. This was essential: we would have had to implement it anyway, because otherwise the fit used combinations of positive and negative coefficients across all the spectra in order to fit the noise.
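To illustrate the non-negativity constraint itself, here is a minimal sketch using toy spectra and projected gradient descent (this is not scikit-learn's internal algorithm, just an illustration of the constrained least-squares problem):

```python
import numpy as np

# Toy pure spectra as columns of P, and a target centroid y = 1.0 * P[:, 0]
P = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = np.array([1.0, 0.0, 1.0])

def nnls_projected_gradient(A, b, steps=2000, lr=0.1):
    """Minimal non-negative least squares via projected gradient descent."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)
        x = np.clip(x - lr * grad, 0.0, None)  # project onto the constraint x >= 0
    return x

coefficients = nnls_projected_gradient(P, y)
print(np.round(coefficients, 3))  # close to [1. 0.]
```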
```
ols=LinearRegression(positive=True) # define the regressor
```
For each cluster, we then run a linear regression, extracting the coefficients and the intercept.
```
N_cluster=len(data.columns)-1
coeff=[]
intercept=[]
for i in range(N_cluster):
    ols.fit(pure_materials_interpoled[pure_material_names], data[str(i)]) # fit the (linear) model to the training data
coeff.append(ols.coef_)
intercept.append(ols.intercept_)
```
### Plot of the cluster centroids and their fits
```
fig, axs = plt.subplots(nrows = N_cluster,figsize = (16,38))
for i in range(N_cluster):
    axs[i].plot(data.wn, data[str(i)])
    axs[i].plot(pure_materials_interpoled.wn, intercept[i] + np.sum(pure_materials_interpoled[pure_material_names] * coeff[i], axis=1))
    axs[i].set_title('Cluster ' + str(i))
    axs[i].legend(['centroid', 'fit'], loc='upper right')
```
## Determining the abundance of each material
Taking into account the number of spectra present in each cluster, we determine the most abundant material in the sample using the fit coefficients. **This gives us the final result: the abundances in the sample**.
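With hypothetical numbers, the weighted averaging described above looks like this:

```python
import numpy as np

# Hypothetical values: normalized fit coefficients for two clusters,
# weighted by the number of spectra in each cluster
coeff = [np.array([0.8, 0.2]), np.array([0.5, 0.5])]
weights = [30, 10]

weighted_sum = sum(c * w for c, w in zip(coeff, weights))
abundances = weighted_sum / weighted_sum.sum()
print(abundances)  # [0.725 0.275]
```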
```
# drop the all-zero cluster (if present) and normalize the coefficients of each cluster
for temp in np.unique(labels):
if max(data[str(int(temp))])>1e-10:
coeff[int(temp)]=coeff[int(temp)]/sum(coeff[int(temp)])
else:
        print(f'Flat spectrum identified, cluster {int(temp)} not used')
coeff[int(temp)]=np.zeros(len(coeff[int(temp)]))
# number of spectra per cluster, in order
weights=[np.count_nonzero(labels==i) for i in range(len(data.columns)-1)]
# multiply the coefficients of the i-th cluster by this number
abb_notnormalized=[coeff[i]*weights[i] for i in range(len(data.columns)-1)]
# and finally take the weighted mean of the coefficients
abb=sum(abb_notnormalized)/(sum(abb_notnormalized).sum())
# build a pandas DataFrame with names and abundances
abb_table=pd.DataFrame({'names':pure_material_names,'abbundances':abb})
# sort by concentration
abb_table.sort_values('abbundances',ascending=False,inplace=True, ignore_index=True)
abb_table[abb_table['abbundances']>0.01]
abb_table.to_csv("../data/processed/abb_table.csv")
```
# Test for Two Means - ANOVA (Analysis of Variance)
Analysis of variance is the statistical technique that allows us to evaluate claims about population means. Fundamentally, the analysis checks whether there is a significant difference between the means and whether the factors influence some dependent variable, given $k$ populations with unknown means $\mu_i$.
The basic assumptions of analysis of variance are:
- The samples are random and independent
- The populations are normally distributed (the test is parametric)
- The population variances are equal
In practice, these assumptions do not all need to be rigorously satisfied. The results are empirically valid whenever the populations are approximately normal (that is, not too skewed) and have similar variances.
We want to test whether the $k$ means are equal; for this we will use the **ANOVA - Analysis of Variance** table.
Variation in the data:
<br>
$$SQT = \sum_{i=1}^{k}\sum_{j=1}^{n_i} (x_{ij}- \overline x)^2 =
\sum_{i=1}^{k}\sum_{j=1}^{n_i} x_{ij}^2 -
\frac{1}{n}\Big(\sum_{i=1}^{k}\sum_{j=1}^{n_i} x_{ij}\Big)^2 $$
<br><br>
$$SQE = \sum_{i=1}^{k} n_i(\overline x_{i}- \overline x)^2 =
\sum_{i=1}^{k} \frac{1}{n_i}\Big (\sum_{j=1}^{n_i} x_{ij}\Big)^2 -
\frac{1}{n}\Big(\sum_{i=1}^{k}\sum_{j=1}^{n_i} x_{ij}\Big)^2 $$
<br><br>
$$SQR = \sum_{i=1}^{k}\sum_{j=1}^{n_i} x_{ij}^2 -
\sum_{i=1}^{k} \frac{1}{n_i}\Big (\sum_{j=1}^{n_i} x_{ij}\Big)^2$$
<br><br>
One can verify that:
$$SQT=SQE+SQR$$
where:
- SQT: Total Sum of Squares
- SQE: Explained Sum of Squares
- SQR: Residual Sum of Squares
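The decomposition SQT = SQE + SQR can be checked numerically on toy data (hypothetical values):

```python
import numpy as np

# Toy data: k = 2 groups, checking the identity SQT = SQE + SQR
groups = [np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0, 7.0])]
all_x = np.concatenate(groups)
grand_mean = all_x.mean()

sqt = ((all_x - grand_mean) ** 2).sum()                          # total
sqe = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)  # explained
sqr = sum(((g - g.mean()) ** 2).sum() for g in groups)            # residual

print(sqt, sqe + sqr)  # 28.0 28.0
```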
<br><br>
<img src="img/anova.png" width="450" />
<br><br>
Under the assumptions of random and independent variables, ideally each variable in a model explains a distinct part of the dependent variable. With this, we can picture the desired *fit* as variables that are mutually independent, as illustrated in the figure below.
<br><br>
<img src="img/anova_explicada.png" width="350" />
<br><br>
# Example: Tooth-growth dataset with two different therapies
The dataset represents tooth growth in animals subjected to two alternative therapies, where the response is the length of the odontoblasts (the cells responsible for tooth growth) in 60 guinea pigs. Each animal received one of three dose levels of vitamin C (0.5, 1, and 2 mg/day) by one of two delivery methods (orange juice, coded "OJ", or ascorbic acid, a form of vitamin C, coded "VC").
An important advantage of two-way ANOVA is that it is more efficient than one-way ANOVA. There are two assignable sources of variation - supp and dose in our example - and this helps reduce the error variance, making this design more efficient. Two-way (factorial) ANOVA can be used, for example, to compare the means of populations that differ in two ways. It can also be used to analyze the mean responses in an experiment with two factors. Unlike one-way ANOVA, it lets us test the effect of two factors at the same time. One can also test the independence of the factors, provided there is more than one observation in each cell. The only restriction is that the number of observations in each cell must be equal (there is no such restriction for one-way ANOVA).
We discussed linear models earlier - and ANOVA is in fact a type of linear model - the difference is that ANOVA is used when you have discrete factors whose effect on a continuous outcome (variable) you want to understand.
## Importing the libraries
```
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.graphics.factorplots import interaction_plot
import matplotlib.pyplot as plt
from scipy import stats
```
## Importing the data
```
datafile = "../../99 Datasets/ToothGrowth.csv.zip"
data = pd.read_csv(datafile)
data.head()
data.info()
data.describe()
fig = interaction_plot(data.dose, data.supp, data.len,
colors=['red','blue'], markers=['D','^'], ms=10)
```
## Computing the sums of squares
<br>
<img src="img/SS.png">
<br>
```
# Degrees of freedom
N = len(data.len)
df_a = len(data.supp.unique()) - 1
df_b = len(data.dose.unique()) - 1
df_axb = df_a*df_b
df_w = N - (len(data.supp.unique())*len(data.dose.unique()))
grand_mean = data['len'].mean()
# SS for factor A
ssq_a = sum([(data[data.supp ==l].len.mean()-grand_mean)**2 for l in data.supp])
# SS for factor B
ssq_b = sum([(data[data.dose ==l].len.mean()-grand_mean)**2 for l in data.dose])
# total SS
ssq_t = sum((data.len - grand_mean)**2)
# residual SS
vc = data[data.supp == 'VC']
oj = data[data.supp == 'OJ']
vc_dose_means = [vc[vc.dose == d].len.mean() for d in vc.dose]
oj_dose_means = [oj[oj.dose == d].len.mean() for d in oj.dose]
ssq_w = sum((oj.len - oj_dose_means)**2) +sum((vc.len - vc_dose_means)**2)
# SS for the AxB interaction
ssq_axb = ssq_t-ssq_a-ssq_b-ssq_w
```
## Mean Squares
```
# MS for A
ms_a = ssq_a/df_a
# MS for B
ms_b = ssq_b/df_b
# MS for AxB
ms_axb = ssq_axb/df_axb
# MS for the residual
ms_w = ssq_w/df_w
```
## F-Score
```
# F-score for A
f_a = ms_a/ms_w
# F-score for B
f_b = ms_b/ms_w
# F-score for AxB
f_axb = ms_axb/ms_w
```
## p-Value
```
# p-value for A
p_a = stats.f.sf(f_a, df_a, df_w)
# p-value for B
p_b = stats.f.sf(f_b, df_b, df_w)
# p-value for AxB
p_axb = stats.f.sf(f_axb, df_axb, df_w)
```
## Results
```
# Putting the results into a DataFrame
results = {'sum_sq':[ssq_a, ssq_b, ssq_axb, ssq_w],
'df':[df_a, df_b, df_axb, df_w],
'F':[f_a, f_b, f_axb, 'NaN'],
'PR(>F)':[p_a, p_b, p_axb, 'NaN']}
columns=['sum_sq', 'df', 'F', 'PR(>F)']
aov_table1 = pd.DataFrame(results, columns=columns,
index=['supp', 'dose',
'supp:dose', 'Residual'])
# Computing eta-squared and omega-squared, and printing the table
def eta_squared(aov):
aov['eta_sq'] = 'NaN'
aov['eta_sq'] = aov[:-1]['sum_sq']/sum(aov['sum_sq'])
return aov
def omega_squared(aov):
mse = aov['sum_sq'][-1]/aov['df'][-1]
aov['omega_sq'] = 'NaN'
aov['omega_sq'] = (aov[:-1]['sum_sq']-(aov[:-1]['df']*mse))/(sum(aov['sum_sq'])+mse)
return aov
eta_squared(aov_table1)
omega_squared(aov_table1)
print(aov_table1)
```
### Comments
The dose variable has the largest distance from the mean value (sum_sq) and therefore the largest relative variance (F-score). This is confirmed by the eta-squared and omega-squared values (defined below).
### More on Eta-Squared and Omega-Squared
Another set of effect-size measures for categorical independent variables has a more intuitive interpretation and is easier to evaluate. They include eta-squared, partial eta-squared, and omega-squared. Like the R-squared statistic, they all have the intuitive interpretation of the proportion of variance accounted for.
Eta-squared is calculated in the same way as R-squared and has the most nearly equivalent interpretation: of the total variation in Y, the proportion that can be attributed to a specific X.
Eta-squared, however, is used specifically in ANOVA models. Each categorical effect in the model has its own eta-squared, so you get a specific, intuitive measure of that variable's effect.
The drawback of eta-squared is that it is a biased measure of the population variance explained (although it is exact for the sample); it always overestimates it.
This bias becomes very small as the sample size grows, but for small samples an unbiased effect-size measure is omega-squared. Omega-squared has the same basic interpretation but uses unbiased estimates of the variance components. Because it is an unbiased estimate of the population variances, omega-squared is always smaller than eta-squared (ES).
There are no agreed-upon standards for how to interpret an ES. The interpretation is essentially subjective. The best approach is to compare it with other studies.
Cohen (1977):
- 0.2 = small
- 0.5 = moderate
- 0.8 = large
## ANOVA with statsmodels
```
formula = 'len ~ C(supp) + C(dose) + C(supp):C(dose)'
model = ols(formula, data).fit()
aov_table = anova_lm(model, typ=2)
eta_squared(aov_table)
omega_squared(aov_table)
print(aov_table)
```
## Quantile-Quantile (QQplot)
```
res = model.resid
fig = sm.qqplot(res, line='s')
plt.show()
```
# Import development libraries
```
import bw2data as bd
import bw2calc as bc
import bw_processing as bwp
import numpy as np
import matrix_utils as mu
```
# Create new project
```
bd.projects.set_current("Multifunctionality")
```
Our existing implementation allows us to distinguish activities from products, though not everyone makes this distinction.
```
db = bd.Database("background")
db.write({
("background", "1"): {
"type": "process",
"name": "1",
"exchanges": [{
"input": ("background", "bio"),
"amount": 1,
"type": "biosphere",
}]
},
("background", "2"): {
"type": "process",
"name": "2",
"exchanges": [{
"input": ("background", "bio"),
"amount": 10,
"type": "biosphere",
}]
},
("background", "bio"): {
"type": "biosphere",
"name": "bio",
"exchanges": [],
},
("background", "3"): {
"type": "process",
"name": "2",
"exchanges": [
{
"input": ("background", "1"),
"amount": 2,
"type": "technosphere",
}, {
"input": ("background", "2"),
"amount": 4,
"type": "technosphere",
}, {
"input": ("background", "4"),
"amount": 1,
"type": "production",
}
]
},
("background", "4"): {
"type": "product",
}
})
method = bd.Method(("something",))
method.write([(("background", "bio"), 1)])
```
# LCA of background system
This database is fine and normal. It works the way we expect.
Here we use the preferred calling convention for Brightway 2.5, with the convenience function `prepare_lca_inputs`.
```
fu, data_objs, _ = bd.prepare_lca_inputs(demand={("background", "4"): 1}, method=("something",))
lca = bc.LCA(fu, data_objs=data_objs)
lca.lci()
lca.lcia()
lca.score
```
# Multifunctional activities
What happens when we have an activity that produces multiple products?
```
db = bd.Database("example mf")
db.write({
# Activity
("example mf", "1"): {
"type": "process",
"name": "mf 1",
"exchanges": [
{
"input": ("example mf", "2"),
"amount": 2,
"type": "production",
}, {
"input": ("example mf", "3"),
"amount": 4,
"type": "production",
},
{
"input": ("background", "1"),
"amount": 2,
"type": "technosphere",
}, {
"input": ("background", "2"),
"amount": 4,
"type": "technosphere",
}
]
},
# Product
("example mf", "2"): {
"type": "good",
"price": 4
},
# Product
("example mf", "3"): {
"type": "good",
"price": 6
}
})
```
We can do an LCA of one of the products, but we will get a warning about a non-square matrix:
```
fu, data_objs, _ = bd.prepare_lca_inputs(demand={("example mf", "1"): 1}, method=("something",))
lca = bc.LCA(fu, data_objs=data_objs)
lca.lci()
```
If we look at the technosphere matrix, we can see our background database (upper left quadrant), and the two production exchanges in the lower right:
```
lca.technosphere_matrix.toarray()
```
# Handling multifunctionality
There are many ways to do this. This notebook is an illustration of how such approaches can be made easier using the helper libraries [bw_processing](https://github.com/brightway-lca/bw_processing) and [matrix_utils](https://github.com/brightway-lca/matrix_utils), not a statement that one approach is better (or even correct).
We create a new, in-memory "delta" `bw_processing` data package that gives new values for some additional columns in the matrix (the virtual activities generated by allocating each product), as well as updating values in the existing matrix.
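As a quick check of the idea, the economic allocation factors for the example activity above (amounts 2 and 4 at prices 4 and 6, as defined earlier) work out to:

```python
# Economic allocation factors for the example multifunctional activity:
# product ("example mf", "2"): amount 2 at price 4
# product ("example mf", "3"): amount 4 at price 6
amounts = [2, 4]
prices = [4, 6]

values = [a * p for a, p in zip(amounts, prices)]   # [8, 24]
factors = [v / sum(values) for v in values]
print(factors)  # [0.25, 0.75]
```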
```
def economic_allocation(dataset):
assert isinstance(dataset, bd.backends.Activity)
# Split exchanges into functional and non-functional
functions = [exc for exc in dataset.exchanges() if exc.input.get('type') in {'good', 'waste'}]
others = [exc for exc in dataset.exchanges() if exc.input.get('type') not in {'good', 'waste'}]
for exc in functions:
assert exc.input.get("price") is not None
total_value = sum([exc.input['price'] * exc['amount'] for exc in functions])
# Plus one because need to add (missing) production exchanges
n = len(functions) * (len(others) + 1) + 1
data = np.zeros(n)
indices = np.zeros(n, dtype=bwp.INDICES_DTYPE)
flip = np.zeros(n, dtype=bool)
for i, f in enumerate(functions):
allocation_factor = f['amount'] * f.input['price'] / total_value
col = bd.get_id(f.input)
# Add explicit production
data[i * (len(others) + 1)] = f['amount']
indices[i * (len(others) + 1)] = (col, col)
for j, o in enumerate(others):
index = i * (len(others) + 1) + j + 1
data[index] = o['amount'] * allocation_factor
flip[index] = o['type'] in {'technosphere', 'generic consumption'}
indices[index] = (bd.get_id(o.input), col)
# Add implicit production of allocated dataset
data[-1] = 1
indices[-1] = (dataset.id, dataset.id)
# Note: This assumes everything is in technosphere, a real function would also
# patch the biosphere
allocated = bwp.create_datapackage(sum_intra_duplicates=True, sum_inter_duplicates=False)
allocated.add_persistent_vector(
matrix="technosphere_matrix",
indices_array=indices,
flip_array=flip,
data_array=data,
name=f"Allocated version of {dataset}",
)
return allocated
dp = economic_allocation(bd.get_activity(("example mf", "1")))
lca = bc.LCA({bd.get_id(("example mf", "2")): 1}, data_objs=data_objs + [dp])
lca.lci()
```
Note that the last two columns, when summed together, form the unallocated activity (column 4):
```
lca.technosphere_matrix.toarray()
```
To make sure what we have done is clear, we can create the matrix just for the "delta" data package:
```
mu.MappedMatrix(packages=[dp], matrix="technosphere_matrix").matrix.toarray()
```
And we can now do LCAs of both allocated products:
```
lca.lcia()
lca.score
lca = bc.LCA({bd.get_id(("example mf", "3")): 1}, data_objs=data_objs + [dp])
lca.lci()
lca.lcia()
lca.score
```
___
<a href='http://www.pieriandata.com'><img src='../Pierian_Data_Logo.png'/></a>
___
<center><em>Copyright Pierian Data</em></center>
<center><em>For more information, visit us at <a href='http://www.pieriandata.com'>www.pieriandata.com</a></em></center>
# DataFrames
DataFrames are the workhorse of pandas and are directly inspired by the R programming language. We can think of a DataFrame as a bunch of Series objects put together to share the same index. Let's use pandas to explore this topic!
```
import pandas as pd
import numpy as np
from numpy.random import randint
columns= ['W', 'X', 'Y', 'Z'] # four columns
index= ['A', 'B', 'C', 'D', 'E'] # five rows
np.random.seed(42)
data = randint(-100,100,(5,4))
data
df = pd.DataFrame(data,index,columns)
df
```
# Selection and Indexing
Let's learn the various methods to grab data from a DataFrame
# COLUMNS
## Grab a single column
```
df['W']
```
## Grab multiple columns
```
# Pass a list of column names
df[['W','Z']]
```
### DataFrame Columns are just Series
```
type(df['W'])
```
### Creating a new column:
```
df['new'] = df['W'] + df['Y']
df
```
## Removing Columns
```
# axis=1 because its a column
df.drop('new',axis=1)
# Not inplace unless reassigned!
df
df = df.drop('new',axis=1)
df
```
## Working with Rows
## Selecting one row by name
```
df.loc['A']
```
## Selecting multiple rows by name
```
df.loc[['A','C']]
```
## Select single row by integer index location
```
df.iloc[0]
```
## Select multiple rows by integer index location
```
df.iloc[0:2]
```
## Remove row by name
```
df.drop('C',axis=0)
# NOT IN PLACE!
df
```
### Selecting subset of rows and columns at same time
```
df.loc[['A','C'],['W','Y']]
```
# Conditional Selection
An important feature of pandas is conditional selection using bracket notation, very similar to numpy:
```
df
df>0
df[df>0]
df['X']>0
df[df['X']>0]
df[df['X']>0]['Y']
df[df['X']>0][['Y','Z']]
```
For two conditions you can use | and & with parentheses:
```
df[(df['W']>0) & (df['Y'] > 1)]
```
## More Index Details
Let's discuss some more features of indexing, including resetting the index or setting it something else. We'll also talk about index hierarchy!
```
df
# Reset to default 0,1...n index
df.reset_index()
df
newind = 'CA NY WY OR CO'.split()
newind
df['States'] = newind
df
df.set_index('States')
df
df = df.set_index('States')
df
```
## DataFrame Summaries
There are a couple of ways to obtain summary data on DataFrames.<br>
<tt><strong>df.describe()</strong></tt> provides summary statistics on all numerical columns.<br>
<tt><strong>df.info()</strong></tt> and <tt><strong>df.dtypes</strong></tt> display the data type of all columns.
```
df.describe()
df.dtypes
df.info()
```
# Great Job!
```
# general tools
import warnings
import requests
import pickle
import math
import re
# visualization tools
import matplotlib.pyplot as plt
from tqdm.auto import tqdm
import seaborn as sns
# data preprocessing tools
import pandas as pd
from shapely.geometry import Point
import numpy as np
from scipy.spatial.distance import cdist
tqdm.pandas()
plt.style.use('seaborn')
warnings.filterwarnings("ignore")
%run ../src/utils.py
traffic = pd.read_csv('../data/external/Traffic_Published_2016.csv')
traffic.shape
traffic.info()
traffic = traffic.dropna(subset=['Lat'])
traffic.shape
train = pd.read_csv('../data/raw/data_train.zip', index_col='Unnamed: 0', low_memory=True)
test = pd.read_csv('../data/raw/data_test.zip', index_col='Unnamed: 0', low_memory=True)
train.shape, test.shape
data = pd.concat([train, test], axis=0)
data.shape
import pyproj
converter = pyproj.Proj("+proj=merc +lat_ts=0 +lat_0=0 +lon_0=0 +x_0=0 \
+y_0=0 +ellps=WGS84 +datum=WGS84 +units=m +no_defs")
# NOTE: pyproj's inverse transform returns (longitude, latitude), in that order
data['lat_lon_entry'] = [converter(x, y, inverse=True) for x, y in zip(data.x_entry, data.y_entry)]
data['lat_entry'] = data.lat_lon_entry.apply(lambda row: row[1])
data['lon_entry'] = data.lat_lon_entry.apply(lambda row: row[0])
data['lat_lon_exit'] = [converter(x, y, inverse=True) for x, y in zip(data.x_exit, data.y_exit)]
data['lat_exit'] = data.lat_lon_exit.apply(lambda row: row[1])
data['lon_exit'] = data.lat_lon_exit.apply(lambda row: row[0])
data['euclidean_distance'] = euclidean(data.x_entry.values, data.y_entry.values,
data.x_exit.values, data.y_exit.values)
from math import hypot
from scipy.spatial.distance import cdist
from tqdm import tqdm
traffic = traffic.reset_index(drop=True)
coords_traff = list(zip(traffic.Lat.values, traffic.Long.values))
data['idx_traffic'] = np.zeros(data.shape[0])
df_copy = data.copy()
df_copy = df_copy[df_copy.euclidean_distance!=0]
df_copy = df_copy.reset_index(drop=True)
def minimum_distance(data, row_type='entry'):
    # for each point, store the index of the nearest traffic measurement site
    for idx, (lat, long) in tqdm(enumerate(list(zip(data['lat_'+row_type].values, data['lon_'+row_type].values)))):
        idx_traffic = cdist([(lat, long)], coords_traff).argmin()
        data.loc[idx, 'idx_traffic'] = idx_traffic
    return data
df_copy = minimum_distance(df_copy, row_type='exit')
df_copy['idx_traffic'] = df_copy.idx_traffic.astype(int)
df_copy.head(4)
traffic_cols = traffic.columns.tolist()
traffic = traffic.reset_index(drop=False)
#traffic.columns = ['idx_traffic']+[traffic_cols]
df_copy['index'] = df_copy.idx_traffic.values
df_final = df_copy.merge(traffic, on='index')
df_final.head(4)
final_columns = list(set(traffic.columns.tolist()) - set(['level_0', 'index']))
final_columns += ['hash', 'trajectory_id']
for col in final_columns:
if col not in ['hash', 'trajectory_id']:
df_final = df_final.rename(index=str, columns={col: col+'_exit'})
df_final.head(4)
final_columns = ['hash', 'trajectory_id'] + [col+'_exit' for col in final_columns if col not in ['hash', 'trajectory_id']]
df_final[final_columns].head(4)
df_final = df_final.drop('COUNTY_NAME_exit', axis=1)
final_columns = list(set(final_columns) - set(['COUNTY_NAME_exit']))
df_final[final_columns].to_hdf('../data/external/traffic_exit_features.hdf', key='exit', mode='w')
```
From this point, we will perform a round of exploration and visualization regarding the newfound external data.
```
traffic_exit = pd.read_hdf('../data/raw/traffic_exit_features.hdf', key='exit', mode='r')
traffic_entry = pd.read_hdf('../data/raw/traffic_entry_features.hdf', key='entry', mode='r')
traffic_entry.shape, traffic_exit.shape
traffic_entry.head(4).T
```
- AADT: Annual Average Daily Traffic, the total volume of vehicle traffic on a roadway for a year divided by 365 days.
- K_FACTOR: the proportion of annual average daily traffic occurring in a single hour. This factor is used for designing and analyzing the flow of traffic on highways.
- ROUTE_ID: integer value identifying each road in the state of Georgia's road network.
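As a quick illustration of how AADT and K_FACTOR combine in practice (a hypothetical example using standard traffic-engineering conventions, not values from this dataset), the design hourly volume is commonly estimated as AADT × K:

```python
def design_hourly_volume(aadt, k_factor_percent):
    """Estimate the design hourly volume (DHV) from AADT and a K-factor given in percent."""
    return aadt * k_factor_percent / 100.0

# A road carrying 40,000 vehicles/day with a K-factor of 10%
# has an estimated design-hour volume of 4,000 vehicles/hour.
print(design_hourly_volume(40000, 10))  # 4000.0
```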
```
# Custom bar color used below: 29.8% red, 74.1% green, 74.9% blue
fig, ax = plt.subplots(2, 1, figsize=(18, 15))
sns.set_style("whitegrid")
sns.distplot(traffic_entry.AADT_entry.dropna().values,
kde=False,
hist_kws={"linewidth": 3,
"alpha": 1,
"color": "coral"},
ax=ax[0])
sns.distplot(traffic_entry.K_FACTOR_entry.dropna().values,
kde=False,
hist_kws={"linewidth": 3,
"alpha": 1,
"color": [(0.298, 0.741, 0.749)]},
ax=ax[1])
ax[0].set_title('Annual Average Daily Traffic Distribution', fontsize=30)
ax[1].set_title('K-Factor: Proportion of annual average daily traffic occurring in an hour',
fontsize=30)
ax[0].set_xlim(0, 150000)
ax[1].set_xlim(0, 25)
ax[0].grid(False)
ax[1].grid(False)
ax[0].tick_params(axis='both', which='major', labelsize=20)
ax[1].tick_params(axis='both', which='major', labelsize=20)
traffic_entry.AADT_entry.hist(bins=100)
traffic_entry.K_FACTOR_entry.hist(bins=100)
sns.countplot(x='ROUTE_ID_entry', data=traffic_entry)
```
| github_jupyter |
## Ensembl to RefSeq Mapping
The constraint table from gnomAD has duplicate gene IDs; in the example of TUBB3, one gene ID is misannotated. Given our analysis is by transcript, it is probably better to use the transcript table from gnomAD. However, gnomAD used Ensembl transcripts and we used RefSeq transcripts. We can map the two through BioMart:
http://www.ensembl.org/biomart/martview/e81bf786e69482239d8e7799ec2c9e9e
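To sketch what that BioMart export enables, here is a minimal pandas version of the mapping step (toy IDs, not real transcripts; the real work below is done in Spark):

```python
import pandas as pd

# Toy stand-in for the BioMart export: Ensembl transcript IDs with RefSeq matches
ens2ref = pd.DataFrame({
    "TranscriptID": ["ENST0001", "ENST0001", "ENST0002"],
    "RefSeq": ["NM_0001", None, "NM_0002"],
})

# Keep only transcripts with a RefSeq match and deduplicate the pairs
ref2ens = ens2ref.dropna(subset=["RefSeq"]).drop_duplicates()
print(len(ref2ens))  # 2
```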
```
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window
val customSchema = new StructType(Array(
StructField("GeneID",StringType,true),
StructField("GeneIDVer",StringType,true),
StructField("TranscriptID",StringType,true),
StructField("TranscriptIDVer",StringType,true),
StructField("EnsemblGeneSymbol",StringType,true),
StructField("GeneType",StringType,true),
StructField("GENE",StringType,true),
StructField("RefSeq",StringType,true),
StructField("NCBIGeneID",IntegerType,true)))
val df_ens2ref = (spark.read
.format("csv")
.option("header", "true")
.option("delimiter", "\t")
.option("nullValues", "")
.schema(customSchema)
.load("s3://nch-igm-research-projects/rna_stability/peter/ensembl_2_refeq.txt")
)
df_ens2ref.printSchema
df_ens2ref.filter($"GENE" === "TUBB3").show
```
Double-check that there are no duplicates
```
val df_ref2ens = df_ens2ref.filter($"RefSeq".isNotNull).select("TranscriptID","RefSeq").sort($"TranscriptID").distinct
df_ref2ens.show
df_ref2ens.count
df_ref2ens.select($"RefSeq").distinct.count
df_ref2ens.groupBy($"RefSeq").count.sort($"count".desc).show
```
## gnomAD lof metrics by transcript
We want to link the data to gnomAD constraint metrics (LOEUF and pLI):
Supplemental Table describing data fields:
https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-020-2308-7/MediaObjects/41586_2020_2308_MOESM1_ESM.pdf
Select the following columns from the main file:
* **gene**: Gene name
* **gene_id**: Ensembl gene ID
* **transcript**: Ensembl transcript ID (Gencode v19)
* **obs_mis**: Number of observed missense variants in transcript
* **exp_mis**: Number of expected missense variants in transcript
* **oe_mis**: Observed over expected ratio for missense variants in transcript (obs_mis divided by exp_mis)
* **obs_syn**: Number of observed synonymous variants in transcript
* **exp_syn**: Number of expected synonymous variants in transcript
* **oe_syn**: Observed over expected ratio for synonymous variants in transcript (obs_syn divided by exp_syn)
* **p**: The estimated proportion of haplotypes with a pLoF variant. Defined as: 1 - sqrt(no_lofs / defined)
* **pLI**: Probability of loss-of-function intolerance; probability that transcript falls into distribution of haploinsufficient genes (~9% o/e pLoF ratio; computed from gnomAD data)
* **pRec**: Probability that transcript falls into distribution of recessive genes (~46% o/e pLoF ratio; computed from gnomAD data)
* **pNull**: Probability that transcript falls into distribution of unconstrained genes (~100% o/e pLoF ratio; computed from gnomAD data)
* **oe_lof_upper**: LOEUF: upper bound of 90% confidence interval for o/e ratio for pLoF variants (lower values indicate more constrained)
* **oe_lof_upper_rank**: Transcript’s rank of LOEUF value compared to all transcripts (lower values indicate more constrained)
* **oe_lof_upper_bin**: Decile bin of LOEUF for given transcript (lower values indicate more constrained)
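The `p` field above can be reproduced directly from its definition; a small sketch with illustrative numbers (not real gnomAD counts):

```python
import math

def prop_haplotypes_with_plof(no_lofs, defined):
    """p = 1 - sqrt(no_lofs / defined): estimated proportion of haplotypes with a pLoF variant."""
    return 1 - math.sqrt(no_lofs / defined)

# If 90 of 100 defined genotypes carry no pLoF variant:
print(round(prop_haplotypes_with_plof(90, 100), 4))  # 0.0513
```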
```
val cols_constraint = Seq("gene","gene_id","transcript","obs_mis","exp_mis","oe_mis","obs_syn","exp_syn","oe_syn",
"p","pLI","pRec","pNull","oe_lof_upper","oe_lof_upper_rank","oe_lof_upper_bin")
val df_loft_import = (spark.read
.format("csv")
.option("header", "true")
.option("delimiter", "\t")
.option("inferSchema", "true")
.option("nullValues", "NA")
.option("nanValue", "NA")
.load("s3://nch-igm-research-projects/rna_stability/peter/gnomad.v2.1.1.lof_metrics.by_transcript.txt")
.select(cols_constraint.map(col): _*)
)
val df_loft = (df_loft_import.withColumn("p",col("p").cast(DoubleType))
.withColumn("pLI",col("pLI").cast(DoubleType))
.withColumn("pRec",col("pRec").cast(DoubleType))
.withColumn("pNull",col("pNull").cast(DoubleType))
.withColumn("oe_lof_upper",col("oe_lof_upper").cast(DoubleType))
.withColumn("oe_lof_upper_rank",col("oe_lof_upper_rank").cast(IntegerType))
.withColumn("oe_lof_upper_bin",col("oe_lof_upper_bin").cast(IntegerType))
.withColumnRenamed("gene", "gnomadGeneSymbol")
.withColumnRenamed("gene_id", "gnomadGeneID")
.withColumnRenamed("transcript", "TranscriptID")
)
df_loft.printSchema
df_loft.filter($"gnomadGeneSymbol" === "TUBB3").show
```
Let's check that TranscriptID is not duplicated
```
df_loft.select($"TranscriptID").distinct.count
df_loft.count
df_loft.filter($"p".isNotNull && $"pLI" < 0.9).
select("gnomadGeneSymbol","p","pLI","oe_lof_upper","oe_lof_upper_rank","oe_lof_upper_bin").sort($"pLI".desc).show
```
## Join gnomAD lof table to RefSeq to Ensembl table
```
val df_loft_ref = (df_loft.as("df_loft")
.join(df_ref2ens.as("df_ref2ens"), $"df_loft.TranscriptID" === $"df_ref2ens.TranscriptID", "inner")
.drop($"df_ref2ens.TranscriptID"))
```
Note that the number of rows has now increased from 80,950 to 95,806; this is because Ensembl transcripts can map to multiple RefSeq transcripts and vice versa. We now need to make a table where the RefSeq field is not duplicated.
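The row inflation from a many-to-many join is easy to see on a toy example (pandas here for brevity; the notebook itself uses Spark):

```python
import pandas as pd

loft = pd.DataFrame({"TranscriptID": ["T1"], "pLI": [0.95]})
# One Ensembl transcript mapping to two RefSeq transcripts
ref2ens = pd.DataFrame({"TranscriptID": ["T1", "T1"], "RefSeq": ["NM_1", "NM_2"]})

joined = loft.merge(ref2ens, on="TranscriptID")
print(len(joined))  # 2: one constraint row became two joined rows
```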
```
df_loft_ref.groupBy($"RefSeq").count.sort($"count".desc).show
```
For duplicate RefSeqs we will choose the row with the highest pLI value (i.e. most constrained) and, where pLI is tied, the lowest oe_lof_upper_rank (i.e. most constrained).
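In pandas terms, that tie-breaking rule looks like this (toy values; the Spark groupBy/agg/join version follows):

```python
import pandas as pd

df = pd.DataFrame({
    "RefSeq": ["NM_1", "NM_1", "NM_2"],
    "pLI": [0.99, 0.50, 0.10],
    "oe_lof_upper_rank": [120, 80, 300],
})

# Highest pLI wins; ties on pLI are broken by the lowest oe_lof_upper_rank
dedup = (df.sort_values(["pLI", "oe_lof_upper_rank"], ascending=[False, True])
           .drop_duplicates("RefSeq"))
print(dedup)
```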
```
df_loft_ref.filter($"RefSeq" === "NM_206955" || $"RefSeq" === "NM_145021").show
val df_high_pLI = df_loft_ref.groupBy($"RefSeq").agg(max($"pLI"), min($"oe_lof_upper_rank"))
df_high_pLI.filter($"RefSeq" === "NM_206955" || $"RefSeq" === "NM_145021").show
```
Finally, create a table with unique RefSeq entries by joining to the high-pLI table
```
val df_loft_ref_uniq = ( df_loft_ref.join(df_high_pLI.as("pli"),
df_loft_ref("RefSeq") === df_high_pLI("RefSeq") &&
df_loft_ref("pLI") === df_high_pLI("max(pLI)") &&
df_loft_ref("oe_lof_upper_rank") === df_high_pLI("min(oe_lof_upper_rank)"),
"inner")
.drop($"pli.RefSeq").drop($"pli.max(pLI)").drop($"pli.min(oe_lof_upper_rank)") )
df_loft_ref_uniq.groupBy($"RefSeq").count.sort($"count".desc).show
df_loft_ref_uniq.orderBy(rand()).limit(10).show
```
Constrained genes are those with a pLI > 0.9
```
df_loft_ref_uniq.filter($"pLI" >= 0.9).groupBy($"oe_lof_upper_bin").count.sort($"count".desc).show
```
Probability that transcript falls into distribution of unconstrained genes
```
df_loft_ref_uniq.filter($"pNull" <= 0.05).groupBy($"oe_lof_upper_bin").count.sort($"count".desc).show
df_loft_ref_uniq.filter($"pNull" > 0.05).groupBy($"oe_lof_upper_bin").count.sort($"count".desc).show
```
Probability that transcript falls into distribution of recessive genes (~46% o/e pLoF ratio; computed from gnomAD data)
```
df_loft_ref_uniq.filter($"pRec" <= 0.05).groupBy($"oe_lof_upper_bin").count.sort($"count".desc).show
df_loft_ref_uniq.filter($"pRec" > 0.05).groupBy($"oe_lof_upper_bin").count.sort($"count".desc).show
```
### Write out gnomAD pLI with RefSeq Metrics
```
(df_loft_ref_uniq.write.mode("overwrite")
.parquet("s3://nch-igm-research-projects/rna_stability/peter/gnomAD_pLI_RefSeq.parquet"))
```
Exercise 1: Compute the square root of the mean of n random integers, where each integer lies between m and k.
```
import random, math
def test():
    total = 0
    for _ in range(n):
        number = random.randint(m, k)
        print('Random number:', number)
        total += number
    average = total / n
    return math.sqrt(average)
# main program
m = int(input('Enter the lower bound: '))
k = int(input('Enter the upper bound: '))
n = int(input('How many random integers: '))
print('Result:', test())
```
Exercise 2: Write functions that draw n random integers between m and k (n, m, k entered by the user). Compute (1) the sum of log(random integer) over all draws, and (2) the sum of 1/log(random integer) over all draws.
```
import random, math
def test1():
    result = 0
    for _ in range(n):
        number = random.randint(m, k)
        print('Random integer for part 1:', number)
        result += math.log10(number)
    return result
def test2():
    result = 0
    for _ in range(n):
        number = random.randint(m, k)
        print('Random integer for part 2:', number)
        result += 1 / math.log10(number)
    return result
# main program
n = int(input('How many random integers: '))
m = int(input('Enter the lower bound: '))
k = int(input('Enter the upper bound: '))
print()
print('Result of part 1:', test1())
print()
print('Result of part 2:', test2())
```
Exercise 3: Write a function that computes s = a + aa + aaa + aaaa + aa...a, where a is a random integer in [1, 9]. For example, 2+22+222+2222+22222 (five terms); the number of terms is entered from the keyboard.
```
import random
def test():
    a = random.randint(1, 9)
    print('Random integer a:', a)
    number = 0
    total = 0
    for i in range(n):
        number += a * 10**i
        total += number
    return total
# main program
n = int(input('Number of terms: '))
print('Result:', test())
```
Challenge exercise: Following task5, invert the guessing game: the user picks an arbitrary integer and the computer guesses it, using a strategy similar to the human player's in task5 but with the roles swapped. The human judges whether each guess is too high, too low, or correct. Write the complete game.
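The computer's strategy is essentially a binary search over the remaining interval, which guarantees success in at most ceil(log2(n)) guesses. A condensed, non-interactive sketch of that idea (the `computer_guesses` helper is illustrative, not part of the game code below):

```python
def computer_guesses(secret, upper):
    """Guess `secret` in [1, upper] by halving the interval; returns the guess count."""
    low, high, guesses = 1, upper, 0
    while True:
        guess = (low + high) // 2
        guesses += 1
        if guess == secret:
            return guesses
        elif guess > secret:   # the player would answer "too high"
            high = guess - 1
        else:                  # the player would answer "too low"
            low = guess + 1

print(computer_guesses(37, 100))  # 3
print(max(computer_guesses(s, 100) for s in range(1, 101)))  # 7
```

Halving the interval is why the game below can afford a guess limit on the order of log2(n).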
```
import random, math
def win():
print(
'''
======YOU WIN=======
."". ."",
| | / /
| | / /
| | / /
| |/ ;-._
} ` _/ / ;
| /` ) / /
| / /_/\_/\
|/ / |
( ' \ '- |
\ `. /
| |
| |
======YOU WIN=======
'''
)
def lose():
print(
'''
======YOU LOSE=======
.-" "-.
/ \
| |
|, .-. .-. ,|
| )(__/ \__)( |
|/ /\ \|
(@_ (_ ^^ _)
_ ) \_______\__|IIIIII|__/__________________________
(_)@8@8{}<________|-\IIIIII/-|___________________________>
)_/ \ /
(@ `--------`
======YOU LOSE=======
'''
)
def game_over():
print(
'''
======GAME OVER=======
_________
/ ======= \
/ __________\
| ___________ |
| | - | |
| | | |
| |_________| |________________
\=____________/ )
/ """"""""""" \ /
/ ::::::::::::: \ =D-'
(_________________)
======GAME OVER=======
'''
)
def show_team():
    print('''
    *** Credits ***
    This game was developed by team PXS''')
def show_instruction():
    print('''
    Instructions
    The player picks an arbitrary integer, and the computer tries to guess it.
    If the computer guesses the number within the allowed number of tries, the computer wins.
    If not, the player wins.''')
def menu():
    print('''
    ===== Game Menu =====
    1. Instructions
    2. Start game
    3. Quit
    4. Credits
    ===== Game Menu =====''')
def guess_game():
    n = int(input('Enter an integer greater than 0 as the upper bound for the secret number: '))
    max_times = int(math.log(n, 2))
    print('Allowed number of guesses:', max_times)
    print()
    guess = random.randint(1, n)
    print('My guess is:', guess)
    guess_times = 1
    max_number = n
    min_number = 1
    while guess_times < max_times:
        answer = input("Did I guess right? (enter 'yes' or 'no') ")
        if answer == 'yes':
            lose()
            break
        if answer == 'no':
            x = input("Is my guess too high or too low? (enter 'high' or 'low') ")
            print()
            if x in ('high', 'low'):
                if x == 'high':
                    max_number = guess - 1
                else:
                    min_number = guess + 1
                guess = random.randint(min_number, max_number)
                print('My guess is:', guess)
                guess_times += 1
                print('I have guessed', guess_times, 'times')
                print()
                if guess_times == max_times:
                    ask = input('''*** Guess limit reached ***
Did I guess right? (enter 'yes' or 'no') ''')
                    if ask == 'no':
                        end()
                        break
                    else:
                        lose()
def end():
    a = input('Your secret number was: ')
    print()
    print('So it was', a, '!')
    win()
win()
# main function
def main():
    while True:
        menu()
        choice = int(input('Enter your choice: '))
        if choice == 1:
            show_instruction()
        elif choice == 2:
            guess_game()
        elif choice == 3:
            game_over()
            break
        else:
            show_team()
# main program
if __name__ == '__main__':
    main()
```
# Locality Sensitive Hashing
```
import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix
from sklearn.metrics.pairwise import pairwise_distances
import time
from copy import deepcopy
import matplotlib.pyplot as plt
%matplotlib inline
def norm(x):
    '''Compute the norm of a sparse vector.
    Thanks to: Jaiyam Sharma'''
    sum_sq = x.dot(x.T)
    return np.sqrt(sum_sq)
```
## Load in the Wikipedia dataset
```
wiki = pd.read_csv('people_wiki.csv')
wiki.head()
```
## Extract TF-IDF matrix
```
def load_sparse_csr(filename):
loader = np.load(filename)
data = loader['data']
indices = loader['indices']
indptr = loader['indptr']
shape = loader['shape']
return csr_matrix((data, indices, indptr), shape)
corpus = load_sparse_csr('people_wiki_tf_idf.npz')
assert corpus.shape == (59071, 547979)
print('Check passed correctly!')
```
## Train an LSH model
```
def generate_random_vectors(num_vector, dim):
return np.random.randn(dim, num_vector)
# Generate 3 random vectors of dimension 5, arranged into a single 5 x 3 matrix.
np.random.seed(0) # set seed=0 for consistent results
generate_random_vectors(num_vector=3, dim=5)
# Generate 16 random vectors of dimension 547979
np.random.seed(0)
random_vectors = generate_random_vectors(num_vector=16, dim=547979)
random_vectors.shape
doc = corpus[0, :] # vector of tf-idf values for document 0
doc.dot(random_vectors[:, 0]) >= 0 # True if positive sign; False if negative sign
doc.dot(random_vectors[:, 1]) >= 0 # True if positive sign; False if negative sign
doc.dot(random_vectors) >= 0 # should return an array of 16 True/False bits
np.array(doc.dot(random_vectors) >= 0, dtype=int) # display index bits in 0/1's
corpus[0:2].dot(random_vectors) >= 0 # compute bit indices of first two documents
corpus.dot(random_vectors) >= 0 # compute bit indices of ALL documents
doc = corpus[0, :] # first document
index_bits = (doc.dot(random_vectors) >= 0)
powers_of_two = (1 << np.arange(15, -1, -1))
print(index_bits)
print(powers_of_two)
print(index_bits.dot(powers_of_two))
index_bits = corpus.dot(random_vectors) >= 0
index_bits.dot(powers_of_two)
def train_lsh(data, num_vector=16, seed=None):
dim = data.shape[1]
if seed is not None:
np.random.seed(seed)
random_vectors = generate_random_vectors(num_vector, dim)
powers_of_two = 1 << np.arange(num_vector-1, -1, -1)
table = {}
# Partition data points into bins
bin_index_bits = (data.dot(random_vectors) >= 0)
# Encode bin index bits into integers
bin_indices = bin_index_bits.dot(powers_of_two)
# Update `table` so that `table[i]` is the list of document ids with bin index equal to i.
for data_index, bin_index in enumerate(bin_indices):
if bin_index not in table:
# If no list yet exists for this bin, assign the bin an empty list.
table[bin_index] = list() # YOUR CODE HERE
# Fetch the list of document ids associated with the bin and add the document id to the end.
table[bin_index].append(data_index)# YOUR CODE HERE
model = {'data': data,
'bin_index_bits': bin_index_bits,
'bin_indices': bin_indices,
'table': table,
'random_vectors': random_vectors,
'num_vector': num_vector}
return model
model = train_lsh(corpus, num_vector=16, seed=143)
table = model['table']
if 0 in table and table[0] == [39583] and \
143 in table and table[143] == [19693, 28277, 29776, 30399]:
print('Passed!')
else:
print('Check your code.')
```
## Inspect bins
```
wiki[wiki['name'] == 'Barack Obama']
print(model['bin_indices'][35817])
wiki[wiki['name'] == 'Joe Biden']
print(np.array(model['bin_index_bits'][24478], dtype=int)) # list of 0/1's
print(model['bin_indices'][24478]) # integer format
sum(model['bin_index_bits'][24478] == model['bin_index_bits'][35817])
wiki[wiki['name']=='Wynn Normington Hugh-Jones']
print(np.array(model['bin_index_bits'][22745], dtype=int)) # list of 0/1's
print(model['bin_indices'][22745])# integer format
model['bin_index_bits'][35817] == model['bin_index_bits'][22745]
model['table'][model['bin_indices'][35817]]
doc_ids = list(model['table'][model['bin_indices'][35817]])
doc_ids.remove(35817) # display documents other than Obama
docs = wiki[wiki.index.isin(doc_ids)]
docs
def cosine_distance(x, y):
xy = x.dot(y.T)
dist = xy/(norm(x)*norm(y))
return 1-dist[0,0]
obama_tf_idf = corpus[35817,:]
biden_tf_idf = corpus[24478,:]
print('================= Cosine distance from Barack Obama')
print('Barack Obama - {0:24s}: {1:f}'.format('Joe Biden',
cosine_distance(obama_tf_idf, biden_tf_idf)))
for doc_id in doc_ids:
doc_tf_idf = corpus[doc_id,:]
print('Barack Obama - {0:24s}: {1:f}'.format(wiki.iloc[doc_id]['name'],
cosine_distance(obama_tf_idf, doc_tf_idf)))
```
## Query the LSH model
```
from itertools import combinations
num_vector = 16
search_radius = 3
for diff in combinations(range(num_vector), search_radius):
print(diff)
def search_nearby_bins(query_bin_bits, table, search_radius=2, initial_candidates=set()):
"""
For a given query vector and trained LSH model, return all candidate neighbors for
the query among all bins within the given search radius.
Example usage
-------------
>>> model = train_lsh(corpus, num_vector=16, seed=143)
>>> q = model['bin_index_bits'][0] # vector for the first document
>>> candidates = search_nearby_bins(q, model['table'])
"""
num_vector = len(query_bin_bits)
powers_of_two = 1 << np.arange(num_vector-1, -1, -1)
# Allow the user to provide an initial set of candidates.
candidate_set = deepcopy(initial_candidates)
for different_bits in combinations(range(num_vector), search_radius):
# Flip the bits (n_1,n_2,...,n_r) of the query bin to produce a new bit vector.
## Hint: you can iterate over a tuple like a list
alternate_bits = deepcopy(query_bin_bits)
for i in different_bits:
            alternate_bits[i] = 1 - alternate_bits[i]  # YOUR CODE HERE: flip bit i
# Convert the new bit vector to an integer index
nearby_bin = alternate_bits.dot(powers_of_two)
# Fetch the list of documents belonging to the bin indexed by the new bit vector.
# Then add those documents to candidate_set
# Make sure that the bin exists in the table!
# Hint: update() method for sets lets you add an entire list to the set
if nearby_bin in table:
candidate_set.update(table[nearby_bin])# YOUR CODE HERE: Update candidate_set with the documents in this bin.
return candidate_set
obama_bin_index = model['bin_index_bits'][35817] # bin index of Barack Obama
candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=0)
if candidate_set == set([35817, 21426, 53937, 39426, 50261]):
print('Passed test')
else:
print('Check your code')
print('List of documents in the same bin as Obama: 35817, 21426, 53937, 39426, 50261')
candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=1, initial_candidates=candidate_set)
if candidate_set == set([39426, 38155, 38412, 28444, 9757, 41631, 39207, 59050, 47773, 53937, 21426, 34547,
23229, 55615, 39877, 27404, 33996, 21715, 50261, 21975, 33243, 58723, 35817, 45676,
19699, 2804, 20347]):
print('Passed test')
else:
print('Check your code')
def query(vec, model, k, max_search_radius):
data = model['data']
table = model['table']
random_vectors = model['random_vectors']
num_vector = random_vectors.shape[1]
# Compute bin index for the query vector, in bit representation.
bin_index_bits = (vec.dot(random_vectors) >= 0).flatten()
# Search nearby bins and collect candidates
candidate_set = set()
for search_radius in range(max_search_radius+1):
candidate_set = search_nearby_bins(bin_index_bits, table, search_radius, initial_candidates=candidate_set)
# Sort candidates by their true distances from the query
nearest_neighbors = pd.DataFrame({'id':list(candidate_set)})
candidates = data[np.array(list(candidate_set)),:]
nearest_neighbors['distance'] = pairwise_distances(candidates, vec, metric='cosine').flatten()
    return nearest_neighbors.nsmallest(k, 'distance')[['id', 'distance']], len(candidate_set)
query(corpus[35817,:], model, k=10, max_search_radius=3)
query(corpus[35817,:], model, k=10, max_search_radius=3)[0].set_index('id').join(wiki[['name']], how='inner').sort_values('distance')
```
# Classification
```
from nltk.corpus import reuters
import spacy
import re
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import f1_score, precision_score, recall_score
nlp = spacy.load("en_core_web_md")
def tokenize(text):
min_length = 3
tokens = [word.lemma_ for word in nlp(text) if not word.is_stop]
    p = re.compile('[a-zA-Z]+')
    filtered_tokens = list(filter(lambda token: p.match(token) and len(token) >= min_length, tokens))
return filtered_tokens
def represent_tfidf(train_docs, test_docs):
representer = TfidfVectorizer(tokenizer=tokenize)
# Learn and transform train documents
vectorised_train_documents = representer.fit_transform(train_docs)
vectorised_test_documents = representer.transform(test_docs)
return vectorised_train_documents, vectorised_test_documents
def doc2vec(text):
min_length = 3
p = re.compile('[a-zA-Z]+')
tokens = [token for token in nlp(text) if not token.is_stop and
p.match(token.text) and
len(token.text) >= min_length]
doc = np.average([token.vector for token in tokens], axis=0)
return doc
def represent_doc2vec(train_docs, test_docs):
vectorised_train_documents = [doc2vec(doc) for doc in train_docs]
vectorised_test_documents = [doc2vec(doc) for doc in test_docs]
return vectorised_train_documents, vectorised_test_documents
def evaluate(test_labels, predictions):
precision = precision_score(test_labels, predictions, average='micro')
recall = recall_score(test_labels, predictions, average='micro')
f1 = f1_score(test_labels, predictions, average='micro')
print("Micro-average quality numbers")
print("Precision: {:.4f}, Recall: {:.4f}, F1-measure: {:.4f}".format(precision, recall, f1))
precision = precision_score(test_labels, predictions, average='macro')
recall = recall_score(test_labels, predictions, average='macro')
f1 = f1_score(test_labels, predictions, average='macro')
print("Macro-average quality numbers")
print("Precision: {:.4f}, Recall: {:.4f}, F1-measure: {:.4f}".format(precision, recall, f1))
documents = reuters.fileids()
train_docs_id = list(filter(lambda doc: doc.startswith("train"), documents))
test_docs_id = list(filter(lambda doc: doc.startswith("test"), documents))
train_docs = [reuters.raw(doc_id) for doc_id in train_docs_id]
test_docs = [reuters.raw(doc_id) for doc_id in test_docs_id]
# Transform multilabel labels
mlb = MultiLabelBinarizer()
train_labels = mlb.fit_transform([reuters.categories(doc_id) for doc_id in train_docs_id])
test_labels = mlb.transform([reuters.categories(doc_id) for doc_id in test_docs_id])
# TFIDF Experiment
model = OneVsRestClassifier(LinearSVC(random_state=42))
vectorised_train_docs, vectorised_test_docs = represent_tfidf(train_docs, test_docs)
model.fit(vectorised_train_docs, train_labels)
predictions = model.predict(vectorised_test_docs)
evaluate(test_labels, predictions)
# Embeddings Experiment
model = OneVsRestClassifier(LinearSVC(random_state=42))
vectorised_train_docs, vectorised_test_docs = represent_doc2vec(train_docs, test_docs)
model.fit(vectorised_train_docs, train_labels)
predictions = model.predict(vectorised_test_docs)
evaluate(test_labels, predictions)
```
```
import torch
import torch.utils.data
from torch.autograd import Variable
import torch.nn as nn
import torch.optim as optim
import numpy as np
import h5py
from data_utils import get_data
import matplotlib.pyplot as plt
from solver_pytorch import Solver
# Load data from all .mat files, combine them, eliminate EOG signals, shuffle, and
# separate training, validation and test data.
# Also do mean subtraction on x.
data = get_data('../project_datasets',num_validation=100, num_test=100)
for k in data.keys():
print('{}: {} '.format(k, data[k].shape))
# class flatten to connect to FC layer
class Flatten(nn.Module):
def forward(self, x):
N, C, H = x.size() # read in N, C, H
return x.view(N, -1)
# turn x and y into torch type tensor
dtype = torch.FloatTensor
X_train = Variable(torch.Tensor(data.get('X_train'))).type(dtype)
y_train = Variable(torch.Tensor(data.get('y_train'))).type(torch.IntTensor)
X_val = Variable(torch.Tensor(data.get('X_val'))).type(dtype)
y_val = Variable(torch.Tensor(data.get('y_val'))).type(torch.IntTensor)
X_test = Variable(torch.Tensor(data.get('X_test'))).type(dtype)
y_test = Variable(torch.Tensor(data.get('y_test'))).type(torch.IntTensor)
# train a 1D convolutional neural network
# optimize hyper parameters
best_model = None
parameters =[] # a list of dictionaries
parameter = {} # a dictionary
best_params = {} # a dictionary
best_val_acc = 0.0
# hyper parameters in model
filter_nums = [20]
filter_sizes = [12]
pool_sizes = [4]
# hyper parameters in solver
batch_sizes = [100]
lrs = [5e-4]
for filter_num in filter_nums:
for filter_size in filter_sizes:
for pool_size in pool_sizes:
linear_size = int((X_test.shape[2]-filter_size)/4)+1
linear_size = int((linear_size-pool_size)/pool_size)+1
linear_size *= filter_num
for batch_size in batch_sizes:
for lr in lrs:
model = nn.Sequential(
nn.Conv1d(22, filter_num, kernel_size=filter_size, stride=4),
nn.ReLU(inplace=True),
nn.Dropout(p=0.5),
nn.BatchNorm1d(num_features=filter_num),
nn.MaxPool1d(kernel_size=pool_size, stride=pool_size),
Flatten(),
nn.Linear(linear_size, 20),
nn.ReLU(inplace=True),
nn.Linear(20, 4)
)
model.type(dtype)
solver = Solver(model, data,
lr = lr, batch_size=batch_size,
verbose=True, print_every=50)
solver.train()
                    # save the parameters of this configuration
                    # (build a fresh dict each iteration so the entries in
                    # `parameters` are not all aliases of the same object)
                    parameter = {'filter_num': filter_num,
                                 'filter_size': filter_size,
                                 'pool_size': pool_size,
                                 'batch_size': batch_size,
                                 'lr': lr}
                    parameters.append(parameter)
print('Accuracy on the validation set: ', solver.best_val_acc)
print('parameters of the best model:')
print(parameter)
if solver.best_val_acc > best_val_acc:
best_val_acc = solver.best_val_acc
best_model = model
best_solver = solver
best_params = parameter
# Plot the loss function and train / validation accuracies of the best model
plt.subplot(2,1,1)
plt.plot(best_solver.loss_history)
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.subplot(2,1,2)
plt.plot(best_solver.train_acc_history, '-o', label='train accuracy')
plt.plot(best_solver.val_acc_history, '-o', label='validation accuracy')
plt.xlabel('Iteration')
plt.ylabel('Accuracies')
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(10, 10)
plt.show()
print('Accuracy on the validation set: ', best_val_acc)
print('parameters of the best model:')
print(best_params)
# test set
y_test_pred = best_model(X_test)
_, y_pred = torch.max(y_test_pred,1)
test_accu = np.mean(y_pred.data.numpy() == y_test.data.numpy())
print('Test accuracy', test_accu, '\n')
```
Rossler performance experiments
```
import numpy as np
import torch
import sys
sys.path.append("../")
import utils as utils
import NMC as models
import importlib
```
## SVAM
```
# LiNGAM / SVAM performance with sparse data
import warnings
warnings.filterwarnings("ignore")
for p in [10, 50]:
perf = []
for i in range(20):
# Simulate data
T = 1000
num_points = T
data, GC = utils.simulate_rossler(p=p, a=0, T=T, delta_t=0.1, sd=0.05, burn_in=0, sigma=0.0)
# format for NeuralODE
data = torch.from_numpy(data[:, None, :].astype(np.float32))
from benchmarks.lingam_benchmark import lingam_method
importlib.reload(utils)
graph = lingam_method(data.squeeze().detach())
perf.append(utils.compare_graphs(GC, graph)) # tpr, fdr
print("Means and standard deviations for TPR, FDR and AUC with", p, "dimensions")
print(np.mean(np.reshape(perf, (-1, 3)), axis=0), np.std(np.reshape(perf, (-1, 3)), axis=0))
```
## DCM
```
# DCM performance with sparse data
for p in [10, 50]:
perf = []
for i in range(10):
# Simulate data
T = 1000
num_points = T
data, GC = utils.simulate_rossler(p=p, a=0, T=T, delta_t=0.1, sd=0.05, burn_in=0, sigma=0.0)
from benchmarks.DCM import DCM_full
graph = DCM_full(data, lambda1=0.001, s=4, w_threshold=0.1)
# plt.matshow(abs(graph),cmap='Reds')
# plt.colorbar()
# plt.show()
perf.append(utils.compare_graphs(GC, graph)) # tpr, fdr
print("Means and standard deviations for TPR, FDR and AUC with", p, "dimensions")
print(np.mean(np.reshape(perf, (-1, 3)), axis=0), np.std(np.reshape(perf, (-1, 3)), axis=0))
```
## PCMCI
```
# pcmci performance with sparse data
for p in [10, 50]:
perf = []
for i in range(5):
# Simulate data
T = 1000
num_points = T
data, GC = utils.simulate_rossler(p=p, a=0, T=T, delta_t=0.1, sd=0.05, burn_in=0, sigma=0.0)
from benchmarks.pcmci import pcmci
importlib.reload(utils)
graph = pcmci(data)
perf.append(utils.compare_graphs(GC, graph)) # tpr, fdr
print("Means and standard deviations for TPR, FDR and AUC with", p, "dimensions")
print(np.mean(np.reshape(perf, (-1, 3)), axis=0), np.std(np.reshape(perf, (-1, 3)), axis=0))
```
## NGM
```
# NGM performance with sparse data
import warnings
warnings.filterwarnings("ignore")
for p in [10, 50]:
perf = []
for i in range(5):
# Simulate data
T = 1000
num_points = T
data, GC = utils.simulate_rossler(p=p, a=0, T=T, delta_t=0.1, sd=0.05, burn_in=0, sigma=0.0)
# format for NeuralODE
data = torch.from_numpy(data[:, None, :])
import NMC as models
func = models.MLPODEF(dims=[p, 12, 1], GL_reg=0.1)
# GL training
models.train(func, data, n_steps=2000, plot=False, plot_freq=20)
# AGL training
# weights = func.group_weights()
# func.GL_reg *= (1 / weights)
# func.reset_parameters()
# models.train(func,data,n_steps=1000,plot = True, plot_freq=20)
graph = func.causal_graph(w_threshold=0.1)
perf.append(utils.compare_graphs(GC, graph)) # tpr, fdr
print("Means and standard deviations for TPR, FDR and AUC with", p, "dimensions")
print(np.mean(np.reshape(perf, (-1, 3)), axis=0), np.std(np.reshape(perf, (-1, 3)), axis=0))
```
#### Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
```
# Exploring the TF-Hub CORD-19 Swivel Embeddings
<table class="tfo-notebook-buttons" align="left">
  <td><a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/cord_19_embeddings_keras"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
  <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/cord_19_embeddings_keras.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
  <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/cord_19_embeddings_keras.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
  <td><a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/cord_19_embeddings_keras.ipynb">Download notebook</a></td>
</table>
The CORD-19 Swivel text embedding module from TF-Hub (https://tfhub.dev/tensorflow/cord-19/swivel-128d/3) was built to support researchers analyzing natural language text related to COVID-19. These embeddings were trained on the titles, authors, abstracts, body texts, and reference titles of articles in the [CORD-19 dataset](https://pages.semanticscholar.org/coronavirus-research).
In this colab we will:
- Analyze semantically similar words in the embedding space
- Train a classifier on the SciCite dataset using the CORD-19 embeddings
## Setup
```
import functools
import itertools
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_hub as hub
from tqdm import trange
```
# Analyze the embeddings
Let's start by analyzing the embeddings by calculating and plotting a correlation matrix between different terms. If the embeddings have successfully learned to capture the meaning of different words, the embedding vectors of semantically similar words should be close together. Let's take a look at some COVID-19 related terms.
```
# Use the inner product between two embedding vectors as the similarity measure
def plot_correlation(labels, features):
    corr = np.inner(features, features)
    corr /= np.max(corr)
    sns.heatmap(corr, xticklabels=labels, yticklabels=labels)
# Generate embeddings for some terms
queries = [
# Related viruses
'coronavirus', 'SARS', 'MERS',
# Regions
'Italy', 'Spain', 'Europe',
# Symptoms
'cough', 'fever', 'throat'
]
module = hub.load('https://tfhub.dev/tensorflow/cord-19/swivel-128d/3')
embeddings = module(queries)
plot_correlation(queries, embeddings)
```
We can see that the embeddings successfully captured the meaning of the different terms. Each word is similar to the other words in its cluster (i.e. "coronavirus" correlates highly with "SARS" and "MERS"), while being different from the terms of other clusters (i.e. the similarity between "SARS" and "Spain" is close to 0).
Now let's see how we can use these embeddings to solve a specific task.
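The `plot_correlation` helper above normalizes inner products by the global maximum, which is a quick approximation. A true per-pair cosine similarity (a sketch, assuming `features` is a 2-D array like the `embeddings` computed above) can be written as:

```python
import numpy as np

def cosine_similarity_matrix(features):
    """Pairwise cosine similarity between the rows of `features`."""
    features = np.asarray(features, dtype=float)
    # Normalize each row to unit length; the inner product of unit vectors
    # is exactly the cosine of the angle between them.
    unit = features / np.linalg.norm(features, axis=1, keepdims=True)
    return unit @ unit.T
```

With this normalization every diagonal entry is exactly 1, unlike the global-max scaling used in `plot_correlation`.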
## SciCite: Citation Intent Classification
This section shows how the embeddings can be used for downstream tasks such as text classification. We'll use the [SciCite dataset](https://tensorflow.google.cn/datasets/catalog/scicite) from TensorFlow Datasets to classify citation intents in academic papers. Given a sentence with a citation from an academic paper, classify whether the main intent of the citation is background information, use of methods, or comparing results.
```
builder = tfds.builder(name='scicite')
builder.download_and_prepare()
train_data, validation_data, test_data = builder.as_dataset(
split=('train', 'validation', 'test'),
as_supervised=True)
#@title Let's take a look at a few labeled examples from the training set
NUM_EXAMPLES = 10#@param {type:"integer"}
TEXT_FEATURE_NAME = builder.info.supervised_keys[0]
LABEL_NAME = builder.info.supervised_keys[1]
def label2str(numeric_label):
    m = builder.info.features[LABEL_NAME].names
    return m[numeric_label]
data = next(iter(train_data.batch(NUM_EXAMPLES)))
pd.DataFrame({
TEXT_FEATURE_NAME: [ex.numpy().decode('utf8') for ex in data[0]],
LABEL_NAME: [label2str(x) for x in data[1]]
})
```
## Training a citation intent classifier
We'll train a classifier on the [SciCite dataset](https://tensorflow.google.cn/datasets/catalog/scicite) using Keras. Let's build a model that uses the CORD-19 embeddings with a classification layer on top.
```
#@title Hyperparameters { run: "auto" }
EMBEDDING = 'https://tfhub.dev/tensorflow/cord-19/swivel-128d/3' #@param {type: "string"}
TRAINABLE_MODULE = False #@param {type: "boolean"}
hub_layer = hub.KerasLayer(EMBEDDING, input_shape=[],
dtype=tf.string, trainable=TRAINABLE_MODULE)
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(3))
model.summary()
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
```
## Train and evaluate the model
Let's train and evaluate the model to see how it performs on the SciCite task.
```
EPOCHS = 35#@param {type: "integer"}
BATCH_SIZE = 32#@param {type: "integer"}
history = model.fit(train_data.shuffle(10000).batch(BATCH_SIZE),
epochs=EPOCHS,
validation_data=validation_data.batch(BATCH_SIZE),
verbose=1)
from matplotlib import pyplot as plt
def display_training_curves(training, validation, title, subplot):
    if subplot%10==1: # set up the subplots on the first call
        plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
        plt.tight_layout()
    ax = plt.subplot(subplot)
    ax.set_facecolor('#F8F8F8')
    ax.plot(training)
    ax.plot(validation)
    ax.set_title('model '+ title)
    ax.set_ylabel(title)
    ax.set_xlabel('epoch')
    ax.legend(['train', 'valid.'])
display_training_curves(history.history['accuracy'], history.history['val_accuracy'], 'accuracy', 211)
display_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 212)
```
## Evaluate the model
Let's see how the model performs. It returns two values: the loss (a number representing the error, where lower is better) and the accuracy.
```
results = model.evaluate(test_data.batch(512), verbose=2)
for name, value in zip(model.metrics_names, results):
    print('%s: %.3f' % (name, value))
```
We can see that the loss quickly decreases while the accuracy quickly increases. Let's plot some examples to check how the predictions relate to the true labels:
```
prediction_dataset = next(iter(test_data.batch(20)))
prediction_texts = [ex.numpy().decode('utf8') for ex in prediction_dataset[0]]
prediction_labels = [label2str(x) for x in prediction_dataset[1]]
predictions = [label2str(x) for x in model.predict_classes(prediction_texts)]
pd.DataFrame({
TEXT_FEATURE_NAME: prediction_texts,
LABEL_NAME: prediction_labels,
'prediction': predictions
})
```
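Note that `Sequential.predict_classes`, used above, has been removed in recent Keras releases. Since the model's last layer outputs raw logits, the same labels can be obtained with an argmax over the model outputs (a sketch with made-up logits and illustrative class names):

```python
import numpy as np

def logits_to_labels(logits, class_names):
    """Map a batch of raw logits to class-name strings via argmax."""
    # softmax is monotone, so the argmax over logits equals the argmax
    # over the corresponding class probabilities
    return [class_names[i] for i in np.argmax(np.asarray(logits), axis=1)]

# Made-up logits for two sentences over three illustrative classes
print(logits_to_labels([[2.0, 0.1, -1.0], [0.0, 3.0, 0.5]],
                       ['background', 'method', 'result']))  # → ['background', 'method']
```

In the notebook above, the real class names come from `builder.info.features[LABEL_NAME].names`, and `np.argmax(model.predict(texts), axis=1)` replaces `predict_classes`.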
We can see that for this random sample, the model predicts the correct label most of the time, indicating that it can embed scientific sentences pretty well.
# What's next?
Now that you've learned more about the CORD-19 Swivel embeddings from TF-Hub, we encourage you to participate in the CORD-19 Kaggle competition and contribute to gaining scientific insights from COVID-19 related academic texts.
- Participate in the [CORD-19 Kaggle Challenge](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge)
- Learn more about the [COVID-19 Open Research Dataset (CORD-19)](https://pages.semanticscholar.org/coronavirus-research)
- See the documentation and find out more about the TF-Hub embeddings at https://tfhub.dev/tensorflow/cord-19/swivel-128d/3
- Explore the CORD-19 embedding space with the [TensorFlow Embedding Projector](http://projector.tensorflow.org/?config=https://storage.googleapis.com/tfhub-examples/tensorflow/cord-19/swivel-128d/3/tensorboard/projector_config.json)
# CME Smart Stream on Google Cloud Platform Tutorials
## Getting CME Binary Data from CME Smart Stream on Google Cloud Platform (GCP)
This workbook demonstrates the ability to quickly use the CME Smart Stream on GCP solution. Through the examples, we will
- Authenticate using GCP IAM information
- Configure which CME Smart Stream on GCP Topic contains the Market Data
- Download a single message from your Cloud Pub/Sub Subscription
- Delete your Cloud Pub/Sub Subscription
The following example references the following webpage to pull the information:
https://www.cmegroup.com/confluence/display/EPICSANDBOX/CME+Smart+Stream+GCP+Topic+Names
Author: Aaron Walters (Github: @aaronwalters79).
OS: MacOS
```
#Import packages. These are outlined in the environment.yaml file as part of this project.
#They can also be imported directly.
# Google SDK: https://cloud.google.com/sdk/docs/quickstarts
# Google PubSub: https://cloud.google.com/pubsub/docs/reference/libraries
from google.cloud import pubsub_v1
import os
import google.auth
```
# Authentication using Google IAM
CME Smart Stream uses Google Cloud's native Identity and Access Management (IAM). With this approach, customers can access the CME Smart Stream solution natively, without custom SDKs or authentication routines. All the code in this workbook uses the native Google Python SDK. While the Google Pub/Sub examples below use Python, native SDKs are also available for other popular languages, including Java, C#, Node.js, PHP, and others.
To download those libraries, please see the following location: https://cloud.google.com/pubsub/docs/reference/libraries
When onboarding to CME Smart Stream, you will supply at least one Google IAM member account (https://cloud.google.com/iam/docs/overview). When accessing CME Smart Stream Topics, you will use the same IAM account information to create your Subscription using native GCP authentication routines within the GCP SDK.
The authentication routines below use either a Service Account or a User Account. Google strongly recommends using a Service Account with its associated authorization JSON. This document also covers authentication via User Account in case you asked CME to grant access to a User Account. You only need to use one of these for the example.
## Authentication Routine for Service Account
This section is for customers using Service Accounts. You should update `'./gcp-auth.json'` to reference the local authorization JSON file you downloaded from Google.
Further documentation is located here: https://cloud.google.com/docs/authentication/getting-started
```
## Authentication Method Options -- SERVICE ACCOUNT JSON FILE
# This should point to the file location of the JSON file downloaded from the GCP console. This loads it into your OS variables so it is automatically used when your system interacts with GCP.
#os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = "./gcp-auth.json" #Uncomment if using this method.
```
## Authentication for User Account
This section is for customers that registered their GCP User Account (i.e. user@domain.com). This routine will launch the gcloud SDK to authenticate you as that user and set those credentials as the default for the rest of the workflow when interacting with GCP.
IN OS TERMINAL: 'gcloud auth application-default login' without quotes.
```
## Authentication Method User Machine Defaults
#
#Run "gcloud auth application-default login" in the command line and log in as the user. The code below will pick up the resulting credentials automatically.
#It should launch a browser to authenticate into GCP; that user name and its associated permissions will be used in the remainder of the code below.
# This code will put out a warning about using end user credentials.
# Reference: https://google-auth.readthedocs.io/en/latest/user-guide.html
credentials, project = google.auth.default()
```
# Set Your Smart Stream on GCP Projects and Topics
## Set CME Smart Stream Project
CME Smart Stream on GCP data is available in two GCP Projects, for Production and Non-Production (i.e. certification and new release) data. Customers are granted access to projects through the onboarding process.
The example below sets the target CME Smart Stream on GCP Project as an OS variable for easy reference.
```
#This is the project at CME
os.environ['GOOGLE_CLOUD_PROJECT_CME'] = "cmegroup-marketdata-newrel" #CERT and NEW RELEASE
#os.environ['GOOGLE_CLOUD_PROJECT_CME'] = "cmegroup-marketdata" #PRODUCTION
```
## Set CME Smart Stream Topics
CME Smart Stream on GCP follows the traditional data segmentation of the CME Multicast solution.
Each channel on Multicast is available as a Topic in Google Cloud Pub/Sub. This workbook will create one subscription in the customer's account against one Topic from the CME project. Customers can, of course, script this to create as many subscriptions as needed.
Please see: https://www.cmegroup.com/confluence/display/EPICSANDBOX/CME+Smart+Stream+GCP+Topic+Names for all the topic names.
You can also review the notebook included in this git project named Google PubSub Get CME Topics on how to read the names from the website into a local CSV file or use in automated scripts.
```
# The CME TOPIC that a Subscription will be created against
os.environ['GOOGLE_CLOUD_TOPIC_CME'] = "CERT.SSCL.GCP.MD.RT.CMEG.FIXBIN.v01000.INCR.310" #CERT
#os.environ['GOOGLE_CLOUD_TOPIC_CME'] = "NR.SSCL.GCP.MD.RT.CMEG.FIXBIN.v01000.INCR.310" #NEW RELEASE
#os.environ['GOOGLE_CLOUD_TOPIC_CME'] = "CERT.SSCL.GCP.MD.RT.CMEG.FIXBIN.v01000.INCR.310" #PRODUCTION
```
# Set Customer Configurations
## Set Customer Project & Subscription Name
The Smart Stream on GCP solution requires that the customer create a Cloud Pub/Sub Subscription in their own account. This subscription automatically collects data from the CME Smart Stream Pub/Sub Topic. Since the Subscription lives in the customer account, we must specify the customer GCP Project and the name the customer wants for the Subscription within that project.
In the example below, we set the project directly based upon our GCP project name. We also create a subscription name by prepending 'MY_' to the name of the Topic we are joining.
```
#Your Configurations for the project you want to have access;
#will use the defaults from credentials
os.environ['GOOGLE_CLOUD_PROJECT'] = "prefab-rampart-794"
#My Subscription Name -- take the CME Topic Name and prepend 'MY_' -- can be anything the customer wants
os.environ['MY_SUBSCRIPTION_NAME'] = 'MY_'+os.environ['GOOGLE_CLOUD_TOPIC_CME'] #MY SUBSCRIPTION NAME
```
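Cloud Pub/Sub identifies topics and subscriptions by fully-qualified resource names with a fixed layout, which is what the formatting code later in this notebook builds. The helpers below are an illustrative sketch of that convention (newer client-library versions also expose built-in path helpers):

```python
# Illustrative helpers mirroring the resource-name convention used below.
def topic_path(project_id, topic_id):
    # Topics live under the project that publishes them (here, the CME project)
    return 'projects/{}/topics/{}'.format(project_id, topic_id)

def subscription_path(project_id, subscription_id):
    # Subscriptions live under the consuming (customer) project
    return 'projects/{}/subscriptions/{}'.format(project_id, subscription_id)

print(topic_path('cmegroup-marketdata-newrel',
                 'CERT.SSCL.GCP.MD.RT.CMEG.FIXBIN.v01000.INCR.310'))
```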
# Final Configuration
The following is the final configuration for your setup.
```
print ('Target Project: \t',os.environ['GOOGLE_CLOUD_PROJECT_CME'] )
print ('Target Topic: \t\t', os.environ['GOOGLE_CLOUD_TOPIC_CME'] , '\n' )
print ('My Project: \t\t',os.environ['GOOGLE_CLOUD_PROJECT'])
print ('My Subscriptions: \t',os.environ['MY_SUBSCRIPTION_NAME'] )
```
# Create Your Subscription to CME Smart Stream Data Topics
We have all the main variables set and can pass them to the Cloud Pub/Sub Python SDK. The following attempts to create a Subscription (MY_SUBSCRIPTION_NAME) in your specified project (GOOGLE_CLOUD_PROJECT) that points to the CME Topic (GOOGLE_CLOUD_TOPIC_CME) and Project (GOOGLE_CLOUD_PROJECT_CME) of that Topic.
Once the Subscription is created (or found to already exist), we will connect our Python session to it as 'subscriber'.
Full documentation for this Pub/Sub example is available at: https://googleapis.github.io/google-cloud-python/latest/pubsub/#subscribing
```
#https://googleapis.github.io/google-cloud-python/latest/pubsub/#subscribing
#Create Topic Name from Config Above
topic_name = 'projects/{cme_project_id}/topics/{cme_topic}'.format( cme_project_id=os.getenv('GOOGLE_CLOUD_PROJECT_CME'), cme_topic=os.getenv('GOOGLE_CLOUD_TOPIC_CME'), )
#Create Subscription Name from Config Above
subscription_name = 'projects/{my_project_id}/subscriptions/{my_sub}'.format(my_project_id=os.getenv('GOOGLE_CLOUD_PROJECT'),my_sub=os.environ['MY_SUBSCRIPTION_NAME'], )
#Try To Create a subscription in your Project
subscriber = pubsub_v1.SubscriberClient(credentials=credentials)
try:
    subscriber.create_subscription(
        name=subscription_name,
        topic=topic_name,
        ack_deadline_seconds=60, # Reduces the likelihood Google will redeliver a received message; the default is 10s.
    )
    print('Created Subscription in Project \n')
    print('Listing Subscriptions in Your Project %s : ' % os.getenv('GOOGLE_CLOUD_PROJECT'))
    for subscription in subscriber.list_subscriptions(subscriber.project_path(os.environ['GOOGLE_CLOUD_PROJECT'])):
        print('\t', subscription.name)
except Exception as e:
    print("Error: %s \n" % e)
```
## Subscription View in Google Cloud Console
Subscriptions can also be viewed in the Google Cloud Console (https://console.cloud.google.com/). Navigate to Cloud Pub/Sub and click Subscriptions. Clicking your Subscription Name opens the details for that Subscription. You can see all queued messages and the core settings, which are at their defaults since we did not specify special settings in the functions above.
This view also shows the total number of messages queued in the Subscription.
## Pull a Single Message from CME
The following does a simple message pull from your Subscription and prints it out locally. There are extensive examples of pulling data from a Subscription, including batch and async pulls (https://cloud.google.com/pubsub/docs/pull).
```
#Pull 1 Message
print ('Pulling a Single Message and Displaying:')
CME_DATA = subscriber.pull(subscription_name, max_messages=1)
#Print that Message
print (CME_DATA)
```
# Delete Subscriptions
You can also use the Python SDK to delete your Cloud Pub/Sub Subscriptions. The following will attempt to delete ALL the subscriptions in your Project.
```
#List Subscriptions in My Project / Delete Subscription
delete = True
subscriber = pubsub_v1.SubscriberClient()
project_path = subscriber.project_path(os.environ['GOOGLE_CLOUD_PROJECT'])
if not delete:
    print('Did you mean to Delete all Subscriptions? If yes, then set delete = True')
for subscription in subscriber.list_subscriptions(project_path):
    if delete:
        #Delete the subscription
        subscriber.delete_subscription(subscription.name)
        print("\tDeleted: {}".format(subscription.name))
    else:
        print("\tActive Subscription: {}".format(subscription.name))
```
# Summary
This notebook went through the bare minimum needed to create a Cloud Pub/Sub Subscription against the CME Smart Stream on GCP solution.
# Questions?
If you have questions or think we can extend this to additional use cases, please use the Issues feature in GitHub or reach out to the CME Sales team at markettechsales@cmegroup.com
```
from __future__ import division
%matplotlib inline
import sys
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.io as io
import pickle
import scipy.stats
SBJ = 'colin_test2'
prj_dir = '/Volumes/hoycw_clust/PRJ_Error_eeg/'#'/Users/sheilasteiner/Desktop/Knight_Lab/PRJ_Error_eeg/'
results_dir = prj_dir+'results/'
fig_type = '.png'
data_dir = prj_dir+'data/'
sbj_dir = data_dir+SBJ+'/'
```
### Load paradigm parameters
```
prdm_fname = os.path.join(sbj_dir,'03_events',SBJ+'_prdm_vars.pkl')
with open(prdm_fname, 'rb') as f:
    prdm = pickle.load(f)
```
### Load Log Info
```
behav_fname = os.path.join(sbj_dir,'03_events',SBJ+'_behav.csv')
data = pd.read_csv(behav_fname)
# Remove second set of training trials in restarted runs (EEG12, EEG24, EEG25)
if len(data[(data['Trial']==0) & (data['Block']==-1)])>1:
    train_start_ix = data[(data['Trial']==0) & (data['Block']==-1)].index
    train_ix = [ix for ix in data.index if data.loc[ix,'Block']==-1]
    later_ix = [ix for ix in data.index if ix >= train_start_ix[1]]
    data = data.drop(set(later_ix).intersection(train_ix))
    data = data.reset_index()
# Change block numbers on EEG12 to not overlap
if SBJ=='EEG12':
    b4_start_ix = data[(data['Trial']==0) & (data['Block']==4)].index
    for ix in range(b4_start_ix[1]):
        if data.loc[ix,'Block']!=-1:
            data.loc[ix,'Block'] = data.loc[ix,'Block']-4
# Label post-correct (PC), post-error (PE) trials
data['PE'] = [False for _ in range(len(data))]
for ix in range(len(data)):
    # Exclude training data and first trial of the block
    if (data.loc[ix,'Block']!=-1) and (data.loc[ix,'Trial']!=0):
        if data.loc[ix-1,'Hit']==0:
            data.loc[ix,'PE'] = True
# pd.set_option('max_rows', 75)
# data[data['Block']==3]
```
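The post-error labeling loop above can also be written without an explicit Python loop, using `shift` (a sketch, assuming the same `Block`/`Trial`/`Hit` columns and a default integer index):

```python
import pandas as pd

def label_post_error(df):
    """Vectorized equivalent of the post-error labeling loop."""
    df = df.copy()
    prev_miss = df['Hit'].shift(1) == 0                 # previous trial was an error
    in_task = (df['Block'] != -1) & (df['Trial'] != 0)  # skip training and block starts
    df['PE'] = prev_miss & in_task
    return df
```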
# Add specific analysis computations
```
# Find middle of blocks to plot accuracy
block_start_ix = data[data['Trial']==0].index
if SBJ=='EP11': #deal with missing BT_T0
    block_mid_ix = [ix+prdm['n_trials']/2 for ix in block_start_ix]
else:
    block_mid_ix = [ix+prdm['n_trials']/2 for ix in block_start_ix[1:]]
# Add in full_vis + E/H training: 0:4 + 5:19 = 10; 20:34 = 27.5
block_mid_ix.insert(0,np.mean([prdm['n_examples']+prdm['n_training'],
                               prdm['n_examples']+2*prdm['n_training']])) #examples
block_mid_ix.insert(0,np.mean([0, prdm['n_examples']+prdm['n_training']]))
#easy training (would be 12.5 if splitting examples/train)
# Compute accuracy per block
accuracy = data['Hit'].groupby([data['Block'],data['Condition']]).mean()
acc_ITI = data['Hit'].groupby([data['ITI type'],data['Condition']]).mean()
for ix in range(len(data)):
    data.loc[ix,'Accuracy'] = accuracy[data.loc[ix,'Block'],data.loc[ix,'Condition']]
    data.loc[ix,'Acc_ITI'] = acc_ITI[data.loc[ix,'ITI type'],data.loc[ix,'Condition']]
# Break down by post-long and post-short trials
data['postlong'] = [False if ix==0 else True if data['RT'].iloc[ix-1]>1 else False for ix in range(len(data))]
# Compute change in RT
data['dRT'] = [0 for ix in range(len(data))]
for ix in range(len(data)-1):
    data.loc[ix+1,'dRT'] = data.loc[ix+1,'RT']-data.loc[ix,'RT']
# Grab rating data to plot
rating_trial_idx = [True if rating != -1 else False for rating in data['Rating']]
rating_data = data['Rating'][rating_trial_idx]
```
# Plot Full Behavior Across Dataset
```
# Accuracy, Ratings, and Tolerance
f, ax1 = plt.subplots()
x = range(len(data))
plot_title = '{0} Tolerance and Accuracy: easy={1:0.3f}; hard={2:0.3f}'.format(
SBJ, data[data['Condition']=='easy']['Hit'].mean(),
data[data['Condition']=='hard']['Hit'].mean())
colors = {'easy': [0.5, 0.5, 0.5],#[c/255 for c in [77,175,74]],
'hard': [1, 1, 1],#[c/255 for c in [228,26,28]],
'accuracy': 'k'}#[c/255 for c in [55,126,184]]}
scat_colors = {'easy': [1,1,1],#[c/255 for c in [77,175,74]],
'hard': [0,0,0]}
accuracy_colors = [scat_colors[accuracy.index[ix][1]] for ix in range(len(accuracy))]
#scale = {'Hit Total': np.max(data['Tolerance'])/np.max(data['Hit Total']),
# 'Score Total': np.max(data['Tolerance'])/np.max(data['Score Total'])}
# Plot Tolerance Over Time
ax1.plot(data['Tolerance'],'b',label='Tolerance')
ax1.plot(x,[prdm['tol_lim'][0] for _ in x],'b--')
ax1.plot(x,[prdm['tol_lim'][1] for _ in x],'b--')
ax1.set_ylabel('Target Tolerance (s)', color='b')
ax1.tick_params('y', colors='b')
ax1.set_xlim([0,len(data)])
ax1.set_ylim([0, 0.41])
ax1.set_facecolor('white')
ax1.grid(False)
# Plot Accuracy per Block
ax2 = ax1.twinx()
# ax2.plot(data['Hit Total']/np.max(data['Hit Total']),'k',label='Hit Total')
ax2.fill_between(x, 1, 0, where=data['Condition']=='easy',
facecolor=colors['easy'], alpha=0.3)#, label='hard')
ax2.fill_between(x, 1, 0, where=data['Condition']=='hard',
facecolor=colors['hard'], alpha=0.3)#, label='easy')
ax2.scatter(block_mid_ix, accuracy, s=50, c=accuracy_colors,
edgecolors='k', linewidths=1)#colors['accuracy'])#,linewidths=2)
ax2.scatter(rating_data.index.values, rating_data.values/100, s=25, c=[1, 0, 0])
ax2.set_ylabel('Accuracy', color=colors['accuracy'])
ax2.tick_params('y', colors=colors['accuracy'])
ax2.set_xlabel('Trials')
ax2.set_xlim([0,len(data)])
ax2.set_ylim([0, 1])
ax2.set_facecolor('white')
ax2.grid(False)
plt.title(plot_title)
plt.savefig(results_dir+'BHV/ratings_tolerance/'+SBJ+'_tolerance'+fig_type)
```
# Plot only real data (exclude examples + training)
```
data_all = data
# Exclude: Training/Examples, non-responses, first trial of each block
if data[data['RT']<0].shape[0]>0:
    print('WARNING: '+str(data[data['RT']<0].shape[0])+' trials with no response!')
data = data[(data['Block']!=-1) & (data['RT']>0) & (data['ITI']>0)]
```
## Histogram of ITIs
```
# ITI Histogram
f,axes = plt.subplots(1,2)
bins = np.arange(0,1.1,0.01)
hist_real = sns.distplot(data['ITI'],bins=bins,kde=False,label=SBJ,ax=axes[0])
hist_adj = sns.distplot(data['ITI type'],bins=bins,kde=False,label=SBJ,ax=axes[1])
axes[0].set_xlim([0, 1.1])
axes[1].set_xlim([0, 1.1])
plt.subplots_adjust(top=0.93)
f.suptitle(SBJ)
plt.savefig(results_dir+'BHV/ITIs/'+SBJ+'_ITI_hist'+fig_type)
```
## Histogram of all RTs
```
# RT Histogram
f,ax = plt.subplots()
hist = sns.distplot(data['RT'],label=SBJ)
plt.subplots_adjust(top=0.9)
hist.legend() # can also get the figure from plt.gcf()
plt.savefig(results_dir+'BHV/RTs/histograms/'+SBJ+'_RT_hist'+fig_type)
```
## RT Histograms by ITI
```
# ANOVA for RT differences across ITI
itis = np.unique(data['ITI type'])
if len(prdm['ITIs'])==4:
    f,iti_p = scipy.stats.f_oneway(data.loc[data['ITI type']==itis[0],('RT')].values,
                                   data.loc[data['ITI type']==itis[1],('RT')].values,
                                   data.loc[data['ITI type']==itis[2],('RT')].values,
                                   data.loc[data['ITI type']==itis[3],('RT')].values)
elif len(prdm['ITIs'])==3:
    f,iti_p = scipy.stats.f_oneway(data.loc[data['ITI type']==itis[0],('RT')].values,
                                   data.loc[data['ITI type']==itis[1],('RT')].values,
                                   data.loc[data['ITI type']==itis[2],('RT')].values)
elif len(prdm['ITIs'])==2:
    f,iti_p = scipy.stats.ttest_ind(data.loc[data['ITI type']==itis[0],('RT')].values,
                                    data.loc[data['ITI type']==itis[1],('RT')].values)
else:
    print('WARNING: some weird paradigm version without 2, 3, or 4 ITIs!')
# print f, p
f, axes = plt.subplots(1,2)
# RT Histogram
rt_bins = np.arange(0.7,1.3,0.01)
for iti in itis:
    sns.distplot(data['RT'].loc[data['ITI type'] == iti],bins=rt_bins,label=str(round(iti,2)),ax=axes[0])
axes[0].legend() # can also get the figure from plt.gcf()
axes[0].set_xlim(min(rt_bins),max(rt_bins))
# Factor Plot
sns.boxplot(data=data,x='ITI type',y='RT',hue='ITI type',ax=axes[1])
# Add overall title
plt.subplots_adjust(top=0.9,wspace=0.3)
f.suptitle(SBJ+' RT by ITI (p='+str(round(iti_p,4))+')') # can also get the figure from plt.gcf()
# Save plot
plt.savefig(results_dir+'BHV/RTs/hist_ITI/'+SBJ+'_RT_ITI_hist_box'+fig_type)
```
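The branching on the number of ITI levels above can be collapsed by unpacking one sample per level (a sketch on toy data; `rt_anova_by_group` is a hypothetical helper, not part of this codebase):

```python
import numpy as np
import scipy.stats

def rt_anova_by_group(values, groups):
    """One-way test over RTs with one sample per unique group level."""
    levels = np.unique(groups)
    samples = [values[groups == lev] for lev in levels]
    if len(samples) == 2:
        return scipy.stats.ttest_ind(*samples)  # two levels: independent t-test
    return scipy.stats.f_oneway(*samples)       # three or more: one-way ANOVA

# Toy data: 90 simulated RTs over three ITI levels
rng = np.random.RandomState(0)
vals = rng.normal(1.0, 0.1, size=90)
grps = np.repeat([0.2, 0.5, 1.0], 30)
stat, p = rt_anova_by_group(vals, grps)
```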
## RT adjustment after being short vs. long
```
# t test for RT differences across ITI
itis = np.unique(data['ITI type'])
f,postlong_p = scipy.stats.ttest_ind(data.loc[data['postlong']==True,('dRT')].values,
data.loc[data['postlong']==False,('dRT')].values)
f, axes = plt.subplots(1,2)
# RT Histogram
drt_bins = np.arange(-0.6,0.6,0.025)
sns.distplot(data['dRT'].loc[data['postlong']==True],bins=drt_bins,label='Post-Long',ax=axes[0])
sns.distplot(data['dRT'].loc[data['postlong']==False],bins=drt_bins,label='Post-Short',ax=axes[0])
axes[0].legend() # can also get the figure from plt.gcf()
axes[0].set_xlim(min(drt_bins),max(drt_bins))
# Factor Plot
sns.boxplot(data=data,x='postlong',y='dRT',hue='postlong',ax=axes[1])
# Add overall title
plt.subplots_adjust(top=0.9,wspace=0.3)
f.suptitle(SBJ+' RT by ITI (p='+str(round(postlong_p,6))+')') # can also get the figure from plt.gcf()
# Save plot
plt.savefig(results_dir+'BHV/RTs/hist_dRT/'+SBJ+'_dRT_postlong_hist_box'+fig_type)
```
##RT and Accuracy Effects by ITI and across post-error
```
# RTs by condition
# if len(prdm_params['ITIs'])==4: # target_time v1.8.5+
# data['ITI type'] = ['short' if data['ITI'][ix]<0.5 else 'long' for ix in range(len(data))]
# ITI_plot_order = ['short','long']
# elif len(prdm_params['ITIs'])==3: # target_time v1.8.4 and below
# data['ITI type'] = ['short' if data['ITI'][ix]<prdm_params['ITI_bounds'][0] else 'long' \
# if data['ITI'][ix]>prdm_params['ITI_bounds'][1] else 'medium'\
# for ix in range(len(data))]
# ITI_plot_order = ['short','medium','long']
# else: # Errors for anything besides len(ITIs)==3,4
# assert len(prdm_params['ITIs'])==4
plot = sns.factorplot(data=data,x='ITI type',y='dRT',hue='PE',col='Condition',kind='point',
ci=95);#,order=ITI_plot_order
plt.subplots_adjust(top=0.9)
plot.fig.suptitle(SBJ) # can also get the figure from plt.gcf()
plt.savefig(results_dir+'BHV/RTs/hist_PE_ITI/'+SBJ+'_RT_PE_ITI_hit'+fig_type)
# WARNING: I would need to go across subjects to get variance in accuracy by ITI
plot = sns.factorplot(data=data,x='ITI type',y='Acc_ITI',col='Condition',kind='point',sharey=False,
ci=95);#,order=ITI_plot_order
#plot.set(alpha=0.5)
plt.subplots_adjust(top=0.9)
plot.fig.suptitle(SBJ) # can also get the figure from plt.gcf()
plt.savefig(results_dir+'BHV/accuracy/'+SBJ+'_acc_ITI'+fig_type)
```
## Look for behavioral adjustments following short and long responses
```
plot = sns.factorplot(data=data_PL,x='ITI type',y='RT',hue='PE',col='Condition',kind='point',
ci=95,order=prdm['ITIs']);
plt.subplots_adjust(top=0.9)
plot.fig.suptitle(SBJ+'_post-long') # can also get the figure from plt.gcf()
# plt.savefig(results_dir+'RT_plots/'+SBJ+'_RT_PE_ITI_hit'+fig_type)
plot2 = sns.factorplot(data=data_PS,x='ITI type',y='RT',hue='PE',col='Condition',kind='point',
ci=95,order=prdm['ITIs']);
plt.subplots_adjust(top=0.9)
plot2.fig.suptitle(SBJ+'_post-short') # can also get the figure from plt.gcf()
# plt.savefig(results_dir+'RT_plots/'+SBJ+'_RT_PE_ITI_hit'+fig_type)
```
#### _Speech Processing Labs 2021: SIGNALS 1: Digital Signals: Sampling and Superposition_
```
## Run this first!
%matplotlib inline
import sys
import matplotlib.pyplot as plt
import numpy as np
import cmath
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
plt.style.use('ggplot')
from dspMisc import *
```
# Digital Signals: Sampling and Superposition
### Learning Outcomes
* Understand how we can approximate a sine wave with a specific frequency, given a specific sampling rate
* Understand how sampling rate limits the frequencies of sinusoids we can describe with discrete sequences
* Explain when aliasing will occur and how this relates to the sampling rate and the Nyquist frequency.
* Observe how compound waveforms can be described as a linear combination of phasors ('superposition')
### Background
* Topic Videos: Digital Signal, Short Term Analysis, Series Expansion
* [Interpreting the discrete fourier transform](./signals-1-1-interpreting-the-discrete-fourier-transform.ipynb)
#### Extra background (extension material)
* [Phasors, complex numbers and sinusoids](./signals-1-2a-digital-signals-complex-numbers.ipynb)
## 1 Introduction
In the class videos, you've seen that sound waves are changes in air pressure (amplitude) over time. In the notebook [interpreting the discrete fourier transform](./signals-1-1-interpreting-the-discrete-fourier-transform.ipynb), we saw that we can
decompose complex sound waves into 'pure tone' frequency components. We also saw that the output of the DFT was actually a sequence of complex numbers! In this notebook, we'll give a bit more background on the relationship
between complex numbers and sinusoids, and why it's useful to characterise sinusoids in the complex plane.
## 2 Phasors and Sinusoids: tl;dr
At this point, I should say that you can get a conceptual understanding of digital signal processing concepts without going through _all_ the math. We certainly won't be examining your knowledge of complex numbers or geometry in this class. Of course, if you want to go further in understanding digital signal processing then you will have to learn a bit more about complex numbers, algebra, calculus and geometry than we'll touch upon here.
However, right now the main point we'd like you to take away from this notebook is that we can conveniently represent periodic functions, like sine waves, in terms of **phasors**: basically what is shown on the left-hand side of the following gif:

You can think of the **phasor as an analogue clockface** with one moving hand. On the right hand side is one period of a 'pure tone' sinusoid, sin(t).
Now, we can think of every movement of the 'clockhand' (the phasor is actually this **vector**) as a step in time on the sinusoid graph: at every time step, the phasor (i.e., clockhand) rotates by some angle. If you follow the blue dots on both graphs, you should be able to see that the amplitude of the sinusoid matches the height of the clockhand on the phasor at each time step.
This gives us a different way of viewing the periodicity of $\sin(t)$. The sinusoid starts to repeat itself when the phasor has done one full circle. So, rather than drawing out an infinite time vs amplitude graph, we can capture the behaviour of this periodic function in terms of rotations around this finite circle.
So, what's the connection with complex numbers? Well, that blue dot on the phasor actually represents a complex number, and the dimensions of that graph are actually the **real** (horizontal) and **imaginary** (vertical) parts of that number. That is, a complex number of the form $a + jb$, where $a$ is the real part and $b$ is the imaginary part. Quite conveniently, we can also express complex numbers in terms of a **magnitude** or radius $r$ (length of the clockhand) and a **phase angle** $\theta$ (angle of rotation from the point (1,0)) and an exponential. So, we can write each point that the phasor hits in the form $re^{j\theta}$. This will be familiar if you've had a look at the DFT formulae.
This relationship with complex numbers basically allows us to describe complicated periodic waveforms in terms of combinations of 'pure tone' sinusoids. It turns out that the maths for this works very elegantly using the phasor/complex-number representation.
The basic things you need to know are:
* A **sinusoid** (time vs amplitude, i.e. in the **time domain**) can be described in terms of a vector rotating around a circle (i.e. a phasor in the complex plane)
* The **phasor** vector (i.e., 'clockhand') is described by a complex number $re^{j\theta}$
* $re^{j\theta}$ is a point on a circle centered at (0,0) with radius $r$, $\theta$ degrees rotated from $(r,0)$ on the 2D plane.
* the **magnitude** $r$ tells us what the peak amplitude of the corresponding sine wave is
* the **phase angle** $\theta$ tells us how far around the circle the phasor has gone:
* zero degrees (0 radians) corresponds to the point (r,0), while 90 degrees ($\pi/2$ radians) corresponds to the point (0,r)
* The vertical projection of the vector (onto the y-axis) corresponds to the amplitude of a **sine wave** $\sin(\theta)$
* The horizontal projection of the vector (onto the x-axis) corresponds to the amplitude of a **cosine wave** $\cos(\theta)$
* The **period** of these sine and cosine waves is the same as the time it takes to make one full circle of the phasor (in seconds). As such the **frequency** of the sine and cosine waves is the same as the frequency with which the phasor makes a full cycle (in cycles/second = Hertz).
If you take the maths on faith, you can see all of this just from the gif above. You'll probably notice in most phonetics text books, if they show this at all, they will just show the rotating phasor without any of the details.
If you want to know more about how this works, you can find a quick tour of these concepts in the (extension) notebook on [complex numbers and sinusoids](./sp-m1-2-digital-signals-complex-numbers). But it's fine if you don't get all the details right now. In fact, if you get the intuition behind the phasor/sinusoid relationship above, it's fine to move on now to the rest of the content in this notebook.
## Changing the frequency of a sinusoid
So, we think of sine (and cosine) waves in terms of taking steps around a circle in the 2D (complex) plane. Each of these 'steps' was represented by a complex number, $re^{j\theta}$ (the phasor), where the magnitude $r$ tells you the radius of the circle, and the phase angle $\theta$ tells you how far around the circle you are. When $\theta = 0$, you are at the point (r,0), while $\theta = 90$ degrees means you are at the point (0,r). 360 degrees (or $2\pi$ radians) makes a complete cycle, i.e. when $\theta = 360$ degrees, you end up back at (r,0).
<div class="alert alert-success">
It's often easier to deal with angles measured in <strong>radians</strong> rather than <strong>degrees</strong>. The main thing to note is that:
$$2\pi \text{ radians} = 360 \text{ degrees, i.e. 1 full circle }$$
Again, it may not seem obvious why we should want to use radians instead of the more familiar degrees. The reason is that it makes dividing up a circle really nice and neat and so ends up making calculations much easier in the long run!
</div>
So that describes a generic sinusoid, e.g. $\sin(\theta)$, but now you might ask yourself: how do we generate a sine wave with a specific frequency $f$ Hertz (Hz = cycles/second)?
Let's take a concrete example, if we want a sinusoid with a frequency of $f=10$ Hz, that means:
* **Frequency:** we need to complete 10 full circles of the phasor in 1 second.
* **Period:** So, we have to complete 1 full cycle every 1/10 seconds (i.e. the period of this sinusoid $T=0.1$ seconds).
* **Angular velocity:** So, the phasor has to rotate at a speed of $2\pi/0.1 = 20\pi$ radians per second
So if we take $t$ to represent time, a sine wave with frequency 10 Hz has the form $\sin(20\pi t)$
* Check: at $t=0.1$ seconds we have $\sin(20 \times \pi \times 0.1) = \sin(2\pi)$, one full cycle.
* This corresponds to the phasor $e^{20\pi t j}$, where $t$ represents some point in time.
In general:
* A sine wave with peak amplitude R and frequency $f$ Hz is expressed as $R\sin(2 \pi f t)$
* The amplitude of this sine wave at time $t$ corresponds to the imaginary part of the phasor $Re^{2\pi ftj}$.
* A cosine wave with peak amplitude R and frequency $f$ Hz is expressed as $R\cos(2 \pi f t)$
* The amplitude of this cosine wave at time $t$ corresponds to the real part of the phasor $Re^{2\pi ftj}$.
The term $2\pi f$ corresponds to the angular velocity, often written as $\omega$ which is measured in radians per second.
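We can check this correspondence numerically with NumPy (the amplitude and frequency below are arbitrary illustrative values):

```python
import numpy as np

# Hypothetical parameters for illustration
R, f = 2.0, 10.0                 # peak amplitude 2, frequency 10 Hz
t = np.linspace(0, 0.2, 201)     # 0.2 seconds covers two full cycles at 10 Hz

# The phasor R*e^{2*pi*f*t*j} at each time point
phasor = R * np.exp(2j * np.pi * f * t)

# Imaginary part = sine wave, real part = cosine wave
sine_ok = np.allclose(np.imag(phasor), R * np.sin(2 * np.pi * f * t))
cosine_ok = np.allclose(np.real(phasor), R * np.cos(2 * np.pi * f * t))
```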
### Exercise
Q: What's the frequency of $\sin(2\pi t)$?
## Frequency and Sampling Rate
The representation above assumes we're dealing with a continuous sinusoid, but since we're dealing with computers we need to think about digital (i.e. discrete) representations of waveforms.
So if we want to analyze a wave, we also need to sample it at a specific **sampling rate**, $f_s$.
For a given sampling rate $f_s$ (samples/second) we can work out the time between each sample, the **sampling period** as:
$$ t_s = \frac{1}{f_s}$$
The units of $t_s$ are seconds/sample. That means that if we want the phasor to complete $f$ cycles/second, the angle $\theta_s$ between successive samples needs to be just the right size for the phasor to complete a full cycle every $1/f$ seconds.
The units here help us figure this out: the desired frequency $f$ has units cycles/second. So, we can calculate what fraction of a complete cycle we need to take with each sample by multiplying $f$ with the sampling time $t_s$.
* $c_s = ft_s$.
* cycles/sample = cycles/second x seconds/sample
We know each cycle is $2\pi$ radians (360 degrees), so we can then convert $c_s$ to an angle as follows:
* $ \theta_s = 2 \pi c_s $
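As a sketch of this calculation in code (the function name and example values are just for illustration):

```python
import numpy as np

def step_angle(f, f_s):
    """Sampling period t_s and phasor rotation theta_s per sample,
    for a sinusoid of frequency f sampled at f_s samples/second."""
    t_s = 1 / f_s                # seconds/sample
    c_s = f * t_s                # cycles/sample
    theta_s = 2 * np.pi * c_s    # radians/sample
    return t_s, theta_s

# A 10 Hz sine wave sampled at 100 samples/second:
# t_s = 0.01 seconds, theta_s = 0.2*pi radians per sample
t_s, theta_s = step_angle(f=10, f_s=100)
```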
### Exercise
Q: Calculate the sampling period $t_s$ and angle $\theta_s$ between samples for a sine wave with frequency $f=8$ Hz and a sampling rate of $f_s=64$ samples/second.
### Notes
### Setting the Phasor Frequency
I've written a function `gen_phasor_vals_freq` that calculates the complex phasor values (`zs`), angles (`thetas`) and time steps (`tsteps`) for a phasor with a given frequency `freq` over a given time period (`Tmin` to `Tmax`). In the following we'll use this to plot how changes in the phasor relate to changes in the corresponding sinusoid given a specific sampling rate (`sampling_rate`).
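The definition of `gen_phasor_vals_freq` isn't shown in this notebook, but based on how it's called below, a minimal sketch might look like this (the argument names and return values match the usage; the internals are an assumption):

```python
import numpy as np

def gen_phasor_vals_freq(Tmin, Tmax, t_step, freq):
    """Sketch of the helper used below (assumed implementation):
    returns the unit-magnitude phasor values zs = e^{j*theta},
    the angles thetas = 2*pi*freq*t, and the sample times tsteps."""
    tsteps = np.arange(Tmin, Tmax, t_step)
    thetas = 2 * np.pi * freq * tsteps
    zs = np.exp(1j * thetas)
    return zs, thetas, tsteps

# One second of a 2 Hz phasor sampled at 16 samples/second -> 16 values
zs, thetas, tsteps = gen_phasor_vals_freq(0, 1, 1 / 16, freq=2)
```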
#### Example:
Let's look at a phasor and corresponding sine wave with frequency $f=2$ Hz (`freq`), given a sampling rate of $f_s=16$ (`sampling_rate`) over 4 seconds.
```
## Our parameters:
Tmin = 0
Tmax = 4
freq = 2 # cycles/second
sampling_rate = 16 # i.e, f_s above
t_step=1/sampling_rate # i.e., t_s above
## Get our complex values corresponding to the phasor with frequency freq
zs, thetas, tsteps = gen_phasor_vals_freq(Tmin=Tmin, Tmax=Tmax, t_step=t_step, freq=freq)
## Project to real and imaginary parts for plotting
Xs = np.real(zs)
Ys = np.imag(zs)
## generate the background for the plot: a phasor diagram on the left, a time v amplitude graph on the right
fig, phasor, sinusoid = create_anim_bkg(tsteps, thetas, freq)
## the phasor is plotted on the left with a circle of radius 1 for reference
phasor.set_xlabel("Real values")
phasor.set_ylabel("Imaginary values")
# plot the points the phasor will "step on"
phasor.scatter(Xs, Ys)
## Plot our actual sampled sine wave in magenta on the right
sinusoid.plot(tsteps, Ys, 'o', color='magenta')
sinusoid.set_xlabel("Time (s)")
sinusoid.set_ylabel("Amplitude")
```
You should see two graphs above:
* On the left is the phasor diagram: the grey circle represents a phasor with magnitude 1, and the red dots represent the points on the circle that the phasor samples between `Tmin` and `Tmax` given the `sampling_rate`.
* On the right is the time versus amplitude graph: the grey line shows a continuous sine wave with frequency `freq`, and the magenta dots show the points we actually sample between times `Tmin` and `Tmax` given the `sampling_rate`.
You can see that although we sample 64 points for the sine wave, we actually just hit the same 8 values per cycle on the phasor.
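We can verify that count directly (a small standalone check, not part of the notebook's plotting code):

```python
import numpy as np

n = np.arange(64)                      # 4 seconds at 16 samples/second
zs = np.exp(2j * np.pi * 2 * n / 16)   # samples of the freq=2 phasor

# Rounding removes floating-point jitter before counting distinct positions.
# With freq=2 and sampling_rate=16 the phasor advances 1/8 of a circle
# per sample, so only 8 distinct points on the circle are ever visited.
distinct = len(set(np.round(zs, 9)))
```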
It's clearer when we animate the phasor in time:
```
## Now let's animate it!
## a helper to draw the 'clockhand' line
X, Y, n_samples = get_line_coords(Xs, Ys)
## initialize the animation
line = phasor.plot([], [], color='b', lw=3)[0]
sin_t = sinusoid.plot([], [], 'o', color='b')[0]
figs = (line, sin_t)
anim = FuncAnimation(
fig, lambda x: anim_sinusoid(x, X=X, Y=Y, tsteps=tsteps, figs=figs), interval=600, frames=n_samples)
HTML(anim.to_html5_video())
```
### Exercise
Change the `freq` variable in the code below to investigate:
* What happens as the sine wave frequency (cycles/second) `freq` approaches half the `sampling_rate`?
* What happens when `freq` equals half the `sampling_rate` (i.e. `sampling_rate/2`)?
* What happens when `freq` is greater than `sampling_rate/2`?
```
## Example: Play around with these values
Tmax = 1
Tmin = 0
freq = 15 # cycles/second
sampling_rate = 16 # f_s above
t_step=1/sampling_rate
print("freq=%.2f cycles/sec, sampling rate=%.2f samples/sec, sampling period=%.2f sec" % (freq, sampling_rate, t_step) )
## Get our complex values corresponding to the sine wave
zs, thetas, tsteps = gen_phasor_vals_freq(Tmin=Tmin, Tmax=Tmax, t_step=t_step, freq=freq)
## Project to real and imaginary parts for plotting
Xs = np.real(zs)
Ys = np.imag(zs)
## generate the background
fig, phasor, sinusoid = create_anim_bkg(tsteps, thetas, freq)
## Plot the phasor samples
phasor.scatter(Xs, Ys)
phasor.set_xlabel("Real values")
phasor.set_ylabel("Imaginary values")
## Plot our actual sampled sine wave in magenta
sinusoid.plot(tsteps, Ys, 'o-', color='magenta')
sinusoid.set_xlabel("Time (s)")
sinusoid.set_ylabel("Amplitude")
## Animate the phasor and sinusoid
X, Y, n_samples = get_line_coords(Xs, Ys)
line = phasor.plot([], [], color='b', lw=3)[0]
sin_t = sinusoid.plot([], [], 'o', color='b')[0]
figs = (line, sin_t)
anim = FuncAnimation(
fig, lambda x: anim_sinusoid(x, X=X, Y=Y, tsteps=tsteps, figs=figs), interval=600, frames=n_samples)
HTML(anim.to_html5_video())
```
### Notes
## Aliasing
If you change the frequency (`freq`) of the phasor to be higher than half the sampling rate, you'll see that the actual frequency of the sinusoid doesn't keep getting higher. In fact, with `freq=8` the sine wave (i.e. the projection of the vertical (imaginary) component) doesn't appear to have any amplitude modulation at all. However, keen readers will note that for `sampling_rate=16` and `freq=8` in the example above, the real projection (i.e. cosine) would show amplitude modulations, since $\cos(t)$ is phase shifted by 90 degrees relative to $\sin(t)$. The phasor with `freq=15` appears to complete only one cycle per second, just like `freq=1`, but appears to be rotating the opposite way.
These are examples of **aliasing**: given a specific sampling rate there is a limit to which we can distinguish different frequencies because we simply can't take enough samples to show the difference!
In the example above, even though we are sampling from a 15 Hz wave for `freq=15`, we only get one sample per cycle and the overall sampled sequence looks like a 1 Hz wave. So, the fact that the phasor appears to rotate the opposite way to `freq=1` is because it's actually just the 15th step of the `freq=1` phasor.
<div class="alert alert-success">
In general, with a sampling rate of $f_s$ we can't distinguish between a sine wave of frequency $f_0$ and a sine wave of $f_0 + kf_s$ for any integer $k$.
</div>
This means that we can't actually tell the frequency of the underlying waveform based on the sample amplitudes alone.
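A quick numerical check of this (with the `sampling_rate=16` example from above):

```python
import numpy as np

f_s = 16                 # sampling rate (samples/second)
n = np.arange(64)        # sample indices (4 seconds of samples)
t_n = n / f_s            # sample times

f0 = 1
for k in (1, 2, 3):
    # f0 and f0 + k*f_s produce identical sample sequences: aliasing
    assert np.allclose(np.sin(2 * np.pi * f0 * t_n),
                       np.sin(2 * np.pi * (f0 + k * f_s) * t_n))

# f_s - f0 = 15 Hz gives the same samples *negated*: the phasor appears
# to rotate the opposite way, matching the freq=15 example above
mirror_ok = np.allclose(np.sin(2 * np.pi * 15 * t_n),
                        -np.sin(2 * np.pi * f0 * t_n))
```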
The practical upshot of this is that for sampling rate $f_s$, the highest frequency we can actually sample is $f_s/2$, the **Nyquist Frequency**. This is one of the most important concepts in digital signal processing and will affect pretty much all the methods we use. It's why we see the mirroring effect in [the DFT output spectrum](./signals-1-1-interpreting-the-discrete-fourier-transform.ipynb). So, if you remember just one thing, remember this!
## Superposition
This use of phasors to represent sinusoids may seem excessively complex at the moment, but it actually gives us a nice way of visualizing what happens when we add two sine waves together, i.e. linear superposition.
We've seen how the Fourier Transform gives us a way of breaking down periodic waveforms (no matter how complicated) into a linear combination of sinusoids (cosine waves, specifically). But if you've seen the actual DFT equations, you'll have noticed that each DFT output is actually described in terms of phasors of specific frequencies (e.g. sums over $e^{-j \theta}$ values). We can now get at least a visual idea of what this means.
Let's look at how combining phasors can let us define complicated waveforms in a simple manner.
### Magnitude and Phase Modifications
First, let's note that we can easily change the magnitude and phase of a sine wave before adding it to others to make a complex waveform.
* We can change the magnitude of a sinusoidal component by multiplying all the values of that sinusoid by a scalar $r$.
* We can apply a phase shift of $\phi$ radians to $\sin(\theta)$ to give us a sine wave of the form $\sin(\theta + \phi)$. It basically means we start our cycles around the unit circle at $e^{j\phi}$ instead of at $e^{j0} = 1 + j0 \mapsto (1,0)$
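Both modifications can be checked numerically (a standalone sketch, not the notebook's plotting code):

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 100)
phi = np.pi / 2                          # a 90-degree phase shift

# Multiplying the phasor by e^{j*phi} rotates its starting point,
# which shifts the projected sine wave by phi
shifted = np.imag(np.exp(1j * theta) * np.exp(1j * phi))

same_as_sin = np.allclose(shifted, np.sin(theta + phi))
same_as_cos = np.allclose(shifted, np.cos(theta))   # sin(x + pi/2) = cos(x)
```

The second check also shows that a quarter-turn phase shift turns a sine into a cosine.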
### Generating linear combinations of sinusoids
Let's plot some combinations of sinusoids.
First let's set the sampling rate and the start and end times of the sequence we're going to generate:
```
## Some parameters to play with
Tmax = 2
Tmin = 0
sampling_rate = 16
t_step=1/sampling_rate
```
Now, let's create some phasors with different magnitudes, frequencies and phases. Here we create 2 phasors with magnitude 1 and no phase shift, one with `freq=2` Hz and another phasor with frequency `2*freq`.
We then add the two phasors values together at each timestep (`zs_sum` in the code below):
```
## Define a bunch of sinusoids. We can do this in terms of 3 parameters:
## (magnitude, frequency, phase)
## The following defines two sinusoids, both with magnitude (peak amplitude) 1 and the same phase (no phase shift)
## The second has double the frequency of the first:
freq=2
params = [(1, freq, 0), (1, 2*freq, 0)]
## Later: change these values and see what happens, e.g.
#params = [(1, freq, 0), (0.4, 5*freq, 0), (0.4, 5*freq, np.pi)]
phasor_list = []
theta_list = []
tsteps_list = []
## Generate a list of phasors for each set of (mag, freq, phase) parameters
for mag, freq, phase in params:
## Generate a phasor with frequency freq
## zs are the phasor values
## thetas are the corresponding angles for each value in zs
## tsteps are the corresponding time steps for each value in zs
zs, thetas, tsteps = gen_phasor_vals_freq(Tmin=Tmin, Tmax=Tmax, t_step=t_step, freq=freq)
## Apply the phase_shift
phase_shift = np.exp(1j*phase)
## scale by the magnitude mag - changes the peak amplitude
zs = mag*zs*phase_shift
## Append the phasor to a list
phasor_list.append(zs)
## The angle sequence and time sequence in case you want to inspect them
## We don't actually use them below
theta_list.append(thetas)
tsteps_list.append(tsteps)
## Superposition: add the individual phasors in the list together (all with the same weights right now)
zs_sum = np.zeros(len(tsteps_list[0]))
for z in phasor_list:
zs_sum = zs_sum + z
```
Now, we can plot the sine (vertical) component of the individual phasors (on the right), ignoring the cosine (horizontal) component for the moment.
```
## Plot the phasor (left) and the projection of the imaginary (vertical) component (right)
## cosproj would be the projection to the real axis, but let's just ignore that for now
fig, phasor, sinproj, cosproj = create_phasor_sinusoid_bkg(Tmin, Tmax, ymax=3, plot_phasor=True, plot_real=False, plot_imag=True,)
dense_tstep=0.001
for mag, freq, phase in params:
## We just want to plot the individual sinusoids (time v amplitude), so we ignore
## the complex numbers we've been using to plot the phasors
_, dense_thetas, dense_tsteps = gen_phasor_vals_freq(Tmin, Tmax, dense_tstep, freq)
sinproj.plot(dense_tsteps, mag*np.sin(dense_thetas+phase), color='grey')
```
Now plot the sum of the phasors (left) and the projected imaginary component in magenta (right) - that is, the sum of the sine components (in grey)
```
## Plot sinusoids as sampled
Xlist = []
Ylist = []
## some hacks to represent the individual phasors as lines from the centre of a circle as well as points
for i, zs in enumerate(phasor_list):
Xs_ = np.real(zs)
Ys_ = np.imag(zs)
X_, Y_, _ = get_line_coords(Xs_, Ys_)
Xlist.append(X_)
Ylist.append(Y_)
## Project the real and imaginary parts of the timewise summed phasor values
Xs = np.real(zs_sum)
Ys = np.imag(zs_sum)
Xline, Yline, _ = get_line_coords(Xs, Ys)
## plot the summed phasor values as 2-d coordinates (left)
## plot the sine projection of the phasor values in time (right)
sinproj.plot(tsteps_list[0], Ys, color='magenta')
fig
```
Now let's see an animation of how we're adding these phasors together!
```
anim = get_phasor_animation(Xline, Yline, tsteps, phasor, sinproj, cosproj, fig, Xlist=Xlist, Ylist=Ylist, params=params)
anim
```
In the animation above you should see:
* the red circle represents the first phasor (`freq=2`)
* the blue circle represents the second phasor (`freq=4`)
* In adding the two phasors together, we add the corresponding vectors for each phasor at each point in time.
### Exercise:
* What happens when you add up two sinusoids with the same frequency but different magnitudes?
* e.g. `params = [(1, freq, 0), (2, freq, 0)]`
* What happens when you change the phase?
* Can you find $\phi$ such that $\sin(\theta+\phi) = \cos(\theta)$ ?
* When do the individual sinusoids cancel each other out?
* Assume you have a compound sinusoid defined by the following params:
* `params = [(1, freq, 0), (0.4, 5*freq, 0)]`
* What sinusoid could you add to cancel the higher frequency component out while keeping the lower frequency one?
### Notes
## Maths Perspective: The DFT equation as a sum of phasors
Now if you look at the mathematical form of the DFT, you can start to recognize this as representing a sequence of phasors of different frequencies, which have a real (cosine) and imaginary (sine) component.
The DFT is defined as follows:
* For input: $x[n]$, for $n=0..N-1$ (i.e. a time series of $N$ samples)
* We calculate an output of $N$ complex numbers, i.e. the magnitudes and phases of specific phasors:
Where the $k$th output, DFT[k], is calculated using the following equation:
$$
\begin{align}
DFT[k] &= \sum_{n=0}^{N-1} x[n] e^{-j \frac{2\pi n}{N} k} \\
\end{align}
$$
Which is equivalent to the following (using Euler's rule):
$$
\begin{align}
DFT[k] &= \sum_{n=0}^{N-1} x[n]\big[\cos(\frac{2\pi n}{N} k) - j \sin(\frac{2\pi n}{N} k) \big]
\end{align}
$$
This basically says that each DFT output is the result of multiplying the $n$th input value $x[n]$ with the $n$th sample of a phasor (hence sine and cosine waves) of a specific frequency, and summing the result (hence the complex number output). The frequency of DFT[k] is $k$ times the frequency of DFT[1], where the frequency of DFT[1] depends on the input size $N$ and the sampling rate (as discussed in [this notebook](./signals-1-1-interpreting-the-discrete-fourier-transform.ipynb)). The sampling rate determines the time each phasor step takes, hence how much time it takes to make a full phasor cycle, hence what frequencies we can actually compare the input against.
The pointwise multiplication and summation is also known as a dot product (aka inner product). The dot product between two vectors tells us how similar those two vectors are. So in a very rough sense, the DFT 'figures out' which frequency components are present in the input, by looking at how similar the input is to each of the N phasors represented in the DFT output.
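To make this concrete, here's a direct (naive, $O(N^2)$) implementation of the DFT sum above, checked against `np.fft.fft` (the test signal is an arbitrary example):

```python
import numpy as np

def naive_dft(x):
    """Direct implementation of the DFT sum above: DFT[k] is the dot
    product of the input with a phasor of frequency k."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * n * k / N))
                     for k in range(N)])

# A sine wave completing 2 cycles over 16 samples: the DFT magnitude
# peaks at k=2 (and its mirror, k=N-2=14)
x = np.sin(2 * np.pi * 2 * np.arange(16) / 16)
ok = np.allclose(naive_dft(x), np.fft.fft(x))
```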
There are two more notebooks on the DFT for this module, but both are extension material (not essential).
* [This notebook](./signals-1-3-discrete-fourier-transform-in-detail.ipynb) goes into more maths details but is purely extension (you can skip)
* [This notebook](./signals-1-4-more-interpreting-the-dft.ipynb) looks at a few more issues in interpreting the DFT
So, you can look at those if you want more details. Otherwise, we'll move onto the source-filter model in the second signals lab!
<img src="../../img/logo_amds.png" alt="Logo" style="width: 128px;"/>
# AmsterdamUMCdb - Freely Accessible ICU Database
version 1.0.2 March 2020
Copyright © 2003-2020 Amsterdam UMC - Amsterdam Medical Data Science
## Sequential Organ Failure Assessment (SOFA)
The sequential organ failure assessment score (SOFA score), originally published as the Sepsis-related Organ Failure Assessment score ([Vincent et al., 1996](http://link.springer.com/10.1007/BF01709751)), is a disease severity score designed to track the severity of critical illness throughout the ICU stay. In contrast to APACHE (II/IV), which only calculates a score for the first 24 hours, it can be used sequentially for every following day. The code performs some data cleanup and calculates the SOFA score for the first 24 hours of ICU admission for all patients in the database.
**Note**: Requires creating the [dictionaries](../../dictionaries/create_dictionaries.ipynb) before running this notebook.
## Imports
```
%matplotlib inline
import amsterdamumcdb
import psycopg2
import pandas as pd
import numpy as np
import re
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import matplotlib as mpl
import io
from IPython.display import display, HTML, Markdown
sofa = pd.read_csv('sofa/sofa.csv')
oxy_flow = pd.read_csv("sofa/oxy_flow.csv" )
sofa_respiration = pd.read_csv("sofa/sofa_respiration.csv" )
sofa_platelets = pd.read_csv("sofa/sofa_platelets.csv" )
sofa_bilirubin = pd.read_csv("sofa/sofa_bilirubin.csv" )
sofa_cardiovascular = pd.read_csv("sofa/sofa_cardiovascular.csv" )
mean_abp = pd.read_csv("sofa/mean_abp.csv" )
sofa_cardiovascular_map = pd.read_csv("sofa/sofa_cardiovascular_map.csv" )
gcs = pd.read_csv("sofa/gcs.csv" )
sofa_cns = pd.read_csv("sofa/sofa_cns.csv" )
sofa_renal_urine_output = pd.read_csv("sofa/sofa_renal_urine_output.csv" )
sofa_renal_daily_urine_output = pd.read_csv("sofa/sofa_renal_daily_urine_output.csv" )
creatinine = pd.read_csv("sofa/creatinine.csv" )
sofa_renal_creatinine = pd.read_csv("sofa/sofa_renal_creatinine.csv" )
sofa_renal = pd.read_csv("sofa/sofa_renal.csv" )
'''
bloc,icustayid,charttime,gender,age,elixhauser,re_admission,died_in_hosp,died_within_48h_of_out_time,
mortality_90d,delay_end_of_record_and_discharge_or_death,
Weight_kg,GCS,HR,SysBP,MeanBP,DiaBP,RR,SpO2,Temp_C,FiO2_1,Potassium,Sodium,Chloride,Glucose,
BUN,Creatinine,Magnesium,Calcium,Ionised_Ca,CO2_mEqL,SGOT,SGPT,Total_bili,Albumin,Hb,WBC_count,
Platelets_count,PTT,PT,INR,Arterial_pH,paO2,paCO2,Arterial_BE,Arterial_lactate,HCO3,mechvent,
Shock_Index,PaO2_FiO2,median_dose_vaso,max_dose_vaso,input_total,
input_4hourly,output_total,output_4hourly,cumulated_balance,SOFA,SIRS
'''
```
<a href="https://colab.research.google.com/github/aubricot/computer_vision_with_eol_images/blob/master/object_detection_for_image_cropping/chiroptera/chiroptera_train_tf2_ssd_rcnn.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Train Tensorflow Faster-RCNN and SSD models to detect bats (Chiroptera) from EOL images
---
*Last Updated 19 Oct 2021*
-Now runs in Python 3 with Tensorflow 2.0-
Use EOL user generated cropping coordinates to train Faster-RCNN and SSD Object Detection Models implemented in Tensorflow to detect bats from EOL images. Training data consists of the user-determined best square thumbnail crop of an image, so model outputs will also be a square around objects of interest.
Datasets were downloaded to Google Drive in [chiroptera_preprocessing.ipynb](https://github.com/aubricot/computer_vision_with_eol_images/blob/master/object_detection_for_image_cropping/chiroptera/chiroptera_preprocessing.ipynb).
***Models were trained in Python 2 and TF 1 in Jan 2020: RCNN trained for 2 days to 200,000 steps and SSD for 4 days to 450,000 steps.***
Notes:
* Before you start: change the runtime to "GPU" with "High RAM"
* Change parameters using form fields on right (/where you see 'TO DO' in code)
* For each 24 hour period on Google Colab, you have up to 12 hours of free GPU access.
References:
* [Official Tensorflow Object Detection API Instructions](https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html)
* [Medium Blog on training using Tensorflow Object Detection API in Colab](https://medium.com/analytics-vidhya/training-an-object-detection-model-with-tensorflow-api-using-google-colab-4f9a688d5e8b)
## Installs & Imports
---
```
# Mount google drive to import/export files
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
# For running inference on the TF-Hub module
import tensorflow as tf
import tensorflow_hub as hub
# For downloading and displaying images
import matplotlib
import matplotlib.pyplot as plt
import tempfile
import urllib
from urllib.request import urlretrieve
from six.moves.urllib.request import urlopen
from six import BytesIO
# For drawing onto images
from PIL import Image
from PIL import ImageColor
from PIL import ImageDraw
from PIL import ImageFont
from PIL import ImageOps
# For measuring the inference time
import time
# For working with data
import numpy as np
import pandas as pd
import os
import csv
# Print Tensorflow version
print('Tensorflow Version: %s' % tf.__version__)
# Check available GPU devices
print('The following GPU devices are available: %s' % tf.test.gpu_device_name())
# Define functions
# Read in data file exported from "Combine output files A-D" block above
def read_datafile(fpath, sep="\t", header=0, disp_head=True):
"""
Defaults to tab-separated data files with header in row 0
"""
try:
df = pd.read_csv(fpath, sep=sep, header=header)
if disp_head:
print("Data header: \n", df.head())
except FileNotFoundError as e:
raise Exception("File not found: Enter the path to your file in form field and re-run").with_traceback(e.__traceback__)
return df
# To load image in and do something with it
def load_img(path):
img = tf.io.read_file(path)
img = tf.image.decode_jpeg(img, channels=3)
return img
# To display loaded image
def display_image(image):
fig = plt.figure(figsize=(20, 15))
plt.grid(False)
plt.imshow(image)
# For reading in images from URL and passing through TF models for inference
def download_and_resize_image(url, new_width=256, new_height=256, #From URL
display=False):
_, filename = tempfile.mkstemp(suffix=".jpg")
response = urlopen(url)
image_data = response.read()
image_data = BytesIO(image_data)
pil_image = Image.open(image_data)
im_w, im_h = pil_image.size  # PIL's Image.size returns (width, height)
pil_image = ImageOps.fit(pil_image, (new_width, new_height), Image.ANTIALIAS)
pil_image_rgb = pil_image.convert("RGB")
pil_image_rgb.save(filename, format="JPEG", quality=90)
#print("Image downloaded to %s." % filename)
if display:
display_image(pil_image)
return filename, im_h, im_w
# Download, compile and build the Tensorflow Object Detection API (takes 4-9 minutes)
# TO DO: Type in the path to your working directory in form field to right
basewd = "/content/drive/MyDrive/train" #@param {type:"string"}
%cd $basewd
# Set up directory for TF2 Model Garden
# TO DO: Type in the folder you would like to contain TF2
folder = "tf2" #@param {type:"string"}
if not os.path.exists(folder):
os.makedirs(folder)
%cd $folder
os.makedirs("tf_models")
%cd tf_models
# Clone the Tensorflow Model Garden
!git clone --depth 1 https://github.com/tensorflow/models/
%cd ../..
# Build the Object Detection API
wd = basewd + '/' + folder
%cd $wd
!cd tf_models/models/research/ && protoc object_detection/protos/*.proto --python_out=. && cp object_detection/packages/tf2/setup.py . && python -m pip install .
```
## Model preparation (only run once)
---
These blocks download and set-up files needed for training object detectors. After running once, you can train and re-train as many times as you'd like.
### Download and extract pre-trained models
```
# Download pre-trained models from Tensorflow Object Detection Model Zoo
# https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
# SSD and Faster-RCNN used as options below
# modified from https://github.com/RomRoc/objdet_train_tensorflow_colab/blob/master/objdet_custom_tf_colab.ipynb
import shutil
import glob
import tarfile
# CD to folder where TF models are installed (tf2)
%cd $wd
# Make folders for your training files for each model
# Faster RCNN Model
if not (os.path.exists('tf_models/train_demo')):
!mkdir tf_models/train_demo
if not (os.path.exists('tf_models/train_demo/rcnn')):
!mkdir tf_models/train_demo/rcnn
if not (os.path.exists('tf_models/train_demo/rcnn/pretrained_model')):
!mkdir tf_models/train_demo/rcnn/pretrained_model
if not (os.path.exists('tf_models/train_demo/rcnn/finetuned_model')):
!mkdir tf_models/train_demo/rcnn/finetuned_model
if not (os.path.exists('tf_models/train_demo/rcnn/trained')):
!mkdir tf_models/train_demo/rcnn/trained
# Download the model
MODEL = 'faster_rcnn_resnet50_v1_640x640_coco17_tpu-8'
MODEL_FILE = MODEL + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/tf2/20200711/'
DEST_DIR = 'tf_models/train_demo/rcnn/pretrained_model'
if not (os.path.exists(MODEL_FILE)):
urlretrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar = tarfile.open(MODEL_FILE)
tar.extractall()
tar.close()
os.remove(MODEL_FILE)
if (os.path.exists(DEST_DIR)):
shutil.rmtree(DEST_DIR)
os.rename(MODEL, DEST_DIR)
# SSD Model
if not (os.path.exists('tf_models/train_demo/ssd')):
!mkdir tf_models/train_demo/ssd
if not (os.path.exists('tf_models/train_demo/ssd/pretrained_model')):
!mkdir tf_models/train_demo/ssd/pretrained_model
if not (os.path.exists('tf_models/train_demo/ssd/finetuned_model')):
!mkdir tf_models/train_demo/ssd/finetuned_model
if not (os.path.exists('tf_models/train_demo/ssd/trained')):
!mkdir tf_models/train_demo/ssd/trained
# Download the model
MODEL = 'ssd_mobilenet_v2_320x320_coco17_tpu-8'
MODEL_FILE = MODEL + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/tf2/20200711/'
DEST_DIR = 'tf_models/train_demo/ssd/pretrained_model'
if not (os.path.exists(MODEL_FILE)):
urlretrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar = tarfile.open(MODEL_FILE)
tar.extractall()
tar.close()
os.remove(MODEL_FILE)
if (os.path.exists(DEST_DIR)):
shutil.rmtree(DEST_DIR)
os.rename(MODEL, DEST_DIR)
```
### Convert training data to tf.record format
1) Download generate_tfrecord.py using code block below
2) Open the Colab file explorer on the right and navigate to your current working directory
3) Double click on generate_tfrecord.py to open it in the Colab text editor.
4) Modify the file for your train dataset:
* update label names to the class(es) of interest at line 31 (Chiroptera)
      # TO-DO replace this with label map
      def class_text_to_int(row_label):
          if row_label == 'Chiroptera':
              return 1
          else:
              return None
* update the filepath where you want your train tf.record file to save at line 85
      # TO-DO replace path with your filepath
      def main(_):
          writer = tf.python_io.TFRecordWriter('/content/drive/MyDrive/[yourfilepath]/tf.record')
5) Close Colab text editor and proceed with steps below to generate tf.record files for your test and train datasets
```
# Download chiroptera_generate_tfrecord.py to your wd in Google Drive
# Follow directions above to modify the file for your dataset
!gdown --id 1fVXeuk7ALHTlTLK3GGH8p6fMHuuWt1Sr
# Convert crops_test to tf.record format for test data
# Modified from https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html
# TO DO: Update file paths in form fields
csv_input = "/content/drive/MyDrive/train/tf2/pre-processing/Chiroptera_crops_test_notaug_oob_rem_fin.csv" #@param {type:"string"}
output_path = "/content/drive/MyDrive/train/tf2/test_images/tf.record" #@param {type:"string"}
test_image_dir = "/content/drive/MyDrive/train/tf2/test_images" #@param {type:"string"}
!python chiroptera_generate_tfrecord.py --csv_input=$csv_input --output_path=$output_path --image_dir=$test_image_dir
# Move tf.record for test images to test images directory
!mv tf.record $test_image_dir
# Convert crops_train to tf.record format for train data
# Modified from https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html
# TO DO: Update file paths in form fields
csv_input = "/content/drive/MyDrive/train/tf2/pre-processing/Chiroptera_crops_train_aug_oob_rem_fin.csv" #@param {type:"string"}
output_path = "/content/drive/MyDrive/train/tf2/images/tf.record" #@param {type:"string"}
train_image_dir = "/content/drive/MyDrive/train/tf2/images" #@param {type:"string"}
!python chiroptera_generate_tfrecord.py --csv_input=$csv_input --output_path=$output_path --image_dir=$train_image_dir
# Move tf.record for training images to train images directory
!mv tf.record $train_image_dir
```
### Make label map for class Chiroptera
```
%%writefile labelmap.pbtxt
item {
id: 1
name: 'Chiroptera'
}
```
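A label map with more than one class follows the same pattern: one `item` entry per class, with ids starting at 1 and matching the ids returned by `class_text_to_int` in `generate_tfrecord.py`. A sketch with a hypothetical second class:

```
item {
  id: 1
  name: 'Chiroptera'
}
item {
  id: 2
  name: 'Aves'
}
```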
### Modify model config files for training Faster-RCNN and SSD with your dataset
If you have errors with training, check the `pipeline_config_path` and `model_dir` in the config files for the Faster-RCNN or SSD model
```
# Adjust model config file based on training/testing datasets
# Modified from https://stackoverflow.com/a/63645324
from google.protobuf import text_format
from object_detection.protos import pipeline_pb2
%cd $wd
# TO DO: Adjust parameters ## add form fields here
filter = "Chiroptera" #@param {type:"string"}
config_basepath = "tf_models/train_demo/" #@param {type:"string"}
label_map = 'labelmap.pbtxt'
train_tfrecord_path = "/content/drive/MyDrive/train/tf2/images/tf.record" #@param {type:"string"}
test_tfrecord_path = "/content/drive/MyDrive/train/tf2/test_images/tf.record" #@param {type:"string"}
ft_ckpt_basepath = "/content/drive/MyDrive/train/tf2/tf_models/train_demo/" #@param {type:"string"}
ft_ckpt_type = "detection" #@param ["detection", "classification"]
num_classes = 1 #@param
batch_size = 1 #@param ["1", "4", "8", "16", "32", "64", "128"] {type:"raw"}
# Define pipeline for modifying model config files
def read_config(model_config):
if 'rcnn/' in model_config:
model_ckpt = 'rcnn/pretrained_model/checkpoint/ckpt-0'
elif 'ssd/' in model_config:
model_ckpt = 'ssd/pretrained_model/checkpoint/ckpt-0'
config_fpath = config_basepath + model_config
pipeline = pipeline_pb2.TrainEvalPipelineConfig()
with tf.io.gfile.GFile(config_fpath, "r") as f:
proto_str = f.read()
text_format.Merge(proto_str, pipeline)
return pipeline, model_ckpt, config_fpath
def modify_config(pipeline, model_ckpt, ft_ckpt_basepath):
finetune_checkpoint = ft_ckpt_basepath + model_ckpt
pipeline.model.faster_rcnn.num_classes = num_classes
pipeline.train_config.fine_tune_checkpoint = finetune_checkpoint
pipeline.train_config.fine_tune_checkpoint_type = ft_ckpt_type
pipeline.train_config.batch_size = batch_size
pipeline.train_config.use_bfloat16 = False # True only if training on TPU
pipeline.train_input_reader.label_map_path = label_map
pipeline.train_input_reader.tf_record_input_reader.input_path[0] = train_tfrecord_path
pipeline.eval_input_reader[0].label_map_path = label_map
pipeline.eval_input_reader[0].tf_record_input_reader.input_path[0] = test_tfrecord_path
return pipeline
def write_config(pipeline, config_fpath):
config_outfpath = os.path.splitext(config_fpath)[0] + '_' + filter + '.config'
config_text = text_format.MessageToString(pipeline)
with tf.io.gfile.GFile(config_outfpath, "wb") as f:
f.write(config_text)
return config_outfpath
def setup_pipeline(model_config, ft_ckpt_basepath):
print('\n Modifying model config file for {}'.format(model_config))
pipeline, model_ckpt, config_fpath = read_config(model_config)
pipeline = modify_config(pipeline, model_ckpt, ft_ckpt_basepath)
config_outfpath = write_config(pipeline, config_fpath)
    print(' Modified model config file saved to {}'.format(config_outfpath))
if config_outfpath:
return "Success!"
else:
return "Fail: try again"
# Modify model configs
model_configs = ['rcnn/pretrained_model/pipeline.config', 'ssd/pretrained_model/pipeline.config']
[setup_pipeline(model_config, ft_ckpt_basepath) for model_config in model_configs]
```
## Train
---
```
# Determine how many train and eval steps to use based on dataset size
# TO DO: Only need to update path if you didn't just run "Model Preparation" block above
try:
train_image_dir
except NameError:
train_image_dir = "/content/drive/MyDrive/train/tf2/images" #@param {type:"string"}
examples = len(os.listdir(train_image_dir))
print("Number of train examples: \n", examples)
# Get the number of testing examples
# TO DO: Only need to update path if you didn't just run "Model Preparation" block above
try:
test_image_dir
except NameError:
test_image_dir = "/content/drive/MyDrive/train/tf2/test_images" #@param {type:"string"}
test_examples = len(os.listdir(test_image_dir))
print("Number of test examples: \n", test_examples)
# Get the training batch size
# TO DO: Only need to update value if you didn't just run "Model Preparation" block above
try:
batch_size
except NameError:
batch_size = 1 #@param ["1", "4", "8", "16", "32", "64", "128"] {type:"raw"}
print("Batch size: \n", batch_size)
# Calculate roughly how many steps to use for training and testing
steps_per_epoch = examples / batch_size
num_eval_steps = test_examples / batch_size
print("Number of steps per training epoch: \n", int(steps_per_epoch))
print("Number of evaluation steps: \n", int(num_eval_steps))
# TO DO: Choose how many epochs to train for
epochs = 410 #@param {type:"slider", min:10, max:1000, step:100}
num_train_steps = int(epochs * steps_per_epoch)
num_eval_steps = int(num_eval_steps)
# TO DO: Choose paths for RCNN or SSD model
pipeline_config_path = "tf_models/train_demo/rcnn/pretrained_model/pipeline_Chiroptera.config" #@param ["tf_models/train_demo/rcnn/pretrained_model/pipeline_Chiroptera.config", "tf_models/train_demo/ssd/pretrained_model/pipeline_Chiroptera.config"]
model_dir = "tf_models/train_demo/rcnn/trained" #@param ["tf_models/train_demo/rcnn/trained", "tf_models/train_demo/ssd/trained"]
output_directory = "tf_models/train_demo/rcnn/finetuned_model" #@param ["tf_models/train_demo/rcnn/finetuned_model", "tf_models/train_demo/ssd/finetuned_model"]
trained_checkpoint_dir = "tf_models/train_demo/rcnn/trained" #@param ["tf_models/train_demo/rcnn/trained", "tf_models/train_demo/ssd/trained"] {allow-input: true}
# Save vars to environment for access with cmd line tools below
os.environ["trained_checkpoint_dir"] = trained_checkpoint_dir
os.environ["num_train_steps"] = str(num_train_steps)
os.environ["num_eval_steps"] = str(num_eval_steps)
os.environ["pipeline_config_path"] = pipeline_config_path
os.environ["model_dir"] = model_dir
os.environ["output_directory"] = output_directory
# Optional: Visualize training progress with Tensorboard
# Load the TensorBoard notebook extension
%load_ext tensorboard
# Log training progress using TensorBoard
%tensorboard --logdir $model_dir
# Actual training
# Note: You can change the number of epochs in code block below and re-run to train longer
# Modified from https://github.com/RomRoc/objdet_train_tensorflow_colab/blob/master/objdet_custom_tf_colab.ipynb
matplotlib.use('Agg')
%cd $wd
!python tf_models/models/research/object_detection/model_main_tf2.py \
--alsologtostderr \
--num_train_steps=$num_train_steps \
--num_eval_steps=$num_eval_steps \
--pipeline_config_path=$pipeline_config_path \
--model_dir=$model_dir
# Export trained model
# Modified from https://github.com/RomRoc/objdet_train_tensorflow_colab/blob/master/objdet_custom_tf_colab.ipynb
%cd $wd
# Save the model
!python tf_models/models/research/object_detection/exporter_main_v2.py \
--input_type image_tensor \
--pipeline_config_path=$pipeline_config_path \
--trained_checkpoint_dir=$trained_checkpoint_dir \
--output_directory=$output_directory
# Evaluate trained model to get mAP and IoU stats for COCO 2017
# Change pipeline_config_path and checkpoint_dir when switching between SSD and Faster-RCNN models
matplotlib.use('Agg')
!python tf_models/models/research/object_detection/model_main_tf2.py \
--alsologtostderr \
--model_dir=$model_dir \
--pipeline_config_path=$pipeline_config_path \
--checkpoint_dir=$trained_checkpoint_dir
```
## **Yolov3 Algorithm**
```
import struct
import numpy as np
import pandas as pd
import os
from keras.layers import Conv2D
from keras.layers import Input
from keras.layers import BatchNormalization
from keras.layers import LeakyReLU
from keras.layers import ZeroPadding2D
from keras.layers import UpSampling2D
from keras.layers import add, concatenate
from keras.models import Model
```
**Access Google Drive**
```
# Load the Drive helper and mount
from google.colab import drive
drive.mount('/content/drive')
```
**Residual Block**
formula: y=F(x) + x
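The identity shortcut can be sketched in isolation with plain NumPy before looking at the convolutional version: `F` here is any stand-in transform, not part of the original code. The Keras block below implements the same idea with convolutions.

```python
import numpy as np

def residual(x, F):
    # y = F(x) + x: the block learns a residual on top of the identity mapping
    return F(x) + x

y = residual(np.array([1.0, 2.0]), lambda v: 2.0 * v)
# y is [3.0, 6.0]: the input is added back onto the transform's output
```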
```
def _conv_block(inp, convs, skip=True):
x = inp
count = 0
for conv in convs:
if count == (len(convs) - 2) and skip:
skip_connection = x
count += 1
if conv['stride'] > 1: x = ZeroPadding2D(((1,0),(1,0)))(x) #padding as darknet prefer left and top
x = Conv2D(conv['filter'],
conv['kernel'],
strides=conv['stride'],
padding='valid' if conv['stride'] > 1 else 'same', # padding as darknet prefer left and top
name='conv_' + str(conv['layer_idx']),
use_bias=False if conv['bnorm'] else True)(x)
if conv['bnorm']: x = BatchNormalization(epsilon=0.001, name='bnorm_' + str(conv['layer_idx']))(x)
if conv['leaky']: x = LeakyReLU(alpha=0.1, name='leaky_' + str(conv['layer_idx']))(x)
return add([skip_connection, x]) if skip else x
```
**Create Yolov3 Architecture**
Three output layers: 82, 94, 106
```
def make_yolov3_model():
input_image = Input(shape=(None, None, 3))
# Layer 0 => 4
x = _conv_block(input_image, [{'filter': 32, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 0},
{'filter': 64, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 1},
{'filter': 32, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 2},
{'filter': 64, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 3}])
# Layer 5 => 8
x = _conv_block(x, [{'filter': 128, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 5},
{'filter': 64, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 6},
{'filter': 128, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 7}])
# Layer 9 => 11
x = _conv_block(x, [{'filter': 64, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 9},
{'filter': 128, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 10}])
# Layer 12 => 15
x = _conv_block(x, [{'filter': 256, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 12},
{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 13},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 14}])
# Layer 16 => 36
for i in range(7):
x = _conv_block(x, [{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 16+i*3},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 17+i*3}])
skip_36 = x
# Layer 37 => 40
x = _conv_block(x, [{'filter': 512, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 37},
{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 38},
{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 39}])
# Layer 41 => 61
for i in range(7):
x = _conv_block(x, [{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 41+i*3},
{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 42+i*3}])
skip_61 = x
# Layer 62 => 65
x = _conv_block(x, [{'filter': 1024, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 62},
{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 63},
{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 64}])
# Layer 66 => 74
for i in range(3):
x = _conv_block(x, [{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 66+i*3},
{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 67+i*3}])
# Layer 75 => 79
x = _conv_block(x, [{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 75},
{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 76},
{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 77},
{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 78},
{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 79}], skip=False)
# Layer 80 => 82
yolo_82 = _conv_block(x, [{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 80},
{'filter': 54, 'kernel': 1, 'stride': 1, 'bnorm': False, 'leaky': False, 'layer_idx': 81}], skip=False)
# Layer 83 => 86
x = _conv_block(x, [{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 84}], skip=False)
x = UpSampling2D(2)(x)
x = concatenate([x, skip_61])
# Layer 87 => 91
x = _conv_block(x, [{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 87},
{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 88},
{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 89},
{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 90},
{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 91}], skip=False)
# Layer 92 => 94
yolo_94 = _conv_block(x, [{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 92},
{'filter': 54, 'kernel': 1, 'stride': 1, 'bnorm': False, 'leaky': False, 'layer_idx': 93}], skip=False)
# Layer 95 => 98
x = _conv_block(x, [{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 96}], skip=False)
x = UpSampling2D(2)(x)
x = concatenate([x, skip_36])
# Layer 99 => 106
yolo_106 = _conv_block(x, [{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 99},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 100},
{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 101},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 102},
{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 103},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 104},
{'filter': 54, 'kernel': 1, 'stride': 1, 'bnorm': False, 'leaky': False, 'layer_idx': 105}], skip=False)
model = Model(input_image, [yolo_82, yolo_94, yolo_106])
return model
```
**Read and Load the pre-trained model weight**
```
class WeightReader:
def __init__(self, weight_file):
with open(weight_file, 'rb') as w_f:
major, = struct.unpack('i', w_f.read(4))
minor, = struct.unpack('i', w_f.read(4))
revision, = struct.unpack('i', w_f.read(4))
if (major*10 + minor) >= 2 and major < 1000 and minor < 1000:
w_f.read(8)
else:
w_f.read(4)
transpose = (major > 1000) or (minor > 1000)
binary = w_f.read()
self.offset = 0
self.all_weights = np.frombuffer(binary, dtype='float32')
def read_bytes(self, size):
self.offset = self.offset + size
return self.all_weights[self.offset-size:self.offset]
def load_weights(self, model):
for i in range(106):
try:
conv_layer = model.get_layer('conv_' + str(i))
print("loading weights of convolution #" + str(i))
if i not in [81, 93, 105]:
norm_layer = model.get_layer('bnorm_' + str(i))
size = np.prod(norm_layer.get_weights()[0].shape)
beta = self.read_bytes(size) # bias
gamma = self.read_bytes(size) # scale
mean = self.read_bytes(size) # mean
var = self.read_bytes(size) # variance
weights = norm_layer.set_weights([gamma, beta, mean, var])
if len(conv_layer.get_weights()) > 1:
bias = self.read_bytes(np.prod(conv_layer.get_weights()[1].shape))
kernel = self.read_bytes(np.prod(conv_layer.get_weights()[0].shape))
kernel = kernel.reshape(list(reversed(conv_layer.get_weights()[0].shape)))
kernel = kernel.transpose([2,3,1,0])
conv_layer.set_weights([kernel, bias])
else:
kernel = self.read_bytes(np.prod(conv_layer.get_weights()[0].shape))
kernel = kernel.reshape(list(reversed(conv_layer.get_weights()[0].shape)))
kernel = kernel.transpose([2,3,1,0])
conv_layer.set_weights([kernel])
except ValueError:
print("no convolution #" + str(i))
def reset(self):
self.offset = 0
```
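The `read_bytes` method above is a simple sequential cursor over a flat float array. A self-contained sketch of the same pattern (names here are illustrative, not from the original code):

```python
import numpy as np

class FloatCursor:
    """Sequential reader over a flat float32 buffer,
    mirroring the offset logic of WeightReader.read_bytes."""
    def __init__(self, values):
        self.all_weights = np.asarray(values, dtype='float32')
        self.offset = 0

    def read_floats(self, size):
        # Advance the cursor and return the slice just consumed
        self.offset += size
        return self.all_weights[self.offset - size:self.offset]

cur = FloatCursor([0.0, 1.0, 2.0, 3.0, 4.0])
first = cur.read_floats(2)   # [0.0, 1.0]
second = cur.read_floats(3)  # [2.0, 3.0, 4.0]
```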
**Define the model**
```
model = make_yolov3_model()
```
**Call class WeightReader to read the weight & load to the model**
```
weight_reader = WeightReader("/content/drive/MyDrive/yolo_custom_model_Training/backup/test_cfg_20000.weights")
weight_reader.load_weights(model)
```
**We will use a pre-trained model to perform object detection**
```
import numpy as np
from matplotlib import pyplot
from matplotlib.patches import Rectangle
from numpy import expand_dims
from keras.models import load_model
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
# define the expected input shape for the model
input_w, input_h = 416, 416
```
**Draw bounding box on the images**
```
class BoundBox:
def __init__(self, xmin, ymin, xmax, ymax, objness = None, classes = None):
self.xmin = xmin
self.ymin = ymin
self.xmax = xmax
self.ymax = ymax
self.objness = objness
self.classes = classes
self.label = -1
self.score = -1
def get_label(self):
if self.label == -1:
self.label = np.argmax(self.classes)
return self.label
def get_score(self):
if self.score == -1:
self.score = self.classes[self.get_label()]
return self.score
def _sigmoid(x):
return 1. / (1. + np.exp(-x))
def decode_netout(netout, anchors, obj_thresh, net_h, net_w):
grid_h, grid_w = netout.shape[:2] # 0 and 1 is row and column 13*13
nb_box = 3 # 3 anchor boxes
netout = netout.reshape((grid_h, grid_w, nb_box, -1)) #13*13*3 ,-1
nb_class = netout.shape[-1] - 5
boxes = []
netout[..., :2] = _sigmoid(netout[..., :2])
netout[..., 4:] = _sigmoid(netout[..., 4:])
netout[..., 5:] = netout[..., 4][..., np.newaxis] * netout[..., 5:]
netout[..., 5:] *= netout[..., 5:] > obj_thresh
for i in range(grid_h*grid_w):
row = i / grid_w
col = i % grid_w
for b in range(nb_box):
# 4th element is objectness score
objectness = netout[int(row)][int(col)][b][4]
            if objectness <= obj_thresh: continue
# first 4 elements are x, y, w, and h
x, y, w, h = netout[int(row)][int(col)][b][:4]
x = (col + x) / grid_w # center position, unit: image width
y = (row + y) / grid_h # center position, unit: image height
w = anchors[2 * b + 0] * np.exp(w) / net_w # unit: image width
h = anchors[2 * b + 1] * np.exp(h) / net_h # unit: image height
# last elements are class probabilities
classes = netout[int(row)][col][b][5:]
box = BoundBox(x-w/2, y-h/2, x+w/2, y+h/2, objectness, classes)
boxes.append(box)
return boxes
def correct_yolo_boxes(boxes, image_h, image_w, net_h, net_w):
new_w, new_h = net_w, net_h
for i in range(len(boxes)):
x_offset, x_scale = (net_w - new_w)/2./net_w, float(new_w)/net_w
y_offset, y_scale = (net_h - new_h)/2./net_h, float(new_h)/net_h
boxes[i].xmin = int((boxes[i].xmin - x_offset) / x_scale * image_w)
boxes[i].xmax = int((boxes[i].xmax - x_offset) / x_scale * image_w)
boxes[i].ymin = int((boxes[i].ymin - y_offset) / y_scale * image_h)
boxes[i].ymax = int((boxes[i].ymax - y_offset) / y_scale * image_h)
```
**Intersection over Union - Actual bounding box vs predicted bounding box**
```
def _interval_overlap(interval_a, interval_b):
x1, x2 = interval_a
x3, x4 = interval_b
if x3 < x1:
if x4 < x1:
return 0
else:
return min(x2,x4) - x1
else:
if x2 < x3:
return 0
else:
return min(x2,x4) - x3
#intersection over union
def bbox_iou(box1, box2):
intersect_w = _interval_overlap([box1.xmin, box1.xmax], [box2.xmin, box2.xmax])
intersect_h = _interval_overlap([box1.ymin, box1.ymax], [box2.ymin, box2.ymax])
intersect = intersect_w * intersect_h
w1, h1 = box1.xmax-box1.xmin, box1.ymax-box1.ymin
w2, h2 = box2.xmax-box2.xmin, box2.ymax-box2.ymin
#Union(A,B) = A + B - Inter(A,B)
union = w1*h1 + w2*h2 - intersect
return float(intersect) / union
```
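A quick numerical sanity check of the formula, using plain `(xmin, ymin, xmax, ymax)` tuples instead of the `BoundBox` class:

```python
def interval_overlap(a, b):
    # Overlap length of two 1-D intervals (xmin, xmax)
    x1, x2 = a
    x3, x4 = b
    if x3 < x1:
        return 0 if x4 < x1 else min(x2, x4) - x1
    return 0 if x2 < x3 else min(x2, x4) - x3

def iou(box1, box2):
    # boxes are (xmin, ymin, xmax, ymax)
    iw = interval_overlap((box1[0], box1[2]), (box2[0], box2[2]))
    ih = interval_overlap((box1[1], box1[3]), (box2[1], box2[3]))
    inter = iw * ih
    a1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    a2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (a1 + a2 - inter)  # Union(A,B) = A + B - Inter(A,B)

# Two 2x2 boxes overlapping in a 1x1 square: IoU = 1 / (4 + 4 - 1) = 1/7
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))
```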
**Non Max Suppression - Only choose the high probability bounding boxes**
```
#boxes from correct_yolo_boxes and decode_netout
def do_nms(boxes, nms_thresh):
if len(boxes) > 0:
nb_class = len(boxes[0].classes)
else:
return
for c in range(nb_class):
sorted_indices = np.argsort([-box.classes[c] for box in boxes])
for i in range(len(sorted_indices)):
index_i = sorted_indices[i]
if boxes[index_i].classes[c] == 0: continue
for j in range(i+1, len(sorted_indices)):
index_j = sorted_indices[j]
if bbox_iou(boxes[index_i], boxes[index_j]) >= nms_thresh:
boxes[index_j].classes[c] = 0
```
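The suppression idea in isolation: a greedy per-class NMS over plain `(score, box)` pairs. Note that `do_nms` above zeroes the class scores of suppressed boxes rather than removing them; this sketch keeps/drops boxes for clarity and is not the original implementation.

```python
def iou(b1, b2):
    # b = (xmin, ymin, xmax, ymax)
    iw = max(0, min(b1[2], b2[2]) - max(b1[0], b2[0]))
    ih = max(0, min(b1[3], b2[3]) - max(b1[1], b2[1]))
    inter = iw * ih
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter)

def nms(detections, thresh):
    """Greedy NMS for one class: keep the highest-scoring boxes,
    drop any lower-scoring box that overlaps a kept box above thresh."""
    kept = []
    for score, box in sorted(detections, reverse=True):
        if all(iou(box, k) < thresh for _, k in kept):
            kept.append((score, box))
    return kept

dets = [(0.9, (0, 0, 2, 2)), (0.8, (0.1, 0.1, 2, 2)), (0.7, (5, 5, 7, 7))]
# The 0.8 box heavily overlaps the 0.9 box and is suppressed;
# the disjoint 0.7 box survives.
print(nms(dets, 0.5))
```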
**Load and Prepare images**
```
def load_image_pixels(filename, shape):
# load the image to get its shape
image = load_img(filename) #load_img() Keras function to load the image .
width, height = image.size
# load the image with the required size
image = load_img(filename, target_size=shape) # target_size argument to resize the image after loading
# convert to numpy array
image = img_to_array(image)
# scale pixel values to [0, 1]
image = image.astype('float32')
image /= 255.0 #rescale the pixel values from 0-255 to 0-1 32-bit floating point values.
# add a dimension so that we have one sample
image = expand_dims(image, 0)
return image, width, height
```
**Save all of the boxes above the threshold**
```
def get_boxes(boxes, labels, thresh):
v_boxes, v_labels, v_scores = list(), list(), list()
# enumerate all boxes
for box in boxes:
# enumerate all possible labels
for i in range(len(labels)):
# check if the threshold for this label is high enough
if box.classes[i] > thresh:
v_boxes.append(box)
v_labels.append(labels[i])
v_scores.append(box.classes[i]*100)
return v_boxes, v_labels, v_scores
```
**Draw all the boxes based on the information from the previous step**
```
def draw_boxes(filename, v_boxes, v_labels, v_scores):
# load the image
data = pyplot.imread(filename)
# plot the image
pyplot.imshow(data)
# get the context for drawing boxes
ax = pyplot.gca()
# plot each box
for i in range(len(v_boxes)):
#by retrieving the coordinates from each bounding box and creating a Rectangle object.
box = v_boxes[i]
# get coordinates
y1, x1, y2, x2 = box.ymin, box.xmin, box.ymax, box.xmax
# calculate width and height of the box
width, height = x2 - x1, y2 - y1
# create the shape
rect = Rectangle((x1, y1), width, height, fill=False, color='white')
# draw the box
ax.add_patch(rect)
# draw text and score in top left corner
label = "%s (%.3f)" % (v_labels[i], v_scores[i])
pyplot.text(x1, y1, label, color='white')
# show the plot
pyplot.show()
```
### **Detection**
```
%cd '/content/drive/MyDrive/yolo_custom_model_Training/custom_data/'
input_w, input_h = 416, 416
anchors = [[116,90, 156,198, 373,326], [30,61, 62,45, 59,119], [10,13, 16,30, 33,23]]
class_threshold = 0.15
pred_right = 0
labels = ['clear_plastic_bottle','plastic_bottle_cap','drink_can','plastic_straw','paper_straw',
'disposable_plastic_cup','styrofoam_piece','glass_bottle','pop_tab','paper_bag','plastic_utensils',
'normal_paper','plastic_lid']
filepath = '/content/drive/MyDrive/yolo_custom_model_Training/custom_data/'
for im in os.listdir(filepath):
image, image_w, image_h = load_image_pixels(im, (input_w, input_h))
yhat = model.predict(image)
boxes = list()
for i in range(len(yhat)):
boxes += decode_netout(yhat[i][0], anchors[i], class_threshold, input_h, input_w)
correct_yolo_boxes(boxes, image_h, image_w, input_h, input_w)
do_nms(boxes, 0.1)
v_boxes, v_labels, v_scores = get_boxes(boxes, labels, class_threshold)
if len(v_labels)!=0:
image_name, useless = im.split('.')
if image_name[:-3] == v_labels[0]:
pred_right +=1
accuracy = '{:.2%}'.format(pred_right/130)
print("the detection accuracy is " + accuracy)
pred_right
```
```
################################ NOTES ##############################ex
# Lines of code that are to be excluded from the documentation are #ex
# marked with `#ex` at the end of the line. #ex
# #ex
# To ensure that figures are displayed correctly together with widgets #ex
# in the sphinx documentation we will include screenshots of some of #ex
# the produced figures. #ex
# Do not run cells with the `display(Image('path_to_image'))` code to #ex
# avoid duplication of results in the notebook. #ex
# #ex
# Some reStructuredText 2 (ReST) syntax is included to aid in #ex
# conversion to ReST for the sphinx documentation. #ex
#########################################################################ex
notebook_dir = %pwd #ex
import pysces #ex
import psctb #ex
import numpy #ex
from os import path #ex
from IPython.display import display, Image #ex
from sys import platform #ex
%matplotlib inline
```
# Symca
Symca is used to perform symbolic metabolic control analysis [[3,4]](references.html) on metabolic pathway models in order to dissect the control properties of these pathways in terms of the different chains of local effects (or control patterns) that make up the total control coefficient values. Symbolic/algebraic expressions are generated for each control coefficient in a pathway which can be subjected to further analysis.
## Features
* Generates symbolic expressions for each control coefficient of a metabolic pathway model.
* Splits control coefficients into control patterns that indicate the contribution of different chains of local effects.
* Control coefficient and control pattern expressions can be manipulated using standard `SymPy` functionality.
* Control coefficient and control pattern values are determined automatically, and are updated whenever standard (non-symbolic) control coefficient values are recalculated after a parameter alteration.
* Analysis sessions (raw expression data) can be saved to disk for later use.
* The effect of parameter scans on control coefficients and control patterns can be generated and displayed using `ScanFig`.
* Visualisation of control patterns by using `ModelGraph` functionality.
* Saving/loading of `Symca` sessions.
* Saving of control pattern results.
## Usage and feature walkthrough
### Workflow
Performing symbolic control analysis with `Symca` usually requires the following steps:
1. Instantiation of a `Symca` object using a `PySCeS` model object.
2. Generation of symbolic control coefficient expressions.
3. Accessing generated control coefficient expression results via `cc_results` and the corresponding control coefficient names (see [Basic Usage](basic_usage.ipynb#syntax)).
4. Inspection of control coefficient values.
5. Inspection of control pattern values and their contributions towards the total control coefficient values.
6. Inspection of the effect of parameter changes (parameter scans) on the values of control coefficients and control patterns and the contribution of control patterns towards control coefficients.
7. Session/result saving if required
8. Further analysis.
### Object instantiation
Instantiation of a `Symca` analysis object requires a `PySCeS` model object (`PysMod`) as an argument. Using the included [lin4_fb.psc](included_files.html#lin4-fb-psc) model, a `Symca` session is instantiated as follows:
```
mod = pysces.model('lin4_fb')
sc = psctb.Symca(mod)
```
Additionally `Symca` has the following arguments:
* `internal_fixed`: This must be set to `True` in the case where an internal metabolite has a fixed concentration *(default: `False`)*
* `auto_load`: If `True`, `Symca` will try to load a previously saved session. Saved data is unaffected by the `internal_fixed` argument above *(default: `False`)*.
.. note:: For the case where an internal metabolite is fixed see [Fixed internal metabolites](Symca.ipynb#fixed-internal-metabolites) below.
### Generating symbolic control coefficient expressions
Control coefficient expressions can be generated as soon as a `Symca` object has been instantiated using the `do_symca` method. This process can potentially take quite some time to complete; we therefore recommend saving the generated expressions for later loading (see [Saving/Loading Sessions](Symca.ipynb#saving-loading-sessions) below). In the case of `lin4_fb.psc`, expressions should be generated within a few seconds.
```
sc.do_symca()
```
`do_symca` has the following arguments:
* `internal_fixed`: This must be set to `True` in the case where an internal metabolite has a fixed concentration *(default: `False`)*
* `auto_save_load`: If set to `True`, `Symca` will attempt to load a previously saved session and only generate new expressions in case of a failure. After generation of new results, these results will be saved instead. Setting `internal_fixed` to `True` does not affect previously saved results that were generated with this argument set to `False` *(default: `False`)*.
### Accessing control coefficient expressions
Generated results may be accessed via a dictionary-like `cc_results` object (see [Basic Usage - Tables](basic_usage.ipynb#tables)). Inspecting this `cc_results` object in an IPython/Jupyter notebook yields a table of control coefficient values:
```
sc.cc_results
```
Inspecting an individual control coefficient yields a symbolic expression together with a value:
```
sc.cc_results.ccJR1_R4
```
In the above example, the expression of the control coefficient consists of two numerator terms and a common denominator shared by all the control coefficient expressions, signified by $\Sigma$.
Various properties of this control coefficient can be accessed such as the:
* Expression (as a `SymPy` expression)
```
sc.cc_results.ccJR1_R4.expression
```
* Numerator expression (as a `SymPy` expression)
```
sc.cc_results.ccJR1_R4.numerator
```
* Denominator expression (as a `SymPy` expression)
```
sc.cc_results.ccJR1_R4.denominator
```
* Value (as a `float64`)
```
sc.cc_results.ccJR1_R4.value
```
Additional, less pertinent, attributes are `abs_value`, `latex_expression`, `latex_expression_full`, `latex_numerator`, `latex_name`, `name` and `denominator_object`.
The individual control coefficient numerator terms, otherwise known as control patterns, may also be accessed as follows:
```
sc.cc_results.ccJR1_R4.CP001
sc.cc_results.ccJR1_R4.CP002
```
Each control pattern is numbered arbitrarily starting from 001 and has properties similar to those of the control coefficient object (i.e., its expression, numerator, value etc. can also be accessed).
#### Control pattern percentage contribution
Additionally control patterns have a `percentage` field which indicates the degree to which a particular control pattern contributes towards the overall control coefficient value:
```
sc.cc_results.ccJR1_R4.CP001.percentage
sc.cc_results.ccJR1_R4.CP002.percentage
```
Unlike conventional percentages, however, these values are calculated as percentage contributions towards the sum of the absolute values of all the control patterns (rather than as percentages of the total control coefficient value). This is done to account for situations where control pattern values have different signs.
A particularly problematic example of where the above method is necessary is a hypothetical control coefficient with a value of zero, but with two control patterns of equal value and opposite sign. In this case a conventional percentage calculation would lead to an undefined (`NaN`) result, whereas our methodology would indicate that each control pattern is equally ($50\%$) responsible for the observed control coefficient value.
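The calculation described above can be illustrated numerically. A minimal sketch with hypothetical control pattern values (not taken from the model):

```python
def pattern_percentages(cp_values):
    # Percentage contribution relative to the sum of absolute values,
    # so patterns of opposite sign still receive well-defined shares.
    total = sum(abs(v) for v in cp_values)
    return [100.0 * abs(v) / total for v in cp_values]

# A control coefficient of zero made of two equal-and-opposite patterns:
print(pattern_percentages([0.5, -0.5]))  # [50.0, 50.0], not NaN
```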
### Dynamic value updating
The values of the control coefficients and their control patterns are automatically updated when new steady-state
elasticity coefficients are calculated for the model. Thus changing a parameter of `lin4_fb`, such as the $V_{f}$ value of reaction 4, will lead to new control coefficient and control pattern values:
```
mod.reLoad()
# mod.Vf_4 has a default value of 50
mod.Vf_4 = 0.1
# calculating new steady state
mod.doMca()
# now ccJR1_R4 and its two control patterns should have new values
sc.cc_results.ccJR1_R4
# original value was 0.000
sc.cc_results.ccJR1_R4.CP001
# original value was 0.964
sc.cc_results.ccJR1_R4.CP002
# resetting to default Vf_4 value and recalculating
mod.reLoad()
mod.doMca()
```
### Control pattern graphs
As described under [Basic Usage](basic_usage.ipynb#graphic-representation-of-metabolic-networks), `Symca` has the functionality to display the chains of local effects represented by control patterns on a scheme of a metabolic model. This functionality can be accessed via the `highlight_patterns` method:
```
# This path leads to the provided layout file
path_to_layout = '~/Pysces/psc/lin4_fb.dict'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32' and pysces.version.current_version_tuple() < (0,9,8):
path_to_layout = psctb.utils.misc.unix_to_windows_path(path_to_layout)
else:
path_to_layout = path.expanduser(path_to_layout)
sc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)
# To avoid duplication - do not run #ex
display(Image(path.join(notebook_dir,'images','sc_model_graph_1.png'))) #ex
```
`highlight_patterns` has the following optional arguments:
* `width`: Sets the width of the graph (*default*: 900).
* `height`: Sets the height of the graph (*default*: 500).
* `show_dummy_sinks`: If `True`, reactants with "dummy" or "sink" in their names will not be displayed (*default*: `False`).
* `show_external_modifier_links`: If `True`, edges representing the interaction of external effectors with reactions will be shown (*default*: `False`).
Clicking either of the two buttons representing the control patterns highlights that pattern according to its percentage contribution (as discussed [above](Symca.ipynb#control-pattern-percentage-contribution)) towards the total control coefficient.
```
# clicking on CP002 shows that this control pattern representing
# the chain of effects passing through the feedback loop
# is totally responsible for the observed control coefficient value.
sc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)
# To avoid duplication - do not run #ex
display(Image(path.join(notebook_dir,'images','sc_model_graph_2.png'))) #ex
# clicking on CP001 shows that this control pattern representing
# the chain of effects of the main pathway does not contribute
# at all to the control coefficient value.
sc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)
# To avoid duplication - do not run #ex
display(Image(path.join(notebook_dir,'images','sc_model_graph_3.png'))) #ex
```
### Parameter scans
Parameter scans can be performed to determine the effect of a parameter change either on the control coefficient and control pattern values, or on the contribution of the control patterns towards the control coefficient (as discussed [above](Symca.ipynb#control-pattern-percentage-contribution)). The procedures for the "value" and "percentage" scans are largely the same and rely on the same principles as described in the [Basic Usage](basic_usage.ipynb#plotting-and-displaying-results) and [RateChar](RateChar.ipynb#plotting-results) sections.
To perform a parameter scan the `do_par_scan` method is called. This method has the following arguments:
* `parameter`: A String representing the parameter which should be varied.
* `scan_range`: Any iterable representing the range of values over which to vary the parameter (typically a NumPy `ndarray` generated by `numpy.linspace` or `numpy.logspace`).
* `scan_type`: Either `"percentage"` or `"value"` as described above (*default*: `"percentage"`).
* `init_return`: If `True` the parameter value will be reset to its initial value after performing the parameter scan (*default*: `True`).
* `par_scan`: If `True`, the parameter scan will be performed by multiple parallel processes rather than a single process, thus speeding performance (*default*: `False`).
* `par_engine`: Specifies the engine to be used for the parallel scanning processes. Can be either `"multiproc"` or `"ipcluster"`. A discussion of the differences between these methods is beyond the scope of this document; see [here](http://www.davekuhlman.org/python_multiprocessing_01.html) for a brief overview of multiprocessing in Python (*default*: `"multiproc"`).
* `force_legacy`: If `True`, `do_par_scan` will use an older and slower algorithm for performing the parameter scan. This is mostly useful for debugging purposes (*default*: `False`).
Below we will perform a percentage scan of $V_{f4}$ for 200 points between 0.1 and 1000 in log space:
```
percentage_scan_data = sc.cc_results.ccJR1_R4.do_par_scan(parameter='Vf_4',
scan_range=numpy.logspace(-1,3,200),
scan_type='percentage')
```
As previously described, these data can be displayed using `ScanFig` by calling the `plot` method of `percentage_scan_data`. Furthermore, lines can be enabled/disabled using the `toggle_category` method of `ScanFig` or by clicking on the appropriate buttons:
```
percentage_scan_plot = percentage_scan_data.plot()
# set the x-axis to a log scale
percentage_scan_plot.ax.semilogx()
# enable all the lines
percentage_scan_plot.toggle_category('Control Patterns', True)
percentage_scan_plot.toggle_category('CP001', True)
percentage_scan_plot.toggle_category('CP002', True)
# display the plot
percentage_scan_plot.interact()
#remove_next
# To avoid duplication - do not run #ex
display(Image(path.join(notebook_dir,'images','sc_perscan.png'))) #ex
```
A `value` plot can similarly be generated and displayed. In this case, however, an additional line indicating $C^{J}_{4}$ will also be present:
```
value_scan_data = sc.cc_results.ccJR1_R4.do_par_scan(parameter='Vf_4',
scan_range=numpy.logspace(-1,3,200),
scan_type='value')
value_scan_plot = value_scan_data.plot()
# set the x-axis to a log scale
value_scan_plot.ax.semilogx()
# enable all the lines
value_scan_plot.toggle_category('Control Coefficients', True)
value_scan_plot.toggle_category('ccJR1_R4', True)
value_scan_plot.toggle_category('Control Patterns', True)
value_scan_plot.toggle_category('CP001', True)
value_scan_plot.toggle_category('CP002', True)
# display the plot
value_scan_plot.interact()
#remove_next
# To avoid duplication - do not run #ex
display(Image(path.join(notebook_dir,'images','sc_valscan.png'))) #ex
```
### Fixed internal metabolites
In the case where the concentration of an internal intermediate is fixed (such as in the case of a GSDA) the `internal_fixed` argument must be set to `True` in either the `do_symca` method, or when instantiating the `Symca` object. This will typically result in the creation of a `cc_results_N` object for each separate reaction block, where `N` is a number starting at 0. Results can then be accessed via these objects as with normal free internal intermediate models.
Thus, for a variant of the `lin4_fb` model where the intermediate `S3` is fixed at its steady-state value, the procedure is as follows:
```
# Create a variant of mod with 'S3' fixed at its steady-state value
mod_fixed_S3 = psctb.modeltools.fix_metabolite_ss(mod, 'S3')
# Instantiate Symca object with the 'internal_fixed' argument set to 'True'
sc_fixed_S3 = psctb.Symca(mod_fixed_S3, internal_fixed=True)
# Run the 'do_symca' method (internal_fixed can also be set to 'True' here)
sc_fixed_S3.do_symca()
```
The normal `sc_fixed_S3.cc_results` object is still generated, but is invalid for the fixed model. Each additional `cc_results_N` contains control coefficient expressions that share a common denominator and correspond to a specific reaction block. These `cc_results_N` objects are numbered arbitrarily, but consistently across different sessions. Each results object is accessed and utilised in the same way as the normal `cc_results` object.
For the `mod_fixed_S3` model, two additional results objects (`cc_results_0` and `cc_results_1`) are generated:
* `cc_results_1` contains the control coefficients describing the sensitivity of flux and concentrations within the supply block of `S3` towards reactions within the supply block.
```
sc_fixed_S3.cc_results_1
```
* `cc_results_0` contains the control coefficients describing the sensitivity of fluxes and concentrations of either reaction block towards reactions in the other reaction block (i.e., all control coefficients here should be zero). Because the `S3` demand block consists of a single reaction, this object also contains the control coefficient of `R4` on `J_R4`, which is equal to one. This results object is useful for confirming that the results were generated as expected.
```
sc_fixed_S3.cc_results_0
```
If the demand block of `S3` in this pathway consisted of multiple reactions, rather than a single reaction, there would have been an additional `cc_results_N` object containing the control coefficients of that reaction block.
### Saving results
In addition to saving parameter scan results (as previously described), a summary of the control coefficient and control pattern results can be saved using the `save_results` method. This saves a `csv` file (by default) to a specified location on disk. If no location is specified, a file named `cc_summary_N` is saved to the `~/Pysces/$modelname/symca/` directory, where `N` is a number starting at 0:
```
sc.save_results()
```
`save_results` has the following optional arguments:
* `file_name`: Specifies a path to save the results to. If `None`, the path defaults as described above.
* `separator`: The separator between fields (*default*: `","`)
The contents of the saved data file are as follows:
```
# the following code requires `pandas` to run
import pandas as pd
# load csv file at default path
results_path = '~/Pysces/lin4_fb/symca/cc_summary_0.csv'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32' and pysces.version.current_version_tuple() < (0,9,8):
results_path = psctb.utils.misc.unix_to_windows_path(results_path)
else:
results_path = path.expanduser(results_path)
saved_results = pd.read_csv(results_path)
# show first 20 lines
saved_results.head(n=20)
```
### Saving/loading sessions
Saving and loading `Symca` sessions is very simple and works similarly to `RateChar`. Saving a session is done with the `save_session` method, whereas the `load_session` method loads the saved expressions. As with the `save_results` method and most other saving and loading functionality, if no `file_name` argument is provided, files will be saved to the default directory (see also [Basic Usage](basic_usage.ipynb#saving-and-default-directories)). As previously described, expressions can also be loaded/saved automatically by `do_symca` using the `auto_save_load` argument, which saves and loads via the default path. Models with internal fixed metabolites are handled automatically.
```
# saving session
sc.save_session()
# create new Symca object and load saved results
new_sc = psctb.Symca(mod)
new_sc.load_session()
# display saved results
new_sc.cc_results
```
# Recognize named entities on Twitter with LSTMs
In this assignment, you will use a recurrent neural network to solve the Named Entity Recognition (NER) problem. NER is a common task in natural language processing systems. It serves to extract entities such as persons, organizations, and locations from text. In this task you will experiment with recognizing named entities in Twitter data.
For example, suppose we want to extract persons' and organizations' names. Then for the input text:
Ian Goodfellow works for Google Brain
a NER model needs to provide the following sequence of tags:
B-PER I-PER O O B-ORG I-ORG
Where the *B-* and *I-* prefixes stand for the beginning and inside of the entity, while *O* stands for out of tag or no tag. Markup with this prefix scheme is called *BIO markup*. It is introduced to distinguish between consecutive entities of the same type.
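As an aside (not part of the assignment), the mapping from BIO tags back to entity spans can be sketched as follows; the helper name `bio_to_entities` is ours:

```python
# Minimal sketch of decoding BIO tags into entity spans, using the
# example sentence above. Not part of the assignment code.
def bio_to_entities(tokens, tags):
    """Collect (entity_type, text) pairs from a BIO-tagged sentence."""
    entities, current_type, current_tokens = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith('B-'):            # a new entity begins
            if current_tokens:
                entities.append((current_type, ' '.join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith('I-') and current_tokens:
            current_tokens.append(token)    # continue the current entity
        else:                               # 'O' closes any open entity
            if current_tokens:
                entities.append((current_type, ' '.join(current_tokens)))
            current_type, current_tokens = None, []
    if current_tokens:                      # flush an entity at sentence end
        entities.append((current_type, ' '.join(current_tokens)))
    return entities

tokens = 'Ian Goodfellow works for Google Brain'.split()
tags = ['B-PER', 'I-PER', 'O', 'O', 'B-ORG', 'I-ORG']
print(bio_to_entities(tokens, tags))
# [('PER', 'Ian Goodfellow'), ('ORG', 'Google Brain')]
```

This is why the *B-* prefix matters: two adjacent entities of the same type stay separate because the second one starts with a fresh *B-* tag.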
A solution of the task will be based on neural networks, particularly, on Bi-Directional Long Short-Term Memory Networks (Bi-LSTMs).
### Libraries
For this task you will need the following libraries:
- [Tensorflow](https://www.tensorflow.org) — an open-source software library for Machine Intelligence.
In this assignment, we use Tensorflow 1.15.0. You can install it with pip:
!pip install tensorflow==1.15.0
- [Numpy](http://www.numpy.org) — a package for scientific computing.
If you have never worked with Tensorflow, you would probably need to read some tutorials during your work on this assignment, e.g. [this one](https://www.tensorflow.org/tutorials/recurrent) could be a good starting point.
### Data
The following cell will download all data required for this assignment into the folder `week2/data`.
```
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
if IN_COLAB:
! wget https://raw.githubusercontent.com/hse-aml/natural-language-processing/master/setup_google_colab.py -O setup_google_colab.py
import setup_google_colab
setup_google_colab.setup_week2()
import sys
sys.path.append("..")
from common.download_utils import download_week2_resources
download_week2_resources()
```
### Load the Twitter Named Entity Recognition corpus
We will work with a corpus, which contains tweets with NE tags. Every line of a file contains a pair of a token (word/punctuation symbol) and a tag, separated by a whitespace. Different tweets are separated by an empty line.
The function *read_data* reads a corpus from *file_path* and returns two lists: one with tokens and one with the corresponding tags. You need to complete this function by adding code that replaces each user's nickname with the `<USR>` token and each URL with the `<URL>` token. You can assume that a URL is any string that starts with *http://* or *https://*, and a nickname is any string that starts with the *@* symbol.
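To illustrate the rule on a single token, one possible helper is sketched below; the name `normalize_token` is hypothetical, and your own code inside *read_data* may inline this logic differently:

```python
# Hypothetical helper illustrating the replacement rule above; the
# name and structure are ours -- your read_data code may differ.
def normalize_token(token):
    if token.startswith('http://') or token.startswith('https://'):
        return '<URL>'
    if token.startswith('@'):
        return '<USR>'
    return token

print(normalize_token('@ian_goodfellow'))        # <USR>
print(normalize_token('https://example.com/x'))  # <URL>
print(normalize_token('works'))                  # works
```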
```
def read_data(file_path):
tokens = []
tags = []
tweet_tokens = []
tweet_tags = []
for line in open(file_path, encoding='utf-8'):
line = line.strip()
if not line:
if tweet_tokens:
tokens.append(tweet_tokens)
tags.append(tweet_tags)
tweet_tokens = []
tweet_tags = []
else:
token, tag = line.split()
# Replace all urls with <URL> token
# Replace all users with <USR> token
######################################
######### YOUR CODE HERE #############
######################################
tweet_tokens.append(token)
tweet_tags.append(tag)
return tokens, tags
```
And now we can load three separate parts of the dataset:
- *train* data for training the model;
- *validation* data for evaluation and hyperparameters tuning;
- *test* data for final evaluation of the model.
```
train_tokens, train_tags = read_data('data/train.txt')
validation_tokens, validation_tags = read_data('data/validation.txt')
test_tokens, test_tags = read_data('data/test.txt')
```
You should always understand what kind of data you are dealing with. For this purpose, you can print the data by running the following cell:
```
for i in range(3):
for token, tag in zip(train_tokens[i], train_tags[i]):
print('%s\t%s' % (token, tag))
print()
```
### Prepare dictionaries
To train a neural network, we will use two mappings:
- {token}$\to${token id}: address the row in embeddings matrix for the current token;
- {tag}$\to${tag id}: one-hot ground truth probability distribution vectors for computing the loss at the output of the network.
Now you need to implement the function *build_dict* which will return {token or tag}$\to${index} and vice versa.
```
from collections import defaultdict
def build_dict(tokens_or_tags, special_tokens):
"""
tokens_or_tags: a list of lists of tokens or tags
special_tokens: some special tokens
"""
# Create a dictionary with default value 0
tok2idx = defaultdict(lambda: 0)
idx2tok = []
# Create mappings from tokens (or tags) to indices and vice versa.
# At first, add special tokens (or tags) to the dictionaries.
# The first special token must have index 0.
# Mapping tok2idx should contain each token or tag only once.
# To do so, you should:
# 1. extract unique tokens/tags from the tokens_or_tags variable that do not
#    occur in special_tokens (because they could have a non-empty intersection)
# 2. index them (for example, you can add them into the list idx2tok)
# 3. for each token/tag, save the index into tok2idx.
######################################
######### YOUR CODE HERE #############
######################################
return tok2idx, idx2tok
```
After implementing the function *build_dict* you can make dictionaries for tokens and tags. Special tokens in our case will be:
- `<UNK>` token for out of vocabulary tokens;
- `<PAD>` token for padding sentence to the same length when we create batches of sentences.
```
special_tokens = ['<UNK>', '<PAD>']
special_tags = ['O']
# Create dictionaries
token2idx, idx2token = build_dict(train_tokens + validation_tokens, special_tokens)
tag2idx, idx2tag = build_dict(train_tags, special_tags)
```
The following helper functions create the mappings between tokens (or tags) and ids for a sentence.
```
def words2idxs(tokens_list):
return [token2idx[word] for word in tokens_list]
def tags2idxs(tags_list):
return [tag2idx[tag] for tag in tags_list]
def idxs2words(idxs):
return [idx2token[idx] for idx in idxs]
def idxs2tags(idxs):
return [idx2tag[idx] for idx in idxs]
```
### Generate batches
Neural networks are usually trained with batches, meaning that weight updates are computed from several sequences at a time. The tricky part is that all sequences within a batch need to have the same length, so we pad them with a special `<PAD>` token. It is also good practice to provide the RNN with sequence lengths, so it can skip computations for the padded parts. We provide the batching function *batches_generator* ready for you to save time.
```
def batches_generator(batch_size, tokens, tags,
shuffle=True, allow_smaller_last_batch=True):
"""Generates padded batches of tokens and tags."""
n_samples = len(tokens)
if shuffle:
order = np.random.permutation(n_samples)
else:
order = np.arange(n_samples)
n_batches = n_samples // batch_size
if allow_smaller_last_batch and n_samples % batch_size:
n_batches += 1
for k in range(n_batches):
batch_start = k * batch_size
batch_end = min((k + 1) * batch_size, n_samples)
current_batch_size = batch_end - batch_start
x_list = []
y_list = []
max_len_token = 0
for idx in order[batch_start: batch_end]:
x_list.append(words2idxs(tokens[idx]))
y_list.append(tags2idxs(tags[idx]))
max_len_token = max(max_len_token, len(tags[idx]))
# Fill in the data into numpy nd-arrays filled with padding indices.
x = np.ones([current_batch_size, max_len_token], dtype=np.int32) * token2idx['<PAD>']
y = np.ones([current_batch_size, max_len_token], dtype=np.int32) * tag2idx['O']
lengths = np.zeros(current_batch_size, dtype=np.int32)
for n in range(current_batch_size):
utt_len = len(x_list[n])
x[n, :utt_len] = x_list[n]
lengths[n] = utt_len
y[n, :utt_len] = y_list[n]
yield x, y, lengths
```
## Build a recurrent neural network
This is the most important part of the assignment. Here we will specify the network architecture based on TensorFlow building blocks. It's as fun and easy as building with Lego! We will create an LSTM network that produces a probability distribution over tags for each token in a sentence. To take into account both the right and left contexts of a token, we will use a Bi-Directional LSTM (Bi-LSTM). A dense layer will be used on top to perform tag classification.
```
import tensorflow as tf
import numpy as np
class BiLSTMModel():
pass
```
First, we need to create [placeholders](https://www.tensorflow.org/api_docs/python/tf/compat/v1/placeholder) to specify what data we are going to feed into the network during the execution time. For this task we will need the following placeholders:
- *input_batch* — sequences of words (the shape equals to [batch_size, sequence_len]);
- *ground_truth_tags* — sequences of tags (the shape equals to [batch_size, sequence_len]);
- *lengths* — lengths of not padded sequences (the shape equals to [batch_size]);
- *dropout_ph* — dropout keep probability; this placeholder has a predefined value 1;
- *learning_rate_ph* — learning rate; we need this placeholder because we want to change the value during training.
Note that we use *None* in the shape declarations, which means that data of any size can be fed.
You need to complete the function *declare_placeholders*.
```
def declare_placeholders(self):
"""Specifies placeholders for the model."""
# Placeholders for input and ground truth output.
self.input_batch = tf.placeholder(dtype=tf.int32, shape=[None, None], name='input_batch')
self.ground_truth_tags = ######### YOUR CODE HERE #############
# Placeholder for lengths of the sequences.
self.lengths = tf.placeholder(dtype=tf.int32, shape=[None], name='lengths')
# Placeholder for a dropout keep probability. If we don't feed
# a value for this placeholder, it will be equal to 1.0.
self.dropout_ph = tf.placeholder_with_default(tf.cast(1.0, tf.float32), shape=[])
# Placeholder for a learning rate (tf.float32).
self.learning_rate_ph = ######### YOUR CODE HERE #############
BiLSTMModel.__declare_placeholders = classmethod(declare_placeholders)
```
Now, let us specify the layers of the neural network. First, we need to perform some preparatory steps:
- Create embeddings matrix with [tf.Variable](https://www.tensorflow.org/api_docs/python/tf/Variable). Specify its name (*embeddings_matrix*), type (*tf.float32*), and initialize with random values.
- Create forward and backward LSTM cells. TensorFlow provides a number of RNN cells ready for you. We suggest that you use *LSTMCell*, but you can also experiment with other types, e.g. GRU cells. [This](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) blogpost could be interesting if you want to learn more about the differences.
- Wrap your cells with [DropoutWrapper](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper). Dropout is an important regularization technique for neural networks. Specify all keep probabilities using the dropout placeholder that we created before.
After that, you can build the computation graph that transforms an input_batch:
- [Look up](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup) embeddings for an *input_batch* in the prepared *embedding_matrix*.
- Pass the embeddings through [Bidirectional Dynamic RNN](https://www.tensorflow.org/api_docs/python/tf/nn/bidirectional_dynamic_rnn) with the specified forward and backward cells. Use the lengths placeholder here to avoid computations for padding tokens inside the RNN.
- Create a dense layer on top. Its output will be used directly in loss function.
Fill in the code below. In case you need to debug something, the easiest way is to check that tensor shapes of each step match the expected ones.
```
def build_layers(self, vocabulary_size, embedding_dim, n_hidden_rnn, n_tags):
"""Specifies bi-LSTM architecture and computes logits for inputs."""
# Create embedding variable (tf.Variable) with dtype tf.float32
initial_embedding_matrix = np.random.randn(vocabulary_size, embedding_dim) / np.sqrt(embedding_dim)
embedding_matrix_variable = ######### YOUR CODE HERE #############
# Create RNN cells (for example, tf.nn.rnn_cell.BasicLSTMCell) with n_hidden_rnn number of units
# and dropout (tf.nn.rnn_cell.DropoutWrapper), initializing all *_keep_prob with dropout placeholder.
forward_cell = ######### YOUR CODE HERE #############
backward_cell = ######### YOUR CODE HERE #############
# Look up embeddings for self.input_batch (tf.nn.embedding_lookup).
# Shape: [batch_size, sequence_len, embedding_dim].
embeddings = ######### YOUR CODE HERE #############
# Pass them through Bidirectional Dynamic RNN (tf.nn.bidirectional_dynamic_rnn).
# Shape: [batch_size, sequence_len, 2 * n_hidden_rnn].
# Also don't forget to initialize sequence_length as self.lengths and dtype as tf.float32.
(rnn_output_fw, rnn_output_bw), _ = ######### YOUR CODE HERE #############
rnn_output = tf.concat([rnn_output_fw, rnn_output_bw], axis=2)
# Dense layer on top.
# Shape: [batch_size, sequence_len, n_tags].
self.logits = tf.layers.dense(rnn_output, n_tags, activation=None)
BiLSTMModel.__build_layers = classmethod(build_layers)
```
To compute the actual predictions of the neural network, you need to apply [softmax](https://www.tensorflow.org/api_docs/python/tf/nn/softmax) to the last layer and find the most probable tags with [argmax](https://www.tensorflow.org/api_docs/python/tf/argmax).
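The shapes involved can be checked with a quick NumPy sketch (toy logits; the actual model applies `tf.nn.softmax` and `tf.argmax` to tensors):

```python
import numpy as np

# Toy check of the softmax + argmax step. Shapes follow the model:
# logits are [batch_size, sequence_len, n_tags] = [1, 2, 3] here.
logits = np.array([[[2.0, 0.5, 0.1],
                    [0.2, 3.0, 0.4]]])
# Numerically stable softmax over the last (tag) axis.
shifted = logits - logits.max(axis=-1, keepdims=True)
probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
# argmax with axis=-1 picks the most probable tag for every token.
predictions = probs.argmax(axis=-1)
print(predictions)  # [[0 1]]
```

This shows why `axis=-1` matters: without it, argmax would run over the flattened array instead of over the tag dimension.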
```
def compute_predictions(self):
"""Transforms logits to probabilities and finds the most probable tags."""
# Create softmax (tf.nn.softmax) function
softmax_output = ######### YOUR CODE HERE #############
# Use argmax (tf.argmax) to get the most probable tags
# Don't forget to set axis=-1
# otherwise argmax will be calculated in a wrong way
self.predictions = ######### YOUR CODE HERE #############
BiLSTMModel.__compute_predictions = classmethod(compute_predictions)
```
During training we do not need predictions of the network, but we need a loss function. We will use [cross-entropy loss](http://ml-cheatsheet.readthedocs.io/en/latest/loss_functions.html#cross-entropy), efficiently implemented in TF as
[cross entropy with logits](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits_v2). Note that it should be applied to the logits of the model (not to the softmax probabilities!). Also note that we do not want to take into account loss terms coming from `<PAD>` tokens, so we need to mask them out before computing the [mean](https://www.tensorflow.org/api_docs/python/tf/reduce_mean).
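The masking idea can be sketched in plain Python with toy numbers (the real computation uses TensorFlow ops on the loss tensor):

```python
# Toy illustration of the masking step: per-token losses at <PAD>
# positions are zeroed before averaging. All numbers are made up.
PAD_index = 0
input_batch = [[5, 7, 0, 0]]           # one sentence: two real tokens, two pads
loss_tensor = [[0.4, 0.6, 0.9, 0.9]]   # hypothetical per-token losses
mask = [[float(tok != PAD_index) for tok in row] for row in input_batch]
# Mirroring tf.reduce_mean(mask * loss_tensor): the mean runs over
# *all* positions, with padded ones contributing zero.
flat = [l * m for row_l, row_m in zip(loss_tensor, mask)
              for l, m in zip(row_l, row_m)]
mean_loss = sum(flat) / len(flat)
print(mean_loss)  # 0.25
```

Without the mask, the large losses at the `<PAD>` positions would dominate the average even though they carry no information.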
```
def compute_loss(self, n_tags, PAD_index):
"""Computes masked cross-entropy loss with logits."""
# Create cross entropy function (tf.nn.softmax_cross_entropy_with_logits_v2)
ground_truth_tags_one_hot = tf.one_hot(self.ground_truth_tags, n_tags)
loss_tensor = ######### YOUR CODE HERE #############
mask = tf.cast(tf.not_equal(self.input_batch, PAD_index), tf.float32)
# Create loss function which doesn't operate with <PAD> tokens (tf.reduce_mean)
# Be careful that the argument of tf.reduce_mean should be
# multiplication of mask and loss_tensor.
self.loss = ######### YOUR CODE HERE #############
BiLSTMModel.__compute_loss = classmethod(compute_loss)
```
The last thing to specify is how we want to optimize the loss.
We suggest that you use [Adam](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer) optimizer with a learning rate from the corresponding placeholder.
You will also need to apply clipping to eliminate exploding gradients. It can be easily done with [clip_by_norm](https://www.tensorflow.org/api_docs/python/tf/clip_by_norm) function.
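What `clip_by_norm` does to a single gradient can be sketched in plain Python (a simplified list-of-floats stand-in for the TensorFlow op):

```python
import math

# Simplified sketch of gradient clipping by L2 norm. The TensorFlow op
# works on tensors; here one gradient is a plain list of floats.
def clip_by_norm(grad, clip_norm):
    norm = math.sqrt(sum(g * g for g in grad))
    if norm <= clip_norm:
        return grad                       # small gradients pass through
    return [g * clip_norm / norm for g in grad]

print(clip_by_norm([3.0, 4.0], 1.0))  # norm 5.0 rescaled to 1.0: [0.6, 0.8]
print(clip_by_norm([0.3, 0.4], 1.0))  # norm 0.5 <= 1.0: unchanged
```

Clipping preserves the gradient's direction while capping its magnitude, which is what tames exploding gradients during training.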
```
def perform_optimization(self):
"""Specifies the optimizer and train_op for the model."""
# Create an optimizer (tf.train.AdamOptimizer)
self.optimizer = ######### YOUR CODE HERE #############
self.grads_and_vars = self.optimizer.compute_gradients(self.loss)
# Gradient clipping (tf.clip_by_norm) for self.grads_and_vars
# Pay attention that you need to apply this operation only for gradients
# because self.grads_and_vars also contains variables.
# list comprehension might be useful in this case.
clip_norm = tf.cast(1.0, tf.float32)
self.grads_and_vars = ######### YOUR CODE HERE #############
self.train_op = self.optimizer.apply_gradients(self.grads_and_vars)
BiLSTMModel.__perform_optimization = classmethod(perform_optimization)
```
Congratulations! You have specified all the parts of your network. You may have noticed that we haven't dealt with any real data yet; what you have written so far is just a recipe for how the network should function.
Now we will put them to the constructor of our Bi-LSTM class to use it in the next section.
```
def init_model(self, vocabulary_size, n_tags, embedding_dim, n_hidden_rnn, PAD_index):
self.__declare_placeholders()
self.__build_layers(vocabulary_size, embedding_dim, n_hidden_rnn, n_tags)
self.__compute_predictions()
self.__compute_loss(n_tags, PAD_index)
self.__perform_optimization()
BiLSTMModel.__init__ = classmethod(init_model)
```
## Train the network and predict tags
[Session.run](https://www.tensorflow.org/api_docs/python/tf/Session#run) is a point which initiates computations in the graph that we have defined. To train the network, we need to compute *self.train_op*, which was declared in *perform_optimization*. To predict tags, we just need to compute *self.predictions*. Anyway, we need to feed actual data through the placeholders that we defined before.
```
def train_on_batch(self, session, x_batch, y_batch, lengths, learning_rate, dropout_keep_probability):
feed_dict = {self.input_batch: x_batch,
self.ground_truth_tags: y_batch,
self.learning_rate_ph: learning_rate,
self.dropout_ph: dropout_keep_probability,
self.lengths: lengths}
session.run(self.train_op, feed_dict=feed_dict)
BiLSTMModel.train_on_batch = classmethod(train_on_batch)
```
Implement the function *predict_for_batch* by initializing *feed_dict* with input *x_batch* and *lengths* and running the *session* for *self.predictions*.
```
def predict_for_batch(self, session, x_batch, lengths):
######################################
######### YOUR CODE HERE #############
######################################
return predictions
BiLSTMModel.predict_for_batch = classmethod(predict_for_batch)
```
We have finished the necessary methods of our BiLSTMModel and are almost ready to start experimenting.
### Evaluation
To simplify the evaluation process we provide two functions for you:
- *predict_tags*: uses a model to get predictions and transforms indices to tokens and tags;
- *eval_conll*: calculates precision, recall and F1 for the results.
```
from evaluation import precision_recall_f1
def predict_tags(model, session, token_idxs_batch, lengths):
"""Performs predictions and transforms indices to tokens and tags."""
tag_idxs_batch = model.predict_for_batch(session, token_idxs_batch, lengths)
tags_batch, tokens_batch = [], []
for tag_idxs, token_idxs in zip(tag_idxs_batch, token_idxs_batch):
tags, tokens = [], []
for tag_idx, token_idx in zip(tag_idxs, token_idxs):
tags.append(idx2tag[tag_idx])
tokens.append(idx2token[token_idx])
tags_batch.append(tags)
tokens_batch.append(tokens)
return tags_batch, tokens_batch
def eval_conll(model, session, tokens, tags, short_report=True):
"""Computes NER quality measures using CONLL shared task script."""
y_true, y_pred = [], []
for x_batch, y_batch, lengths in batches_generator(1, tokens, tags):
tags_batch, tokens_batch = predict_tags(model, session, x_batch, lengths)
if len(x_batch[0]) != len(tags_batch[0]):
raise Exception("Incorrect length of prediction for the input, "
"expected length: %i, got: %i" % (len(x_batch[0]), len(tags_batch[0])))
predicted_tags = []
ground_truth_tags = []
for gt_tag_idx, pred_tag, token in zip(y_batch[0], tags_batch[0], tokens_batch[0]):
if token != '<PAD>':
ground_truth_tags.append(idx2tag[gt_tag_idx])
predicted_tags.append(pred_tag)
# We extend every prediction and ground truth sequence with 'O' tag
# to indicate a possible end of entity.
y_true.extend(ground_truth_tags + ['O'])
y_pred.extend(predicted_tags + ['O'])
results = precision_recall_f1(y_true, y_pred, print_results=True, short_report=short_report)
return results
```
## Run your experiment
Create *BiLSTMModel* model with the following parameters:
- *vocabulary_size* — number of tokens;
- *n_tags* — number of tags;
- *embedding_dim* — dimension of embeddings, recommended value: 200;
- *n_hidden_rnn* — size of hidden layers for RNN, recommended value: 200;
- *PAD_index* — an index of the padding token (`<PAD>`).
Set hyperparameters. You might want to start with the following recommended values:
- *batch_size*: 32;
- 4 epochs;
- starting value of *learning_rate*: 0.005
- *learning_rate_decay*: a square root of 2;
- *dropout_keep_probability*: try several values: 0.1, 0.5, 0.9.
However, feel free to conduct more experiments to tune hyperparameters and earn extra points for the assignment.
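The suggested decay schedule can be sketched as follows (the starting value and epoch count are the recommendations above; the schedule itself is only an illustration):

```python
import math

# Sketch of the suggested schedule: divide the learning rate by sqrt(2)
# after every epoch, starting from the recommended 0.005.
learning_rate, decay = 0.005, math.sqrt(2)
n_epochs = 4
schedule = []
for epoch in range(n_epochs):
    schedule.append(learning_rate)  # rate used during this epoch
    learning_rate /= decay          # decayed for the next epoch
print(['%.5f' % lr for lr in schedule])
```

Note that dividing by $\sqrt{2}$ twice halves the learning rate, so it drops from 0.005 to 0.0025 over two epochs.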
```
tf.reset_default_graph()
model = ######### YOUR CODE HERE #############
batch_size = ######### YOUR CODE HERE #############
n_epochs = ######### YOUR CODE HERE #############
learning_rate = ######### YOUR CODE HERE #############
learning_rate_decay = ######### YOUR CODE HERE #############
dropout_keep_probability = ######### YOUR CODE HERE #############
```
If you get the error *"Tensor conversion requested dtype float64 for Tensor with dtype float32"* at this point, check whether any of your variables were initialised without an explicit dtype, and set `dtype=tf.float32` for those variables.
Finally, we are ready to run the training!
```
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Start training... \n')
for epoch in range(n_epochs):
# For each epoch evaluate the model on train and validation data
print('-' * 20 + ' Epoch {} '.format(epoch+1) + 'of {} '.format(n_epochs) + '-' * 20)
print('Train data evaluation:')
eval_conll(model, sess, train_tokens, train_tags, short_report=True)
print('Validation data evaluation:')
eval_conll(model, sess, validation_tokens, validation_tags, short_report=True)
# Train the model
for x_batch, y_batch, lengths in batches_generator(batch_size, train_tokens, train_tags):
model.train_on_batch(sess, x_batch, y_batch, lengths, learning_rate, dropout_keep_probability)
# Decaying the learning rate
learning_rate = learning_rate / learning_rate_decay
print('...training finished.')
```
Now let us look at the full quality reports for the final model on the train, validation, and test sets. As a hint that you have implemented everything correctly, you can expect an F-score of about 40% on the validation set.
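For reference, here is a toy token-level computation of these measures (an illustrative sketch only; the real report comes from the CONLL evaluation script):

```python
# Toy example: compare predicted tags against ground truth, ignoring the 'O' tag
y_true = ['B-PER', 'O', 'B-LOC', 'O']
y_pred = ['B-PER', 'O', 'O', 'B-LOC']

tp = sum(t == p and t != 'O' for t, p in zip(y_true, y_pred))  # correct entity tags
n_pred = sum(p != 'O' for p in y_pred)                         # predicted entity tags
n_true = sum(t != 'O' for t in y_true)                         # true entity tags

precision = tp / n_pred
recall = tp / n_true
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)  # 0.5 0.5 0.5
```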
**The output of the cell below (as well as the outputs of all the other cells) should be present in the notebook for peer review!**
```
print('-' * 20 + ' Train set quality: ' + '-' * 20)
train_results = eval_conll(model, sess, train_tokens, train_tags, short_report=False)
print('-' * 20 + ' Validation set quality: ' + '-' * 20)
validation_results = ######### YOUR CODE HERE #############
print('-' * 20 + ' Test set quality: ' + '-' * 20)
test_results = ######### YOUR CODE HERE #############
```
### Conclusions
Could we say that our model is state of the art and that the results are acceptable for the task? We certainly can. Bi-LSTMs are currently among the state-of-the-art approaches to NER and outperform classical methods. Despite the small training corpus (compared with the usual corpus sizes in deep learning), our results are quite good. In addition, this task involves many possible named entities, and for some of them we have only a few dozen training examples, which is definitely small. Nevertheless, the implemented model outperforms classical CRFs on this task. Even better results can be obtained by combining several types of methods; see [this](https://arxiv.org/abs/1603.01354) paper if you are interested.
# TV Script Generation
In this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The neural network you'll build will generate a new, "fake" TV script based on patterns it recognizes in this training data.
## Get the Data
The data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text.
>* As a first step, we'll load in this data and look at some samples.
* Then, you'll be tasked with defining and training an RNN to generate a new script!
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
```
## Explore the Data
Play around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
```
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
```
---
## Implement Pre-processing Functions
The first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:
- Lookup Table
- Tokenize Punctuation
### Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call `vocab_to_int`
- Dictionary to go from the id to word, we'll call `int_to_vocab`
Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
```
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_counter = Counter(text)
sorted_vocab_list = sorted(word_counter, key=word_counter.get, reverse=True)
vocab_to_int = {word: i for i, word in enumerate(sorted_vocab_list)} #Do not need to start from index 1 because no padding.
int_to_vocab = {i: word for word, i in vocab_to_int.items()}
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
```
### Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.
Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( **.** )
- Comma ( **,** )
- Quotation Mark ( **"** )
- Semicolon ( **;** )
- Exclamation mark ( **!** )
- Question mark ( **?** )
- Left Parentheses ( **(** )
- Right Parentheses ( **)** )
- Dash ( **-** )
- Return ( **\n** )
This dictionary will be used to tokenize the symbols and add a delimiter (space) around each of them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
```
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {
'.': "||dot||",
',': "||comma||",
'"': "||doublequote||",
';': "||semicolon||",
'!': "||bang||",
'?': "||questionmark||",
'(': "||leftparens||",
')': "||rightparens||",
'-': "||dash||",
'\n': "||return||",
}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
```
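As a quick sanity check (a hypothetical example, not part of the graded tests), you can apply such a dictionary to a sample line and confirm that each punctuation mark becomes a standalone "word":

```python
sample = 'hello, world! "bye"'
# Same symbol-to-token mapping as token_lookup() above
token_dict = {
    '.': '||dot||', ',': '||comma||', '"': '||doublequote||',
    ';': '||semicolon||', '!': '||bang||', '?': '||questionmark||',
    '(': '||leftparens||', ')': '||rightparens||',
    '-': '||dash||', '\n': '||return||',
}
for symbol, token in token_dict.items():
    # surround every token with spaces so split() isolates it
    sample = sample.replace(symbol, ' {} '.format(token))
tokens = sample.split()
print(tokens)
# ['hello', '||comma||', 'world', '||bang||', '||doublequote||', 'bye', '||doublequote||']
```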
## Pre-process all the data and save it
Running the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
```
# Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
```
## Build the Neural Network
In this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions.
### Check Access to GPU
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
```
## Input
Let's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.html#torch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.
You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.
```
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data,
batch_size=batch_size)
```
### Batching
Implement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.
>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.
For example, say we have these as input:
```
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
```
Your first `feature_tensor` should contain the values:
```
[1, 2, 3, 4]
```
And the corresponding `target_tensor` should just be the next "word"/tokenized word value:
```
5
```
This should continue with the second `feature_tensor`, `target_tensor` being:
```
[2, 3, 4, 5] # features
6 # target
```
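In plain Python (before any tensors are involved), the sliding window described above amounts to:

```python
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4

# each feature is a window of sequence_length words; the target is the next word
pairs = [(words[i:i + sequence_length], words[i + sequence_length])
         for i in range(len(words) - sequence_length)]
print(pairs)
# [([1, 2, 3, 4], 5), ([2, 3, 4, 5], 6), ([3, 4, 5, 6], 7)]
```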
```
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
features = []
targets = []
for start in range(len(words) - sequence_length):
end = start + sequence_length
features.append(words[start:end])
targets.append(words[end])
data = TensorDataset(torch.tensor(features), torch.tensor(targets))
data_loader = DataLoader(data, batch_size=batch_size, shuffle=True)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
```
### Test your dataloader
You'll have to modify this code to test a batching function, but it should look fairly similar.
Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.
Your code should return something like the following (likely in a different order, if you shuffled your data):
```
torch.Size([10, 5])
tensor([[ 28, 29, 30, 31, 32],
[ 21, 22, 23, 24, 25],
[ 17, 18, 19, 20, 21],
[ 34, 35, 36, 37, 38],
[ 11, 12, 13, 14, 15],
[ 23, 24, 25, 26, 27],
[ 6, 7, 8, 9, 10],
[ 38, 39, 40, 41, 42],
[ 25, 26, 27, 28, 29],
[ 7, 8, 9, 10, 11]])
torch.Size([10])
tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])
```
### Sizes
Your sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10).
### Values
You should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
```
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
```
---
## Build the Neural Network
Implement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.html#torch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class:
- `__init__` - The initialize function.
- `init_hidden` - The initialization function for an LSTM/GRU hidden state
- `forward` - Forward propagation function.
The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.
**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word.
### Hints
1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer; you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`
2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:
```
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
```
```
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param n_layers: The number of stacked LSTM/GRU layers
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.embed = nn.Embedding(vocab_size, embedding_dim)
self.rnn = nn.LSTM(embedding_dim, self.hidden_dim, self.n_layers, dropout=dropout, batch_first=True)
self.fc = nn.Linear(self.hidden_dim, self.output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
x = self.embed(nn_input)
x, hidden = self.rnn(x, hidden)
x = x.contiguous().view(-1, self.hidden_dim)
x = self.fc(x)
x = x.view(nn_input.size(0), -1, self.output_size)[:, -1]
# return one batch of output word scores and the hidden state
return x, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
if train_on_gpu:
hidden = (hidden[0].cuda(), hidden[1].cuda())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
```
### Define forward and backpropagation
Use the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:
```
loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)
```
And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.
**If a GPU is available, you should move your data to that GPU device, here.**
```
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:param hidden: The hidden state from the previous batch
:return: The loss over the batch and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
if train_on_gpu:
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
hidden = tuple([each.data for each in hidden])
optimizer.zero_grad()
rnn.zero_grad()
output, hidden = rnn(inp, hidden)
loss = criterion(output, target)
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
```
## Neural Network Training
With the structure of the network complete and data ready to be fed into the neural network, it's time to train it.
### Train Loop
The training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the given number of epochs. The model's progress is printed every `show_every_n_batches` batches; you'll set this parameter along with the other parameters in the next section.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
```
### Hyperparameters
Set and train the neural network with the following parameters:
- Set `sequence_length` to the length of a sequence.
- Set `batch_size` to the batch size.
- Set `num_epochs` to the number of epochs to train for.
- Set `learning_rate` to the learning rate for an Adam optimizer.
- Set `vocab_size` to the number of unique tokens in our vocabulary.
- Set `output_size` to the desired size of the output.
- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.
- Set `hidden_dim` to the hidden dimension of your RNN.
- Set `n_layers` to the number of layers/cells in your RNN.
- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.
If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
```
# Data params
# Sequence Length
sequence_length = 8 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 9
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 256
# Hidden Dimension
hidden_dim = 512
# Number of RNN Layers
n_layers = 3
# Show stats for every n number of batches
show_every_n_batches = 500
```
### Train
In the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, consider changing your hyperparameters. In general, you may get better results with larger `hidden_dim` and `n_layers` values, but larger models take longer to train.
> **You should aim for a loss less than 3.5.**
You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
```
### Question: How did you decide on your model hyperparameters?
For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those?
**Answer:** Most of the parameters were selected based on community input gathered from online sources. Sequence length was a little special in that I could not find many suggestions online, so I tested sequence lengths of 4, 6, 8, 16, 32, 64, 128, and 1024. I found that smaller sequences were effective, though my tests were not conclusive; 8 achieved the best results in a fairly short time.
I also tested other parameters such as hidden dimensions and layer counts. The conclusion was that higher embedding dimensions did not improve performance, while higher hidden dimensions did; 2-3 layers seemed to offer little difference in performance.
---
# Checkpoint
After running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
```
## Generate TV Script
With the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section.
### Generate Text
To generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
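The core of that sampling step can be sketched with numpy alone (the scores below are made up for illustration; the real function samples from the RNN's output scores):

```python
import numpy as np

scores = np.array([2.0, 0.5, 1.0, 3.0, -1.0, 0.0])  # hypothetical word scores
p = np.exp(scores - scores.max())
p = p / p.sum()                       # softmax over the vocabulary

top_k = 5                             # same top_k used in the generate function
top_i = np.argsort(p)[-top_k:]        # indices of the k most likely words
top_p = p[top_i] / p[top_i].sum()     # renormalise over the top k
word_i = np.random.choice(top_i, p=top_p)  # sample among them for some randomness
```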
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
```
### Generate a New Script
It's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:
- "jerry"
- "elaine"
- "george"
- "kramer"
You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
```
import numpy as np
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
```
#### Save your favorite scripts
Once you have a script that you like (or find interesting), save it to a text file!
```
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
```
# The TV Script is Not Perfect
It's okay if the TV script doesn't make perfect sense; it should look like alternating lines of dialogue. Here is one example of a few generated lines.
### Example generated script
>jerry: what about me?
>
>jerry: i don't have to wait.
>
>kramer:(to the sales table)
>
>elaine:(to jerry) hey, look at this, i'm a good doctor.
>
>newman:(to elaine) you think i have no idea of this...
>
>elaine: oh, you better take the phone, and he was a little nervous.
>
>kramer:(to the phone) hey, hey, jerry, i don't want to be a little bit.(to kramer and jerry) you can't.
>
>jerry: oh, yeah. i don't even know, i know.
>
>jerry:(to the phone) oh, i know.
>
>kramer:(laughing) you know...(to jerry) you don't know.
You can see that there are multiple characters that say (somewhat) complete sentences, but it doesn't have to be perfect! It takes quite a while to get good results, and often, you'll have to use a smaller vocabulary (and discard uncommon words), or get more data. The Seinfeld dataset is about 3.4 MB, which is big enough for our purposes; for script generation you'll want more than 1 MB of text, generally.
# Submitting This Project
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "helper.py" and "problem_unittests.py" files in your submission. Once you download these files, compress them into one zip file for submission.
# The Perceptron
```
import mxnet as mx
from mxnet import nd, autograd
import matplotlib.pyplot as plt
import numpy as np
mx.random.seed(1)
```
## A Separable Classification Problem
```
# generate fake data that is linearly separable with a margin epsilon given the data
def getfake(samples, dimensions, epsilon):
wfake = nd.random_normal(shape=(dimensions)) # fake weight vector for separation
bfake = nd.random_normal(shape=(1)) # fake bias
wfake = wfake / nd.norm(wfake) # rescale to unit length
# making some linearly separable data, simply by choosing the labels accordingly
X = nd.zeros(shape=(samples, dimensions))
Y = nd.zeros(shape=(samples))
i = 0
while (i < samples):
tmp = nd.random_normal(shape=(1,dimensions))
margin = nd.dot(tmp, wfake) + bfake
if (nd.norm(tmp).asscalar() < 3) & (abs(margin.asscalar()) > epsilon):
X[i,:] = tmp[0]
Y[i] = 1 if margin.asscalar() > 0 else -1
i += 1
return X, Y
# plot the data with colors chosen according to the labels
def plotdata(X,Y):
for (x,y) in zip(X,Y):
if (y.asscalar() == 1):
plt.scatter(x[0].asscalar(), x[1].asscalar(), color='r')
else:
plt.scatter(x[0].asscalar(), x[1].asscalar(), color='b')
# plot contour plots on a [-3,3] x [-3,3] grid
def plotscore(w,d):
xgrid = np.arange(-3, 3, 0.02)
ygrid = np.arange(-3, 3, 0.02)
xx, yy = np.meshgrid(xgrid, ygrid)
zz = nd.zeros(shape=(xgrid.size, ygrid.size, 2))
zz[:,:,0] = nd.array(xx)
zz[:,:,1] = nd.array(yy)
vv = nd.dot(zz,w) + d
CS = plt.contour(xgrid,ygrid,vv.asnumpy())
plt.clabel(CS, inline=1, fontsize=10)
X, Y = getfake(50, 2, 0.3)
plotdata(X,Y)
plt.show()
```
## Perceptron Implementation
```
def perceptron(w,b,x,y):
if (y * (nd.dot(w,x) + b)).asscalar() <= 0:
w += y * x
b += y
return 1
else:
return 0
w = nd.zeros(shape=(2))
b = nd.zeros(shape=(1))
for (x,y) in zip(X,Y):
res = perceptron(w,b,x,y)
if (res == 1):
print('Encountered an error and updated parameters')
print('data {}, label {}'.format(x.asnumpy(),y.asscalar()))
print('weight {}, bias {}'.format(w.asnumpy(),b.asscalar()))
plotscore(w,b)
plotdata(X,Y)
plt.scatter(x[0].asscalar(), x[1].asscalar(), color='g')
plt.show()
```
## Perceptron Convergence in Action
```
Eps = np.arange(0.025, 0.45, 0.025)
Err = np.zeros(shape=(Eps.size))
for j in range(10):
    for (i, epsilon) in enumerate(Eps):
        # reset the parameters before each run; otherwise we would keep
        # training the perceptron from the previous experiment
        w = nd.zeros(shape=(2))
        b = nd.zeros(shape=(1))
        X, Y = getfake(1000, 2, epsilon)
        for (x, y) in zip(X, Y):
            Err[i] += perceptron(w, b, x, y)
Err = Err / 10.0
plt.plot(Eps, Err, label='average number of updates for training')
plt.legend()
plt.show()
```
```
import pandas as pd
import numpy as np
HUES64_rep1_tfxn1_fs = ["../../../data/02__mpra/01__counts/07__HUES64_rep6_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/07__HUES64_rep6_lib2_BARCODES.txt"]
HUES64_rep1_tfxn2_fs = ["../../../data/02__mpra/01__counts/08__HUES64_rep7_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/08__HUES64_rep7_lib2_BARCODES.txt"]
HUES64_rep1_tfxn3_fs = ["../../../data/02__mpra/01__counts/09__HUES64_rep8_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/09__HUES64_rep8_lib2_BARCODES.txt"]
HUES64_rep2_tfxn1_fs = ["../../../data/02__mpra/01__counts/10__HUES64_rep9_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/10__HUES64_rep9_lib2_BARCODES.txt"]
HUES64_rep2_tfxn2_fs = ["../../../data/02__mpra/01__counts/11__HUES64_rep10_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/11__HUES64_rep10_lib2_BARCODES.txt"]
HUES64_rep2_tfxn3_fs = ["../../../data/02__mpra/01__counts/12__HUES64_rep11_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/12__HUES64_rep11_lib2_BARCODES.txt"]
HUES64_rep3_tfxn1_fs = ["../../../data/02__mpra/01__counts/16__HUES64_rep12_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/16__HUES64_rep12_lib2_BARCODES.txt"]
HUES64_rep3_tfxn2_fs = ["../../../data/02__mpra/01__counts/17__HUES64_rep13_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/17__HUES64_rep13_lib2_BARCODES.txt"]
HUES64_rep3_tfxn3_fs = ["../../../data/02__mpra/01__counts/18__HUES64_rep14_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/18__HUES64_rep14_lib2_BARCODES.txt"]
mESC_rep1_tfxn1_fs = ["../../../data/02__mpra/01__counts/15__mESC_rep3_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/15__mESC_rep3_lib2_BARCODES.txt"]
mESC_rep2_tfxn1_fs = ["../../../data/02__mpra/01__counts/19__mESC_rep4_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/19__mESC_rep4_lib2_BARCODES.txt",
"../../../data/02__mpra/01__counts/19__mESC_rep4_lib3_BARCODES.txt"]
mESC_rep3_tfxn1_fs = ["../../../data/02__mpra/01__counts/20__mESC_rep5_lib1_BARCODES.txt",
"../../../data/02__mpra/01__counts/20__mESC_rep5_lib2_BARCODES.txt",
"../../../data/02__mpra/01__counts/20__mESC_rep5_lib3_BARCODES.txt"]
```
## 1. import, merge, sum
### HUES64 rep 1
```
for i, f in enumerate(HUES64_rep1_tfxn1_fs):
    if i == 0:
        HUES64_rep1_tfxn1 = pd.read_table(f, sep="\t")
        print(len(HUES64_rep1_tfxn1))
    else:
        tmp = pd.read_table(f, sep="\t")
        print(len(tmp))
        HUES64_rep1_tfxn1 = HUES64_rep1_tfxn1.merge(tmp, on="barcode")
HUES64_rep1_tfxn1["count"] = HUES64_rep1_tfxn1[["count_x", "count_y"]].sum(axis=1)
HUES64_rep1_tfxn1.drop(["count_x", "count_y"], axis=1, inplace=True)
HUES64_rep1_tfxn1.head()

for i, f in enumerate(HUES64_rep1_tfxn2_fs):
    if i == 0:
        HUES64_rep1_tfxn2 = pd.read_table(f, sep="\t")
        print(len(HUES64_rep1_tfxn2))
    else:
        tmp = pd.read_table(f, sep="\t")
        print(len(tmp))
        HUES64_rep1_tfxn2 = HUES64_rep1_tfxn2.merge(tmp, on="barcode")
HUES64_rep1_tfxn2["count"] = HUES64_rep1_tfxn2[["count_x", "count_y"]].sum(axis=1)
HUES64_rep1_tfxn2.drop(["count_x", "count_y"], axis=1, inplace=True)
HUES64_rep1_tfxn2.head()

for i, f in enumerate(HUES64_rep1_tfxn3_fs):
    if i == 0:
        HUES64_rep1_tfxn3 = pd.read_table(f, sep="\t")
        print(len(HUES64_rep1_tfxn3))
    else:
        tmp = pd.read_table(f, sep="\t")
        print(len(tmp))
        HUES64_rep1_tfxn3 = HUES64_rep1_tfxn3.merge(tmp, on="barcode")
HUES64_rep1_tfxn3["count"] = HUES64_rep1_tfxn3[["count_x", "count_y"]].sum(axis=1)
HUES64_rep1_tfxn3.drop(["count_x", "count_y"], axis=1, inplace=True)
HUES64_rep1_tfxn3.head()
```
### HUES64 rep 2
```
for i, f in enumerate(HUES64_rep2_tfxn1_fs):
    if i == 0:
        HUES64_rep2_tfxn1 = pd.read_table(f, sep="\t")
        print(len(HUES64_rep2_tfxn1))
    else:
        tmp = pd.read_table(f, sep="\t")
        print(len(tmp))
        HUES64_rep2_tfxn1 = HUES64_rep2_tfxn1.merge(tmp, on="barcode")
HUES64_rep2_tfxn1["count"] = HUES64_rep2_tfxn1[["count_x", "count_y"]].sum(axis=1)
HUES64_rep2_tfxn1.drop(["count_x", "count_y"], axis=1, inplace=True)
HUES64_rep2_tfxn1.head()

for i, f in enumerate(HUES64_rep2_tfxn2_fs):
    if i == 0:
        HUES64_rep2_tfxn2 = pd.read_table(f, sep="\t")
        print(len(HUES64_rep2_tfxn2))
    else:
        tmp = pd.read_table(f, sep="\t")
        print(len(tmp))
        HUES64_rep2_tfxn2 = HUES64_rep2_tfxn2.merge(tmp, on="barcode")
HUES64_rep2_tfxn2["count"] = HUES64_rep2_tfxn2[["count_x", "count_y"]].sum(axis=1)
HUES64_rep2_tfxn2.drop(["count_x", "count_y"], axis=1, inplace=True)
HUES64_rep2_tfxn2.head()

for i, f in enumerate(HUES64_rep2_tfxn3_fs):
    if i == 0:
        HUES64_rep2_tfxn3 = pd.read_table(f, sep="\t")
        print(len(HUES64_rep2_tfxn3))
    else:
        tmp = pd.read_table(f, sep="\t")
        print(len(tmp))
        HUES64_rep2_tfxn3 = HUES64_rep2_tfxn3.merge(tmp, on="barcode")
HUES64_rep2_tfxn3["count"] = HUES64_rep2_tfxn3[["count_x", "count_y"]].sum(axis=1)
HUES64_rep2_tfxn3.drop(["count_x", "count_y"], axis=1, inplace=True)
HUES64_rep2_tfxn3.head()
```
### HUES64 rep 3
```
for i, f in enumerate(HUES64_rep3_tfxn1_fs):
    if i == 0:
        HUES64_rep3_tfxn1 = pd.read_table(f, sep="\t")
        print(len(HUES64_rep3_tfxn1))
    else:
        tmp = pd.read_table(f, sep="\t")
        print(len(tmp))
        HUES64_rep3_tfxn1 = HUES64_rep3_tfxn1.merge(tmp, on="barcode")
HUES64_rep3_tfxn1["count"] = HUES64_rep3_tfxn1[["count_x", "count_y"]].sum(axis=1)
HUES64_rep3_tfxn1.drop(["count_x", "count_y"], axis=1, inplace=True)
HUES64_rep3_tfxn1.head()

for i, f in enumerate(HUES64_rep3_tfxn2_fs):
    if i == 0:
        HUES64_rep3_tfxn2 = pd.read_table(f, sep="\t")
        print(len(HUES64_rep3_tfxn2))
    else:
        tmp = pd.read_table(f, sep="\t")
        print(len(tmp))
        HUES64_rep3_tfxn2 = HUES64_rep3_tfxn2.merge(tmp, on="barcode")
HUES64_rep3_tfxn2["count"] = HUES64_rep3_tfxn2[["count_x", "count_y"]].sum(axis=1)
HUES64_rep3_tfxn2.drop(["count_x", "count_y"], axis=1, inplace=True)
HUES64_rep3_tfxn2.head()

for i, f in enumerate(HUES64_rep3_tfxn3_fs):
    if i == 0:
        HUES64_rep3_tfxn3 = pd.read_table(f, sep="\t")
        print(len(HUES64_rep3_tfxn3))
    else:
        tmp = pd.read_table(f, sep="\t")
        print(len(tmp))
        HUES64_rep3_tfxn3 = HUES64_rep3_tfxn3.merge(tmp, on="barcode")
HUES64_rep3_tfxn3["count"] = HUES64_rep3_tfxn3[["count_x", "count_y"]].sum(axis=1)
HUES64_rep3_tfxn3.drop(["count_x", "count_y"], axis=1, inplace=True)
HUES64_rep3_tfxn3.head()
```
### mESC rep 1
```
for i, f in enumerate(mESC_rep1_tfxn1_fs):
    if i == 0:
        mESC_rep1_tfxn1 = pd.read_table(f, sep="\t")
        print(len(mESC_rep1_tfxn1))
    else:
        tmp = pd.read_table(f, sep="\t")
        print(len(tmp))
        mESC_rep1_tfxn1 = mESC_rep1_tfxn1.merge(tmp, on="barcode")
mESC_rep1_tfxn1["count"] = mESC_rep1_tfxn1[["count_x", "count_y"]].sum(axis=1)
mESC_rep1_tfxn1.drop(["count_x", "count_y"], axis=1, inplace=True)
mESC_rep1_tfxn1.head()
```
### mESC rep 2
```
for i, f in enumerate(mESC_rep2_tfxn1_fs):
    if i == 0:
        mESC_rep2_tfxn1 = pd.read_table(f, sep="\t")
        print(len(mESC_rep2_tfxn1))
    else:
        tmp = pd.read_table(f, sep="\t")
        print(len(tmp))
        mESC_rep2_tfxn1 = mESC_rep2_tfxn1.merge(tmp, on="barcode")
# three files were merged, so a third count column ("count") survives the merges
mESC_rep2_tfxn1["count"] = mESC_rep2_tfxn1[["count_x", "count_y", "count"]].sum(axis=1)
mESC_rep2_tfxn1.drop(["count_x", "count_y"], axis=1, inplace=True)
mESC_rep2_tfxn1.head()
```
### mESC rep 3
```
for i, f in enumerate(mESC_rep3_tfxn1_fs):
    if i == 0:
        mESC_rep3_tfxn1 = pd.read_table(f, sep="\t")
        print(len(mESC_rep3_tfxn1))
    else:
        tmp = pd.read_table(f, sep="\t")
        print(len(tmp))
        mESC_rep3_tfxn1 = mESC_rep3_tfxn1.merge(tmp, on="barcode")
# three files were merged, so a third count column ("count") survives the merges
mESC_rep3_tfxn1["count"] = mESC_rep3_tfxn1[["count_x", "count_y", "count"]].sum(axis=1)
mESC_rep3_tfxn1.drop(["count_x", "count_y"], axis=1, inplace=True)
mESC_rep3_tfxn1.head()
```
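The twelve near-identical blocks above could be collapsed into one helper. A sketch, assuming (as above) that each file is tab-separated with `barcode` and `count` columns; the inner merge matches the per-replicate loops, so barcodes missing from any file are dropped.

```python
import pandas as pd

def merge_barcode_counts(files):
    """Read per-library barcode count files and sum counts per barcode.

    Each file is assumed to be tab-separated with 'barcode' and 'count'
    columns. Successive inner merges reproduce the loops above; every
    count column produced by the merges is then summed into one total.
    """
    merged = None
    for f in files:
        tmp = pd.read_table(f, sep="\t")
        merged = tmp if merged is None else merged.merge(tmp, on="barcode")
    count_cols = [c for c in merged.columns if c.startswith("count")]
    merged["count"] = merged[count_cols].sum(axis=1)
    return merged[["barcode", "count"]]
```

This also handles the three-file mESC replicates, where a bare `count` column survives the second merge alongside `count_x` and `count_y`.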
## 2. write files
```
HUES64_rep1_tfxn1.to_csv("../../../GEO_submission/MPRA__HUES64__rep1__tfxn1.BARCODES.txt", sep="\t", index=False)
HUES64_rep1_tfxn2.to_csv("../../../GEO_submission/MPRA__HUES64__rep1__tfxn2.BARCODES.txt", sep="\t", index=False)
HUES64_rep1_tfxn3.to_csv("../../../GEO_submission/MPRA__HUES64__rep1__tfxn3.BARCODES.txt", sep="\t", index=False)
HUES64_rep2_tfxn1.to_csv("../../../GEO_submission/MPRA__HUES64__rep2__tfxn1.BARCODES.txt", sep="\t", index=False)
HUES64_rep2_tfxn2.to_csv("../../../GEO_submission/MPRA__HUES64__rep2__tfxn2.BARCODES.txt", sep="\t", index=False)
HUES64_rep2_tfxn3.to_csv("../../../GEO_submission/MPRA__HUES64__rep2__tfxn3.BARCODES.txt", sep="\t", index=False)
HUES64_rep3_tfxn1.to_csv("../../../GEO_submission/MPRA__HUES64__rep3__tfxn1.BARCODES.txt", sep="\t", index=False)
HUES64_rep3_tfxn2.to_csv("../../../GEO_submission/MPRA__HUES64__rep3__tfxn2.BARCODES.txt", sep="\t", index=False)
HUES64_rep3_tfxn3.to_csv("../../../GEO_submission/MPRA__HUES64__rep3__tfxn3.BARCODES.txt", sep="\t", index=False)
mESC_rep1_tfxn1.to_csv("../../../GEO_submission/MPRA__mESC__rep1__tfxn1.BARCODES.txt", sep="\t", index=False)
mESC_rep2_tfxn1.to_csv("../../../GEO_submission/MPRA__mESC__rep2__tfxn1.BARCODES.txt", sep="\t", index=False)
mESC_rep3_tfxn1.to_csv("../../../GEO_submission/MPRA__mESC__rep3__tfxn1.BARCODES.txt", sep="\t", index=False)
```
# Building our operators: the Face Divergence
The divergence is the integral of a flux through a closed surface as that enclosed volume shrinks to a point. Since we have discretized and no longer have continuous functions, we cannot fully take the limit to a point; instead, we approximate it around some (finite!) volume: *a cell*. The flux out of the surface ($\vec{j} \cdot \vec{n}$) is actually how we discretized $\vec{j}$ onto our mesh (i.e. $\bf{j}$) except that the face normal points out of the cell (rather than in the axes direction). After fixing the direction of the face normal (multiplying by $\pm 1$), we only need to calculate the face areas and cell volume to create the discrete divergence matrix.
<img src="./images/Divergence.png" width=80% align="center">
<h4 align="center">Figure 4. Geometrical definition of the divergence and the discretization.</h4>
## Implementation
Although this is a really helpful way to think about what is happening conceptually, implementing it directly would mean a huge for loop over each cell. In practice, this would be slow, so instead we will take advantage of linear algebra. Let's start by looking at this in 1 dimension using the SimPEG Mesh class.
```
import numpy as np
from SimPEG import Mesh
import matplotlib.pyplot as plt
%matplotlib inline
plt.set_cmap(plt.get_cmap('viridis')) # use a nice colormap!
# define a 1D mesh
mesh1D = Mesh.TensorMesh([5]) # with 5 cells
fig, ax = plt.subplots(1,1, figsize=(12,2))
ax.plot(mesh1D.gridN, np.zeros(mesh1D.nN),'-k',marker='|',markeredgewidth=2, markersize=16)
ax.plot(mesh1D.gridCC,np.zeros(mesh1D.nC),'o')
ax.plot(mesh1D.gridFx,np.zeros(mesh1D.nFx),'>')
ax.set_title('1D Mesh')
# and define a vector of fluxes that live on the faces of the 1D mesh
face_vec = np.r_[0., 1., 2., 2., 1., 0.] # vector of fluxes that live on the faces of the mesh
print("The flux on the faces is {}".format(face_vec))
plt.plot(mesh1D.gridFx, face_vec, '-o')
plt.ylim([face_vec.min()-0.5, face_vec.max()+0.5])
plt.grid(which='both')
plt.title('face_vec');
```
Over a single cell, the divergence is
$$
\nabla \cdot \vec{j}(p) = \lim_{v \to \{p\}} \iint_{S(v)} \frac{\vec{j}\cdot \vec{n}}{v} \, dS
$$
in 1D, this collapses to taking a single difference - how much is going out of the cell vs coming in?
$$
\nabla \cdot \vec{j} \approx \frac{1}{v}(-j_{\text{left}} + j_{\text{right}})
$$
Since the normal of the x-face on the left side of the cell points in the positive x-direction, we multiply by -1 to get the flux going out of the cell. On the right, the normal defining the x-face points out of the cell, so it is positive.
```
# We can take the divergence over the entire mesh by looping over each cell
div_face_vec = np.zeros(mesh1D.nC) # allocate for each cell
for i in range(mesh1D.nC):  # loop over each cell
    div_face_vec[i] = 1.0/mesh1D.vol[i] * (-face_vec[i] + face_vec[i+1])
print("The face div of the 1D flux is {}".format(div_face_vec))
```
A for loop is easy to program the first time,
but it hides what is going on and can be slow!
Instead, we can build a faceDiv matrix (note: this is a silly way to do this!)
```
faceDiv = np.zeros([mesh1D.nC, mesh1D.nF])  # allocate space for a face div matrix
for i in range(mesh1D.nC):  # loop over each cell
    faceDiv[i, [i, i+1]] = 1.0/mesh1D.vol[i] * np.r_[-1, +1]
print("The 1D face div matrix for this mesh is \n{}".format(faceDiv))
assert np.all( faceDiv.dot(face_vec) == div_face_vec ) # make sure we get the same result!
print("\nThe face div of the 1D flux is still {}!".format(div_face_vec))
```
The above is still a loop... (and Python is not a fan of loops).
Also, if the mesh gets big, we are storing a lot of unnecessary zeros.
```
"There are {nnz} zeros (too many!) that we are storing".format(nnz = np.sum(faceDiv == 0))
```
### Working in Sparse
We will instead use *sparse* matrices. These live in scipy and act almost the same as numpy arrays (except they default to matrix multiplication), and they don't store all of those pesky zeros! We use [scipy.sparse](http://docs.scipy.org/doc/scipy/reference/sparse.html) to build these matrices.
```
import scipy.sparse as sp
from SimPEG.Utils import sdiag  # we often build sparse diagonal matrices, so we made a function in SimPEG!
# construct differencing matrix with diagonals -1, +1
sparse_diff = sp.spdiags((np.ones((mesh1D.nC+1, 1))*[-1, 1]).T, [0, 1], mesh1D.nC, mesh1D.nC+1, format="csr")
print("the sparse differencing matrix is \n{}".format(sparse_diff.todense()))
# account for the volume
faceDiv_sparse = sdiag(1./mesh1D.vol) * sparse_diff # account for volume
print("\n and the face divergence is \n{}".format(faceDiv_sparse.todense()))
print("\n but now we are only storing {nnz} nonzeros".format(nnz=faceDiv_sparse.nnz))
assert np.all(faceDiv_sparse.dot(face_vec) == div_face_vec)
print("\n and we get the same answer! {}".format(faceDiv_sparse * face_vec))
```
In SimPEG, this is stored as the `faceDiv` property on the mesh
```
print(mesh1D.faceDiv * face_vec) # and still gives us the same answer!
```
## Moving to 2D
To move up in dimensionality, we build a 2D mesh which has both x and y faces
```
mesh2D = Mesh.TensorMesh([100,80])
mesh2D.plotGrid()
plt.axis('tight');
```
We define 2 face functions, one in the x-direction and one in the y-direction. Here, we choose to work with sine functions as the continuous divergence is easy to compute, meaning we can test it!
```
jx_fct = lambda x, y: -np.sin(2.*np.pi*x)
jy_fct = lambda x, y: -np.sin(2.*np.pi*y)
jx_vec = jx_fct(mesh2D.gridFx[:,0], mesh2D.gridFx[:,1])
jy_vec = jy_fct(mesh2D.gridFy[:,0], mesh2D.gridFy[:,1])
j_vec = np.r_[jx_vec, jy_vec]
print("There are {nFx} x-faces and {nFy} y-faces, so the length of the "
"face function, j, is {lenj}".format(
nFx=mesh2D.nFx,
nFy=mesh2D.nFy,
lenj=len(j_vec)
))
plt.colorbar(mesh2D.plotImage(j_vec, 'F', view='vec')[0])
```
### But first... what does the matrix look like?
Now, we know that we do not want to loop over each of the cells and instead want to work with matrix-vector products. In this case, each row of the divergence matrix should pick out the two relevant faces in the x-direction and two in the y-direction (4 total).
When we unwrap our face function, we unwrap using column major ordering, so all of the x-faces are adjacent to one another, while the y-faces are separated by the number of cells in the x-direction (see [mesh.ipynb](mesh.ipynb) for more details!).
When we plot the divergence matrix, there will be 4 "diagonals",
- 2 that are due to the x-contribution
- 2 that are due to the y-contribution
Here, we define a small 2D mesh so that it is easier to see the matrix structure.
```
small_mesh2D = Mesh.TensorMesh([3,4])
print("Each y-face is {} entries apart".format(small_mesh2D.nCx))
print("and the total number of x-faces is {}".format(small_mesh2D.nFx))
print("So in the first row of the faceDiv, we have non-zero entries at \n{}".format(
small_mesh2D.faceDiv[0,:]))
```
Now, lets look at the matrix structure
```
fig, ax = plt.subplots(1,2, figsize=(12,4))
# plot the non-zero entries in the faceDiv
ax[0].spy(small_mesh2D.faceDiv, ms=2)
ax[0].set_xlabel('2D faceDiv')
small_mesh2D.plotGrid(ax=ax[1])
# Number the faces and plot. (We should really add this to SimPEG... pull request anyone!?)
xys = zip(
    small_mesh2D.gridFx[:, 0],
    small_mesh2D.gridFx[:, 1],
    range(small_mesh2D.nFx)
)
for x, y, ii in xys:
    ax[1].plot(x, y, 'r>')
    ax[1].text(x+0.01, y-0.02, ii, color='r')
xys = zip(
    small_mesh2D.gridFy[:, 0],
    small_mesh2D.gridFy[:, 1],
    range(small_mesh2D.nFy)
)
for x, y, ii in xys:
    ax[1].plot(x, y, 'g^')
    ax[1].text(x-0.02, y+0.02, ii+small_mesh2D.nFx, color='g')
ax[1].set_xlim((-0.1, 1.1));
ax[1].set_ylim((-0.1, 1.1));
```
How did we construct the matrix? - Kronecker products.
There is a handy identity that relates the vectorized face function to its matrix form (<a href = "https://en.wikipedia.org/wiki/Vectorization_(mathematics)#Compatibility_with_Kronecker_products">wikipedia link!</a>)
$$
\text{vec}(AUB^\top) = (B \otimes A) \text{vec}(U)
$$
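A quick numeric sanity check of this identity (illustrative only; note the vectorization must be column-major, i.e. Fortran order):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
U = np.arange(12).reshape(3, 4)
B = np.arange(8).reshape(2, 4)

# column-major (Fortran-order) vectorization, as in the identity
vec = lambda M: M.ravel(order="F")

lhs = vec(A @ U @ B.T)
rhs = np.kron(B, A) @ vec(U)
assert np.allclose(lhs, rhs)
```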
For the x-contribution:
- A is our 1D differential operator ([-1, +1] on the diagonals)
- U is $j_x$ (the x-face function as a matrix)
- B is just an identity
so
$$
\text{Div}_x \text{vec}(j_x) = (I \otimes Div_{1D}) \text{vec}(j_x)
$$
For the y-contribution:
- A is just an identity!
- U is $j_y$ (the y-face function as a matrix)
- B is our 1D differential operator ([-1, +1] on the diagonals)
so
$$
\text{Div}_y \text{vec}(j_y) = (\text{Div}_{1D} \otimes I) \text{vec}(j_y)
$$
$$
\text{Div} \cdot j = \text{Div}_x \cdot j_x + \text{Div}_y \cdot j_y = [\text{Div}_x, \text{Div}_y] \cdot [j_x; j_y]
$$
And $j$ is just $[j_x; j_y]$, so we can horizontally stack $\text{Div}_x$, $\text{Div}_y$
$$
\text{Div} = [\text{Div}_x, \text{Div}_y]
$$
You can check this out in the SimPEG docs by running **small_mesh2D.faceDiv??**
```
# small_mesh2D.faceDiv?? # check out the code!
```
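The Kronecker construction above can be sketched directly with `scipy.sparse`. This is a minimal sketch for a uniform tensor mesh on the unit square, not SimPEG's actual implementation; on a uniform mesh the face-area-over-cell-volume scaling collapses to `1/hx` and `1/hy`.

```python
import numpy as np
import scipy.sparse as sp

def face_div_2d(nx, ny):
    """2D face divergence on a uniform [0,1]^2 tensor mesh via Kronecker
    products: Div = [I kron Div_1D, Div_1D kron I] with volume scaling."""
    hx, hy = 1.0 / nx, 1.0 / ny

    def ddx(n):
        # 1D difference matrix with [-1, +1] on the diagonals
        return sp.spdiags((np.ones((n + 1, 1)) * [-1, 1]).T, [0, 1], n, n + 1)

    Dx = sp.kron(sp.identity(ny), ddx(nx))  # acts on the x-faces
    Dy = sp.kron(ddx(ny), sp.identity(nx))  # acts on the y-faces
    # face area / cell volume reduces to 1/hx and 1/hy on a uniform mesh
    return sp.hstack([Dx / hx, Dy / hy], format="csr")
```

Applying this to the linear flux $j_x = x$, $j_y = 0$ returns a divergence of exactly 1 in every cell, as it should.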
Now that we have a discrete divergence, lets check out the divergence of the face function we defined earlier.
```
Div_j = mesh2D.faceDiv * j_vec
fig, ax = plt.subplots(1,2, figsize=(8,4))
plt.colorbar(mesh2D.plotImage(j_vec, 'F', view='vec', ax=ax[0])[0],ax=ax[0])
plt.colorbar(mesh2D.plotImage(Div_j, ax=ax[1])[0],ax=ax[1])
ax[0].set_title('j')
ax[1].set_title('Div j')
plt.tight_layout()
```
### Are we right??
Since we chose a simple function,
$$
\vec{j} = - \sin(2\pi x) \hat{x} - \sin(2\pi y) \hat{y}
$$
we know the continuous divergence...
$$
\nabla \cdot \vec{j} = -2\pi (\cos(2\pi x) + \cos(2\pi y))
$$
So lets plot it and take a look
```
# from earlier
# jx_fct = lambda x, y: -np.sin(2*np.pi*x)
# jy_fct = lambda x, y: -np.sin(2*np.pi*y)
sol = lambda x, y: -2*np.pi*(np.cos(2*np.pi*x)+np.cos(2*np.pi*y))
cont_div_j = sol(mesh2D.gridCC[:,0], mesh2D.gridCC[:,1])
Div_j = mesh2D.faceDiv * j_vec
fig, ax = plt.subplots(1,2, figsize=(8,4))
plt.colorbar(mesh2D.plotImage(Div_j, ax=ax[0])[0],ax=ax[0])
plt.colorbar(mesh2D.plotImage(cont_div_j, ax=ax[1])[0],ax=ax[1])
ax[0].set_title('Discrete Div j')
ax[1].set_title('Continuous Div j')
plt.tight_layout()
```
Those look similar :)
### Order Test
We can do better than just an eye-ball comparison - since we are using a staggered grid with centered differences, the discretization should be second-order ($\mathcal{O}(h^2)$). That is, each time we halve the cell size, the error in our approximation of the divergence should drop by a factor of 4.
SimPEG has a number of testing functions for
[derivatives](http://docs.simpeg.xyz/content/api_core/api_Tests.html#SimPEG.Tests.checkDerivative)
and
[order of convergence](http://docs.simpeg.xyz/content/api_core/api_Tests.html#SimPEG.Tests.OrderTest)
to make our lives easier!
```
import unittest
from SimPEG.Tests import OrderTest
jx = lambda x, y: -np.sin(2*np.pi*x)
jy = lambda x, y: -np.sin(2*np.pi*y)
sol = lambda x, y: -2*np.pi*(np.cos(2*np.pi*x)+np.cos(2*np.pi*y))
class Testify(OrderTest):
    meshDimension = 2

    def getError(self):
        j = np.r_[jx(self.M.gridFx[:, 0], self.M.gridFx[:, 1]),
                  jy(self.M.gridFy[:, 0], self.M.gridFy[:, 1])]
        num = self.M.faceDiv * j  # numeric answer
        ans = sol(self.M.gridCC[:, 0], self.M.gridCC[:, 1])  # note M is a 2D mesh
        # use the infinity norm: as we refine the mesh, the number of cells
        # changes, so we need to be careful if using a 2-norm
        return np.linalg.norm((num - ans), np.inf)

    def test_order(self):
        self.orderTest()

# This just runs the unittest:
suite = unittest.TestLoader().loadTestsFromTestCase(Testify)
unittest.TextTestRunner().run(suite);
```
Looks good - Second order convergence!
## Next up ...
In the [next notebook](weakformulation.ipynb), we will explore how to use the weak formulation to discretize the DC equations.
# Synthetic seismogram
This notebook looks at the convolutional model of a seismic trace.
For a fuller example, see [Bianco, E. (2014)](https://github.com/seg/tutorials-2014/blob/master/1406_Make_a_synthetic/how_to_make_synthetic.ipynb) in *The Leading Edge*.
First, the usual preliminaries.
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
## Load geophysical data
We'll use `welly` (which wraps `lasio`) to facilitate loading curves from an LAS file.
```
from welly import Well
w = Well.from_las('../data/L-30.las')
dt = w.data["DT"]
rhob = w.data["RHOB"]
dt
```
<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>Convert the logs to SI units</li>
</ul>
</div>
```
dt =
rhob =
```
Compute velocity and thus acoustic impedance.
```
from utils import vp_from_dt, impedance, rc_series
vp = vp_from_dt(dt)
ai = impedance(vp, rhob)
z = dt.basis
plt.figure(figsize=(16, 2))
plt.plot(z, ai, lw=0.5)
plt.show()
```
## Depth to time conversion
The logs are in depth, but the seismic is in travel time. So we need to convert the well data to time.
We don't know the seismic time, but we can model it from the DT curve: since DT is 'elapsed time', in microseconds per metre, we can just add up all these time intervals for 'total elapsed time'. Then we can use that to 'look up' the time of a given depth.
We use the step size to scale the DT values to 'seconds per step' (instead of µs/m).
```
scaled_dt = dt.step * np.nan_to_num(dt) / 1e6 # Convert to seconds per step
```
<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>Do the arithmetic to find the timing of the top of the log.</li>
</ul>
</div>
```
dt.start, w.las.header['Well']['STRT']
kb = 0.3048 * w.las.header['Well']['KB'].value
gl = 0.3048 * w.las.header['Well']['GL'].value
start = dt.start
v_water = 1480
v_repl = 1800
water_layer = # Depth of water
repl_layer = # Thickness of replacement layer
water_twt = # TWT in water, using water_layer and v_water
repl_twt = # TWT in replacement layer, using repl_layer and v_repl
print("Water time: {:.3f} s\nRepl time: {:.3f} s".format(water_twt, repl_twt))
```
You should get

    Water time: 0.186 s
    Repl time: 0.233 s
Now finally we can compute the cumulative time elapsed on the DT log:
```
dt_time = water_twt + repl_twt + 2*np.cumsum(scaled_dt)
dt_time[-1]
```
And then use this to convert the logs to a time basis:
```
delt = 0.004 # Sample interval.
maxt = np.ceil(dt_time[-1]) # Max time that we need; just needs to be longer than the log.
# Make a regular time basis: the seismic time domain.
seis_time = np.arange(0, maxt, delt)
# Interpolate the AI log onto this basis.
ai_t = np.interp(seis_time, dt_time, ai)
# Let's do the depth 'log' too while we're at it.
z_t = np.interp(seis_time, dt_time, z)
```
<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>Make a time-conversion function to get time-converted logs from `delt`, `maxt`, `dt_time`, and a log.</li>
<li>Make a function to get `dt_time` from `kb`, `gl`, `dt`, `v_water`, `v_repl`.</li>
<li>Recompute `ai_t` by calling your new functions.</li>
<li>Plot the DT log in time.</li>
</ul>
</div>
```
def time_convert(log, dt_time, delt=0.004, maxt=3.0):
    """
    Converts log to the time domain, given dt_time, delt, and maxt.
    dt_time is elapsed time regularly sampled in depth. log must
    be sampled on the same depth basis.
    """
    # Your code here!
    return log_t

def compute_dt_time(dt, kb, gl, v_repl, v_water=1480):
    """
    Compute DT time from the dt log and some other variables.
    The DT log must be a welly curve object.
    """
    # Your code here!
    return dt_time
```
Now, at last, we can compute the reflection coefficients in time.
```
from utils import rc_vector
rc = rc_vector(ai_t)
rc[np.isnan(rc)] = 0
```
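The `rc_vector` helper comes from the course's `utils` module; its core is presumably the normal-incidence reflection-coefficient formula. A hedged sketch (the real `utils.rc_vector` may differ in detail):

```python
import numpy as np

def rc_vector_sketch(ai):
    """Normal-incidence reflection coefficients from acoustic impedance:
    RC_i = (AI_{i+1} - AI_i) / (AI_{i+1} + AI_i)."""
    ai = np.asarray(ai, dtype=float)
    return (ai[1:] - ai[:-1]) / (ai[1:] + ai[:-1])
```

Note the result is one sample shorter than the input, which is why the plots below slice the time basis with `seis_time[1:]`.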
Plotting these is a bit more fiddly, because we would like to show them as a sequence of spikes, rather than as a continuous curve, and matplotlib's `axvline` method wants everything in terms of fractions of the plot's dimensions, not as values in the data space.
```
plt.figure(figsize=(16, 2))
pts, stems, base = plt.stem(seis_time[1:], rc)
plt.setp(pts, markersize=0)
plt.setp(stems, lw=0.5)
plt.setp(base, lw=0.75)
plt.show()
```
## Impulsive wavelet
Convolve with a wavelet.
```
from bruges.filters import ricker

f = 25
# use a new name so we don't clobber the Well object `w` loaded earlier
wavelet, t = ricker(0.128, 0.004, f, return_t=True)
plt.plot(t, wavelet)
plt.show()

syn = np.convolve(rc, wavelet, mode='same')
plt.figure(figsize=(16, 2))
plt.plot(seis_time[1:], syn)
plt.show()
```
<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>Try to plot the RC series with the synthetic.</li>
<li>You'll need to zoom in a bit to see much; try using a slice of `[300:350]` on all x's and y's.</li>
</ul>
</div>
If the widgets don't show up, you might need to do this:

    jupyter nbextension enable --py widgetsnbextension
If we are recording with dynamite or even an airgun, this might be an acceptable model of the seismic. But if we're using Vibroseis, things get more complicated. To get a flavour, try another wavelet in `bruges.filters`, or check out the notebooks:
- [Vibroseis data](../notebooks/Vibroseis_data.ipynb)
- [Wavelets and sweeps](../notebooks/Wavelets_and_sweeps.ipynb)
## Compare with the seismic
```
seismic = np.loadtxt('../data/Penobscot_xl1155.txt')
syn.shape
```
The synthetic is at trace number 77. We need to make a shifted version of the synthetic to overplot.
```
tr = 77
gain = 50
s = tr + gain*syn
```
And we can define semi-real-world coordinates of the seismic data:
```
extent = (0, 400, 4.0, 0)
plt.figure(figsize=(10,20))
plt.imshow(seismic.T, cmap='Greys', extent=extent, aspect='auto')
plt.plot(s, seis_time[1:])
plt.fill_betweenx(seis_time[1:], tr, s, where=syn>0, lw=0)
plt.xlim(0, 400)
plt.ylim(3.2, 0)
plt.show()
```
<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>Load your tops data from `Reading data from files.ipynb` (using `from utils import tops` perhaps), or using the function you made in [`Practice functions`](Practice_functions.ipynb).</li>
<li>Use the time-converted 'depth', `z_t`, to convert depths to time.</li>
<li>Plot the tops on the seismic.</li>
</ul>
</div>
```
from utils import get_tops_from_file
tops = get_tops_from_file('../data/L-30_tops.txt')
```
<div class="alert alert-success">
<b>Exercise</b>:
<ul>
<li>Make functions for the wavelet creation, synthetic generation, and synthetic plotting steps.</li>
<li>Make a master function that takes the name of an LAS file, plus any other required info (such as `delt`), and returns a tuple of arrays: a time basis, and the synthetic amplitudes. You could make saving a plot optional.</li>
<li>Copy this notebook and make an offset synthetic for `R-39.las`, which has a shear-wave DT.</li>
</ul>
</div>
<hr />
<div>
<img src="https://avatars1.githubusercontent.com/u/1692321?s=50"><p style="text-align:center">© Agile Geoscience 2016</p>
</div>
<a href="https://colab.research.google.com/github/Serbeld/RX-COVID-19/blob/master/Detection5C_NormNew_v2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install lime
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import inception_v3
from tensorflow.keras.layers import Dense,Dropout,Flatten,Input,AveragePooling2D,BatchNormalization
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import cv2
import os
import lime
from lime import lime_image
from skimage.segmentation import mark_boundaries
import pandas as pd
plt.rcParams["figure.figsize"] = (10,5)
#Loading the dataset
!pip install h5py
import h5py
from google.colab import drive,files
drive.mount('/content/drive')
hdf5_path = '/content/drive/My Drive/Dataset5C/Dataset5C.hdf5'
dataset = h5py.File(hdf5_path, "r")
import numpy as np
import matplotlib.pylab as plt
#train
train_img = dataset["train_img"]
xt = np.array(train_img)
yt = np.array(dataset["train_labels"])
#test
testX = np.array(dataset["test_img"])
testY = np.array(dataset["test_labels"])
#Validation
xval = np.array(dataset["val_img"])
yval = np.array(dataset["val_labels"])
print("Training Shape: "+ str(xt.shape))
print("Validation Shape: "+ str(xval.shape))
print("Testing Shape: "+ str(testX.shape))
#Categorical values or OneHot
# to_categorical was already imported from tensorflow.keras.utils above
num_classes = 5
yt = to_categorical(yt, num_classes)
testY = to_categorical(testY, num_classes)
yval = to_categorical(yval, num_classes)
#Image
num_image = 15
print()
print('Healthy: [1 0 0 0 0]')
print('Pneumonia & Covid-19: [0 1 0 0 0]')
print('Cardiomegaly: [0 0 1 0 0]')
print('Other respiratory disease: [0 0 0 1 0]')
print('Pleural Effusion: [0 0 0 0 1]')
print()
print("Output: "+ str(yt[num_image]))
imagen = train_img[num_image]
plt.imshow(imagen)
plt.show()
## global params
INIT_LR = 1e-5 # learning rate
EPOCHS = 10 # training epochs
BS = 4 # batch size
## build network
from tensorflow.keras.models import load_model
#Inputs
inputs = Input(shape=(512, 512, 3), name='images')
inputs2 = BatchNormalization()(inputs)
#Inception Model
output1 = inception_v3.InceptionV3(include_top=False,weights= "imagenet",
input_shape=(512, 512, 3),
classes = 5)(inputs2)
#AveragePooling2D
output = AveragePooling2D(pool_size=(2, 2), strides=None,
padding='valid',name='AvgPooling')(output1)
#Flattened
output = Flatten(name='Flatten')(output)
#Dropout
output = Dropout(0.2,name='Dropout')(output)
#ReLU layer
output = Dense(10, activation = 'relu',name='ReLU')(output)
#Dense layer
output = Dense(5, activation='softmax',name='softmax')(output)
# the actual model train)
model = Model(inputs=inputs, outputs=output)
print("[INFO] compiling model...")
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model.compile(loss="categorical_crossentropy", optimizer=opt,
metrics=["accuracy"])
model.summary()
from tensorflow.keras.callbacks import ModelCheckpoint
model_checkpoint = ModelCheckpoint(filepath="/content/drive/My Drive/Dataset5C/Model",
monitor='val_loss', save_best_only=True)
## train
print("[INFO] training head...")
H = model.fit({'images': xt},
{'softmax': yt},
batch_size = BS,
epochs = EPOCHS,
validation_data=(xval, yval),
callbacks=[model_checkpoint],
shuffle=True)
#Load the best model trained
model = load_model("/content/drive/My Drive/Dataset5C/Model")
## eval
print("[INFO] evaluating network...")
print()
print("Loss: "+ str(round(model.evaluate(testX,testY,verbose=0)[0],2))+ " Acc: "+ str(round(model.evaluate(testX,testY,verbose=1)[1],2)))
print()
predIdxs = model.predict(testX)
predIdxs = np.argmax(predIdxs, axis=1) # argmax for the predicted probability
#print(classification_report(testY.argmax(axis=1), predIdxs,target_names=lb.classes_))
cm = confusion_matrix(testY.argmax(axis=1), predIdxs)
total = sum(sum(cm))
#print(total) #60
acc = (cm[0, 0] + cm[1, 1] + cm[2, 2] + cm[3,3]+ cm[4,4]) / total
#sensitivity = cm[0, 0] / (cm[0, 0] + cm[0, 1])
#specificity = cm[1, 1] / (cm[1, 0] + cm[1, 1])
# show the confusion matrix, accuracy, sensitivity, and specificity
print(cm)
print("acc: {:.4f}".format(acc))
#print("sensitivity: {:.4f}".format(sensitivity))
#print("specificity: {:.4f}".format(specificity))
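# The commented-out sensitivity/specificity formulas above only apply to a
# binary problem. A hedged sketch of per-class sensitivity (recall) for the
# 5-class case, assuming confusion-matrix rows are true labels and columns
# are predictions:
def per_class_sensitivity(cm):
    cm = np.asarray(cm, dtype=float)
    return cm.diagonal() / cm.sum(axis=1)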
## explain
N = EPOCHS
plt.style.use("ggplot")
plt.figure(1)
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_accuracy"], label="val_acc")
plt.title("Precision of COVID-19 detection.")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
#plt.axis([0, EPOCHS, 0.3, 0.9])
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_cero_plot_Inception_2nd_time.png")
plt.show()
import cv2
plt.figure(2)
for ind in range(1):
    explainer = lime_image.LimeImageExplainer()
    # explain testX[ind] so the image matches the label printed below
    explanation = explainer.explain_instance(testX[ind], model.predict,
                                             hide_color=0, num_samples=42)
    print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
    temp, mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=True)
    mask = np.array(mark_boundaries(temp/2 + 1, mask))
    #print(mask.shape)
    imagen = testX[ind]
    imagen[:, :, 0] = imagen[:, :, 2]
    imagen[:, :, 1] = imagen[:, :, 2]
    mask[:, :, 0] = mask[:, :, 2]
    mask[:, :, 1] = mask[:, :, 2]
    plt.imshow((mask + imagen)/255)
    plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Normal"+str(ind)+".png")
    plt.show()
plt.figure(3)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
kernel = np.ones((50,50),np.uint8)
mask = cv2.dilate(mask,kernel,iterations = 1)
mask = cv2.blur(mask,(30,30))
plt.imshow((mask +imagen)/255)
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Light"+str(ind)+".png")
plt.show()
plt.figure(4)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=3, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
kernel = np.ones((50,50),np.uint8)
mask = cv2.dilate(mask,kernel,iterations = 1)
mask = cv2.blur(mask,(30,30))
mask = np.array(mask, dtype=np.uint8)
mask = cv2.medianBlur(mask,5)
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
mask = cv2.applyColorMap(mask, cv2.COLORMAP_HOT) #heatmap
end = cv2.addWeighted((imagen/255), 0.7, mask/255, 0.3, 0)
plt.imshow((end))
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Heat_map_purple"+str(ind)+".png")
plt.show()
plt.figure(5)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=2, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
kernel = np.ones((30,30),np.uint8)
mask = cv2.dilate(mask,kernel,iterations = 2)
mask = cv2.blur(mask,(30,30))
mask = cv2.blur(mask,(30,30))
mask = np.array(mask, dtype=np.uint8)
mask = cv2.medianBlur(mask,5)
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
mask2 = cv2.applyColorMap((mask), cv2.COLORMAP_JET) #heatmap
mask = cv2.blur(mask,(60,60))
mask = cv2.applyColorMap(mask, cv2.COLORMAP_HOT) #heatmap
mask = ((mask*1.1 + mask2*0.7)/255)*(3/2)
end = cv2.addWeighted(imagen/255, 0.8, mask2/255, 0.3, 0)
#end = cv2.addWeighted(end, 0.8, mask/255, 0.2, 0)
plt.imshow((end))
cv2.imwrite("/content/drive/My Drive/Maps/Heat_map"+str(ind)+".png",end*255)
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Heat_map"+str(ind)+".png")
plt.show()
plt.figure(6)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=1, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
kernel = np.ones((30,30),np.uint8)
mask = cv2.dilate(mask,kernel,iterations = 2)
mask = cv2.blur(mask,(30,30))
mask = cv2.blur(mask,(30,30))
mask = np.array(mask, dtype=np.uint8)
mask = cv2.medianBlur(mask,5)
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
mask2 = cv2.applyColorMap((mask), cv2.COLORMAP_JET) #heatmap
mask = cv2.blur(mask,(60,60))
mask = cv2.applyColorMap(mask, cv2.COLORMAP_HOT) #heatmap
mask = ((mask*1.1 + mask2*0.7)/255)*(3/2)
end = cv2.addWeighted(imagen/255, 0.8, mask2/255, 0.3, 0)
#end = cv2.addWeighted(end, 0.8, mask/255, 0.2, 0)
deep = np.reshape(end,newshape=(512,512,3),order='C')
CHANNEL1=deep[:,:,2]
CHANNEL2=deep[:,:,0]
deep[:,:,0] = CHANNEL1
#deep[:,:,2] = CHANNEL2
plt.imshow((deep))
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Heat_map_ma"+str(ind)+".png")
plt.show()
```
# RidgeRegression with Scale & Power Transformer
This code template performs regression analysis using Ridge Regression, with the feature rescaling technique `scale` and the feature transformation technique `PowerTransformer` combined in a pipeline. Ridge Regression is also known as Tikhonov regularization.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline,make_pipeline
from sklearn.preprocessing import scale,PowerTransformer
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the `head` function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since most machine learning models in the sklearn library don't handle string categories or null values, we have to explicitly remove or replace them. The snippet below defines functions that remove null values, if any exist, and convert string-class data in the dataset by encoding it to integer classes.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
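Conceptually, the call above just shuffles the row indices and cuts them at the chosen ratio; a stdlib-only sketch of the idea (`train_test_split` additionally supports stratification and slicing multiple arrays at once):

```python
import random

def simple_split(rows, test_size=0.2, seed=123):
    # Shuffle the row indices reproducibly, then slice off the test fraction.
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    cut = int(len(rows) * (1 - test_size))
    return [rows[i] for i in idx[:cut]], [rows[i] for i in idx[cut:]]

train, test = simple_split(list(range(10)), test_size=0.2)
print(len(train), len(test))  # 8 2
```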
### Data Rescaling
<Code>scale</Code> standardizes a dataset along any axis. It standardizes features by removing the mean and scaling to unit variance.
scale is similar to <Code>StandardScaler</Code> in terms of the transformation applied, but unlike StandardScaler, it lacks the Transformer API, i.e., it does not have <Code>fit_transform</Code>, <Code>transform</Code>, and other related methods.
```
x_train =scale(x_train)
x_test = scale(x_test)
```
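For intuition, `scale` is just the column-wise $(x - \bar{x})/\sigma$ transform; a minimal pure-Python sketch on made-up numbers, not tied to the dataset above:

```python
def standardize(column):
    # Center to zero mean, then scale to unit (population) variance,
    # which is what sklearn's scale() does to each feature column.
    n = len(column)
    mean = sum(column) / n
    std = (sum((x - mean) ** 2 for x in column) / n) ** 0.5
    return [(x - mean) / std for x in column]

scaled = standardize([2.0, 4.0, 6.0, 8.0])
print(scaled)  # zero mean, unit variance
```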
### Feature Transformation
Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.
##### For more information on PowerTransformer [ click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html)
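For rough intuition (not sklearn's fitted `PowerTransformer`, which estimates the exponent $\lambda$ from the data by maximum likelihood), the Box-Cox family applies $x \mapsto (x^\lambda - 1)/\lambda$ for $\lambda \neq 0$ and $\log x$ for $\lambda = 0$:

```python
import math

def box_cox(x, lam):
    # Box-Cox transform of a single positive value for a given lambda;
    # PowerTransformer instead estimates lambda from the data.
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1) / lam

print(box_cox(10.0, 0))    # natural log
print(box_cox(4.0, 0.5))   # (sqrt(4) - 1) / 0.5 = 2.0
```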
### Model
Ridge regression addresses some of the problems of Ordinary Least Squares by imposing a penalty on the size of the coefficients. The ridge coefficients minimize a penalized residual sum of squares:
\begin{equation*}
\min_{w} || X w - y||_2^2 + \alpha ||w||_2^2
\end{equation*}
The complexity parameter $\alpha$ controls the amount of shrinkage: the larger the value of $\alpha$, the greater the amount of shrinkage, and thus the coefficients become more robust to collinearity.
This model solves a regression model where the loss function is the linear least squares function and regularization is given by the l2-norm. Also known as Ridge Regression or Tikhonov regularization. This estimator has built-in support for multi-variate regression (i.e., when y is a 2d-array of shape (n_samples, n_targets)).
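The penalized objective above has the closed-form solution $w = (X^{T}X + \alpha I)^{-1}X^{T}y$; a hand-rolled NumPy sketch on toy numbers (sklearn's `Ridge` solvers are more careful numerically, so this is only an illustration):

```python
import numpy as np

def ridge_fit(X, y, alpha):
    # Closed-form ridge solution: w = (X^T X + alpha * I)^{-1} X^T y
    n_features = X.shape[1]
    A = X.T @ X + alpha * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
print(ridge_fit(X, y, 0.0))   # alpha=0 reduces to ordinary least squares: [1. 2.]
print(ridge_fit(X, y, 10.0))  # larger alpha shrinks the coefficients toward zero
```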
#### Model Tuning Parameters
> **alpha** -> Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization.
> **solver** -> Solver to use in the computational routines {‘auto’, ‘svd’, ‘cholesky’, ‘lsqr’, ‘sparse_cg’, ‘sag’, ‘saga’}
```
model=make_pipeline(PowerTransformer(), Ridge(random_state=123))
model.fit(x_train,y_train)
```
#### Model Accuracy
We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.
> **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction.
```
y_pred=model.predict(x_test)
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the coefficient of determination, i.e., the proportion of the variability in the target that is explained by our model.
> **mae**: The **mean absolute error** function calculates the total error as the average absolute distance between the real data and the predicted data.
> **mse**: The **mean squared error** function averages the squared errors, penalizing the model for large errors.
```
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
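As a reminder of what the two error metrics above compute, a plain-Python sketch on toy numbers (sklearn's functions do the same with extra options):

```python
def mae(y_true, y_pred):
    # Mean absolute error: average absolute distance between truth and prediction.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    # Mean squared error: average squared distance, penalizing large errors more.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(mae([3.0, 5.0], [2.0, 7.0]))  # (1 + 2) / 2 = 1.5
print(mse([3.0, 5.0], [2.0, 7.0]))  # (1 + 4) / 2 = 2.5
```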
#### Prediction Plot
First, we plot the actual observations from the test set, with the record number on the x-axis and `y_test` on the y-axis.
Then we overlay the model's predictions for the same test records.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Ganapathi Thota , Github: [Profile](https://github.com/Shikiz)
# Understanding Principal Component Analysis
**Outline**
* [Introduction](#intro)
* [Assumption and derivation](#derive)
* [PCA Example](#example)
* [PCA Usage](#usage)
```
%load_ext watermark
%matplotlib inline
# %config InlineBackend.figure_format='retina'
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
import math
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.metrics import accuracy_score
%watermark -a 'Johnny' -d -t -v -p numpy,pandas,matplotlib,sklearn
```
---
## <a id="intro">Introduction</a>
When we have two features that are highly correlated with each other, we may not want to include both of them in our model. In [Lasso and Ridge regression](http://nbviewer.jupyter.org/github/johnnychiuchiu/Machine-Learning/blob/master/LinearRegression/linearRegressionModelBuilding.ipynb#ridge), what they do is fit a model with all the predictors but add a penalty term, either the L1 or L2 norm of the regression coefficients, which shrinks the coefficient estimates towards zero. In other words, they try to pick some predictors out of all the predictors in order to reduce the dimension of our column space.
Principal Component Analysis (PCA) is another type of dimension reduction method. What PCA is all about is **finding the directions of maximum variance in high-dimensional data and projecting it onto a smaller-dimensional subspace while retaining most of the information.** The main idea and motivation is that each of the $n$ observations lives in $p$-dimensional space, but not all of these dimensions are equally interesting. PCA seeks a small number of dimensions that are as interesting as possible. The concept of *interesting* is measured by the amount that the observations vary along each dimension.
Note that PCA is just a linear transformation method. It projects our high-dimensional data onto a new set of axes, each pointing in a direction of maximum remaining variance. In other words, the orthogonality of principal components implies that PCA finds the most uncorrelated components to explain as much variation in the data as possible. We can then pick the number of directions, i.e. components, we want to keep while retaining most of the information in the original data. The direction of highest variance is called the first principal component, the second highest is called the second principal component, and so on.
In PCA, we find that the first principal component is obtained by doing an eigendecomposition of the covariance matrix of X, and the eigenvector with the largest eigenvalue is our first principal component, in the sense that every vector in the span of this eigenvector stretches by the largest amount, since eigenvalues are the factors by which the eigenvectors stretch or squish during the transformation. Therefore, we can sort the top k components by the eigenvalues found from the eigendecomposition of the covariance matrix of X.
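A two-line illustration of that stretch-factor interpretation, on a toy matrix rather than real data:

```python
import numpy as np

A = np.array([[3.0, 0.0], [0.0, 1.0]])  # a map that stretches the x-axis by 3
eigvals, eigvecs = np.linalg.eigh(A)
v = eigvecs[:, np.argmax(eigvals)]      # eigenvector with the largest eigenvalue
print(A @ v, eigvals.max() * v)         # A v equals lambda v: v is only stretched
```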
**Application of PCA**
* We can use PCA as a tool for data visualization. For instance, if we can obtain a two-dimensional representation of the data that captures most of the information, then we can plot the observations in this low-dimensional space.
* We can use principal components as predictors in a regression model in place of the original larger set of variables.
---
## <a id="derive">Assumption and derivation</a>
**Assumption** for PCA before we derive the whole process are
* Since we are only interested in variance, we assume that each of the variables in $X$ has been centered to have mean zero, i.e. the column means of $X$ are zero.
**Method Derivation**
Assume we have $n$ observations and a set of features $X1, X2, X3, \dots, Xp$. In other words, we have
\begin{pmatrix}
x_{1,1} & x_{1,2} & \cdots & x_{1,p} \\
x_{2,1} & x_{2,2} & \cdots & x_{2,p} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n,1} & x_{n,2} & \cdots & x_{n,p}
\end{pmatrix}
where
\begin{equation*}
X1 = \begin{bmatrix}
x_{1,1} \\
x_{2,1} \\
\vdots \\
x_{n,1}
\end{bmatrix}
\end{equation*}
PCA will try to find a low-dimensional representation of the dataset that contains as much of the variance as possible. The idea is that each of the n observations lives in p-dimensional space, but not all of these dimensions are equally interesting. PCA seeks a small number of dimensions that are as interesting as possible. Let's see how these dimensions, or *principal components*, are found.
Given $n \times p$ data set $X$, how do we compute the first principal component? We look for the linear combination of the sample feature values of the form
$$z_{i,1} = \phi_{1,1}x_{i,1}+\phi_{2,1}x_{i,2}+\dots+\phi_{p,1}x_{i,p}$$
where
$1 \leq i \leq n$, and $\phi_1$ denotes the first principal component loading vector, which is
\begin{equation*}
\phi_1=\begin{pmatrix}
\phi_{1,1} \\
\phi_{2,1} \\
\vdots \\
\phi_{p,1}
\end{pmatrix}
\end{equation*}
We'll have n values of $z_1$, and we want to look for the linear combination that has the largest sample variance. More formally,
\begin{equation*}
Z_1
=
\begin{pmatrix}
z_{1,1} \\
z_{2,1} \\
\vdots \\
z_{n,1}
\end{pmatrix}
=
\begin{pmatrix}
\phi_{1,1}x_{1,1} + \phi_{2,1}x_{1,2} + \cdots + \phi_{p,1}x_{1,p} \\
\phi_{1,1}x_{2,1} + \phi_{2,1}x_{2,2} + \cdots + \phi_{p,1}x_{2,p} \\
\vdots \\
\phi_{1,1}x_{n,1} + \phi_{2,1}x_{n,2} + \cdots + \phi_{p,1}x_{n,p}
\end{pmatrix}
=
\begin{pmatrix}
\phi_{1,1}
\phi_{2,1}
\dots
\phi_{p,1}
\end{pmatrix}
\begin{pmatrix}
x_{1,1} & x_{1,2} & \cdots & x_{1,p} \\
x_{2,1} & x_{2,2} & \cdots & x_{2,p} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n,1} & x_{n,2} & \cdots & x_{n,p}
\end{pmatrix}
=
\phi_{1,1}X_{1}+\phi_{2,1}X_{2}+\dots+\phi_{p,1}X_{p}
=
\phi_1^T X
\end{equation*}
We assume that each of the variables in $X$ has been centered to have mean zero, i.e., the column means of $X$ are zero. Therefore, $E(X_i)=0$ for $i$ in $1,\dots,p$. It follows immediately that $E(Z_1)=E(\phi_{1,1}X_{1}+\phi_{2,1}X_{2}+\dots+\phi_{p,1}X_{p}) = 0$
Therefore, the variance of $Z_1$ is
$$Var(Z_1) = E\Big[[Z_1-E(Z_1)][Z_1-E(Z_1)]^T\Big] = E\Big[Z_1 Z_1^T \Big] = E\Big[(\phi_1^T X) (\phi_1^T X)^T \Big] = E\Big[\phi_1^T X X^T \phi_1\Big] = \phi_1^T E[X X^T] \phi_1$$
We also know that the [covariance matrix](https://en.wikipedia.org/wiki/Covariance_matrix) of X is
$$C = Cov(X) = E\Big[[X-E(X)][X-E(X)]^T\Big] = E[X X^T]$$
Hence, the $Var(Z_1)= \phi_1^T E[X X^T] \phi_1 = \phi_1^T C \phi_1$
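The identity $Var(Z_1) = \phi_1^T C \phi_1$ can be checked numerically on random data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
X = X - X.mean(axis=0)              # center the columns, as the derivation assumes

phi = np.array([0.6, 0.8, 0.0])     # an arbitrary unit-norm loading vector
z = X @ phi                         # scores z_i = phi^T x_i

C = np.cov(X, rowvar=False)         # sample covariance matrix of X
print(z.var(ddof=1), phi @ C @ phi) # the two quantities agree up to floating point
```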
Apart from finding the largest sample variance, we also constrain the loadings so that their sum of squares is equal to one, since otherwise setting these elements to be arbitrarily large in absolute value could result in an arbitrarily large variance. More formally,
$$\sum_{j=1}^{p}\phi_{j1}^2=1$$
In other words, the first principal component loading vector solves the optimization problem
$$\text{maximize}_{\phi_1} \quad \phi_1^T C \phi_1$$
$$\text{subject to} \quad \sum_{j=1}^{p}\phi_{j1}^2 = \phi_1^T \phi_1 = 1$$
This constrained objective can be solved with a Lagrange multiplier, by maximizing the Lagrangian:
$$L = \phi_1^T C \phi_1 - \lambda(\phi_1^T \phi_1 - 1)$$
Next, to solve for $\phi_1$, we set the partial derivative of $L$ with respect to $\phi_1$ to 0.
$$\frac{\partial L}{\partial \phi_1} = 2C\phi_1 - 2\lambda \phi_1 = 0$$
$$ C\phi_1 = \lambda \phi_1 $$
Surprisingly, we see that it is actually an eigendecomposition problem. To refresh our minds a little bit, here is a very good [youtube video](https://www.youtube.com/watch?v=PFDu9oVAE-g&index=14&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab) explaining what eigenvalues and eigenvectors are in a very geometrical way.
Therefore, from the equation above, we pick $\phi_1$ as the eigenvector associated with the largest eigenvalue.
Also, most data can’t be well-described by a single principal component. Typically, we compute multiple principal components by computing all eigenvectors of the covariance matrix of $X$ and ranking them by their eigenvalues. After sorting the eigenpairs, the next question is “how many principal components are we going to choose for our new feature subspace?” A useful measure is the so-called “explained variance,” which can be calculated from the eigenvalues. The explained variance tells us how much information (variance) can be attributed to each of the principal components.
To sum up, here are the **steps that we take to perform a PCA analysis**
1. Standardize the data.
2. Obtain the Eigenvectors and Eigenvalues from the covariance matrix (technically the correlation matrix after performing the standardization).
3. Sort eigenvalues in descending order and choose the k eigenvectors that correspond to the k largest eigenvalues where k is the number of dimensions of the new feature subspace.
4. Projection onto the new feature space. During this step we will take the top k eigenvectors and use it to transform the original dataset X to obtain a k-dimensional feature subspace X′.
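The four steps above can be sketched directly with NumPy on a toy matrix (sklearn's `PCA` uses SVD internally, but the eigendecomposition route finds the same subspace):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                  # toy data: 50 observations x 3 features

# 1. Standardize (center each column; here also scale to unit variance).
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# 2. Eigendecomposition of the covariance matrix.
cov = np.cov(X_std, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns ascending eigenvalues

# 3. Sort eigenpairs in descending order and keep the top k.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
explained = eigvals / eigvals.sum()           # explained-variance ratio per component
k = 2

# 4. Project onto the new k-dimensional feature subspace.
X_proj = X_std @ eigvecs[:, :k]
print(explained, X_proj.shape)
```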
---
## <a id="process">PCA Analysis Example</a>
Let's use the classical IRIS data to illustrate the topics that we just covered, including
* What are the explained variance of each component? How many component should we pick?
* How will the scatter plot be if we plot in the dimension of first and second component?
```
# Read Data
df = pd.read_csv(
filepath_or_buffer='https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data',
header=None,
sep=',')
df.columns=['sepal_len', 'sepal_wid', 'petal_len', 'petal_wid', 'class']
df.dropna(how="all", inplace=True) # drops the empty line at file-end
df.tail()
# split data table into data X and class labels y
X = df.iloc[:,0:4].values
y = df.iloc[:,4].values
```
**EDA**
To get a feeling for how the 3 different flower classes are distributed along the 4 different features, let us visualize them via histograms.
```
def plot_iris():
label_dict = {1: 'Iris-Setosa',
2: 'Iris-Versicolor',
3: 'Iris-Virgnica'}
feature_dict = {0: 'sepal length [cm]',
1: 'sepal width [cm]',
2: 'petal length [cm]',
3: 'petal width [cm]'}
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(8, 6))
for cnt in range(4):
plt.subplot(2, 2, cnt+1)
for lab in ('Iris-setosa', 'Iris-versicolor', 'Iris-virginica'):
plt.hist(X[y==lab, cnt],
label=lab,
bins=10,
alpha=0.3,)
plt.xlabel(feature_dict[cnt])
plt.legend(loc='upper right', fancybox=True, fontsize=8)
plt.tight_layout()
plt.show()
plot_iris()
```
## Process
### 1. Standardize the data
```
# create a StandardScaler object
scaler = StandardScaler()
# fit and then transform to get the standardized dataset
scaler.fit(X)
X_std = scaler.transform(X)
```
### 2. Do eigendecomposition and sort eigenvalues in descending order
```
# n_components: Number of components to keep
# if n_components is not set all components are kept
my_pca = PCA(n_components=None)
my_pca.fit(X_std)
def plot_var_explained(var_exp, figsize=(6,4)):
"""variance explained per component plot"""
# get culmulative variance explained
cum_var_exp = np.cumsum(var_exp)
# plot
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=figsize)
plt.bar(range(len(var_exp)), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(len(var_exp)), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
var_exp = my_pca.explained_variance_ratio_
plot_var_explained(var_exp, figsize=(6,4))
# plot a simpler version of the bar chart
pd.DataFrame(my_pca.explained_variance_ratio_).plot.bar()
```
The plot above clearly shows that most of the variance (72.77% of the variance, to be precise) can be explained by the first principal component alone. The second principal component still bears some information (23.03%), while the third and fourth principal components can safely be dropped without losing too much information. Together, the first two principal components contain 95.8% of the information.
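Choosing how many components to keep from these ratios is just a cumulative-sum threshold; a small stdlib-only sketch (the 95% threshold is an arbitrary choice):

```python
def components_needed(var_exp, threshold=0.95):
    # Return the smallest k whose cumulative explained variance meets the threshold.
    total = 0.0
    for k, v in enumerate(var_exp, start=1):
        total += v
        if total >= threshold:
            return k
    return len(var_exp)

print(components_needed([0.7277, 0.2303, 0.0368, 0.0052]))  # 2
```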
### 3. Check the scores within each principal component
```
PC_df = pd.DataFrame(my_pca.components_,columns=df.iloc[:,0:4].columns).transpose()
PC_df
import seaborn as sns
plt.figure(figsize=None) #(4,4)
sns.heatmap(PC_df,cmap="RdBu_r",annot=PC_df.values, linewidths=1, center=0)
```
From the above heatmap & table, we can see that the first component consists of all 4 features, with a smaller weight on sepal_wid.
### 4. Projection onto the new feature space
During this step we will take the top k eigenvectors and use it to transform the original dataset X to obtain a k-dimensional feature subspace X′.
```
sklearn_pca = PCA(n_components=2)
Y_sklearn = sklearn_pca.fit_transform(X_std)
Y_sklearn[1:10]
```
Each list in the array above shows the projected values of one observation onto the first two principal components. If we want to fit a model using the data projected onto its first 2 principal components, then `Y_sklearn` is the data we want to use.
## <a id="usage">PCA Usage</a>
### Data Visualization
We can use PCA as a tool for data visualization. For instance, if we can obtain a two-dimensional representation of the data that captures most of the information, then we can plot the observations in this low-dimensional space.
Let's see how it will be like using IRIS data if we plot it out in the first two principal components.
```
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(6, 4))
for lab, col in zip(('Iris-setosa', 'Iris-versicolor', 'Iris-virginica'),
('blue', 'red', 'green')):
plt.scatter(Y_sklearn[y==lab, 0],
Y_sklearn[y==lab, 1],
label=lab,
c=col)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.legend(loc='lower center')
plt.tight_layout()
plt.show()
```
### Principal Component Regression
We can use principal components as predictors in a regression model in place of the original larger set of variables.
Let's compare the result of logistic regression using all the features with one using only the first two components.
```
# the code is copied from Ethen's PCA blog post, which is listed in the reference.
# split 30% of the iris data into a test set for evaluation
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size = 0.3, random_state = 1)
# create the pipeline, where we'll
# standardize the data, perform PCA and
# fit the logistic regression
pipeline1 = Pipeline([
('standardize', StandardScaler()),
('pca', PCA(n_components = 2)),
('logistic', LogisticRegression(random_state = 1))
])
pipeline1.fit(X_train, y_train)
y_pred1 = pipeline1.predict(X_test)
# pipeline without PCA
pipeline2 = Pipeline([
('standardize', StandardScaler()),
('logistic', LogisticRegression(random_state = 1))
])
pipeline2.fit(X_train, y_train)
y_pred2 = pipeline2.predict(X_test)
# access the prediction accuracy
print('PCA Accuracy %.3f' % accuracy_score(y_test, y_pred1))
print('Accuracy %.3f' % accuracy_score(y_test, y_pred2))
```
We saw that by using only the first two components, the accuracy drops by only 0.022, about 2-3 percentage points from the original accuracy. In fact, by using the first three principal components, we can get the same accuracy as the original model with all the features.
### Reference
* [PCA in 3 steps](http://sebastianraschka.com/Articles/2015_pca_in_3_steps.html)
* [Everything you did and didn't know about PCA](http://alexhwilliams.info/itsneuronalblog/2016/03/27/pca/)
* [Ethen: Principal Component Analysis (PCA) from scratch](http://nbviewer.jupyter.org/github/ethen8181/machine-learning/blob/master/dim_reduct/PCA.ipynb)
* [Wiki: Matrix Multiplication](https://en.wikipedia.org/wiki/Matrix_multiplication)
* [Sklearn: Pipelining: chaining a PCA and a logistic regression](http://scikit-learn.org/stable/auto_examples/plot_digits_pipe.html#sphx-glr-auto-examples-plot-digits-pipe-py)
# Chatbot using Seq2Seq LSTM models
In this notebook, we will assemble a seq2seq LSTM model using Keras Functional API to create a working Chatbot which would answer questions asked to it.
Chatbots have become applications themselves. You can choose the field or stream and gather data regarding various questions. We can build a chatbot for an e-commerce website or a school website where parents could get information about the school.
Messaging platforms like Allo have implemented chatbot services to engage users. The famous [Google Assistant](https://assistant.google.com/), [Siri](https://www.apple.com/in/siri/), [Cortana](https://www.microsoft.com/en-in/windows/cortana) and [Alexa](https://www.alexa.com/) may have been built using similar models.
So, let's start building our Chatbot.
## 1) Importing the packages
We will import [TensorFlow](https://www.tensorflow.org) and our beloved [Keras](https://www.tensorflow.org/guide/keras). Also, we import other modules which help in defining model layers.
```
import numpy as np
import tensorflow as tf
import pickle
from tensorflow.keras import layers , activations , models , preprocessing
```
## 2) Preprocessing the data
### A) Download the data
The dataset hails from [chatterbot/english on Kaggle](https://www.kaggle.com/kausr25/chatterbotenglish) by [kausr25](https://www.kaggle.com/kausr25). It contains pairs of questions and answers based on a number of subjects like food, history, AI etc.
The raw data could be found from this repo -> https://github.com/shubham0204/Dataset_Archives
```
!wget https://github.com/shubham0204/Dataset_Archives/blob/master/chatbot_nlp.zip?raw=true -O chatbot_nlp.zip
!unzip chatbot_nlp.zip
```
### B) Reading the data from the files
We parse each of the `.yaml` files.
* Concatenate two or more sentences if the answer has two or more of them.
* Remove unwanted data types which are produced while parsing the data.
* Append `<START>` and `<END>` to all the `answers`.
* Create a `Tokenizer` and load the whole vocabulary ( `questions` + `answers` ) into it.
```
from tensorflow.keras import preprocessing , utils
import os
import yaml
dir_path = 'chatbot_nlp/data'
files_list = os.listdir(dir_path + os.sep)
questions = list()
answers = list()
for filepath in files_list:
stream = open( dir_path + os.sep + filepath , 'rb')
docs = yaml.safe_load(stream)
conversations = docs['conversations']
for con in conversations:
if len( con ) > 2 :
questions.append(con[0])
replies = con[ 1 : ]
ans = ''
for rep in replies:
ans += ' ' + rep
answers.append( ans )
elif len( con )> 1:
questions.append(con[0])
answers.append(con[1])
answers_with_tags = list()
questions_filtered = list()
# Keep only pairs whose answer parsed as a string; popping from `questions`
# while indexing `answers` would shift the remaining indices out of alignment.
for question , answer in zip( questions , answers ):
    if type( answer ) == str:
        questions_filtered.append( question )
        answers_with_tags.append( answer )
questions = questions_filtered
answers = list()
for i in range( len( answers_with_tags ) ) :
answers.append( '<START> ' + answers_with_tags[i] + ' <END>' )
tokenizer = preprocessing.text.Tokenizer()
tokenizer.fit_on_texts( questions + answers )
VOCAB_SIZE = len( tokenizer.word_index )+1
print( 'VOCAB SIZE : {}'.format( VOCAB_SIZE ))
```
### C) Preparing data for Seq2Seq model
Our model requires three arrays namely `encoder_input_data`, `decoder_input_data` and `decoder_output_data`.
For `encoder_input_data` :
* Tokenize the `questions`. Pad them to their maximum length.
For `decoder_input_data` :
* Tokenize the `answers`. Pad them to their maximum length.
For `decoder_output_data` :
* Tokenize the `answers`. Remove the first element from all the `tokenized_answers`. This is the `<START>` element which we added earlier.
```
from gensim.models import Word2Vec
import numpy as np
import re

vocab = []
for word in tokenizer.word_index:
    vocab.append( word )

def tokenize( sentences ):
    tokens_list = []
    vocabulary = []
    for sentence in sentences:
        sentence = sentence.lower()
        sentence = re.sub( '[^a-zA-Z]', ' ', sentence )
        tokens = sentence.split()
        vocabulary += tokens
        tokens_list.append( tokens )
    return tokens_list , vocabulary

# Optional: pre-trained Word2Vec embeddings for the Embedding layers.
#p = tokenize( questions + answers )
#model = Word2Vec( p[ 0 ] )
#embedding_matrix = np.zeros( ( VOCAB_SIZE , 100 ) )
#for i in range( len( tokenizer.word_index ) ):
#    embedding_matrix[ i ] = model[ vocab[i] ]

# encoder_input_data
tokenized_questions = tokenizer.texts_to_sequences( questions )
maxlen_questions = max( [ len(x) for x in tokenized_questions ] )
padded_questions = preprocessing.sequence.pad_sequences( tokenized_questions , maxlen=maxlen_questions , padding='post' )
encoder_input_data = np.array( padded_questions )
print( encoder_input_data.shape , maxlen_questions )

# decoder_input_data
tokenized_answers = tokenizer.texts_to_sequences( answers )
maxlen_answers = max( [ len(x) for x in tokenized_answers ] )
padded_answers = preprocessing.sequence.pad_sequences( tokenized_answers , maxlen=maxlen_answers , padding='post' )
decoder_input_data = np.array( padded_answers )
print( decoder_input_data.shape , maxlen_answers )

# decoder_output_data
tokenized_answers = tokenizer.texts_to_sequences( answers )
for i in range(len(tokenized_answers)) :
    tokenized_answers[i] = tokenized_answers[i][1:]
padded_answers = preprocessing.sequence.pad_sequences( tokenized_answers , maxlen=maxlen_answers , padding='post' )
onehot_answers = utils.to_categorical( padded_answers , VOCAB_SIZE )
decoder_output_data = np.array( onehot_answers )
print( decoder_output_data.shape )
```
## 3) Defining the Encoder-Decoder model
The model will have Embedding, LSTM and Dense layers. The basic configuration is as follows.
* 2 Input layers : one for `encoder_input_data` and another for `decoder_input_data`.
* Embedding layer : for converting token indices to fixed-size dense vectors. **( Note : Don't forget the `mask_zero=True` argument here )**
* LSTM layer : provides access to long short-term memory cells.
* Dense layer : produces a softmax distribution over the vocabulary at each decoder timestep.
How it works:
1. The `encoder_input_data` goes into the Embedding layer ( `encoder_embedding` ).
2. The output of the Embedding layer goes to the LSTM cell, which produces 2 state vectors ( `h` and `c`, together the `encoder_states` ).
3. These states are used to initialize the LSTM cell of the decoder.
4. The `decoder_input_data` comes in through its own Embedding layer.
5. The embeddings go into the decoder LSTM cell ( initialized with the encoder states ) to produce sequences.
<center><img style="float: center;" src="https://cdn-images-1.medium.com/max/1600/1*bnRvZDDapHF8Gk8soACtCQ.gif"></center>
Image credits to [Hackernoon](https://hackernoon.com/tutorial-3-what-is-seq2seq-for-text-summarization-and-why-68ebaa644db0).
```
encoder_inputs = tf.keras.layers.Input(shape=( maxlen_questions , ))
encoder_embedding = tf.keras.layers.Embedding( VOCAB_SIZE, 200 , mask_zero=True ) (encoder_inputs)
encoder_outputs , state_h , state_c = tf.keras.layers.LSTM( 200 , return_state=True )( encoder_embedding )
encoder_states = [ state_h , state_c ]
decoder_inputs = tf.keras.layers.Input(shape=( maxlen_answers , ))
decoder_embedding = tf.keras.layers.Embedding( VOCAB_SIZE, 200 , mask_zero=True) (decoder_inputs)
decoder_lstm = tf.keras.layers.LSTM( 200 , return_state=True , return_sequences=True )
decoder_outputs , _ , _ = decoder_lstm ( decoder_embedding , initial_state=encoder_states )
decoder_dense = tf.keras.layers.Dense( VOCAB_SIZE , activation=tf.keras.activations.softmax )
output = decoder_dense ( decoder_outputs )
model = tf.keras.models.Model([encoder_inputs, decoder_inputs], output )
model.compile(optimizer=tf.keras.optimizers.RMSprop(), loss='categorical_crossentropy')
model.summary()
```
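To see what `mask_zero=True` actually does, here is a small standalone sketch (illustrative shapes only, unrelated to the model above): with `mask_zero=True`, the Embedding layer computes a boolean mask that tells downstream layers such as the LSTM to skip the zero-padded timesteps instead of treating padding as real tokens.

```python
import tensorflow as tf

# Illustrative only: a tiny Embedding layer with mask_zero=True.
# Index 0 is reserved for padding, so padded positions are masked out.
emb = tf.keras.layers.Embedding(input_dim=10, output_dim=4, mask_zero=True)
batch = tf.constant([[5, 3, 0, 0]])  # two real tokens, two padding zeros
mask = emb.compute_mask(batch)
print(mask.numpy())  # [[ True  True False False]]
```

This is why padding the sequences with `padding='post'` and reserving index 0 for padding go hand in hand.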
## 4) Training the model
We train the model for a number of epochs using the `RMSprop` optimizer and the `categorical_crossentropy` loss function.
```
model.fit([encoder_input_data , decoder_input_data], decoder_output_data, batch_size=50, epochs=150 )
model.save( 'model.h5' )
```
## 5) Defining inference models
We create inference models which help in predicting answers.
**Encoder inference model** : Takes the question as input and outputs LSTM states ( `h` and `c` ).
**Decoder inference model** : Takes 2 inputs: the LSTM states ( the output of the encoder model ) and the answer input sequences ( without the `<start>` tag ). It outputs the answers for the question fed to the encoder model, along with its updated state values.
```
def make_inference_models():
    encoder_model = tf.keras.models.Model(encoder_inputs, encoder_states)

    decoder_state_input_h = tf.keras.layers.Input(shape=( 200 ,))
    decoder_state_input_c = tf.keras.layers.Input(shape=( 200 ,))
    decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]

    decoder_outputs, state_h, state_c = decoder_lstm(
        decoder_embedding , initial_state=decoder_states_inputs)
    decoder_states = [state_h, state_c]
    decoder_outputs = decoder_dense(decoder_outputs)
    decoder_model = tf.keras.models.Model(
        [decoder_inputs] + decoder_states_inputs,
        [decoder_outputs] + decoder_states)

    return encoder_model , decoder_model
```
## 6) Talking with our Chatbot
First, we define a method `str_to_tokens` which converts a `str` question to a padded sequence of integer tokens.
```
def str_to_tokens( sentence : str ):
    words = sentence.lower().split()
    tokens_list = list()
    for word in words:
        # Skip words that are not in the vocabulary to avoid a KeyError.
        if word in tokenizer.word_index:
            tokens_list.append( tokenizer.word_index[ word ] )
    return preprocessing.sequence.pad_sequences( [tokens_list] , maxlen=maxlen_questions , padding='post')
```
1. First, we take a question as input and predict the state values using `enc_model`.
2. We set the state values in the decoder's LSTM.
3. Then, we generate a sequence which contains the `<start>` element.
4. We input this sequence in the `dec_model`.
5. We replace the `<start>` element with the element which was predicted by the `dec_model` and update the state values.
6. We carry out the above steps iteratively until we hit the `<end>` tag or the maximum answer length.
```
enc_model , dec_model = make_inference_models()

for _ in range(10):
    states_values = enc_model.predict( str_to_tokens( input( 'Enter question : ' ) ) )
    empty_target_seq = np.zeros( ( 1 , 1 ) )
    empty_target_seq[0, 0] = tokenizer.word_index['start']
    stop_condition = False
    decoded_translation = ''
    while not stop_condition :
        dec_outputs , h , c = dec_model.predict([ empty_target_seq ] + states_values )
        sampled_word_index = np.argmax( dec_outputs[0, -1, :] )
        sampled_word = None
        for word , index in tokenizer.word_index.items() :
            if sampled_word_index == index :
                decoded_translation += ' {}'.format( word )
                sampled_word = word
        if sampled_word == 'end' or len(decoded_translation.split()) > maxlen_answers:
            stop_condition = True
        empty_target_seq = np.zeros( ( 1 , 1 ) )
        empty_target_seq[ 0 , 0 ] = sampled_word_index
        states_values = [ h , c ]
    print( decoded_translation )
```
## 7) Conversion to TFLite ( Optional )
We can convert our seq2seq model to a TensorFlow Lite model so that we can use it on edge devices.
```
!pip install tf-nightly

converter = tf.lite.TFLiteConverter.from_keras_model( enc_model )
buffer = converter.convert()
open( 'enc_model.tflite' , 'wb' ).write( buffer )

converter = tf.lite.TFLiteConverter.from_keras_model( dec_model )
buffer = converter.convert()  # convert the decoder too (the original reused the encoder buffer)
open( 'dec_model.tflite' , 'wb' ).write( buffer )
```
```
from config import *
import mPyPl as mp
from mPyPl.utils.flowutils import *
from mpyplx import *
from pipe import Pipe
from functools import partial
import numpy as np
import cv2
import itertools
from moviepy.editor import *
import pickle
import functools
from config import *
test_names = (
from_json(os.path.join(source_dir,'matches.json'))
| mp.where(lambda x: 'Test' in x.keys() and int(x['Test'])>0)
| mp.apply(['Id','Half'],'pattern',lambda x: "{}_{}_".format(x[0],x[1]))
| mp.select_field('pattern')
| mp.as_list
)
stream = (
mp.get_datastream(data_dir, ext=".fflow.pickle", classes={'noshot' : 0, 'shots': 1})
| datasplit_by_pattern(test_pattern=test_names)
| stratify_sample_tt()
| mp.apply(['class_id','split'],'descr',lambda x: "{}-{}".format(x[0],x[1]))
| summarize('descr')
| mp.as_list
)
train, test = (
stream
| mp.apply('filename', 'raw', lambda x: pickle.load(open(x, 'rb')), eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.apply('raw', 'gradients', calc_gradients, eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.apply('gradients', 'polar', lambda x: to_polar(x), eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.apply('polar', 'channel1', lambda x: np.concatenate([y[0] for y in x]), eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.apply('polar', 'channel2', lambda x: np.concatenate([y[1] for y in x]), eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.make_train_test_split()
)
train = train | mp.as_list
ch1 = stream | mp.select_field('channel1') | mp.as_list
ch1_flatten = np.concatenate(ch1)
ch2 = stream | mp.select_field('channel2') | mp.as_list
ch2_flatten = np.concatenate(ch2)
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(ch1_flatten, bins=100);
plt.hist(ch2_flatten, bins=100);
```
## OpticalFlow Model Training
```
scene_changes = pickle.load(open('scene.changes.pkl', 'rb'))
scene_changes = list(scene_changes[40].keys())
scene_changes = [ fn.replace('.resized.mp4', '.fflow.pickle') for fn in scene_changes]
retinaflow_shape = (25, 50, 2)
hist_params = [
dict(
bins=retinaflow_shape[1],
lower=0,
upper=150,
maxv=150
),
dict(
bins=retinaflow_shape[1],
lower=0,
upper=6.29,
maxv=6.29
),
]
stream = (
mp.get_datastream(data_dir, ext=".fflow.pickle", classes={'noshot' : 0, 'shots': 1})
| mp.filter('filename', lambda x: not x in scene_changes)
| datasplit_by_pattern(test_pattern=test_names)
| stratify_sample_tt()
| mp.apply(['class_id','split'],'descr',lambda x: "{}-{}".format(x[0],x[1]))
| summarize('descr')
| mp.as_list
)
train, test = (
stream
| mp.apply('filename', 'raw', lambda x: pickle.load(open(x, 'rb')), eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.apply('raw', 'gradients', calc_gradients, eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.apply('gradients', 'polar', lambda x: to_polar(x), eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.apply('polar', 'histograms', lambda x: video_to_hist(x, hist_params), eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.apply('histograms', 'fflows', functools.partial(zero_pad,shape=retinaflow_shape),
eval_strategy=mp.EvalStrategies.LazyMemoized)
| mp.make_train_test_split()
)
no_train = stream | mp.filter('split',lambda x: x==mp.SplitType.Train) | mp.count
no_test = stream | mp.filter('split',lambda x: x==mp.SplitType.Test) | mp.count
# training params
LEARNING_RATE = 0.001
V = "v1"
MODEL_CHECKPOINT = "models/unet_ch_" + V + ".h5"
MODEL_PATH = MODEL_CHECKPOINT.replace("_ch_", "_model_")
HISTORY_PATH = MODEL_PATH.replace(".h5", "_history.pkl")
BATCH_SIZE = 16
EPOCHS = 50
from keras.callbacks import ModelCheckpoint
from keras.callbacks import EarlyStopping
callback_checkpoint = ModelCheckpoint(
MODEL_CHECKPOINT,
verbose=1,
monitor='val_loss',
save_best_only=True
)
callback_stopping = EarlyStopping(
monitor='val_loss',
min_delta=0,
patience=7,
verbose=1,
mode='auto',
restore_best_weights=True
)
from keras.callbacks import ReduceLROnPlateau
reduce_lr = ReduceLROnPlateau(monitor='val_loss', verbose=1, factor=0.5,
patience=4, cooldown=4, min_lr=0.0001)
from keras.models import Sequential
from keras.layers import *
from keras.regularizers import l2
from keras.optimizers import Adam
retinaflow_shape = (25, 50, 2)
model = Sequential()
model.add(Conv2D(64, (5,3), input_shape=retinaflow_shape))
model.add(Conv2D(32, (3,3), activation='relu', kernel_initializer='glorot_uniform'))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(32, activation='relu', kernel_initializer='glorot_uniform'))
model.add(Dense(1, activation='sigmoid', kernel_initializer='glorot_uniform'))
model.compile(loss='binary_crossentropy',
optimizer=Adam(lr=0.001),
metrics=['acc'])
model.summary()
history = model.fit_generator(
train | mp.infshuffle | mp.as_batch('fflows', 'class_id', batchsize=BATCH_SIZE),
steps_per_epoch = no_train // BATCH_SIZE,
validation_data = test | mp.infshuffle | mp.as_batch('fflows', 'class_id', batchsize=BATCH_SIZE),
validation_steps = no_test // BATCH_SIZE,
epochs=EPOCHS,
verbose=1,
callbacks=[callback_checkpoint, callback_stopping, reduce_lr]
)
%matplotlib inline
import matplotlib.pyplot as plt
def plot_history(history):
    loss_list = [s for s in history.history.keys() if 'loss' in s and 'val' not in s]
    val_loss_list = [s for s in history.history.keys() if 'loss' in s and 'val' in s]
    acc_list = [s for s in history.history.keys() if 'acc' in s and 'val' not in s]
    val_acc_list = [s for s in history.history.keys() if 'acc' in s and 'val' in s]

    if len(loss_list) == 0:
        print('Loss is missing in history')
        return

    ## As loss always exists
    epochs = range(1, len(history.history[loss_list[0]]) + 1)

    ## Loss
    plt.figure(1)
    for l in loss_list:
        plt.plot(epochs, history.history[l], 'b', label='Training loss (' + str(format(history.history[l][-1], '.5f')) + ')')
    for l in val_loss_list:
        plt.plot(epochs, history.history[l], 'g', label='Validation loss (' + str(format(history.history[l][-1], '.5f')) + ')')
    plt.title('Loss')
    plt.xlabel('Epochs')
    plt.ylabel('Loss')
    plt.legend()

    ## Accuracy
    plt.figure(2)
    for l in acc_list:
        plt.plot(epochs, history.history[l], 'b', label='Training accuracy (' + str(format(history.history[l][-1], '.5f')) + ')')
    for l in val_acc_list:
        plt.plot(epochs, history.history[l], 'g', label='Validation accuracy (' + str(format(history.history[l][-1], '.5f')) + ')')
    plt.title('Accuracy')
    plt.xlabel('Epochs')
    plt.ylabel('Accuracy')
    plt.legend()
    plt.show()

plot_history(history)
```
# Find \*.tifs with no matching \*.jpg
#### Created on Cinco de Mayo in 2020 by Jeremy Moore and David Armstrong to identify \*.tif images that don't have a matching \*.jpg image for the Asian Art Museum of San Francisco
1. Manually set root_dir_path to the full path of the directory containing your *all_jpgs* and *all_tifs* directories
1. Programmatically create a *no_match* directory inside of *all_tifs*
1. Get list of all \*.tifs in *all_tifs* directory
1. Get the identifier, or stem, of each \*.tif
1. Check if this identifier exists as a \*.jpg in the *all_jpgs* directory first as a test
1. Run again and if there is no matching \*.jpg, move the \*.tif into the *no_match* directory
***Update root_dir_path location and verify names of *.jpg and *.tif directories below BEFORE running any cells!***
```
# imports from standard library
from pathlib import Path
# set root directory path that contains the directories with our tifs and jpgs
root_dir_path = Path('/Users/dlisla/Pictures/test_directory')
print(f'root_dir_path: {root_dir_path}')
print(f'root_dir_path.name: {root_dir_path.name}')
# set path to directory with our all_jpgs and all_tifs
bad_jpg_dir_path = root_dir_path.joinpath('all_jpgs')
all_tifs_dir_path = root_dir_path.joinpath('all_tifs')
# create a directory inside of all_tifs directory named no_match to move
no_match_dir_path = all_tifs_dir_path.joinpath('no_match')
no_match_dir_path.mkdir() # will raise a FileExistsError if the no_match directory already exists
# verify existence of no_match directory, if False, then do not continue
print(f'Does the no_match directory exist? {no_match_dir_path.is_dir()}')
# get sorted list of all *.tifs in all_tifs directory
# NOTE: this is NOT recursive and will not look inside of all_tifs subdirectories
# NOTE: this may also find non-image hidden files that start with a '.' and end with .tif
tif_path_list = sorted(all_tifs_dir_path.glob('*.tif'))
print(f'Total number of *.tif: {len(tif_path_list)}\n')
print(f'First *.tif paths: {tif_path_list[0]}')
print(f'Last *.tif paths: {tif_path_list[-1]}')
# for loop to test our code and preview what will happen
for tif_path in tif_path_list:
    # get image's identifier to match against the JPEG filenames
    identifier = tif_path.stem  # stem is the Python name for identifier

    # set jpg filename and path
    jpg_filename = f'{identifier}.jpg'
    jpg_path = bad_jpg_dir_path.joinpath(jpg_filename)

    # does jpg exist?
    if jpg_path.is_file():  # there's a match
        # print(f'{jpg_path.name} has a match!\n')  # commented out to silently skip matched images
        pass
    else:  # we need to move it into our no_match directory
        print(f'{tif_path.name} has no matching *.jpg')
        # set new tif path inside of the no_match directory
        new_tif_path = no_match_dir_path.joinpath(tif_path.name)
        print(f'Moving to {new_tif_path} . . . (not really, this is a test)\n')
# warning, will move files!
for tif_path in tif_path_list:
    # get image's identifier to match against the JPEG filenames
    identifier = tif_path.stem  # stem is the Python name for identifier

    # set jpg filename and path
    jpg_filename = f'{identifier}.jpg'
    jpg_path = bad_jpg_dir_path.joinpath(jpg_filename)

    # does jpg exist?
    if jpg_path.is_file():  # there's a match
        # print(f'{jpg_path.name} has a match!\n')  # commented out to silently skip matched images
        pass
    else:  # we need to move it into our no_match directory
        print(f'{tif_path.name} has no JPEG')
        # set new tif path inside of the no_match directory
        new_tif_path = no_match_dir_path.joinpath(tif_path.name)
        print(f'Moving to {new_tif_path} . . .')
        # move our file
        tif_path.rename(new_tif_path)
        if new_tif_path.is_file():
            print('Success!\n')
        else:
            print(f'Something broke moving {tif_path.name} to {new_tif_path}!!\n')
```
# Computer Vision Nanodegree
## Project: Image Captioning
---
In this notebook, you will learn how to load and pre-process data from the [COCO dataset](http://cocodataset.org/#home). You will also design a CNN-RNN model for automatically generating image captions.
Note that **any amendments that you make to this notebook will not be graded**. However, you will use the instructions provided in **Step 3** and **Step 4** to implement your own CNN encoder and RNN decoder by making amendments to the **models.py** file provided as part of this project. Your **models.py** file **will be graded**.
Feel free to use the links below to navigate the notebook:
- [Step 1](#step1): Explore the Data Loader
- [Step 2](#step2): Use the Data Loader to Obtain Batches
- [Step 3](#step3): Experiment with the CNN Encoder
- [Step 4](#step4): Implement the RNN Decoder
<a id='step1'></a>
## Step 1: Explore the Data Loader
We have already written a [data loader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader) that you can use to load the COCO dataset in batches.
In the code cell below, you will initialize the data loader by using the `get_loader` function in **data_loader.py**.
> For this project, you are not permitted to change the **data_loader.py** file, which must be used as-is.
The `get_loader` function takes as input a number of arguments that can be explored in **data_loader.py**. Take the time to explore these arguments now by opening **data_loader.py** in a new window. Most of the arguments must be left at their default values, and you are only allowed to amend the values of the arguments below:
1. **`transform`** - an [image transform](http://pytorch.org/docs/master/torchvision/transforms.html) specifying how to pre-process the images and convert them to PyTorch tensors before using them as input to the CNN encoder. For now, you are encouraged to keep the transform as provided in `transform_train`. You will have the opportunity later to choose your own image transform to pre-process the COCO images.
2. **`mode`** - one of `'train'` (loads the training data in batches) or `'test'` (for the test data). We will say that the data loader is in training or test mode, respectively. While following the instructions in this notebook, please keep the data loader in training mode by setting `mode='train'`.
3. **`batch_size`** - determines the batch size. When training the model, this is number of image-caption pairs used to amend the model weights in each training step.
4. **`vocab_threshold`** - the total number of times that a word must appear in the training captions before it is used as part of the vocabulary. Words that have fewer than `vocab_threshold` occurrences in the training captions are considered unknown words.
5. **`vocab_from_file`** - a Boolean that decides whether to load the vocabulary from file.
We will describe the `vocab_threshold` and `vocab_from_file` arguments in more detail soon. For now, run the code cell below. Be patient - it may take a couple of minutes to run!
```
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
!pip install nltk
import nltk
nltk.download('punkt')
from data_loader import get_loader
from torchvision import transforms
# Define a transform to pre-process the training images.
transform_train = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
# Set the minimum word count threshold.
vocab_threshold = 5
# Specify the batch size.
batch_size = 10
# Obtain the data loader.
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
vocab_from_file=False)
```
When you ran the code cell above, the data loader was stored in the variable `data_loader`.
You can access the corresponding dataset as `data_loader.dataset`. This dataset is an instance of the `CoCoDataset` class in **data_loader.py**. If you are unfamiliar with data loaders and datasets, you are encouraged to review [this PyTorch tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html).
### Exploring the `__getitem__` Method
The `__getitem__` method in the `CoCoDataset` class determines how an image-caption pair is pre-processed before being incorporated into a batch. This is true for all `Dataset` classes in PyTorch; if this is unfamiliar to you, please review [the tutorial linked above](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html).
When the data loader is in training mode, this method begins by first obtaining the filename (`path`) of a training image and its corresponding caption (`caption`).
#### Image Pre-Processing
Image pre-processing is relatively straightforward (from the `__getitem__` method in the `CoCoDataset` class):
```python
# Convert image to tensor and pre-process using transform
image = Image.open(os.path.join(self.img_folder, path)).convert('RGB')
image = self.transform(image)
```
After loading the image in the training folder with name `path`, the image is pre-processed using the same transform (`transform_train`) that was supplied when instantiating the data loader.
#### Caption Pre-Processing
The captions also need to be pre-processed and prepped for training. In this example, for generating captions, we are aiming to create a model that predicts the next token of a sentence from previous tokens, so we turn the caption associated with any image into a list of tokenized words, before casting it to a PyTorch tensor that we can use to train the network.
To understand in more detail how COCO captions are pre-processed, we'll first need to take a look at the `vocab` instance variable of the `CoCoDataset` class. The code snippet below is pulled from the `__init__` method of the `CoCoDataset` class:
```python
def __init__(self, transform, mode, batch_size, vocab_threshold, vocab_file, start_word,
end_word, unk_word, annotations_file, vocab_from_file, img_folder):
...
self.vocab = Vocabulary(vocab_threshold, vocab_file, start_word,
end_word, unk_word, annotations_file, vocab_from_file)
...
```
From the code snippet above, you can see that `data_loader.dataset.vocab` is an instance of the `Vocabulary` class from **vocabulary.py**. Take the time now to verify this for yourself by looking at the full code in **data_loader.py**.
We use this instance to pre-process the COCO captions (from the `__getitem__` method in the `CoCoDataset` class):
```python
# Convert caption to tensor of word ids.
tokens = nltk.tokenize.word_tokenize(str(caption).lower()) # line 1
caption = [] # line 2
caption.append(self.vocab(self.vocab.start_word)) # line 3
caption.extend([self.vocab(token) for token in tokens]) # line 4
caption.append(self.vocab(self.vocab.end_word)) # line 5
caption = torch.Tensor(caption).long() # line 6
```
As you will see soon, this code converts any string-valued caption to a list of integers, before casting it to a PyTorch tensor. To see how this code works, we'll apply it to the sample caption in the next code cell.
```
sample_caption = 'A person doing a trick on a rail while riding a skateboard.'
```
In **`line 1`** of the code snippet, every letter in the caption is converted to lowercase, and the [`nltk.tokenize.word_tokenize`](http://www.nltk.org/) function is used to obtain a list of string-valued tokens. Run the next code cell to visualize the effect on `sample_caption`.
```
import nltk
sample_tokens = nltk.tokenize.word_tokenize(str(sample_caption).lower())
print(sample_tokens)
```
In **`line 2`** and **`line 3`** we initialize an empty list and append an integer to mark the start of a caption. The [paper](https://arxiv.org/pdf/1411.4555.pdf) that you are encouraged to implement uses a special start word (and a special end word, which we'll examine below) to mark the beginning (and end) of a caption.
This special start word (`"<start>"`) is decided when instantiating the data loader and is passed as a parameter (`start_word`). You are **required** to keep this parameter at its default value (`start_word="<start>"`).
As you will see below, the integer `0` is always used to mark the start of a caption.
```
sample_caption = []
start_word = data_loader.dataset.vocab.start_word
print('Special start word:', start_word)
sample_caption.append(data_loader.dataset.vocab(start_word))
print(sample_caption)
```
In **`line 4`**, we continue the list by adding integers that correspond to each of the tokens in the caption.
```
sample_caption.extend([data_loader.dataset.vocab(token) for token in sample_tokens])
print(sample_caption)
```
In **`line 5`**, we append a final integer to mark the end of the caption.
Identical to the case of the special start word (above), the special end word (`"<end>"`) is decided when instantiating the data loader and is passed as a parameter (`end_word`). You are **required** to keep this parameter at its default value (`end_word="<end>"`).
As you will see below, the integer `1` is always used to mark the end of a caption.
```
end_word = data_loader.dataset.vocab.end_word
print('Special end word:', end_word)
sample_caption.append(data_loader.dataset.vocab(end_word))
print(sample_caption)
```
Finally, in **`line 6`**, we convert the list of integers to a PyTorch tensor and cast it to [long type](http://pytorch.org/docs/master/tensors.html#torch.Tensor.long). You can read more about the different types of PyTorch tensors on the [website](http://pytorch.org/docs/master/tensors.html).
```
import torch
sample_caption = torch.Tensor(sample_caption).long()
print(sample_caption)
```
And that's it! In summary, any caption is converted to a list of tokens, with _special_ start and end tokens marking the beginning and end of the sentence:
```
[<start>, 'a', 'person', 'doing', 'a', 'trick', 'while', 'riding', 'a', 'skateboard', '.', <end>]
```
This list of tokens is then turned into a list of integers, where every distinct word in the vocabulary has an associated integer value:
```
[0, 3, 98, 754, 3, 396, 207, 139, 3, 753, 18, 1]
```
Finally, this list is converted to a PyTorch tensor. All of the captions in the COCO dataset are pre-processed using this same procedure from **`lines 1-6`** described above.
As you saw, in order to convert a token to its corresponding integer, we call `data_loader.dataset.vocab` as a function. The details of how this call works can be explored in the `__call__` method in the `Vocabulary` class in **vocabulary.py**.
```python
def __call__(self, word):
if not word in self.word2idx:
return self.word2idx[self.unk_word]
return self.word2idx[word]
```
The `word2idx` instance variable is a Python [dictionary](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) that is indexed by string-valued keys (mostly tokens obtained from training captions). For each key, the corresponding value is the integer that the token is mapped to in the pre-processing step.
Use the code cell below to view a subset of this dictionary.
```
# Preview the word2idx dictionary.
dict(list(data_loader.dataset.vocab.word2idx.items())[:10])
```
We also print the total number of keys.
```
# Print the total number of keys in the word2idx dictionary.
print('Total number of tokens in vocabulary:', len(data_loader.dataset.vocab))
```
As you will see if you examine the code in **vocabulary.py**, the `word2idx` dictionary is created by looping over the captions in the training dataset. If a token appears no less than `vocab_threshold` times in the training set, then it is added as a key to the dictionary and assigned a corresponding unique integer. You will have the option later to amend the `vocab_threshold` argument when instantiating your data loader. Note that in general, **smaller** values for `vocab_threshold` yield a **larger** number of tokens in the vocabulary. You are encouraged to check this for yourself in the next code cell by decreasing the value of `vocab_threshold` before creating a new data loader.
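As a rough sketch of that idea (a hypothetical helper, not the project's actual **vocabulary.py** code), building a `word2idx` mapping with a count threshold might look like this:

```python
from collections import Counter

# Hypothetical sketch: build a word2idx mapping from tokenized captions,
# keeping only tokens that occur at least vocab_threshold times.
def build_word2idx(tokenized_captions, vocab_threshold,
                   start_word='<start>', end_word='<end>', unk_word='<unk>'):
    # Count every token across all captions.
    counter = Counter(tok for caption in tokenized_captions for tok in caption)
    # Special tokens get the fixed integers 0, 1, and 2.
    word2idx = {start_word: 0, end_word: 1, unk_word: 2}
    # Frequent-enough tokens get the next unused integer.
    for word, count in counter.items():
        if count >= vocab_threshold:
            word2idx.setdefault(word, len(word2idx))
    return word2idx

captions = [['a', 'dog', 'runs'], ['a', 'cat', 'naps'], ['a', 'dog', 'naps']]
word2idx = build_word2idx(captions, vocab_threshold=2)
print(word2idx)  # 'cat' and 'runs' appear only once, so they are left out
```

Lowering `vocab_threshold` to 1 here would admit `'cat'` and `'runs'` as well, which is exactly the "smaller threshold, larger vocabulary" behavior described above.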
```
# Modify the minimum word count threshold.
vocab_threshold = 4
# Obtain the data loader.
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
vocab_from_file=False)
# Print the total number of keys in the word2idx dictionary.
print('Total number of tokens in vocabulary:', len(data_loader.dataset.vocab))
```
There are also a few special keys in the `word2idx` dictionary. You are already familiar with the special start word (`"<start>"`) and special end word (`"<end>"`). There is one more special token, corresponding to unknown words (`"<unk>"`). All tokens that don't appear anywhere in the `word2idx` dictionary are considered unknown words. In the pre-processing step, any unknown tokens are mapped to the integer `2`.
```
unk_word = data_loader.dataset.vocab.unk_word
print('Special unknown word:', unk_word)
print('All unknown words are mapped to this integer:', data_loader.dataset.vocab(unk_word))
```
Check this for yourself below, by pre-processing the provided nonsense words that never appear in the training captions.
```
print(data_loader.dataset.vocab('jfkafejw'))
print(data_loader.dataset.vocab('ieowoqjf'))
```
The final thing to mention is the `vocab_from_file` argument that is supplied when creating a data loader. To understand this argument, note that when you create a new data loader, the vocabulary (`data_loader.dataset.vocab`) is saved as a [pickle](https://docs.python.org/3/library/pickle.html) file in the project folder, with filename `vocab.pkl`.
If you are still tweaking the value of the `vocab_threshold` argument, you **must** set `vocab_from_file=False` to have your changes take effect.
But once you are happy with the value that you have chosen for the `vocab_threshold` argument, you need only run the data loader *one more time* with your chosen `vocab_threshold` to save the new vocabulary to file. Then, you can henceforth set `vocab_from_file=True` to load the vocabulary from file and speed the instantiation of the data loader. Note that building the vocabulary from scratch is the most time-consuming part of instantiating the data loader, and so you are strongly encouraged to set `vocab_from_file=True` as soon as you are able.
Note that if `vocab_from_file=True`, then any supplied argument for `vocab_threshold` when instantiating the data loader is completely ignored.
```
# Obtain the data loader (from file). Note that it runs much faster than before!
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_from_file=True)
```
In the next section, you will learn how to use the data loader to obtain batches of training data.
<a id='step2'></a>
## Step 2: Use the Data Loader to Obtain Batches
The captions in the dataset vary greatly in length. You can see this by examining `data_loader.dataset.caption_lengths`, a Python list with one entry for each training caption (where the value stores the length of the corresponding caption).
In the code cell below, we use this list to print the total number of captions in the training data with each length. As you will see below, the majority of captions have length 10, while very short and very long captions are quite rare.
```
from collections import Counter
# Tally the total number of training captions with each length.
counter = Counter(data_loader.dataset.caption_lengths)
lengths = sorted(counter.items(), key=lambda pair: pair[1], reverse=True)
for value, count in lengths:
print('value: %2d --- count: %5d' % (value, count))
```
To generate batches of training data, we begin by first sampling a caption length (where the probability that any length is drawn is proportional to the number of captions with that length in the dataset). Then, we retrieve a batch of size `batch_size` of image-caption pairs, where all captions have the sampled length. This approach for assembling batches matches the procedure in [this paper](https://arxiv.org/pdf/1502.03044.pdf) and has been shown to be computationally efficient without degrading performance.
Run the code cell below to generate a batch. The `get_train_indices` method in the `CoCoDataset` class first samples a caption length, and then samples `batch_size` indices corresponding to training data points with captions of that length. These indices are stored below in `indices`.
These indices are supplied to the data loader, which then is used to retrieve the corresponding data points. The pre-processed images and captions in the batch are stored in `images` and `captions`.
```
import numpy as np
import torch.utils.data as data
# Randomly sample a caption length, and sample indices with that length.
indices = data_loader.dataset.get_train_indices()
print('sampled indices:', indices)
# Create and assign a batch sampler to retrieve a batch with the sampled indices.
new_sampler = data.sampler.SubsetRandomSampler(indices=indices)
data_loader.batch_sampler.sampler = new_sampler
# Obtain the batch.
images, captions = next(iter(data_loader))
print('images.shape:', images.shape)
print('captions.shape:', captions.shape)
# (Optional) Uncomment the lines of code below to print the pre-processed images and captions.
# print('images:', images)
# print('captions:', captions)
```
Each time you run the code cell above, a different caption length is sampled, and a different batch of training data is returned. Run the code cell multiple times to check this out!
You will train your model in the next notebook in this sequence (**2_Training.ipynb**). This code for generating training batches will be provided to you.
> Before moving to the next notebook in the sequence (**2_Training.ipynb**), you are strongly encouraged to take the time to become very familiar with the code in **data_loader.py** and **vocabulary.py**. **Step 1** and **Step 2** of this notebook are designed to help facilitate a basic introduction and guide your understanding. However, our description is not exhaustive, and it is up to you (as part of the project) to learn how to best utilize these files to complete the project. __You should NOT amend any of the code in either *data_loader.py* or *vocabulary.py*.__
In the next steps, we focus on learning how to specify a CNN-RNN architecture in PyTorch, towards the goal of image captioning.
<a id='step3'></a>
## Step 3: Experiment with the CNN Encoder
Run the code cell below to import `EncoderCNN` and `DecoderRNN` from **model.py**.
```
# Watch for any changes in model.py, and re-load it automatically.
%load_ext autoreload
%autoreload 2
# Import EncoderCNN and DecoderRNN.
from model import EncoderCNN, DecoderRNN
```
In the next code cell, we define a `device` that you will use to move PyTorch tensors to the GPU (if CUDA is available). Run this code cell before continuing.
```
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```
Run the code cell below to instantiate the CNN encoder in `encoder`.
The pre-processed images from the batch in **Step 2** of this notebook are then passed through the encoder, and the output is stored in `features`.
```
# Specify the dimensionality of the image embedding.
embed_size = 256
#-#-#-# Do NOT modify the code below this line. #-#-#-#
# Initialize the encoder. (Optional: Add additional arguments if necessary.)
encoder = EncoderCNN(embed_size)
# Move the encoder to GPU if CUDA is available.
encoder.to(device)
# Move last batch of images (from Step 2) to GPU if CUDA is available.
images = images.to(device)
# Pass the images through the encoder.
features = encoder(images)
print('type(features):', type(features))
print('features.shape:', features.shape)
# Check that your encoder satisfies some requirements of the project! :D
assert type(features)==torch.Tensor, "Encoder output needs to be a PyTorch Tensor."
assert (features.shape[0]==batch_size) & (features.shape[1]==embed_size), "The shape of the encoder output is incorrect."
```
The encoder that we provide to you uses the pre-trained ResNet-50 architecture (with the final fully-connected layer removed) to extract features from a batch of pre-processed images. The output is then flattened to a vector, before being passed through a `Linear` layer to transform the feature vector to have the same size as the word embedding.

You are welcome (and encouraged) to amend the encoder in **model.py**, to experiment with other architectures. In particular, consider using a [different pre-trained model architecture](http://pytorch.org/docs/master/torchvision/models.html). You may also like to [add batch normalization](http://pytorch.org/docs/master/nn.html#normalization-layers).
> You are **not** required to change anything about the encoder.
For this project, you **must** incorporate a pre-trained CNN into your encoder. Your `EncoderCNN` class must take `embed_size` as an input argument, which will also correspond to the dimensionality of the input to the RNN decoder that you will implement in Step 4. When you train your model in the next notebook in this sequence (**2_Training.ipynb**), you are welcome to tweak the value of `embed_size`.
If you decide to modify the `EncoderCNN` class, save **model.py** and re-execute the code cell above. If the code cell returns an assertion error, then please follow the instructions to modify your code before proceeding. The assert statements ensure that `features` is a PyTorch tensor with shape `[batch_size, embed_size]`.
<a id='step4'></a>
## Step 4: Implement the RNN Decoder
Before executing the next code cell, you must write `__init__` and `forward` methods in the `DecoderRNN` class in **model.py**. (Do **not** write the `sample` method yet - you will work with this method when you reach **3_Inference.ipynb**.)
> The `__init__` and `forward` methods in the `DecoderRNN` class are the only things that you **need** to modify as part of this notebook. You will write more implementations in the notebooks that appear later in the sequence.
Your decoder will be an instance of the `DecoderRNN` class and must accept as input:
- the PyTorch tensor `features` containing the embedded image features (outputted in Step 3, when the last batch of images from Step 2 was passed through `encoder`), along with
- a PyTorch tensor corresponding to the last batch of captions (`captions`) from Step 2.
Note that the way we have written the data loader should simplify your code a bit. In particular, every training batch will contain pre-processed captions where all have the same length (`captions.shape[1]`), so **you do not need to worry about padding**.
> While you are encouraged to implement the decoder described in [this paper](https://arxiv.org/pdf/1411.4555.pdf), you are welcome to implement any architecture of your choosing, as long as it uses at least one RNN layer, with hidden dimension `hidden_size`.
Although you will test the decoder using the last batch that is currently stored in the notebook, your decoder should be written to accept an arbitrary batch (of embedded image features and pre-processed captions [where all captions have the same length]) as input.

In the code cell below, `outputs` should be a PyTorch tensor with size `[batch_size, captions.shape[1], vocab_size]`. Your output should be designed such that `outputs[i,j,k]` contains the model's predicted score, indicating how likely the `j`-th token in the `i`-th caption in the batch is the `k`-th token in the vocabulary. In the next notebook of the sequence (**2_Training.ipynb**), we provide code to supply these scores to the [`torch.nn.CrossEntropyLoss`](http://pytorch.org/docs/master/nn.html#torch.nn.CrossEntropyLoss) loss function in PyTorch.
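For orientation, one decoder satisfying these shape requirements is a single-LSTM design like the sketch below (the name `DecoderRNNSketch` is made up; your own `DecoderRNN` in **model.py** may differ):

```python
import torch
import torch.nn as nn

class DecoderRNNSketch(nn.Module):
    """Sketch: embed captions, prepend the image feature, run one LSTM layer."""
    def __init__(self, embed_size, hidden_size, vocab_size, num_layers=1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, captions):
        # Drop the last token; the image feature takes the first time step.
        embeddings = self.embed(captions[:, :-1])                   # [B, T-1, E]
        inputs = torch.cat((features.unsqueeze(1), embeddings), 1)  # [B, T, E]
        hiddens, _ = self.lstm(inputs)                              # [B, T, H]
        return self.fc(hiddens)                                     # [B, T, vocab]
```

Because the image feature occupies the first time step, the output length matches `captions.shape[1]`, as the assert statements below require.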
```
# Specify the number of features in the hidden state of the RNN decoder.
hidden_size = 512
#-#-#-# Do NOT modify the code below this line. #-#-#-#
# Store the size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)
# Initialize the decoder.
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)
# Move the decoder to GPU if CUDA is available.
decoder.to(device)
# Move last batch of captions (from Step 2) to GPU if CUDA is available
captions = captions.to(device)
# Pass the encoder output and captions through the decoder.
outputs = decoder(features, captions)
print('type(outputs):', type(outputs))
print('outputs.shape:', outputs.shape)
# Check that your decoder satisfies some requirements of the project! :D
assert type(outputs)==torch.Tensor, "Decoder output needs to be a PyTorch Tensor."
assert (outputs.shape[0]==batch_size) & (outputs.shape[1]==captions.shape[1]) & (outputs.shape[2]==vocab_size), "The shape of the decoder output is incorrect."
```
When you train your model in the next notebook in this sequence (**2_Training.ipynb**), you are welcome to tweak the value of `hidden_size`.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import operator
df_Confirmed=pd.read_csv('/content/drive/My Drive/datasets/Tensorflow community challenge /Datasets /time_series_2019-ncov-Confirmed (1).csv')
df_Confirmed.head()
draft=df_Confirmed.copy()
df_Confirmed.keys()
df_Confirmed.describe()
key=df_Confirmed.describe().keys()
key
df_Confirmed=df_Confirmed.drop(['1/22/20', '1/23/20', '1/24/20', '1/25/20', '1/26/20',
'1/27/20', '1/28/20', '1/29/20', '1/30/20', '1/31/20', '2/1/20',
'2/2/20', '2/3/20', '2/4/20', '2/5/20', '2/6/20', '2/7/20', '2/8/20',
'2/9/20', '2/10/20', '2/11/20', '2/12/20', '2/13/20', '2/14/20',
'2/15/20', '2/16/20', '2/17/20', '2/18/20', '2/19/20', '2/20/20',
'2/21/20', '2/22/20', '2/23/20', '2/24/20', '2/25/20', '2/26/20',
'2/27/20', '2/28/20', '2/29/20', '3/1/20'],axis=1)
df_Confirmed.describe()
#df_Confirmed.head()
lastest_confirmed=df_Confirmed['3/22/20']
```
# **Confirmed cases reported throughout the world**
```
df_country_wise=df_Confirmed.sort_values(by=['Country/Region'])
unique_country_list=list(df_country_wise['Country/Region'].unique())
confirmed_country_list=[]
no_cases=[]
for i in unique_country_list:
cases = lastest_confirmed[df_Confirmed['Country/Region']==i].sum()
if cases>0:
confirmed_country_list.append(cases)
else:
no_cases.append(i)
for i in no_cases:
unique_country_list.remove(i)
unique_countries = [k for k, v in sorted(zip(unique_country_list, confirmed_country_list), key=operator.itemgetter(1), reverse=True)]
for i in range(len(unique_countries)):
confirmed_country_list[i] = lastest_confirmed[df_Confirmed['Country/Region']==unique_countries[i]].sum()
for i in range(len(unique_countries)):
print(f'{unique_countries[i]}: {confirmed_country_list[i]} cases')
unique_provinces = list(df_Confirmed['Province/State'].unique())
outliers = ['United Kingdom', 'Denmark', 'France']
for i in outliers:
unique_provinces.remove(i)
province_confirmed_cases = []
no_cases = []
for i in unique_provinces:
cases = lastest_confirmed[df_Confirmed['Province/State']==i].sum()
if cases > 0:
province_confirmed_cases.append(cases)
else:
no_cases.append(i)
for i in no_cases:
unique_provinces.remove(i)
unique_provinces = [k for k, v in sorted(zip(unique_provinces, province_confirmed_cases), key=operator.itemgetter(1), reverse=True)]
for i in range(len(unique_provinces)):
province_confirmed_cases[i] = lastest_confirmed[df_Confirmed['Province/State']==unique_provinces[i]].sum()
#unique_provinces
print('Confirmed Cases by Province/States (US, China, Australia, Canada):')
for i in range(len(unique_provinces)):
print(f'{unique_provinces[i]}: {province_confirmed_cases[i]} cases')
nan_indices = []
for i in range(len(unique_provinces)):
if type(unique_provinces[i]) == float:
nan_indices.append(i)
unique_provinces = list(unique_provinces)
province_confirmed_cases = list(province_confirmed_cases)
for i in nan_indices:
unique_provinces.pop(i)
province_confirmed_cases.pop(i)
import random
import matplotlib.colors as mcolors
'''c = random.choices(list(mcolors.CSS4_COLORS.values()),k = len(unique_countries))
plt.figure(figsize=(15,15))
plt.pie(confirmed_country_list, colors=c)
plt.legend(unique_countries, loc='best', bbox_to_anchor=(0.7, 0., 1, 1))
plt.title('Confirmed cases reported per country up to the last date',size=32)
plt.show()'''
```
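As an aside, the per-country tallying loop above can be written as a single pandas `groupby`; here is a sketch on a small made-up frame (the real data comes from the CSV loaded earlier):

```python
import pandas as pd

# Hypothetical mini-frame standing in for df_Confirmed.
df = pd.DataFrame({
    'Country/Region': ['US', 'US', 'Italy'],
    'Province/State': ['New York', 'Washington', None],
    '3/22/20': [15000, 2000, 59138],
})

# Sum the latest column per country, largest first.
by_country = (df.groupby('Country/Region')['3/22/20']
                .sum()
                .sort_values(ascending=False))
print(by_country)
```

This replaces the explicit loop over `unique_country_list` and handles the sorting in the same step.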
# **Date wise plot**
```
dates=df_Confirmed.keys()
dates=dates[4:]
dates=list(dates)
df_Confirmed.head()
cases_on_dates=[]
for i in dates:
sum_of_cases=df_Confirmed[i].sum()
cases_on_dates.append(sum_of_cases)
print(f'{i}: {sum_of_cases}')
c = random.choices(list(mcolors.CSS4_COLORS.values()),k = len(unique_countries))
c=random.choices(list(mcolors.CSS4_COLORS.values()),k=len(dates))
plt.figure(figsize=(15,15))
plt.pie(cases_on_dates,colors=c)
plt.title('confirmed cases on basis of dates',size=30)
plt.legend(dates, loc='best')
plt.show()
```
# **Continent plot**
# Top 10 countries with the most reported cases
```
# confirmed_country_list,unique_countries
```
```
visual_unique_countries = []
visual_confirmed_cases = []
others = np.sum(confirmed_country_list[10:])
for i in range(len(confirmed_country_list[:10])):
visual_unique_countries.append(unique_countries[i])
visual_confirmed_cases.append(confirmed_country_list[i])
visual_unique_countries.append('Others')
visual_confirmed_cases.append(others)
c=random.choices(list(mcolors.CSS4_COLORS.values()),k=len(dates))
plt.figure(figsize=(10,10))
plt.pie(visual_confirmed_cases,colors=c)
plt.legend(visual_unique_countries,loc='best')
plt.title('Top 10 countries with the highest number of reported cases',size=30)
plt.show()
for i,j in enumerate(unique_countries):
print(i,j)
for i in range(len(unique_countries)):
print(f'{unique_countries[i]}: {confirmed_country_list[i]} ')
```
```
visual_unique_countries_all = []
visual_confirmed_cases_all = []
others = np.sum(confirmed_country_list[20:])
for i in range(len(confirmed_country_list[:20])):
visual_unique_countries_all.append(unique_countries[i])
visual_confirmed_cases_all.append(confirmed_country_list[i])
visual_unique_countries_all.append('Others')
visual_confirmed_cases_all.append(others)
c=random.choices(list(mcolors.CSS4_COLORS.values()),k=len(dates))
plt.figure(figsize=(10,10))
plt.pie(visual_confirmed_cases_all,colors=c)
plt.legend(visual_unique_countries_all,loc='best',bbox_to_anchor=(0.7, 0., 1, 1))
plt.title('All countries in which cases have been reported',size=30)
plt.show()
visual_unique_provinces = []
visual_confirmed_cases2 = []
others = np.sum(province_confirmed_cases[10:])
for i in range(len(province_confirmed_cases[:10])):
visual_unique_provinces.append(unique_provinces[i])
visual_confirmed_cases2.append(province_confirmed_cases[i])
visual_unique_provinces.append('Others')
visual_confirmed_cases2.append(others)
c = random.choices(list(mcolors.CSS4_COLORS.values()),k = len(unique_countries))
plt.figure(figsize=(10,10))
plt.title(' Confirmed Cases per Province',size=32)
plt.pie(visual_confirmed_cases2, colors=c)
plt.legend(visual_unique_provinces, loc='best')
plt.show()
```
# So we are going to plot these top 10 countries
```
c=random.choices(list(mcolors.CSS4_COLORS.values()),k=len(dates))
plt.figure(figsize=(10,10))
plt.pie(confirmed_country_list[:10],colors=c)
plt.legend(unique_countries[:10],loc='best')
plt.title('Top 10 countries with the highest number of reported cases',size=30)
plt.show()
by_Continent=['Europe','Asia','Africa','North america','South america','Australia']
#s=list(map(str,input().split(" ")))
#Australia=s
'''Australia.append('Marshall Islands')
Australia.append('Solomon Islands')
Australia.append('New Zealand')
Australia.append('Papua New Guinea')'''
'''South_America=s
South_America'''
'''North_America=[]
North_America=s
North_America'''
'''North_America.append('Antigua and Barbuda')
North_America.append('El Salvador')
North_America.append('Dominican Republic')
North_America.append('Saint Kitts and Nevis')
North_America.append('Saint Lucia')
North_America.append('Saint Vincent and the Grenadines')
North_America.append('United States of America')
North_America.append('Trinidad and Tobago')'''
#North_America
'''Africa.append('Burkina Faso')'''
'''Africa.append('Sierra Leone')
Africa.append('South Africa')
Africa.append("Cote d'Ivoire")
Africa.append('Central African Republic')
Africa.append('Equatorial Guinea')'''
'''Africa'''
'''Asia.append('United Arab Emirates')
Asia'''
'''europe_provigences=['Albania',
'Andorra',
'Armenia',
'Austria',
'Azerbaijan','Belarus',
'Belgium',
'Bosnia and Herzegovina',
'Bulgaria',
'Croatia',
'Cyprus',
'Czechia',
'Denmark',
'Estonia',
'Finland',
'France',
'Georgia',
'Germany',
'Greece',
'Hungary',
'Iceland',
'Ireland',
'Italy',
'Kazakhstan',
'Kosovo',
'Latvia',
'Liechtenstein',
'Lithuania',
'Luxembourg',
'Malta',
'Moldova',
'Monaco',
'Montenegro',
'Netherlands',
'North Macedonia',
'Norway',
'Poland',
'Portugal',
'Romania',
'Russia',
'San Marino',
'Serbia',
'Slovakia',
'Slovenia',
'Spain',
'Sweden',
'Switzerland',
'Turkey',
'Ukraine',
'United Kingdom',
'Vatican City' ,
'UK']'''
#continent_df=pd.DataFrame()
#continent_df['europe_provigences']=europe_provigences
#continent_df['Asia']=pd.Series(Asia)
#continent_df['Africa']=pd.Series(Africa)
#continent_df['North_America']=pd.Series(North_America)
#continent_df['South_America']=pd.Series(South_America)
#continent_df['Australia']=pd.Series(Australia)
#continent_df.to_csv('/content/continent_df.csv')
continent_df=pd.read_csv('/content/drive/My Drive/datasets/Tensorflow community challenge /Datasets /continent_df.csv')
continent_df.pop('Unnamed: 0')
europe=continent_df['europe_provigences'].values
Asia=continent_df['Asia'].values
Africa=continent_df['Africa'].values
North_America=continent_df['North_America'].values
South_America=continent_df['South_America'].values
Australia=continent_df['Australia'].values
South_America
confirmed_country_list,unique_countries
europe_total_cases=[]
South_America_total_cases=[]
Asia_total_cases=[]
Africa_total_cases=[]
North_America_total_cases=[]
Australia_total_cases=[]
for i in range(len(unique_countries)):
if unique_countries[i] in europe:
europe_total_cases.append(confirmed_country_list[i])
if unique_countries[i] in Asia:
Asia_total_cases.append(confirmed_country_list[i])
if unique_countries[i] in Africa:
Africa_total_cases.append(confirmed_country_list[i])
if unique_countries[i] in North_America:
North_America_total_cases.append(confirmed_country_list[i])
if unique_countries[i] in Australia:
Australia_total_cases.append(confirmed_country_list[i])
if unique_countries[i] in South_America:
South_America_total_cases.append(confirmed_country_list[i])
sum(europe_total_cases)
South_America_total_cases
sum(Asia_total_cases),sum(Africa_total_cases),sum(North_America_total_cases),sum(Australia_total_cases),sum(South_America_total_cases)
total_continent_through_World=[]
total_continent_through_World.append(sum(europe_total_cases))
total_continent_through_World.append(sum(Asia_total_cases))
total_continent_through_World.append(sum(Africa_total_cases))
total_continent_through_World.append(sum(North_America_total_cases))
total_continent_through_World.append(sum(Australia_total_cases))
total_continent_through_World.append(sum(South_America_total_cases))
total_continent_through_World_unique=[]
total_continent_through_World_unique.append('Europe')
total_continent_through_World_unique.append('Asia')
total_continent_through_World_unique.append('Africa')
total_continent_through_World_unique.append('North_America')
total_continent_through_World_unique.append('Australia')
total_continent_through_World_unique.append('South_America')
total_continent_through_World
y_pos = np.arange(len(total_continent_through_World_unique))
plt.figure(figsize=(10,10))
plt.bar(y_pos,total_continent_through_World)
plt.title('Continent-wise confirmed cases on 3/22/20')
plt.xlabel('continent')
plt.ylabel('number of people')
plt.ylim(0,180000)
plt.xticks(y_pos, total_continent_through_World_unique)
plt.show()
```
# Introduction
A mass on a spring experiences a force described by Hooke's law.
For a displacement $x$, the force is
$$F=-kx,$$
where $k$ is the spring constant with units of N/m.
The equation of motion is
$$ F = ma $$
or
$$ -k x = m a .$$
Because acceleration is the second derivative of displacement, this is
a differential equation,
$$ \frac{d^2x}{dt^2} = -\frac{k}{m} x.$$
The solution to this equation is harmonic motion, for example
$$ x(t) = A\sin\omega t,$$
where $A$ is some amplitude and $\omega = \sqrt{k/m}$.
This can be verified by plugging the solution into the differential equation.
The angular frequency $\omega$ is related to the frequency $f$ and the period $T$ by
$$f = \omega/2\pi$$ and $$T=2\pi/\omega$$
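The verification by substitution mentioned above can also be done symbolically; a quick check, assuming SymPy is available:

```python
import sympy as sp

# Verify x(t) = A*sin(w*t) satisfies x'' = -(k/m) x with w = sqrt(k/m).
t, A, k, m = sp.symbols('t A k m', positive=True)
w = sp.sqrt(k / m)
x = A * sp.sin(w * t)

# Residual of the equation of motion; zero means the solution checks out.
residual = sp.simplify(sp.diff(x, t, 2) + (k / m) * x)
print(residual)   # 0
```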
We can illustrate this rather trivial case with an interactive plot.
```
import matplotlib.pyplot as plt
%matplotlib inline
import ipywidgets as widgets  # formerly IPython.html.widgets
def make_plot(t):
fig, ax = plt.subplots()
x,y = 0,0
plt.plot(x, y, 'k.')
plt.plot(x + 0.3 * t, y, 'bo')
plt.xlim(-1,1)
plt.ylim(-1,1)
widgets.interact(make_plot, t=(-1,1,0.1))
```
We want to generalize this result to several massess connected by several springs.
# The spring constant as a second derivative of potential
The force is related to potential energy by
$$ F = -\frac{d}{dx}V(x).$$
This equation comes directly from the definition that work is force times distance.
Integrating this, we find the potential energy of a mass on a spring,
$$ V(x) = \frac{1}{2}kx^2. $$
In fact, the spring constant can be defined to be the second derivative of the potential,
$$ k = \frac{d^2}{dx^2} V(x).$$ We take the value of the second derivative at the minimum
of the potential, which assumes that the oscillations are not very far from equilibrium.
We see that Hooke's law is simply
$$F = -\frac{d^2 V(x)}{dx^2} x, $$
where the second derivative is evaluated at the minimum of the potential.
For a general potential, we can write the equation of motion as
$$ \frac{d^2}{dt^2} x = -\frac{1}{m}\frac{d^2V(x)}{dx^2} x.$$
The expression on the right hand side is known as the dynamical matrix,
though this is a trivial 1x1 matrix.
# Two masses connected by a spring
Now the potential depends on two coordinates,
$$ V(x_1, x_2) = \frac{1}{2} k (x_1 - x_2 - d)^2,$$
where $d$ is the equilibrium separation of the particles.
Now the force on each particle depends on the positions of both of the particles,
$$
\begin{pmatrix}F_1 \\ F_2\end{pmatrix}
= -
\begin{pmatrix}
\frac{\partial^2 V}{\partial x_1^2} &
\frac{\partial^2 V}{\partial x_1\partial x_2} \\
\frac{\partial^2 V}{\partial x_1\partial x_2} &
\frac{\partial^2 V}{\partial x_2^2} \\
\end{pmatrix}
\begin{pmatrix}x_1 \\ x_2\end{pmatrix}
$$
Performing the derivatives, we find
$$
\begin{pmatrix}F_1 \\ F_2\end{pmatrix}
= -
\begin{pmatrix}
k & -k \\
-k & k \\
\end{pmatrix}
\begin{pmatrix}x_1 \\ x_2\end{pmatrix}
$$
The equations of motion are coupled,
$$
\begin{pmatrix}
\frac{d^2x_1}{dt^2} \\
\frac{d^2x_2}{dt^2} \\
\end{pmatrix}
= -
\begin{pmatrix}
k/m & -k/m \\
-k/m & k/m \\
\end{pmatrix}
\begin{pmatrix}x_1 \\ x_2\end{pmatrix}
$$
To decouple the equations, we find the eigenvalues and eigenvectors.
```
import numpy as np
a = np.array([[1, -1], [-1, 1]])
freq, vectors = np.linalg.eig(a)
vectors = vectors.transpose()
```
The squared frequencies of the two modes of vibration are (in multiples of $k/m$)
```
freq
```
The first mode is a vibrational mode where the masses vibrate against each other (moving in opposite directions). This can be seen from the eigenvector.
```
vectors[0]
```
The second mode is a translation mode with zero frequency—both masses move in the same direction.
```
vectors[1]
```
We can interactively illustrate the vibrational mode.
```
import matplotlib.pyplot as plt
%matplotlib inline
import ipywidgets as widgets  # formerly IPython.html.widgets
def make_plot(t):
fig, ax = plt.subplots()
x,y = np.array([-1,1]), np.array([0,0])
plt.plot(x, y, 'k.')
plt.plot(x + 0.3 * vectors[0] * t, y, 'bo')
plt.xlim(-1.5,1.5)
plt.ylim(-1.5,1.5)
widgets.interact(make_plot, t=(-1,1,0.1))
```
# Finding the dynamical matrix with numerical derivatives
We start from a function $V(x)$. If we want to calculate a derivative,
we just use the difference formula but don't take the delta too small.
Using $\Delta x = 10^{-6}$ is safe.
$$
F = -\frac{dV(x)}{dx} \approx
-\frac{V(x+\Delta x) - V(x-\Delta x)}{2\Delta x}
$$
Note that this symmetric difference formula is more accurate
than the usual forward derivative from calculus class.
It's easy to see this formula is just calculating the slope of the function using points near $x$.
```
def V(x):
return 0.5 * x**2
deltax = 1e-6
def F_approx(x):
    # F = -dV/dx, approximated with a symmetric (central) difference
    return -( V(x + deltax) - V(x - deltax) ) / (2 * deltax)
[(x, F_approx(x)) for x in np.linspace(-2,2,9)]
```
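To see why the symmetric formula is preferred, compare both difference formulas against a function with a known derivative (a small numerical check; `W` here is just a test function, not the potential used below):

```python
import numpy as np

def W(x):
    return np.sin(x)          # test function with known derivative cos(x)

x, h = 1.0, 1e-3
forward = (W(x + h) - W(x)) / h              # one-sided: error O(h)
central = (W(x + h) - W(x - h)) / (2 * h)    # symmetric: error O(h^2)
exact = np.cos(x)
print('forward error:', abs(forward - exact))
print('central error:', abs(central - exact))
```

The symmetric formula's error shrinks quadratically in the step size, so it is several orders of magnitude smaller here.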
Next, we can find the second derivative by using the difference formula twice.
We find the nice expression,
$$
\frac{d^2V}{dx^2} \approx \frac{V(x+\Delta x) - 2V(x) + V(x-\Delta x)}{(\Delta x)^2}.
$$
This formula has the nice interpretation of comparing the value of $V(x)$ to
the average of points on either side. If it is equal to the average, the line
is straight and the second derivative is zero.
If average of the outer values is larger than $V(x)$, then the ends curve upward,
and the second derivative is positive.
Likewise, if the average of the outer values is less than $V(x)$, then the ends curve downward,
and the second derivative is negative.
```
def dV2dx2_approx(x):
return ( V(x + deltax) - 2 * V(x) + V(x - deltax) ) / deltax**2
[(x, dV2dx2_approx(x)) for x in np.linspace(-2,2,9)]
```
Now we can use these derivative formulas to calculate the dynamical matrix
for the two masses on one spring. We'll use $k=1$ and $m=1$ for simplicity.
```
def V2(x1, x2):
return 0.5 * (x1 - x2)**2
x1, x2 = -1, 1
mat = np.array(
    [[(V2(x1+deltax, x2) - 2 * V2(x1,x2) + V2(x1-deltax, x2)) / deltax**2 ,
      (V2(x1+deltax, x2+deltax) - V2(x1-deltax, x2+deltax)
       - V2(x1+deltax, x2-deltax) + V2(x1-deltax, x2-deltax)) / (2*deltax)**2],
     [(V2(x1+deltax, x2+deltax) - V2(x1-deltax, x2+deltax)
       - V2(x1+deltax, x2-deltax) + V2(x1-deltax, x2-deltax)) / (2*deltax)**2,
      (V2(x1, x2+deltax) - 2 * V2(x1,x2) + V2(x1, x2-deltax)) / deltax**2 ]]
)
mat
freq, vectors = np.linalg.eig(mat)
vectors = vectors.transpose()
for f,v in zip(freq, vectors):
print("freqency", f, ", eigenvector", v)
```
For practical calculations, we have to automate this matrix construction for an arbitrary potential.
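One way to automate it is a general numerical-Hessian routine; a sketch, assuming unit masses and a potential that takes a vector of coordinates:

```python
import numpy as np

def dynamical_matrix(V, x0, delta=1e-5):
    """Build the matrix of second derivatives of V at x0 numerically.

    V maps a 1-D numpy array of coordinates to a scalar; masses are taken as 1.
    A slightly larger step than for first derivatives limits round-off in the
    second difference.
    """
    n = len(x0)
    mat = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = delta
            e_j = np.zeros(n); e_j[j] = delta
            # Symmetric mixed-partial formula (reduces to the usual second
            # difference when i == j).
            mat[i, j] = (V(x0 + e_i + e_j) - V(x0 + e_i - e_j)
                         - V(x0 - e_i + e_j) + V(x0 - e_i - e_j)) / (4 * delta**2)
    return mat

# Reproduces the two-mass result above: close to [[1, -1], [-1, 1]]
print(dynamical_matrix(lambda x: 0.5 * (x[0] - x[1])**2, np.array([-1.0, 1.0])))
```

The same function works unchanged for any number of masses, which is what the next steps require.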
A **Deep Q Network** implementation in tensorflow with target network & random
experience replay. The code is tested with Gym's discrete action space
environment, CartPole-v0 on Colab.
---
## Notations:
Model network = $Q_{\theta}$
Model parameter = $\theta$
Model network Q value = $Q_{\theta}$ (s, a)
Target network = $Q_{\phi}$
Target parameter = $\phi$
Target network Q value = $Q_{\phi}$ ($s^{'}$, $a^{'}$)
---
## Equations:
TD target = r (s, a) $+$ $\gamma$ $max_{a^{'}}$ $Q_{\phi}$ ($s^{'}$, $a^{'}$)
TD error = (TD target) $-$ (Model network Q value)
= [r (s, a) $+$ $\gamma$ $max_{a^{'}}$ $Q_{\phi}$ ($s^{'}$, $a^{'}$)] $-$ $Q_{\theta}$ (s, a)
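A toy numeric instance of these equations (all values made up):

```python
import numpy as np

gamma = 0.99
r = 1.0
target_q_next = np.array([0.2, 0.7, 0.5])    # Q_phi(s', a') for each action a'
td_target = r + gamma * target_q_next.max()  # r + gamma * max_a' Q_phi(s', a')
model_q = 1.4                                # Q_theta(s, a)
td_error = td_target - model_q
print(td_target, td_error)
```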
---
## Key implementation details:
Update target parameter $\phi$ with model parameter $\theta$.
Copy $\theta$ to $\phi$ with *either* soft or hard parameter update.
Hard parameter update:
```
with tf.variable_scope('hard_replace'):
self.target_replace_hard = [t.assign(m) for t, m in zip(self.target_net_params, self.model_net_params)]
```
```
# hard params replacement
if self.learn_step % self.tau_step == 0:
self.sess.run(self.target_replace_hard)
self.learn_step += 1
```
Soft parameter update: polyak $\cdot$ $\theta$ + (1 $-$ polyak) $\cdot$ $\phi$
```
with tf.variable_scope('soft_replace'):
self.target_replace_soft = [t.assign(self.polyak * m + (1 - self.polyak) * t)
for t, m in zip(self.target_net_params, self.model_net_params)]
```
Stop TD target from contributing to gradient computation:
```
# exclude td_target in gradient computation
td_target = tf.stop_gradient(td_target)
```
---
## References:
[Human-level control through deep reinforcement learning
(Mnih et al., 2015)](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf)
---
<br>
```
import tensorflow as tf
import gym
import numpy as np
from matplotlib import pyplot as plt
# random sampling for learning from experience replay
class Exp():
def __init__(self, obs_size, max_size):
self.obs_size = obs_size
self.num_obs = 0
self.max_size = max_size
self.mem_full = False
# memory structure that stores samples from observations
self.mem = {'s' : np.zeros(self.max_size * self.obs_size, dtype=np.float32).reshape(self.max_size,self.obs_size),
'a' : np.zeros(self.max_size * 1, dtype=np.int32).reshape(self.max_size,1),
'r' : np.zeros(self.max_size * 1).reshape(self.max_size,1),
'done' : np.zeros(self.max_size * 1, dtype=np.int32).reshape(self.max_size,1)}
# stores sample obervation at each time step in experience memory
def store(self, s, a, r, done):
i = self.num_obs % self.max_size
self.mem['s'][i,:] = s
self.mem['a'][i,:] = a
self.mem['r'][i,:] = r
self.mem['done'][i,:] = done
self.num_obs += 1
if self.num_obs == self.max_size:
self.num_obs = 0 # reset number of observation
self.mem_full = True
# returns a minibatch of experience
def minibatch(self, minibatch_size):
if self.mem_full == False:
max_i = min(self.num_obs, self.max_size) - 1
else:
max_i = self.max_size - 1
# randomly sample a minibatch of indexes
sampled_i = np.random.randint(max_i, size=minibatch_size)
s = self.mem['s'][sampled_i,:].reshape(minibatch_size, self.obs_size)
a = self.mem['a'][sampled_i].reshape(minibatch_size)
r = self.mem['r'][sampled_i].reshape((minibatch_size,1))
s_next = self.mem['s'][sampled_i + 1,:].reshape(minibatch_size, self.obs_size)
done = self.mem['done'][sampled_i].reshape((minibatch_size,1))
return (s, a, r, s_next, done)
# Evaluates behavior policy while improving target policy
class DQN_agent():
def __init__(self, num_actions, obs_size, nhidden,
epoch,
epsilon, gamma, learning_rate,
replace, polyak, tau_step,
mem_size, minibatch_size):
super(DQN_agent, self).__init__()
self.actions = range(num_actions)
self.num_actions = num_actions
self.obs_size = obs_size # number of features
self.nhidden = nhidden # hidden nodes
self.epoch = epoch # for epsilon decay & to decide when to start training
self.epsilon = epsilon # for eploration
self.gamma = gamma # discount factor
self.learning_rate = learning_rate # learning rate alpha
# for params replacement
self.replace = replace # type of replacement
self.polyak = polyak # for soft replacement
self.tau_step = tau_step # for hard replacement
self.learn_step = 0 # steps after learning
# for Experience replay
self.mem = Exp(self.obs_size, mem_size) # memory that holds experiences
self.minibatch_size = minibatch_size
self.step = 0 # each step in an episode
# for tensorflow ops
self.built_graph()
self.sess = tf.Session()
self.sess.run(tf.global_variables_initializer())
self.sess.run(self.target_replace_hard)
self.cum_loss_per_episode = 0 # for charting display
# decay epsilon after each epoch
def epsilon_decay(self):
if self.step % self.epoch == 0:
self.epsilon = max(.01, self.epsilon * .95)
# epsilon-greedy behaviour policy for action selection
def act(self, state):
if np.random.random() < self.epsilon:
i = np.random.randint(0,len(self.actions))
else:
# get Q(s,a) from model network
Q_val = self.sess.run(self.model_Q_val, feed_dict={self.s: np.reshape(state, (1,state.shape[0]))})
# get index of largest Q(s,a)
i = np.argmax(Q_val)
action = self.actions[i]
self.step += 1
self.epsilon_decay()
return action
def learn(self, s, a, r, done):
# stores observation in memory as experience at each time step
self.mem.store(s, a, r, done)
# starts training a minibatch from experience after 1st epoch
if self.step > self.epoch:
self.replay() # start training with experience replay
def td_target(self, r, done, target_Q_val):
# select max Q values from target network (greedy policy)
max_target_Q_val = tf.reduce_max(target_Q_val, axis=1, keepdims=True)
# if state = done, td_target = r
td_target = (1.0 - tf.cast(done, tf.float32)) * tf.math.multiply(self.gamma, max_target_Q_val) + r
# exclude td_target in gradient computation
td_target = tf.stop_gradient(td_target)
return td_target
# select Q(s,a) from actions using e-greedy as behaviour policy from model network
def predicted_Q_val(self, a, model_Q_val):
# create 1D tensor of length = number of rows in a
arr = tf.range(tf.shape(a)[0], dtype=tf.int32)
# stack by column to create indices for Q(s,a) selections based on a
indices = tf.stack([arr, a], axis=1)
# select Q(s,a) using indices from model_Q_val
Q_val = tf.gather_nd(model_Q_val, indices)
Q_val = tf.reshape(Q_val, (self.minibatch_size, 1))
return Q_val
# construct neural network
def built_net(self, var_scope, w_init, b_init, features, num_hidden, num_output):
with tf.variable_scope(var_scope):
feature_layer = tf.contrib.layers.fully_connected(features, num_hidden,
activation_fn = tf.nn.relu,
weights_initializer = w_init,
biases_initializer = b_init)
Q_val = tf.contrib.layers.fully_connected(feature_layer, num_output,
activation_fn = None,
weights_initializer = w_init,
biases_initializer = b_init)
return Q_val
# construct tensorflow graph
def built_graph(self):
tf.reset_default_graph()
self.s = tf.placeholder(tf.float32, [None,self.obs_size], name='s')
self.a = tf.placeholder(tf.int32, [None,], name='a')
self.r = tf.placeholder(tf.float32, [None,1], name='r')
self.s_next = tf.placeholder(tf.float32, [None,self.obs_size], name='s_next')
self.done = tf.placeholder(tf.int32, [None,1], name='done')
# weight, bias initialization
w_init = tf.initializers.lecun_uniform()
b_init = tf.initializers.he_uniform(1e-4)
self.model_Q_val = self.built_net('model_net', w_init, b_init, self.s, self.nhidden, self.num_actions)
self.target_Q_val = self.built_net('target_net', w_init, b_init, self.s_next, self.nhidden, self.num_actions)
with tf.variable_scope('td_target'):
td_target = self.td_target(self.r, self.done, self.target_Q_val)
with tf.variable_scope('predicted_Q_val'):
predicted_Q_val = self.predicted_Q_val(self.a, self.model_Q_val)
with tf.variable_scope('loss'):
self.loss = tf.losses.huber_loss(td_target, predicted_Q_val)
with tf.variable_scope('optimizer'):
self.optimizer = tf.train.GradientDescentOptimizer(self.learning_rate).minimize(self.loss)
# get network params
with tf.variable_scope('params'):
self.target_net_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='target_net')
self.model_net_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='model_net')
# replace target net params with model net params
with tf.variable_scope('hard_replace'):
self.target_replace_hard = [t.assign(m) for t, m in zip(self.target_net_params, self.model_net_params)]
with tf.variable_scope('soft_replace'):
self.target_replace_soft = [t.assign(self.polyak * m + (1 - self.polyak) * t)
for t, m in zip(self.target_net_params, self.model_net_params)]
# decide soft or hard params replacement
def replace_params(self):
if self.replace == 'soft':
# soft params replacement
self.sess.run(self.target_replace_soft)
else:
# hard params replacement
if self.learn_step % self.tau_step == 0:
self.sess.run(self.target_replace_hard)
self.learn_step += 1
def replay(self):
# select minibatch of experiences from memory for training
(s, a, r, s_next, done) = self.mem.minibatch(self.minibatch_size)
# training
_, loss = self.sess.run([self.optimizer, self.loss], feed_dict = {self.s: s,
self.a: a,
self.r: r,
self.s_next: s_next,
self.done: done})
self.cum_loss_per_episode += loss
self.replace_params()
# compute stats
def stats(r_per_episode, R, cum_R, cum_R_episodes,
cum_loss_per_episode, cum_loss, cum_loss_episodes):
r_per_episode = np.append(r_per_episode, R) # store reward per episode
cum_R_episodes += R
cum_R = np.append(cum_R, cum_R_episodes) # store cumulative reward of all episodes
cum_loss_episodes += cum_loss_per_episode
cum_loss = np.append(cum_loss, cum_loss_episodes) # store cumulative loss of all episodes
return (r_per_episode, cum_R_episodes, cum_R, cum_loss_episodes, cum_loss)
# plot performance
def plot_charts(values, y_label):
fig = plt.figure(figsize=(10,5))
plt.title("DQN performance")
plt.xlabel("Episode")
plt.ylabel(y_label)
plt.plot(values)
plt.show(fig)
def display(r_per_episode, cum_R, cum_loss):
plot_charts(r_per_episode, "Reward")
plot_charts(cum_R, "Cumulative reward")
plot_charts(cum_loss, "Cumulative loss")
avg_r = np.sum(r_per_episode) / max_episodes
print("avg_r", avg_r)
avg_loss = np.sum(cum_loss) / max_episodes
print("avg_loss", avg_loss)
def run_episodes(env, agent, max_episodes):
r_per_episode = np.array([0])
cum_R = np.array([0])
cum_loss = np.array([0])
cum_R_episodes = 0
cum_loss_episodes = 0
# repeat each episode
for episode_number in range(max_episodes):
s = env.reset() # reset new episode
done = False
R = 0
# repeat each step
while not done:
# select action using behaviour policy(epsilon-greedy) from model network
a = agent.act(s)
# take action in environment
next_s, r, done, _ = env.step(a)
# agent learns
agent.learn(s, a, r, done)
s = next_s
R += r
(r_per_episode, cum_R_episodes, cum_R, cum_loss_episodes, cum_loss) = stats(r_per_episode, R, cum_R, cum_R_episodes,
agent.cum_loss_per_episode, cum_loss, cum_loss_episodes)
display(r_per_episode, cum_R, cum_loss)
env.close()
env = gym.make('CartPole-v0') # openai gym environment
#env = gym.make('Pong-v0') # openai gym environment
max_episodes = 500
epoch = 100
num_actions = env.action_space.n # number of possible actions
obs_size = env.observation_space.shape[0] # dimension of state space
nhidden = 128 # number of hidden nodes
epsilon = .9
gamma = .9
learning_rate = .3
replace = 'soft' # params replacement type, 'soft' for soft replacement or empty string '' for hard replacement
polyak = .001
tau_step = 300
mem_size = 30000
minibatch_size = 64
%matplotlib inline
agent = DQN_agent(num_actions, obs_size, nhidden,
epoch,
epsilon, gamma, learning_rate,
replace, polyak, tau_step,
mem_size, minibatch_size)
run_episodes(env, agent, max_episodes)
```
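The masking in `td_target` is the core of the update: the target is the reward plus the discounted greedy value of the next state, zeroed for terminal transitions. A small NumPy sketch with made-up numbers (not values from the notebook) shows exactly what that line computes:

```python
import numpy as np

# Illustrative minibatch of two transitions (values are made up):
gamma = 0.9
r = np.array([[1.0], [0.5]])
done = np.array([[0], [1]])                     # second transition is terminal
target_q = np.array([[2.0, 3.0], [4.0, 1.0]])   # Q_target(s', a) for 2 actions

# Greedy value of the next state, zeroed where done, plus the reward:
max_q = target_q.max(axis=1, keepdims=True)
td_target = (1.0 - done) * gamma * max_q + r
```

For the non-terminal row this gives 1.0 + 0.9 * 3.0 = 3.7, while the terminal row keeps only its reward, 0.5 — the same `(1 - done)` masking as in the agent's `td_target` method.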
| github_jupyter |
# Using PS GPIO with PYNQ
## Goal
The aim of this notebook is to show how to use the Zynq PS GPIO from PYNQ. The PS GPIO are simple wires from the PS, and don't need a controller in the programmable logic.
Up to 96 input, output, and tri-state PS GPIO are available via the EMIO interface on Zynq UltraScale+ devices. They can be used to connect simple control and data signals to IP or to external inputs/outputs in the PL.
## Hardware
This example uses a bitstream that connects PS GPIO to the PMod on the KV260.

### External Peripherals
An LED, a slider switch, and a buzzer are connected via the Pmod connector and a Grove adapter. These will be used to demonstrate that the PS GPIO are working.

### Download the tutorial overlay
The `ps_gpio_kv260.bit` and `ps_gpio_kv260.hwh` files are in the `ps_gpio` directory local to this folder.
The bitstream can be downloaded using the PYNQ `Overlay` class.
```
from pynq import Overlay
ps_gpio_design = Overlay("./ps_gpio/ps_gpio_kv260.bit")
```
## PYNQ GPIO class
The PYNQ GPIO class will be used to access the PS GPIO.
```
from pynq import GPIO
```
### GPIO help
### Create Python GPIO objects for the led, slider and buzzer and set the direction:
```
led = GPIO(GPIO.get_gpio_pin(6), 'out')
buzzer = GPIO(GPIO.get_gpio_pin(0), 'out')
slider = GPIO(GPIO.get_gpio_pin(1), 'in')
slider_led = GPIO(GPIO.get_gpio_pin(5), 'out')
# = GPIO(GPIO.get_gpio_pin(2), 'out')
# = GPIO(GPIO.get_gpio_pin(3), 'out')
# = GPIO(GPIO.get_gpio_pin(4), 'out')
# = GPIO(GPIO.get_gpio_pin(7), 'out')
```
### led.write() help
## Test LED
Turn on the LED
Turn off the LED
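The code cells for these two steps are not shown in this copy of the notebook; assuming the `led` object created earlier, each is a single `write()` call. The stand-in class below mimics only the `write()`/`read()` interface of a PYNQ GPIO pin so the snippet also runs without the board — on hardware you would use the real `led` object instead:

```python
# Minimal stand-in for a PYNQ GPIO output pin (assumption: only
# write()/read() are needed here); not part of the PYNQ library.
class FakeGPIO:
    def __init__(self):
        self.value = 0
    def write(self, value):
        self.value = value
    def read(self):
        return self.value

led = FakeGPIO()
led.write(1)  # turn on the LED
led.write(0)  # turn off the LED
```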
## Blinky
```
from time import sleep
DELAY = 0.1
for i in range(20):
led.write(0)
sleep(DELAY)
led.write(1)
sleep(DELAY)
```
### Slider
Read from Slider
```
for i in range(50):
slider_value = slider.read()
slider_led.write(slider_value)
led.write(slider_value)
sleep(DELAY)
```
### Buzzer
```
buzzer.write(1)
buzzer.write(0)
def play_sound(frequency, duration=100):
period = 1/frequency
timeHigh = period/2
for i in range(0, int(duration)): #, int(timeHigh*1000)):
buzzer.write(1)
sleep(timeHigh)
buzzer.write(0)
sleep(timeHigh)
```
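Note the timing arithmetic inside `play_sound()`: toggling the pin every half period produces a square wave at the requested frequency, and `duration` counts full cycles rather than seconds. A quick check of the numbers (the frequency here is an arbitrary example tone):

```python
frequency = 440                # Hz; arbitrary example tone
period = 1 / frequency         # seconds per full on/off cycle
time_high = period / 2         # pin is high for half of each cycle
cycles = 100                   # play_sound's `duration` is a cycle count
total_time = cycles * period   # actual sound length in seconds
```

So `play_sound(5000)` with the default `duration=100` lasts 100/5000 = 0.02 s, which is why the alarm-clock loop below repeats it.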
Alarm clock
```
for i in range(10):
play_sound(5000)
sleep(.1)
```
### Use an IPython Widget to control the buzzer
The following example uses an IPython `IntSlider` widget to call the `play_sound()` function defined above.
```
from ipywidgets import interact
import ipywidgets as widgets
interact(play_sound, frequency=widgets.IntSlider(min=500, max=10000, step=500, value=500), duration=100);
```
| github_jupyter |
# Example 1d: Spin-Bath model, fitting of spectrum and correlation functions
### Introduction
The HEOM method solves the dynamics and steady state of a system and its environment, the latter of which is encoded in a set of auxiliary density matrices.
In this example we show the evolution of a single two-level system in contact with a single Bosonic environment. The properties of the system are encoded in a Hamiltonian and a coupling operator, which describes how it is coupled to the environment.
The Bosonic environment is implicitly assumed to obey a particular Hamiltonian (see paper), the parameters of which are encoded in the spectral density, and subsequently the free-bath correlation functions.
In the example below we show how to model an Ohmic environment with an exponential cut-off in two ways. First, we fit the spectrum with a set of underdamped Brownian oscillator functions. Second, we evaluate the correlation functions and fit those with a particular choice of exponential functions.
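Both routes end in the same place: a bath correlation function written as a sum of exponentials, C(t) ≈ Σ_k c_k e^(−v_k t), which is the form the HEOM solver consumes. A tiny sketch of that representation (the coefficients below are illustrative placeholders, not the fitted values computed in this notebook):

```python
import numpy as np

# Illustrative exponential decomposition C(t) = sum_k c_k * exp(-v_k * t);
# ck and vk are placeholders, not the notebook's fitted parameters.
ck = [0.5 + 0.0j, 0.25 + 0.1j]
vk = [1.0 + 0.5j, 2.0 - 0.5j]

def corr(t):
    """Reconstruct the correlation function from its exponential terms."""
    return sum(c * np.exp(-v * t) for c, v in zip(ck, vk))
```

At t = 0 this returns Σ_k c_k, so the fitted coefficients must sum to C(0); the `checker2` helpers later in the notebook perform exactly this reconstruction against the analytic correlation function.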
```
%pylab inline
from qutip import *
%load_ext autoreload
%autoreload 2
from bofin.heom import BosonicHEOMSolver
def cot(x):
return 1./np.tan(x)
# Defining the system Hamiltonian
eps = .0 # Energy of the 2-level system.
Del = .2 # Tunnelling term
Hsys = 0.5 * eps * sigmaz() + 0.5 * Del* sigmax()
# Initial state of the system.
rho0 = basis(2,0) * basis(2,0).dag()
#Import mpmath functions for evaluation of correlation functions
from mpmath import mp
from mpmath import zeta
from mpmath import gamma
mp.dps = 15; mp.pretty = True
Q = sigmaz()
alpha = 3.25
T = 0.5
wc = 1
beta = 1/T
s = 1
tlist = np.linspace(0, 10, 5000)
tlist3 = linspace(0,15,50000)
#note: the arguments to zeta should be in as high precision as possible, might need some adjustment
# see http://mpmath.org/doc/current/basics.html#providing-correct-input
ct = [complex((1/pi)*alpha * wc**(1-s) * beta**(-(s+1)) * (zeta(s+1,(1+beta*wc-1.0j*wc*t)/(beta*wc)) +
zeta(s+1,(1+1.0j*wc*t)/(beta*wc)))) for t in tlist]
#also check long timescales
ctlong = [complex((1/pi)*alpha * wc**(1-s) * beta**(-(s+1)) * (zeta(s+1,(1+beta*wc-1.0j*wc*t)/(beta*wc)) +
zeta(s+1,(1+1.0j*wc*t)/(beta*wc)))) for t in tlist3]
corrRana = real(ctlong)
corrIana = imag(ctlong)
pref = 1.
# let's try fitting the spectrum
# use the underdamped case with the Meier-Tannor form
wlist = np.linspace(0, 25, 20000)
from scipy.optimize import curve_fit
# separate functions for plotting later:
def fit_func_nocost(x, a, b, c, N):
tot = 0
for i in range(N):
tot+= 2 * a[i] * b[i] * (x)/(((x+c[i])**2 + (b[i]**2))*((x-c[i])**2 + (b[i]**2)))
cost = 0.
return tot
def wrapper_fit_func_nocost(x, N, *args):
a, b, c = list(args[0][:N]), list(args[0][N:2*N]),list(args[0][2*N:3*N])
# print("debug")
return fit_func_nocost(x, a, b, c, N)
# function that evaluates values with fitted params at
# given inputs
def checker(tlist, vals, N):
y = []
for i in tlist:
# print(i)
y.append(wrapper_fit_func_nocost(i, N, vals))
return y
#######
#Real part
def wrapper_fit_func(x, N, *args):
a, b, c = list(args[0][:N]), list(args[0][N:2*N]),list(args[0][2*N:3*N])
# print("debug")
return fit_func(x, a, b, c, N)
def fit_func(x, a, b, c, N):
tot = 0
for i in range(N):
tot+= 2 * a[i] * b[i] * (x)/(((x+c[i])**2 + (b[i]**2))*((x-c[i])**2 + (b[i]**2)))
cost = 0.
#for i in range(N):
#print(i)
# cost += ((corrRana[0]-a[i]*np.cos(d[i])))
tot+=0.0*cost
return tot
def fitterR(ans, tlist, k):
# the actual computing of fit
popt = []
pcov = []
# tries to fit for k exponents
for i in range(k):
#params_0 = [0]*(2*(i+1))
params_0 = [0.]*(3*(i+1))
upper_a = 100*abs(max(ans, key = abs))
#sets initial guess
guess = []
#aguess = [ans[0]]*(i+1)#[max(ans)]*(i+1)
aguess = [abs(max(ans, key = abs))]*(i+1)
bguess = [1*wc]*(i+1)
cguess = [1*wc]*(i+1)
guess.extend(aguess)
guess.extend(bguess)
guess.extend(cguess)
# sets bounds
# a's = anything , b's negative
# sets lower bound
b_lower = []
alower = [-upper_a]*(i+1)
blower = [0.1*wc]*(i+1)
clower = [0.1*wc]*(i+1)
b_lower.extend(alower)
b_lower.extend(blower)
b_lower.extend(clower)
# sets higher bound
b_higher = []
ahigher = [upper_a]*(i+1)
#bhigher = [np.inf]*(i+1)
bhigher = [100*wc]*(i+1)
chigher = [100*wc]*(i+1)
b_higher.extend(ahigher)
b_higher.extend(bhigher)
b_higher.extend(chigher)
param_bounds = (b_lower, b_higher)
p1, p2 = curve_fit(lambda x, *params_0: wrapper_fit_func(x, i+1, \
params_0), tlist, ans, p0=guess, bounds = param_bounds,sigma=[0.0001 for w in wlist], maxfev = 1000000000)
popt.append(p1)
pcov.append(p2)
print(i+1)
return popt
# print(popt)
J = [w * alpha * e**(-w/wc) for w in wlist]
k = 4
popt1 = fitterR(J, wlist, k)
for i in range(k):
y = checker(wlist, popt1[i],i+1)
print(popt1[i])
plt.plot(wlist, J, wlist, y)
plt.show()
lam = list(popt1[k-1])[:k]
gamma = list(popt1[k-1])[k:2*k] #damping terms
w0 = list(popt1[k-1])[2*k:3*k] #w0 terms
print(lam)
print(gamma)
print(w0)
lamT = []
print(lam)
print(gamma)
print(w0)
fig, axes = plt.subplots(1, 1, sharex=True, figsize=(8,8))
axes.plot(wlist, J, 'r--', linewidth=2, label="original")
for kk,ll in enumerate(lam):
#axes.plot(wlist, [lam[kk] * gamma[kk] * (w)/(((w**2-w0[kk]**2)**2 + (gamma[kk]**2*w**2))) for w in wlist],linewidth=2)
axes.plot(wlist, [2* lam[kk] * gamma[kk] * (w)/(((w+w0[kk])**2 + (gamma[kk]**2))*((w-w0[kk])**2 + (gamma[kk]**2))) for w in wlist],linewidth=2, label="fit")
axes.set_xlabel(r'$w$', fontsize=28)
axes.set_ylabel(r'J', fontsize=28)
axes.legend()
fig.savefig('noisepower.eps')
wlist2 = np.linspace(-10,10 , 50000)
s1 = [w * alpha * e**(-abs(w)/wc) * ((1/(e**(w/T)-1))+1) for w in wlist2]
s2 = [sum([(2* lam[kk] * gamma[kk] * (w)/(((w+w0[kk])**2 + (gamma[kk]**2))*((w-w0[kk])**2 + (gamma[kk]**2)))) * ((1/(e**(w/T)-1))+1) for kk,lamkk in enumerate(lam)]) for w in wlist2]
fig, axes = plt.subplots(1, 1, sharex=True, figsize=(8,8))
axes.plot(wlist2, s1, 'r', linewidth=2,label="original")
axes.plot(wlist2, s2, 'b', linewidth=2,label="fit")
axes.set_xlabel(r'$w$', fontsize=28)
axes.set_ylabel(r'S(w)', fontsize=28)
#axes.axvline(x=Del)
print(min(s2))
axes.legend()
#fig.savefig('powerspectrum.eps')
#J(w>0) * (n(w>w)+1)
def cot(x):
return 1./np.tan(x)
def coth(x):
"""
Calculates the coth function.
Parameters
----------
x: np.ndarray
Any numpy array or list like input.
Returns
-------
cothx: ndarray
The coth function applied to the input.
"""
return 1/np.tanh(x)
#underdamped meier tannior version with terminator
TermMax = 1000
TermOps = 0.*spre(sigmaz())
Nk = 1 # number of exponentials in the Matsubara expansion
pref = 1
ckAR = []
vkAR = []
ckAI = []
vkAI = []
for kk, ll in enumerate(lam):
#print(kk)
lamt = lam[kk]
Om = w0[kk]
Gamma = gamma[kk]
print(T)
print(coth(beta*(Om+1.0j*Gamma)/2))
ckAR_temp = [(lamt/(4*Om))*coth(beta*(Om+1.0j*Gamma)/2),(lamt/(4*Om))*coth(beta*(Om-1.0j*Gamma)/2)]
for k in range(1,Nk+1):
#print(k)
ek = 2*pi*k/beta
ckAR_temp.append((-2*lamt*2*Gamma/beta)*ek/(((Om+1.0j*Gamma)**2+ek**2)*((Om-1.0j*Gamma)**2+ek**2)))
term = 0
for k in range(Nk+1,TermMax):
#print(k)
ek = 2*pi*k/beta
ck = ((-2*lamt*2*Gamma/beta)*ek/(((Om+1.0j*Gamma)**2+ek**2)*((Om-1.0j*Gamma)**2+ek**2)))
term += ck/ek
ckAR.extend(ckAR_temp)
vkAR_temp = [-1.0j*Om+Gamma,1.0j*Om+Gamma]
vkAR_temp.extend([2 * np.pi * k * T + 0.j for k in range(1,Nk+1)])
vkAR.extend(vkAR_temp)
factor=1./4.
ckAI.extend([-factor*lamt*1.0j/(Om),factor*lamt*1.0j/(Om)])
vkAI.extend( [-(-1.0j*(Om) - Gamma),-(1.0j*(Om) - Gamma)])
TermOps += term * (2*spre(Q)*spost(Q.dag()) - spre(Q.dag()*Q) - spost(Q.dag()*Q))
print(ckAR)
print(vkAR)
Q2 = []
NR = len(ckAR)
NI = len(ckAI)
Q2.extend([ sigmaz() for kk in range(NR)])
Q2.extend([ sigmaz() for kk in range(NI)])
options = Options(nsteps=15000, store_states=True, rtol=1e-14, atol=1e-14)
#corrRana = real(ct)
#corrIana = imag(ct)
corrRana = real(ctlong)
corrIana = imag(ctlong)
def checker2(tlisttemp):
y = []
for i in tlisttemp:
# print(i)
temp = []
for kkk,ck in enumerate(ckAR):
temp.append(ck*exp(-vkAR[kkk]*i))
y.append(sum(temp))
return y
yR = checker2(tlist3)
# function that evaluates values with fitted params at
# given inputs
def checker2(tlisttemp):
y = []
for i in tlisttemp:
# print(i)
temp = []
for kkk,ck in enumerate(ckAI):
if i==0:
print(vkAI[kkk])
temp.append(ck*exp(-vkAI[kkk]*i))
y.append(sum(temp))
return y
yI = checker2(tlist3)
matplotlib.rcParams['figure.figsize'] = (7, 5)
matplotlib.rcParams['axes.titlesize'] = 25
matplotlib.rcParams['axes.labelsize'] = 30
matplotlib.rcParams['xtick.labelsize'] = 28
matplotlib.rcParams['ytick.labelsize'] = 28
matplotlib.rcParams['legend.fontsize'] = 20
matplotlib.rcParams['axes.grid'] = False
matplotlib.rcParams['savefig.bbox'] = 'tight'
matplotlib.rcParams['lines.markersize'] = 5
matplotlib.rcParams['font.family'] = 'STIXgeneral'
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams["font.serif"] = "STIX"
matplotlib.rcParams['text.usetex'] = False
tlist2 = tlist3
from cycler import cycler
wlist2 = np.linspace(-2*pi*4,2 * pi *4 , 50000)
wlist2 = np.linspace(-7,7 , 50000)
fig = plt.figure(figsize=(12,10))
grid = plt.GridSpec(2, 2, wspace=0.4, hspace=0.3)
default_cycler = (cycler(color=['r', 'g', 'b', 'y','c','m','k']) +
cycler(linestyle=['-', '--', ':', '-.',(0, (1, 10)), (0, (5, 10)),(0, (3, 10, 1, 10))]))
plt.rc('axes',prop_cycle=default_cycler )
axes1 = fig.add_subplot(grid[0,0])
axes1.set_yticks([0.,1.])
axes1.set_yticklabels([0,1])
axes1.plot(tlist2, corrRana,"r",linewidth=3,label="Original")
axes1.plot(tlist2, yR,"g",dashes=[3,3],linewidth=2,label="Reconstructed")
axes1.legend(loc=0)
axes1.set_ylabel(r'$C_R(t)$',fontsize=28)
axes1.set_xlabel(r'$t\;\omega_c$',fontsize=28)
axes1.locator_params(axis='y', nbins=4)
axes1.locator_params(axis='x', nbins=4)
axes1.text(2.,1.5,"(a)",fontsize=28)
axes2 = fig.add_subplot(grid[0,1])
axes2.set_yticks([0.,-0.4])
axes2.set_yticklabels([0,-0.4])
axes2.plot(tlist2, corrIana,"r",linewidth=3,label="Original")
axes2.plot(tlist2, yI,"g",dashes=[3,3], linewidth=2,label="Reconstructed")
axes2.legend(loc=0)
axes2.set_ylabel(r'$C_I(t)$',fontsize=28)
axes2.set_xlabel(r'$t\;\omega_c$',fontsize=28)
axes2.locator_params(axis='y', nbins=4)
axes2.locator_params(axis='x', nbins=4)
axes2.text(12.5,-0.2,"(b)",fontsize=28)
axes3 = fig.add_subplot(grid[1,0])
axes3.set_yticks([0.,.5,1])
axes3.set_yticklabels([0,0.5,1])
axes3.plot(wlist, J, "r",linewidth=3,label="$J(\omega)$ original")
y = checker(wlist, popt1[3],4)
axes3.plot(wlist, y, "g", dashes=[3,3], linewidth=2, label="$J(\omega)$ Fit $k_J = 4$")
axes3.set_ylabel(r'$J(\omega)$',fontsize=28)
axes3.set_xlabel(r'$\omega/\omega_c$',fontsize=28)
axes3.locator_params(axis='y', nbins=4)
axes3.locator_params(axis='x', nbins=4)
axes3.legend(loc=0)
axes3.text(3,1.1,"(c)",fontsize=28)
s1 = [w * alpha * e**(-abs(w)/wc) * ((1/(e**(w/T)-1))+1) for w in wlist2]
s2 = [sum([(2* lam[kk] * gamma[kk] * (w)/(((w+w0[kk])**2 + (gamma[kk]**2))*((w-w0[kk])**2 + (gamma[kk]**2)))) * ((1/(e**(w/T)-1))+1) for kk,lamkk in enumerate(lam)]) for w in wlist2]
axes4 = fig.add_subplot(grid[1,1])
axes4.set_yticks([0.,1])
axes4.set_yticklabels([0,1])
axes4.plot(wlist2, s1,"r",linewidth=3,label="Original")
axes4.plot(wlist2, s2, "g", dashes=[3,3], linewidth=2,label="Reconstructed")
axes4.set_xlabel(r'$\omega/\omega_c$', fontsize=28)
axes4.set_ylabel(r'$S(\omega)$', fontsize=28)
axes4.locator_params(axis='y', nbins=4)
axes4.locator_params(axis='x', nbins=4)
axes4.legend()
axes4.text(4.,1.2,"(d)",fontsize=28)
fig.savefig("figures/figFiJspec.pdf")
NC = 11
NR = len(ckAR)
NI = len(ckAI)
print(NR)
print(NI)
Q2 = []
Q2.extend([ sigmaz() for kk in range(NR)])
Q2.extend([ sigmaz() for kk in range(NI)])
#Q2 = [Q for kk in range(NR+NI)]
#print(Q2)
options = Options(nsteps=1500, store_states=True, rtol=1e-12, atol=1e-12, method="bdf")
import time
start = time.time()
print("start")
Ltot = liouvillian(Hsys) + TermOps
HEOMFit = BosonicHEOMSolver(Ltot, Q2, ckAR, ckAI, vkAR, vkAI, NC, options=options)
print("end")
end = time.time()
print(end - start)
#tlist4 = np.linspace(0, 50, 1000)
tlist4 = np.linspace(0, 4*pi/Del, 600)
tlist4 = np.linspace(0, 30*pi/Del, 600)
rho0 = basis(2,0) * basis(2,0).dag()
import time
start = time.time()
resultFit = HEOMFit.run(rho0, tlist4)
end = time.time()
print(end - start)
# Define some operators with which we will measure the system
# 1,1 element of density matrix - corresponding to the ground state
P11p=basis(2,0) * basis(2,0).dag()
P22p=basis(2,1) * basis(2,1).dag()
# 1,2 element of density matrix - corresponding to coherence
P12p=basis(2,0) * basis(2,1).dag()
# Calculate expectation values in the bases
P11exp11K4NK1TL = expect(resultFit.states, P11p)
P22exp11K4NK1TL = expect(resultFit.states, P22p)
P12exp11K4NK1TL = expect(resultFit.states, P12p)
tlist3 = linspace(0,15,50000)
#also check long timescales
ctlong = [complex((1/pi)*alpha * wc**(1-s) * beta**(-(s+1)) * (zeta(s+1,(1+beta*wc-1.0j*wc*t)/(beta*wc)) +
zeta(s+1,(1+1.0j*wc*t)/(beta*wc)))) for t in tlist3]
corrRana = real(ctlong)
corrIana = imag(ctlong)
tlist2 = tlist3
from scipy.optimize import curve_fit
# separate functions for plotting later:
def fit_func_nocost(x, a, b, c, N):
tot = 0
for i in range(N):
# print(i)
tot += a[i]*np.exp(b[i]*x)*np.cos(c[i]*x)
cost = 0.
return tot
def wrapper_fit_func_nocost(x, N, *args):
a, b, c = list(args[0][:N]), list(args[0][N:2*N]), list(args[0][2*N:3*N])
# print("debug")
return fit_func_nocost(x, a, b, c, N)
# function that evaluates values with fitted params at
# given inputs
def checker(tlist_local, vals, N):
y = []
for i in tlist_local:
# print(i)
y.append(wrapper_fit_func_nocost(i, N, vals))
return y
#######
#Real part
def wrapper_fit_func(x, N, *args):
a, b, c = list(args[0][:N]), list(args[0][N:2*N]), list(args[0][2*N:3*N])
# print("debug")
return fit_func(x, a, b, c, N)
def fit_func(x, a, b, c, N):
tot = 0
for i in range(N):
# print(i)
tot += a[i]*np.exp(b[i]*x)*np.cos(c[i]*x )
cost = 0.
for i in range(N):
#print(i)
cost += ((corrRana[0]-a[i]))
tot+=0.0*cost
return tot
def fitterR(ans, tlist_local, k):
# the actual computing of fit
popt = []
pcov = []
# tries to fit for k exponents
for i in range(k):
#params_0 = [0]*(2*(i+1))
params_0 = [0.]*(3*(i+1))
upper_a = 20*abs(max(ans, key = abs))
#sets initial guess
guess = []
#aguess = [ans[0]]*(i+1)#[max(ans)]*(i+1)
aguess = [abs(max(ans, key = abs))]*(i+1)
bguess = [-wc]*(i+1)
cguess = [wc]*(i+1)
guess.extend(aguess)
guess.extend(bguess)
guess.extend(cguess) #c
# sets bounds
# a's = anything , b's negative
# sets lower bound
b_lower = []
alower = [-upper_a]*(i+1)
blower = [-np.inf]*(i+1)
clower = [0]*(i+1)
b_lower.extend(alower)
b_lower.extend(blower)
b_lower.extend(clower)
# sets higher bound
b_higher = []
ahigher = [upper_a]*(i+1)
#bhigher = [np.inf]*(i+1)
bhigher = [0.1]*(i+1)
chigher = [np.inf]*(i+1)
b_higher.extend(ahigher)
b_higher.extend(bhigher)
b_higher.extend(chigher)
param_bounds = (b_lower, b_higher)
p1, p2 = curve_fit(lambda x, *params_0: wrapper_fit_func(x, i+1, \
params_0), tlist_local, ans, p0=guess, sigma=[0.1 for t in tlist_local], bounds = param_bounds, maxfev = 100000000)
popt.append(p1)
pcov.append(p2)
print(i+1)
return popt
# print(popt)
k = 3
popt1 = fitterR(corrRana, tlist2, k)
for i in range(k):
y = checker(tlist2, popt1[i],i+1)
plt.plot(tlist2, corrRana, tlist2, y)
plt.show()
#y = checker(tlist3, popt1[k-1],k)
#plt.plot(tlist3, real(ctlong), tlist3, y)
#plt.show()
#######
#Imag part
def fit_func2(x, a, b, c, N):
tot = 0
for i in range(N):
# print(i)
tot += a[i]*np.exp(b[i]*x)*np.sin(c[i]*x)
cost = 0.
for i in range(N):
# print(i)
cost += (corrIana[0]-a[i])
tot+=0*cost
return tot
# actual fitting function
def wrapper_fit_func2(x, N, *args):
a, b, c = list(args[0][:N]), list(args[0][N:2*N]), list(args[0][2*N:3*N])
# print("debug")
return fit_func2(x, a, b, c, N)
# function that evaluates values with fitted params at
# given inputs
def checker2(tlist_local, vals, N):
y = []
for i in tlist_local:
# print(i)
y.append(wrapper_fit_func2(i, N, vals))
return y
def fitterI(ans, tlist_local, k):
# the actual computing of fit
popt = []
pcov = []
# tries to fit for k exponents
for i in range(k):
#params_0 = [0]*(2*(i+1))
params_0 = [0.]*(3*(i+1))
upper_a = abs(max(ans, key = abs))*5
#sets initial guess
guess = []
#aguess = [ans[0]]*(i+1)#[max(ans)]*(i+1)
aguess = [-abs(max(ans, key = abs))]*(i+1)
bguess = [-2]*(i+1)
cguess = [1]*(i+1)
guess.extend(aguess)
guess.extend(bguess)
guess.extend(cguess) #c
# sets bounds
# a's = anything , b's negative
# sets lower bound
b_lower = []
alower = [-upper_a]*(i+1)
blower = [-100]*(i+1)
clower = [0]*(i+1)
b_lower.extend(alower)
b_lower.extend(blower)
b_lower.extend(clower)
# sets higher bound
b_higher = []
ahigher = [upper_a]*(i+1)
bhigher = [0.01]*(i+1)
chigher = [100]*(i+1)
b_higher.extend(ahigher)
b_higher.extend(bhigher)
b_higher.extend(chigher)
param_bounds = (b_lower, b_higher)
p1, p2 = curve_fit(lambda x, *params_0: wrapper_fit_func2(x, i+1, \
params_0), tlist_local, ans, p0=guess, sigma=[0.0001 for t in tlist_local], bounds = param_bounds, maxfev = 100000000)
popt.append(p1)
pcov.append(p2)
print(i+1)
return popt
# print(popt)
k1 = 3
popt2 = fitterI(corrIana, tlist2, k1)
for i in range(k1):
y = checker2(tlist2, popt2[i], i+1)
plt.plot(tlist2, corrIana, tlist2, y)
plt.show()
#tlist3 = linspace(0,1,1000)
#y = checker(tlist3, popt2[k-1],k)
#plt.plot(tlist3, imag(ctlong), tlist3, y)
#plt.show()
#ckAR1 = list(popt1[k-1])[:len(list(popt1[k-1]))//2]
ckAR1 = list(popt1[k-1])[:k]
#0.5 from cosine
ckAR = [0.5*x+0j for x in ckAR1]
#dress with exp(id)
#for kk in range(k):
# ckAR[kk] = ckAR[kk]*exp(1.0j*list(popt1[k-1])[3*k+kk])
ckAR.extend(conjugate(ckAR)) #just directly double
# vkAR, vkAI
vkAR1 = list(popt1[k-1])[k:2*k] #damping terms
wkAR1 = list(popt1[k-1])[2*k:3*k] #oscillating term
vkAR = [-x-1.0j*wkAR1[kk] for kk, x in enumerate(vkAR1)] #combine
vkAR.extend([-x+1.0j*wkAR1[kk] for kk, x in enumerate(vkAR1)]) #double
print(ckAR)
print(vkAR)
#ckAR1 = list(popt1[k-1])[:len(list(popt1[k-1]))//2]
ckAI1 = list(popt2[k1-1])[:k1]
#0.5 from cosine
ckAI = [-1.0j*0.5*x for x in ckAI1]
#dress with exp(id)
#for kk in range(k1):
# ckAI[kk] = ckAI[kk]*exp(1.0j*list(popt2[k1-1])[3*k1+kk])
ckAI.extend(conjugate(ckAI)) #just directly double
# vkAR, vkAI
vkAI1 = list(popt2[k1-1])[k1:2*k1] #damping terms
wkAI1 = list(popt2[k1-1])[2*k1:3*k1] #oscillating term
vkAI = [-x-1.0j*wkAI1[kk] for kk, x in enumerate(vkAI1)] #combine
vkAI.extend([-x+1.0j*wkAI1[kk] for kk, x in enumerate(vkAI1)]) #double
print(ckAI)
print(vkAI)
#check the spectrum of the fit
def spectrum_matsubara_approx(w, ck, vk):
"""
Calculates the approximate Matsubara correlation spectrum
from ck and vk.
Parameters
==========
w: np.ndarray
A 1D numpy array of frequencies.
ck: float
The coefficient of the exponential function.
vk: float
The frequency of the exponential function.
"""
return ck*2*(vk)/(w**2 + vk**2)
def spectrum_approx(w, ck,vk):
"""
Calculates the approximate non Matsubara correlation spectrum
from the bath parameters.
Parameters
==========
w: np.ndarray
A 1D numpy array of frequencies.
coup_strength: float
The coupling strength parameter.
bath_broad: float
A parameter characterizing the FWHM of the spectral density, i.e.,
the bath broadening.
bath_freq: float
The bath frequency.
"""
sw = []
for kk,ckk in enumerate(ck):
#sw.append((ckk*(real(vk[kk]))/((w-imag(vk[kk]))**2+(real(vk[kk])**2))))
sw.append((ckk*(real(vk[kk]))/((w-imag(vk[kk]))**2+(real(vk[kk])**2))))
return sw
from cycler import cycler
wlist2 = np.linspace(-7,7 , 50000)
s1 = [w * alpha * e**(-abs(w)/wc) * ((1/(e**(w/T)-1))+1) for w in wlist2]
s2 = spectrum_approx(wlist2,ckAR,vkAR)
s2.extend(spectrum_approx(wlist2,[1.0j*ckk for ckk in ckAI],vkAI))
#s2 = spectrum_approx(wlist2,ckAI,vkAI)
print(len(s2))
s2sum = [0. for w in wlist2]
for s22 in s2:
for kk,ww in enumerate(wlist2):
s2sum[kk] += s22[kk]
fig = plt.figure(figsize=(12,10))
grid = plt.GridSpec(2, 2, wspace=0.4, hspace=0.3)
default_cycler = (cycler(color=['r', 'g', 'b', 'y','c','m','k']) +
cycler(linestyle=['-', '--', ':', '-.',(0, (1, 10)), (0, (5, 10)),(0, (3, 10, 1, 10))]))
plt.rc('axes',prop_cycle=default_cycler )
axes1 = fig.add_subplot(grid[0,0])
axes1.set_yticks([0.,1.])
axes1.set_yticklabels([0,1])
y = checker(tlist2, popt1[2], 3)
axes1.plot(tlist2, corrRana,'r',linewidth=3,label="Original")
axes1.plot(tlist2, y,'g',dashes=[3,3],linewidth=3,label="Fit $k_R = 3$")
axes1.legend(loc=0)
axes1.set_ylabel(r'$C_R(t)$',fontsize=28)
axes1.set_xlabel(r'$t\;\omega_c$',fontsize=28)
axes1.locator_params(axis='y', nbins=3)
axes1.locator_params(axis='x', nbins=3)
axes1.text(2.5,0.5,"(a)",fontsize=28)
axes2 = fig.add_subplot(grid[0,1])
y = checker2(tlist2, popt2[2], 3)
axes2.plot(tlist2, corrIana,'r',linewidth=3,label="Original")
axes2.plot(tlist2, y,'g',dashes=[3,3],linewidth=3,label="Fit $k_I = 3$")
axes2.legend(loc=0)
axes2.set_yticks([0.,-0.4])
axes2.set_yticklabels([0,-0.4])
axes2.set_ylabel(r'$C_I(t)$',fontsize=28)
axes2.set_xlabel(r'$t\;\omega_c$',fontsize=28)
axes2.locator_params(axis='y', nbins=3)
axes2.locator_params(axis='x', nbins=3)
axes2.text(12.5,-0.1,"(b)",fontsize=28)
axes3 = fig.add_subplot(grid[1,0:])
axes3.plot(wlist2, s1, 'r',linewidth=3,label="$S(\omega)$ original")
axes3.plot(wlist2, real(s2sum), 'g',dashes=[3,3],linewidth=3, label="$S(\omega)$ reconstruction")
axes3.set_yticks([0.,1.])
axes3.set_yticklabels([0,1])
axes3.set_xlim(-5,5)
axes3.set_ylabel(r'$S(\omega)$',fontsize=28)
axes3.set_xlabel(r'$\omega/\omega_c$',fontsize=28)
axes3.locator_params(axis='y', nbins=3)
axes3.locator_params(axis='x', nbins=3)
axes3.legend(loc=1)
axes3.text(-4,1.5,"(c)",fontsize=28)
fig.savefig("figures/figFitCspec.pdf")
Q2 = []
NR = len(ckAR)
NI = len(ckAI)
Q2.extend([ sigmaz() for kk in range(NR)])
Q2.extend([ sigmaz() for kk in range(NI)])
options = Options(nsteps=15000, store_states=True, rtol=1e-14, atol=1e-14)
NC = 11
#Q2 = [Q for kk in range(NR+NI)]
#print(Q2)
options = Options(nsteps=1500, store_states=True, rtol=1e-12, atol=1e-12, method="bdf")
import time
start = time.time()
#HEOMFit = BosonicHEOMSolver(Hsys, Q2, ckAR2, ckAI2, vkAR2, vkAI2, NC, options=options)
HEOMFitC = BosonicHEOMSolver(Hsys, Q2, ckAR, ckAI, vkAR, vkAI, NC, options=options)
print("hello")
end = time.time()
print(end - start)
tlist4 = np.linspace(0, 30*pi/Del, 600)
rho0 = basis(2,0) * basis(2,0).dag()
import time
start = time.time()
resultFit = HEOMFitC.run(rho0, tlist4)
end = time.time()
print(end - start)
# Define some operators with which we will measure the system
# 1,1 element of density matrix - corresponding to groundstate
P11p=basis(2,0) * basis(2,0).dag()
P22p=basis(2,1) * basis(2,1).dag()
# 1,2 element of density matrix - corresponding to coherence
P12p=basis(2,0) * basis(2,1).dag()
# Calculate expectation values in the bases
P11expC11k33L = expect(resultFit.states, P11p)
P22expC11k33L = expect(resultFit.states, P22p)
P12expC11k33L = expect(resultFit.states, P12p)
qsave(P11expC11k33L,'P11expC12k33L')
qsave(P11exp11K4NK1TL,'P11exp11K4NK1TL')
qsave(P11exp11K3NK1TL,'P11exp11K3NK1TL')
qsave(P11exp11K3NK2TL,'P11exp11K3NK2TL')
P11expC11k33L=qload('data/P11expC12k33L')
P11exp11K4NK1TL=qload('data/P11exp11K4NK1TL')
P11exp11K3NK1TL=qload('data/P11exp11K3NK1TL')
P11exp11K3NK2TL=qload('data/P11exp11K3NK2TL')
matplotlib.rcParams['figure.figsize'] = (7, 5)
matplotlib.rcParams['axes.titlesize'] = 25
matplotlib.rcParams['axes.labelsize'] = 30
matplotlib.rcParams['xtick.labelsize'] = 28
matplotlib.rcParams['ytick.labelsize'] = 28
matplotlib.rcParams['legend.fontsize'] = 28
matplotlib.rcParams['axes.grid'] = False
matplotlib.rcParams['savefig.bbox'] = 'tight'
matplotlib.rcParams['lines.markersize'] = 5
matplotlib.rcParams['font.family'] = 'STIXgeneral'
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams["font.serif"] = "STIX"
matplotlib.rcParams['text.usetex'] = False
tlist4 = np.linspace(0, 4*pi/Del, 600)
# Plot the results
fig, axes = plt.subplots(2, 1, sharex=True, figsize=(12,15))
axes[0].set_yticks([0.6,0.8,1])
axes[0].set_yticklabels([0.6,0.8,1])
axes[0].plot(tlist4, np.real(P11expC11k33L), 'y', linewidth=2, label="Correlation Function Fit $k_R=k_I=3$")
axes[0].plot(tlist4, np.real(P11exp11K3NK1TL), 'b-.', linewidth=2, label="Spectral Density Fit $k_J=3$, $N_k=1$ & Terminator")
axes[0].plot(tlist4, np.real(P11exp11K3NK2TL), 'r--', linewidth=2, label="Spectral Density Fit $k_J=3$, $N_k=2$ & Terminator")
axes[0].plot(tlist4, np.real(P11exp11K4NK1TL), 'g--', linewidth=2, label="Spectral Density Fit $k_J=4$, $N_k=1$ & Terminator")
axes[0].set_ylabel(r'$\rho_{11}$',fontsize=30)
axes[0].set_xlabel(r'$t\;\omega_c$',fontsize=30)
axes[0].locator_params(axis='y', nbins=3)
axes[0].locator_params(axis='x', nbins=3)
axes[0].legend(loc=0, fontsize=25)
axes[1].set_yticks([0,0.01])
axes[1].set_yticklabels([0,0.01])
#axes[0].plot(tlist4, np.real(P11exp11K3NK1TL)-np.real(P11expC11k33L), 'b-.', linewidth=2, label="Correlation Function Fit $k_R=k_I=3$")
axes[1].plot(tlist4, np.real(P11exp11K3NK1TL)-np.real(P11expC11k33L), 'b-.', linewidth=2, label="Spectral Density Fit $k_J=3$, $K=1$ & Terminator")
axes[1].plot(tlist4, np.real(P11exp11K3NK2TL)-np.real(P11expC11k33L), 'r--', linewidth=2, label="Spectral Density Fit $k_J=3$, $K=2$ & Terminator")
axes[1].plot(tlist4, np.real(P11exp11K4NK1TL)-np.real(P11expC11k33L), 'g--', linewidth=2, label="Spectral Density Fit $k_J=4$, $K=1$ & Terminator")
axes[1].set_ylabel(r'$\rho_{11}$ difference',fontsize=30)
axes[1].set_xlabel(r'$t\;\omega_c$',fontsize=30)
axes[1].locator_params(axis='y', nbins=3)
axes[1].locator_params(axis='x', nbins=3)
#axes[1].legend(loc=0, fontsize=25)
fig.savefig("figures/figFit.pdf")
tlist4 = np.linspace(0, 4*pi/Del, 600)
# Plot the results
fig, axes = plt.subplots(1, 1, sharex=True, figsize=(12,5))
axes.plot(tlist4, np.real(P12expC11k33L), 'y', linewidth=2, label="Correlation Function Fit $k_R=k_I=3$")
axes.plot(tlist4, np.real(P12exp11K3NK1TL), 'b-.', linewidth=2, label="Spectral Density Fit $k_J=3$, $K=1$ & Terminator")
axes.plot(tlist4, np.real(P12exp11K3NK2TL), 'r--', linewidth=2, label="Spectral Density Fit $k_J=3$, $K=2$ & Terminator")
axes.plot(tlist4, np.real(P12exp11K4NK1TL), 'g--', linewidth=2, label="Spectral Density Fit $k_J=4$, $K=1$ & Terminator")
axes.set_ylabel(r'$\rho_{12}$',fontsize=28)
axes.set_xlabel(r'$t\;\omega_c$',fontsize=28)
axes.locator_params(axis='y', nbins=6)
axes.locator_params(axis='x', nbins=6)
axes.legend(loc=0)
from qutip.ipynbtools import version_table
version_table()
```
| github_jupyter |
<a href="https://colab.research.google.com/github/Tessellate-Imaging/Monk_Object_Detection/blob/master/example_notebooks/4_efficientdet/train%20-%20with%20validation%20dataset.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Installation
- Run these commands
- git clone https://github.com/Tessellate-Imaging/Monk_Object_Detection.git
- cd Monk_Object_Detection/4_efficientdet/installation
- Select the right requirements file and run
- cat requirements.txt | xargs -n 1 -L 1 pip install
```
! git clone https://github.com/Tessellate-Imaging/Monk_Object_Detection.git
# For colab use the command below
! cd Monk_Object_Detection/4_efficientdet/installation && cat requirements_colab.txt | xargs -n 1 -L 1 pip install
# For Local systems and cloud select the right CUDA version
#! cd Monk_Object_Detection/4_efficientdet/installation && cat requirements_cuda9.0.txt | xargs -n 1 -L 1 pip install
```
# About the network
1. Paper on EfficientDet: https://arxiv.org/abs/1911.09070
2. Blog 1 on EfficientDet: https://towardsdatascience.com/efficientdet-scalable-and-efficient-object-detection-review-4472ffc34fd9
3. Blog 2 on EfficientDet: https://medium.com/@nainaakash012/efficientdet-scalable-and-efficient-object-detection-ea05ccd28427
# COCO Format - 1
## Dataset Directory Structure
```
../sample_dataset (root_dir)
  |
  |------ship (coco_dir)
          |
          |----images (img_dir)
          |      |
          |      |------Train (set_dir)
          |               |
          |               |---img1.jpg
          |               |---img2.jpg
          |               |---.........(and so on)
          |
          |----annotations
                 |
                 |---instances_Train.json  (instances_<set_dir>.json)
                 |---classes.txt
```
- instances_Train.json -> In proper COCO format
- classes.txt -> A list of classes in alphabetical order
For TrainSet
- root_dir = "../sample_dataset";
- coco_dir = "ship";
- img_dir = "images";
- set_dir = "Train";
Note: The annotation file name must match the set_dir name, i.e. instances_<set_dir>.json
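The path conventions above can be sanity-checked before training. The helper below is a hypothetical sketch (it is not part of the Monk API) using the example values from this section:

```python
import os

def check_coco_layout(root_dir, coco_dir, img_dir, set_dir):
    """Return the paths the layout above implies, mapped to whether they exist."""
    images = os.path.join(root_dir, coco_dir, img_dir, set_dir)
    # The annotation file name must match the set_dir name
    annotations = os.path.join(root_dir, coco_dir, "annotations",
                               "instances_{}.json".format(set_dir))
    classes = os.path.join(root_dir, coco_dir, "annotations", "classes.txt")
    return {
        images: os.path.isdir(images),
        annotations: os.path.isfile(annotations),
        classes: os.path.isfile(classes),
    }

print(check_coco_layout("../sample_dataset", "ship", "images", "Train"))
```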
# COCO Format - 2
## Dataset Directory Structure
```
../sample_dataset (root_dir)
  |
  |------ship (coco_dir)
          |
          |----ImagesTrain (set_dir)
          |      |
          |      |---img1.jpg
          |      |---img2.jpg
          |      |---.........(and so on)
          |
          |----annotations
                 |
                 |---instances_ImagesTrain.json  (instances_<set_dir>.json)
                 |---classes.txt
```
- instances_ImagesTrain.json -> In proper COCO format
- classes.txt -> A list of classes in alphabetical order
For TrainSet
- root_dir = "../sample_dataset";
- coco_dir = "ship";
- img_dir = "./";
- set_dir = "ImagesTrain";
Note: The annotation file name must match the set_dir name, i.e. instances_<set_dir>.json
# Sample Dataset Credits
credits: https://www.tejashwi.io/object-detection-with-fizyr-retinanet/
```
import os
import sys
sys.path.append("Monk_Object_Detection/4_efficientdet/lib/");
from train_detector import Detector
gtf = Detector();
root_dir = "Monk_Object_Detection/example_notebooks/sample_dataset";
coco_dir = "ship";
img_dir = "./";
set_dir = "Images";
gtf.Train_Dataset(root_dir, coco_dir, img_dir, set_dir, batch_size=8, image_size=512, use_gpu=True)
# Available models
# model_name="efficientnet-b0"
# model_name="efficientnet-b1"
# model_name="efficientnet-b2"
# model_name="efficientnet-b3"
# model_name="efficientnet-b4"
# model_name="efficientnet-b5"
# model_name="efficientnet-b6"
# model_name="efficientnet-b7"
# model_name="efficientnet-b8"
gtf.Model(model_name="efficientnet-b0");
# To resume training
#gtf.Model(model_name="efficientnet-b0", load_pretrained_model_from="path to model.pth");
gtf.Set_Hyperparams(lr=0.0001, val_interval=1, es_min_delta=0.0, es_patience=0)
gtf.Train(num_epochs=10, model_output_dir="trained/");
```
# Inference
```
import os
import sys
sys.path.append("Monk_Object_Detection/4_efficientdet/lib/");
from infer_detector import Infer
gtf = Infer();
gtf.Model(model_dir="trained/")
f = open("Monk_Object_Detection/example_notebooks/sample_dataset/ship/annotations/classes.txt", 'r');
class_list = f.readlines();
f.close();
for i in range(len(class_list)):
class_list[i] = class_list[i][:-1]
class_list
img_path = "Monk_Object_Detection/example_notebooks/sample_dataset/ship/test/img1.jpg";
scores, labels, boxes = gtf.Predict(img_path, class_list, vis_threshold=0.4);
from IPython.display import Image
Image(filename='output.jpg')
img_path = "Monk_Object_Detection/example_notebooks/sample_dataset/ship/test/img4.jpg";
scores, labels, boxes = gtf.Predict(img_path, class_list, vis_threshold=0.4);
from IPython.display import Image
Image(filename='output.jpg')
img_path = "Monk_Object_Detection/example_notebooks/sample_dataset/ship/test/img5.jpg";
scores, labels, boxes = gtf.Predict(img_path, class_list, vis_threshold=0.4);
from IPython.display import Image
Image(filename='output.jpg')
img_path = "Monk_Object_Detection/example_notebooks/sample_dataset/ship/test/img6.jpg";
scores, labels, boxes = gtf.Predict(img_path, class_list, vis_threshold=0.4);
from IPython.display import Image
Image(filename='output.jpg')
```
| github_jupyter |
# Generating conditional probability tables subject to constraints
```
import os
from pathlib import Path
from itertools import product
import numpy as np
import pandas as pd
from fake_data_for_learning.fake_data_for_learning import (
BayesianNodeRV, FakeDataBayesianNetwork, SampleValue
)
from fake_data_for_learning.utils import RandomCpt
from fake_data_for_learning.probability_polytopes import (
MapMultidimIndexToLinear, ProbabilityPolytope, ExpectationConstraint
)
```
Suppose we want to generate data from a discrete Bayesian network, such as
Product -> Days <- Rating,
where e.g. Product is the (insurance) product name, Rating is rating strength (i.e. market price / technical price) for a submission, and Days is the number of days to generate a quote for the submission.
The number of entries in probability and conditional probability tables to define this Bayesian network is
$ | Product | + | Rating | + | Product | \times | Rating | \times | Days |$.
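With the example cardinalities used in this notebook (3 products, 2 rating values, 4 day values), the formula gives a concrete count:

```python
# |Product| + |Rating| + |Product| * |Rating| * |Days|
n_product, n_rating, n_days = 3, 2, 4
n_entries = n_product + n_rating + n_product * n_rating * n_days
print(n_entries)  # 29
```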
For example, let us define Product and Rating as follows
```
product_values = ['financial', 'liability', 'property']
product_type = BayesianNodeRV('product_type', np.array([0.2, 0.5, 0.3]), values=product_values)
rating_values = range(2)
rating = BayesianNodeRV('rating', np.array([0.3, 0.7]))
```
Suppose that Days is also discrete, e.g.
```
days_values = range(4)
```
Then if we choose the ordering of the conditional probability table axes as Product, Rating, Days, we can generate the entries of the conditional probability table for Days conditioned on Product and Rating with `utils.RandomCpt`:
```
random_cpt = RandomCpt(len(product_values), len(rating_values), len(days_values))
X = random_cpt()
X[0, 0, :].sum()
```
So the total number of probability table entries to specify is, as in the formula above,
```
f'Number of probability table entries: {len(product_values) + len(rating_values) + (len(product_values) * len(rating_values) * len(days_values))}'
```
It would be nice to specify certain properties of the matrix without having to change entries individually. For example, we may want to insist that
\begin{equation*}
E(D | P = property) = 3.5 \\
E(D | P = financial) = 1.0 \\
E(D | P= liability) = 2.0
\end{equation*}
Denote the entries of the conditional probability table as
$$(\rho_{p, r | d})$$
Then the above constraints become
\begin{equation*}
\frac{1}{|R|} \sum_{r, d} d \, \rho_{\mathrm{property},\, r\, | d} = 3.5 \\
\frac{1}{|R|} \sum_{r, d} d \, \rho_{\mathrm{financial},\, r\, | d} = 1.0\\
\frac{1}{|R|} \sum_{r, d} d \, \rho_{\mathrm{liability},\, r\, | d} = 2.0.
\end{equation*}
As $(\rho)$ is a conditional probability table, we also have the constraints
\begin{equation*}
0 \leq \rho_{p,\,r\,|d} \leq 1 \textrm{ for all }(p,\,r,\,d),\\
\sum_{d} \rho_{p,\,r\,|d} = 1 \textrm{ for each pair } (p, \, r)
\end{equation*}
Together, these constraints define a convex polytope contained in the (probability) simplex $\Delta_{R-1} \subseteq \mathbb{R}^{R}$, where $R = |Product | \times | Rating | \times | Days|$ (see e.g. Chapter 1 of *Lectures on Algebraic Statistics*, Drton, Sturmfels, Sullivant). This polytope is defined as an intersection of half-spaces, i.e. using the so-called *H-representation* of the polytope; see *Lectures on Polytopes* by Ziegler, Chapters 0 and 1.
To generate a random (conditional) probability table subject to these constraints, the vertex-representation, or *V-representation*, of the probability polytope $P$ is much more useful: given a vertex matrix $V$, whose columns are the vertices of $P$ in $\mathbb{R}^R$, every point in $P$ can be obtained as
$$
x = V \cdot t,
$$
where $t \in \mathbb{R}^N$, with $N$ the number of vertices of $P$, and $t$ satisfying $0 \leq t_i \leq 1$, $\sum_i t_i = 1$.
Once we have determined the V-representation $V$, the problem of generating conditional probability tables subject to our expectation value constraints reduces to the much simpler problem of generating coefficient vectors $t$ in the standard simplex in $\mathbb{R}^N$.
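Such coefficient vectors can be drawn with a flat Dirichlet distribution, which is uniform on the simplex. A minimal numpy sketch, where the vertex matrix is a toy stand-in rather than one computed by `get_vertex_representation`:

```python
import numpy as np

rng = np.random.default_rng(42)
# Toy V-representation: each column is a vertex of the polytope
V = np.eye(3)
N = V.shape[1]
t = rng.dirichlet(np.ones(N))  # t_i >= 0 and sum(t) == 1
x = V @ t                      # a random point inside the polytope
print(x)
```

With the identity vertex matrix the polytope is the probability simplex itself, so `x` is a random probability vector.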
Before we get to our goal of generating these probability tables for our hit ratio problem, let's look at elementary examples.
## (Conditional) Probability Polytopes
The simplest example of a probability polytope is that of a Bernoulli random variable.
```
bernoulli = ProbabilityPolytope(('outcome',), dict(outcome=range(2)))
A, b = bernoulli.get_probability_half_planes()
print(A, '\n', b)
```
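The half-plane (H-) description just printed can be written down by hand in this simplest case. A sketch in numpy — the exact row ordering returned by `get_probability_half_planes` may differ:

```python
import numpy as np

# H-representation of the 1-simplex {x : x >= 0, x0 + x1 = 1},
# written as A x <= b (the equality split into two opposite inequalities)
A = np.array([[-1.0,  0.0],   # -x0 <= 0
              [ 0.0, -1.0],   # -x1 <= 0
              [ 1.0,  1.0],   #  x0 + x1 <= 1
              [-1.0, -1.0]])  # -x0 - x1 <= -1
b = np.array([0.0, 0.0, 1.0, -1.0])

x = np.array([0.3, 0.7])  # a Bernoulli distribution
print(bool((A @ x <= b + 1e-12).all()))  # True
```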
We convert the formulation $Ax \leq b$ to the V-description
```
bernoulli.get_vertex_representation()
tertiary = ProbabilityPolytope(('outcome',), dict(outcome=range(3)))
tertiary.get_vertex_representation()
conditional_bernoullis = ProbabilityPolytope(
('input', 'output'), dict(input=range(2), output=range(2))
)
conditional_bernoullis.get_vertex_representation()
```
The benefit of having the vertex-representation (V-representation) of the probability polytope is that generating random (conditional) probability tables becomes straightforward: every element of the probability polytope is a convex combination of the vertex (column) vectors.
In the flattened coordinates, we have, e.g.
```
conditional_bernoullis.generate_flat_random_cpt()
```
In the multidimensional coordinates for conditional probability tables here, we have e.g.
```
conditional_bernoullis.generate_random_cpt()
```
## Adding constraints on conditional expectation values
```
conditional_bernoullis.set_expectation_constraints(
[ExpectationConstraint(equation=dict(input=1), moment=1, value=0.5)]
)
conditional_bernoullis.get_expect_equations_col_indices(conditional_bernoullis.expect_constraints[0].equation)
conditional_bernoullis.get_vertex_representation()
conditional_bernoullis.generate_random_cpt()
two_input_constrained_polytope = ProbabilityPolytope(
('input', 'more_input', 'output'),
dict(input=['hi', 'low'], more_input=range(2), output=range(2))
)
two_input_constrained_polytope.set_expectation_constraints(
[ExpectationConstraint(equation=dict(more_input=0), moment=1, value=0.25)]
)
two_input_constrained_polytope.get_vertex_representation()
```
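A generated table can be checked against such a constraint numerically. The sketch below does so for a hand-built 2×2 table satisfying $E(\mathrm{output} \mid \mathrm{input}=1) = 0.5$; it is an assumed example, not output of `generate_random_cpt`:

```python
import numpy as np

# Hand-built conditional probability table cpt[input, output],
# normalized over the output axis; row input=1 is chosen so that
# E(output | input = 1) = 0.5
cpt = np.array([[0.3, 0.7],
                [0.5, 0.5]])
outputs = np.arange(cpt.shape[1])
expectation = (cpt[1] * outputs).sum()
print(expectation)  # 0.5
```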
## Hit rate polytope again
```
days_polytope = ProbabilityPolytope(
('product', 'rating', 'days'),
coords = {
'product': product_values,
'rating': rating_values,
'days': days_values
}
)
days_polytope.set_expectation_constraints(
[
ExpectationConstraint(equation=dict(product='financial'), moment=1, value=0.2),
ExpectationConstraint(equation=dict(product='liability'), moment=1, value=0.9),
ExpectationConstraint(equation=dict(product='property'), moment=1, value=0.5),
]
)
days_cpt = days_polytope.generate_random_cpt()
days_cpt
```
Now we create our Bayesian network with desired constraints on some expectation values
```
days = BayesianNodeRV('days', days_cpt, parent_names=['product_type', 'rating'])
bn = FakeDataBayesianNetwork(product_type, rating, days)
bn.rvs(10)
```
| github_jupyter |
# Spatiotemporal distribution of AxFUCCI cells
```
# Required libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import scipy
import os
import seaborn as sns
from pyabc import (Distribution, History)
# Experimental data
outgrowth_df = pd.read_csv('./outgrowth.csv')
outgrowth_df.set_index(['day', 'tail'], inplace=True)
outgrowth_mean = outgrowth_df.groupby('day').mean()['outgrowth']
percentage_df = pd.read_csv('./percentage_100um.csv')
df = percentage_df
for day in range(0,6):
df.loc[df['day'] == day, 'position'] = (outgrowth_mean[day] - (df.loc[df['day'] == day, 'position']-100)).astype(int)
percentage_df = df
percentage_df.set_index(['day', 'tail', 'position'], inplace=True)
percentage_df = percentage_df.drop(['unlabelled'], axis=1)
experiments = percentage_df
# Fitting results for each animal tail and day
means,stds = {},{}
for day,df_day in percentage_df.groupby(level='day'):
tails_mean, tails_std = {},{}
for tail,df_animal in df_day.groupby(level='tail'):
db_path = ("sqlite:///" + os.path.join("./fitting_results/",
"sp_fitting-day="+str(day)+"-tail="+str(tail)+".db"))
h = History(db_path)
df = h.get_distribution(m=0)
tails_mean[tail] = df[0].mean()
tails_std[tail] = df[0].std()
means[day] = pd.DataFrame.from_dict(tails_mean, orient='index')
stds[day] = pd.DataFrame.from_dict(tails_std, orient='index')
means = pd.concat(means, names=['day','tail'])
means['sp'] = outgrowth_mean-means['sp']
stds = pd.concat(stds, names=['day','tail'])
day_means = means.groupby('day').mean()
day_std = means.groupby('day').std()
# Kolmogorov-Smirnov statistic
significance = 0.05
for day in range(0,6):
print("Day",day)
green = scipy.stats.ks_2samp(means.xs(day,level='day')['c1g'],means.xs(day,level='day')['c2g'])
magenta = scipy.stats.ks_2samp(means.xs(day,level='day')['c1m'],means.xs(day,level='day')['c2m'])
print("Green:",green)
print("H0:",green[1]>significance)
print("Magenta",magenta)
print("H0:",magenta[1]>significance)
# Fitting results plots
for day in range(0,6):
sp_mean = means.groupby('day')['sp'].mean().iloc[day]
sp_std = means.groupby('day')['sp'].std().iloc[day]
c1g = means.groupby('day')['c1g'].mean().iloc[day]
c2g = means.groupby('day')['c2g'].mean().iloc[day]
c1g_std = means.groupby('day')['c1g'].std().iloc[day]
c2g_std = means.groupby('day')['c2g'].std().iloc[day]
c1m = means.groupby('day')['c1m'].mean().iloc[day]
c2m = means.groupby('day')['c2m'].mean().iloc[day]
c1m_std = means.groupby('day')['c1m'].std().iloc[day]
c2m_std = means.groupby('day')['c2m'].std().iloc[day]
pos = experiments.sort_index().xs(day,level='day').groupby('position').mean().dropna().index
data = experiments.sort_index().xs(day,level='day').reset_index().dropna()
ax = sns.scatterplot(x='position', y='green', data=data,style='tail',color='green')
ax = sns.lineplot(x='position', y='green', data=data,style='tail',color='green')
ax.step([-3000,sp_mean,sp_mean,3000], [c2g,c2g,c1g,c1g], color='darkgreen',linewidth=5, alpha=0.5)
ax = sns.scatterplot(x='position', y='magenta', data=data,color='magenta',style='tail')
ax = sns.lineplot(x='position', y='magenta', data=data,color='magenta',style='tail')
ax.step([-3000,sp_mean,sp_mean,3000], [c2m,c2m,c1m,c1m], color='darkmagenta',linewidth=5, alpha=0.5)
if day == 4:
plt.axvline(-717.65,color='black',linestyle='--')
plt.axvspan(-717.65-271.9, -717.65+271.9, color='black', alpha=0.1)
plt.axvspan(-717.65-2*271.9, -717.65+2*271.9, color='black', alpha=0.1)
if day == 5:
plt.axvline(-446.43,color='black',linestyle='--')
plt.axvspan(-446.43-112.46, -446.43+112.46, color='black', alpha=0.1)
plt.axvspan(-446.43-2*112.46, -446.43+2*112.46, color='black', alpha=0.1)
title = 'Time = '+ str(day)+" dpa"
plt.xlim(data['position'].min()-100,data['position'].max()+100)
plt.ylim(0,110)
plt.xlabel('AP Position' + ' (' + r'$\mu$'+'m)')
plt.ylabel('G0/G1 and S/G2 AxFUCCI cells (%)')
plt.suptitle(title,size='24')
plt.rcParams.update({'font.size': 14})
plt.legend([],[], frameon=False)
plt.savefig('./fit_plot2/ap-border_'+str(day), dpi=300, bbox_inches='tight')
plt.show()
```
| github_jupyter |
# Set up Azure ML Automated Machine Learning on SQL Server 2019 CTP 2.4 big data cluster
\# Prerequisites:
\# - An Azure subscription and resource group
\# - An Azure Machine Learning workspace
\# - A SQL Server 2019 CTP 2.4 big data cluster with Internet access and a database named 'automl'
\# - Azure CLI
\# - kubectl command
\# - The https://github.com/Azure/MachineLearningNotebooks repository downloaded (cloned) to your local machine
\# In the 'automl' database, create a table named 'dbo.nyc_energy' as follows:
\# - In SQL Server Management Studio, right-click the 'automl' database, select Tasks, then Import Flat File.
\# - Select the file AzureMlCli\notebooks\how-to-use-azureml\automated-machine-learning\forecasting-energy-demand\nyc_energy.csv.
\# - Using the "Modify Columns" page, allow nulls for all columns.
\# Create an Azure Machine Learning Workspace using the instructions at https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-workspace
\# Create an Azure service principal. You can do this with the following commands:
az login
az account set --subscription *subscriptionid*
\# The following command prints out the **appId** and **tenant**,
\# which you insert into the indicated cell later in this notebook
\# to allow AutoML to authenticate with Azure:
az ad sp create-for-rbac --name *principalname* --password *password*
\# Log into the master instance of SQL Server 2019 CTP 2.4:
kubectl exec -it mssql-master-pool-0 -n *clustername* -c mssql-server -- /bin/bash
mkdir /tmp/aml
cd /tmp/aml
\# **Modify** the following with your subscription_id, resource_group, and workspace_name:
cat > config.json << EOF
{
"subscription_id": "123456ab-78cd-0123-45ef-abcd12345678",
"resource_group": "myrg1",
"workspace_name": "myws1"
}
EOF
\# The directory referenced below is appropriate for the master instance of SQL Server 2019 CTP 2.4.
cd /opt/mssql/mlservices/runtime/python/bin
./python -m pip install azureml-sdk[automl]
./python -m pip install --upgrade numpy
./python -m pip install --upgrade sklearn
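\# To sanity-check the installs, you can run a quick import probe with the same Python binary.
\# This is an illustrative sketch (not part of the official setup steps); the module names are the ones the AutoMLTrain procedure below imports:

```python
import importlib.util

def importable(mod):
    """True if `mod` can be located without fully importing it."""
    try:
        return importlib.util.find_spec(mod) is not None
    except ModuleNotFoundError:  # raised when a parent package is missing
        return False

# Modules used by the AutoMLTrain stored procedure below
required = ("azureml.core", "azureml.train.automl", "numpy", "sklearn")
status = {mod: importable(mod) for mod in required}
for mod, ok in status.items():
    print(mod, "OK" if ok else "MISSING")
```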
```
-- Enable external scripts to allow invoking Python
sp_configure 'external scripts enabled',1
reconfigure with override
GO
-- Use database 'automl'
USE [automl]
GO
-- This is a table to hold the Azure ML connection information.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[aml_connection](
[Id] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
[ConnectionName] [nvarchar](255) NULL,
[TenantId] [nvarchar](255) NULL,
[AppId] [nvarchar](255) NULL,
[Password] [nvarchar](255) NULL,
[ConfigFile] [nvarchar](255) NULL
) ON [PRIMARY]
GO
```
# Copy the values from create-for-rbac above into the cell below
```
-- Use the following values:
-- Leave the name as 'Default'
-- Insert <tenant> returned by create-for-rbac above
-- Insert <AppId> returned by create-for-rbac above
-- Insert <password> used in create-for-rbac above
-- Leave <path> as '/tmp/aml/config.json'
INSERT INTO [dbo].[aml_connection]
VALUES (
N'Default', -- Name
N'11111111-2222-3333-4444-555555555555', -- Tenant
N'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee', -- AppId
N'insertpasswordhere', -- Password
N'/tmp/aml/config.json' -- Path
);
GO
-- This is a table to hold the results from the AutoMLTrain procedure.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[aml_model](
[Id] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
[Model] [varchar](max) NOT NULL, -- The model, which can be passed to AutoMLPredict for testing or prediction.
[RunId] [nvarchar](250) NULL, -- The RunId, which can be used to view the model in the Azure Portal.
[CreatedDate] [datetime] NULL,
[ExperimentName] [nvarchar](100) NULL, -- Azure ML Experiment Name
[WorkspaceName] [nvarchar](100) NULL, -- Azure ML Workspace Name
[LogFileText] [nvarchar](max) NULL
)
GO
ALTER TABLE [dbo].[aml_model] ADD DEFAULT (getutcdate()) FOR [CreatedDate]
GO
-- This stored procedure uses automated machine learning to train several models
-- and return the best model.
--
-- The result set has several columns:
-- best_run - ID of the best model found
-- experiment_name - training run name
-- fitted_model - best model found
-- log_file_text - console output
-- workspace - name of the Azure ML workspace where run history is stored
--
-- An example call for a classification problem is:
-- insert into dbo.aml_model(RunId, ExperimentName, Model, LogFileText, WorkspaceName)
-- exec dbo.AutoMLTrain @input_query='
-- SELECT top 100000
-- CAST([pickup_datetime] AS NVARCHAR(30)) AS pickup_datetime
-- ,CAST([dropoff_datetime] AS NVARCHAR(30)) AS dropoff_datetime
-- ,[passenger_count]
-- ,[trip_time_in_secs]
-- ,[trip_distance]
-- ,[payment_type]
-- ,[tip_class]
-- FROM [dbo].[nyctaxi_sample] order by [hack_license] ',
-- @label_column = 'tip_class',
-- @iterations=10
--
-- An example call for forecasting is:
-- insert into dbo.aml_model(RunId, ExperimentName, Model, LogFileText, WorkspaceName)
-- exec dbo.AutoMLTrain @input_query='
-- select cast(timeStamp as nvarchar(30)) as timeStamp,
-- demand,
-- precip,
-- temp,
-- case when timeStamp < ''2017-01-01'' then 0 else 1 end as is_validate_column
-- from nyc_energy
-- where demand is not null and precip is not null and temp is not null
-- and timeStamp < ''2017-02-01''',
-- @label_column='demand',
-- @task='forecasting',
-- @iterations=10,
-- @iteration_timeout_minutes=5,
-- @time_column_name='timeStamp',
-- @is_validate_column='is_validate_column',
-- @experiment_name='automl-sql-forecast',
-- @primary_metric='normalized_root_mean_squared_error'
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE OR ALTER PROCEDURE [dbo].[AutoMLTrain]
(
@input_query NVARCHAR(MAX), -- The SQL Query that will return the data to train and validate the model.
@label_column NVARCHAR(255)='Label', -- The name of the column in the result of @input_query that is the label.
@primary_metric NVARCHAR(40)='AUC_weighted', -- The metric to optimize.
@iterations INT=100, -- The maximum number of pipelines to train.
@task NVARCHAR(40)='classification', -- The type of task. Can be classification, regression or forecasting.
@experiment_name NVARCHAR(32)='automl-sql-test', -- This can be used to find the experiment in the Azure Portal.
@iteration_timeout_minutes INT = 15, -- The maximum time in minutes for training a single pipeline.
@experiment_timeout_minutes INT = 60, -- The maximum time in minutes for training all pipelines.
@n_cross_validations INT = 3, -- The number of cross validations.
@blacklist_models NVARCHAR(MAX) = '', -- A comma separated list of algos that will not be used.
-- The list of possible models can be found at:
-- https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#configure-your-experiment-settings
@whitelist_models NVARCHAR(MAX) = '', -- A comma separated list of algos that can be used.
-- The list of possible models can be found at:
-- https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#configure-your-experiment-settings
@experiment_exit_score FLOAT = 0, -- Stop the experiment if this score is achieved.
@sample_weight_column NVARCHAR(255)='', -- The name of the column in the result of @input_query that gives a sample weight.
@is_validate_column NVARCHAR(255)='', -- The name of the column in the result of @input_query that indicates if the row is for training or validation.
-- In the values of the column, 0 means for training and 1 means for validation.
@time_column_name NVARCHAR(255)='', -- The name of the timestamp column for forecasting.
@connection_name NVARCHAR(255)='default' -- The AML connection to use.
) AS
BEGIN
DECLARE @tenantid NVARCHAR(255)
DECLARE @appid NVARCHAR(255)
DECLARE @password NVARCHAR(255)
DECLARE @config_file NVARCHAR(255)
SELECT @tenantid=TenantId, @appid=AppId, @password=Password, @config_file=ConfigFile
FROM aml_connection
WHERE ConnectionName = @connection_name;
EXEC sp_execute_external_script @language = N'Python', @script = N'import pandas as pd
import logging
import azureml.core
import pandas as pd
import numpy as np
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from sklearn import datasets
import pickle
import codecs
from azureml.core.authentication import ServicePrincipalAuthentication
from azureml.core.workspace import Workspace
if __name__.startswith("sqlindb"):
auth = ServicePrincipalAuthentication(tenantid, appid, password)
ws = Workspace.from_config(path=config_file, auth=auth)
project_folder = "./sample_projects/" + experiment_name
experiment = Experiment(ws, experiment_name)
data_train = input_data
X_valid = None
y_valid = None
sample_weight_valid = None
if is_validate_column != "" and is_validate_column is not None:
data_train = input_data[input_data[is_validate_column] <= 0]
data_valid = input_data[input_data[is_validate_column] > 0]
data_train.pop(is_validate_column)
data_valid.pop(is_validate_column)
y_valid = data_valid.pop(label_column).values
if sample_weight_column != "" and sample_weight_column is not None:
sample_weight_valid = data_valid.pop(sample_weight_column).values
X_valid = data_valid
n_cross_validations = None
y_train = data_train.pop(label_column).values
sample_weight = None
if sample_weight_column != "" and sample_weight_column is not None:
sample_weight = data_train.pop(sample_weight_column).values
X_train = data_train
if experiment_timeout_minutes == 0:
experiment_timeout_minutes = None
if experiment_exit_score == 0:
experiment_exit_score = None
if blacklist_models == "":
blacklist_models = None
if blacklist_models is not None:
blacklist_models = blacklist_models.replace(" ", "").split(",")
if whitelist_models == "":
whitelist_models = None
if whitelist_models is not None:
whitelist_models = whitelist_models.replace(" ", "").split(",")
automl_settings = {}
preprocess = True
if time_column_name != "" and time_column_name is not None:
automl_settings = { "time_column_name": time_column_name }
preprocess = False
log_file_name = "automl_errors.log"
automl_config = AutoMLConfig(task = task,
debug_log = log_file_name,
primary_metric = primary_metric,
iteration_timeout_minutes = iteration_timeout_minutes,
experiment_timeout_minutes = experiment_timeout_minutes,
iterations = iterations,
n_cross_validations = n_cross_validations,
preprocess = preprocess,
verbosity = logging.INFO,
X = X_train,
y = y_train,
path = project_folder,
blacklist_models = blacklist_models,
whitelist_models = whitelist_models,
experiment_exit_score = experiment_exit_score,
sample_weight = sample_weight,
X_valid = X_valid,
y_valid = y_valid,
sample_weight_valid = sample_weight_valid,
**automl_settings)
local_run = experiment.submit(automl_config, show_output = True)
best_run, fitted_model = local_run.get_output()
pickled_model = codecs.encode(pickle.dumps(fitted_model), "base64").decode()
log_file_text = ""
try:
with open(log_file_name, "r") as log_file:
log_file_text = log_file.read()
except:
log_file_text = "Log file not found"
returned_model = pd.DataFrame({"best_run": [best_run.id], "experiment_name": [experiment_name], "fitted_model": [pickled_model], "log_file_text": [log_file_text], "workspace": [ws.name]}, dtype=np.dtype(np.str))
'
, @input_data_1 = @input_query
, @input_data_1_name = N'input_data'
, @output_data_1_name = N'returned_model'
, @params = N'@label_column NVARCHAR(255),
@primary_metric NVARCHAR(40),
@iterations INT, @task NVARCHAR(40),
@experiment_name NVARCHAR(32),
@iteration_timeout_minutes INT,
@experiment_timeout_minutes INT,
@n_cross_validations INT,
@blacklist_models NVARCHAR(MAX),
@whitelist_models NVARCHAR(MAX),
@experiment_exit_score FLOAT,
@sample_weight_column NVARCHAR(255),
@is_validate_column NVARCHAR(255),
@time_column_name NVARCHAR(255),
@tenantid NVARCHAR(255),
@appid NVARCHAR(255),
@password NVARCHAR(255),
@config_file NVARCHAR(255)'
, @label_column = @label_column
, @primary_metric = @primary_metric
, @iterations = @iterations
, @task = @task
, @experiment_name = @experiment_name
, @iteration_timeout_minutes = @iteration_timeout_minutes
, @experiment_timeout_minutes = @experiment_timeout_minutes
, @n_cross_validations = @n_cross_validations
, @blacklist_models = @blacklist_models
, @whitelist_models = @whitelist_models
, @experiment_exit_score = @experiment_exit_score
, @sample_weight_column = @sample_weight_column
, @is_validate_column = @is_validate_column
, @time_column_name = @time_column_name
, @tenantid = @tenantid
, @appid = @appid
, @password = @password
, @config_file = @config_file
WITH RESULT SETS ((best_run NVARCHAR(250), experiment_name NVARCHAR(100), fitted_model VARCHAR(MAX), log_file_text NVARCHAR(MAX), workspace NVARCHAR(100)))
END
-- This procedure returns a list of metrics for each iteration of a training run.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE OR ALTER PROCEDURE [dbo].[AutoMLGetMetrics]
(
@run_id NVARCHAR(250), -- The RunId
@experiment_name NVARCHAR(32)='automl-sql-test', -- This can be used to find the experiment in the Azure Portal.
@connection_name NVARCHAR(255)='default' -- The AML connection to use.
) AS
BEGIN
DECLARE @tenantid NVARCHAR(255)
DECLARE @appid NVARCHAR(255)
DECLARE @password NVARCHAR(255)
DECLARE @config_file NVARCHAR(255)
SELECT @tenantid=TenantId, @appid=AppId, @password=Password, @config_file=ConfigFile
FROM aml_connection
WHERE ConnectionName = @connection_name;
EXEC sp_execute_external_script @language = N'Python', @script = N'import pandas as pd
import logging
import azureml.core
import numpy as np
from azureml.core.experiment import Experiment
from azureml.train.automl.run import AutoMLRun
from azureml.core.authentication import ServicePrincipalAuthentication
from azureml.core.workspace import Workspace
auth = ServicePrincipalAuthentication(tenantid, appid, password)
ws = Workspace.from_config(path=config_file, auth=auth)
experiment = Experiment(ws, experiment_name)
ml_run = AutoMLRun(experiment = experiment, run_id = run_id)
children = list(ml_run.get_children())
iterationlist = []
metricnamelist = []
metricvaluelist = []
for run in children:
properties = run.get_properties()
if "iteration" in properties:
iteration = int(properties["iteration"])
for metric_name, metric_value in run.get_metrics().items():
if isinstance(metric_value, float):
iterationlist.append(iteration)
metricnamelist.append(metric_name)
metricvaluelist.append(metric_value)
metrics = pd.DataFrame({"iteration": iterationlist, "metric_name": metricnamelist, "metric_value": metricvaluelist})
'
, @output_data_1_name = N'metrics'
, @params = N'@run_id NVARCHAR(250),
@experiment_name NVARCHAR(32),
@tenantid NVARCHAR(255),
@appid NVARCHAR(255),
@password NVARCHAR(255),
@config_file NVARCHAR(255)'
, @run_id = @run_id
, @experiment_name = @experiment_name
, @tenantid = @tenantid
, @appid = @appid
, @password = @password
, @config_file = @config_file
WITH RESULT SETS ((iteration INT, metric_name NVARCHAR(100), metric_value FLOAT))
END
-- This procedure predicts values based on a model returned by AutoMLTrain and a dataset.
-- It returns the dataset with a new column added, which is the predicted value.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE OR ALTER PROCEDURE [dbo].[AutoMLPredict]
(
@input_query NVARCHAR(MAX), -- A SQL query returning data to predict on.
@model NVARCHAR(MAX), -- A model returned from AutoMLTrain.
@label_column NVARCHAR(255)='' -- Optional name of the column from input_query, which should be ignored when predicting
) AS
BEGIN
EXEC sp_execute_external_script @language = N'Python', @script = N'import pandas as pd
import azureml.core
import numpy as np
from azureml.train.automl import AutoMLConfig
import pickle
import codecs
model_obj = pickle.loads(codecs.decode(model.encode(), "base64"))
test_data = input_data.copy()
if label_column != "" and label_column is not None:
y_test = test_data.pop(label_column).values
X_test = test_data
predicted = model_obj.predict(X_test)
combined_output = input_data.assign(predicted=predicted)
'
, @input_data_1 = @input_query
, @input_data_1_name = N'input_data'
, @output_data_1_name = N'combined_output'
, @params = N'@model NVARCHAR(MAX), @label_column NVARCHAR(255)'
, @model = @model
, @label_column = @label_column
END
```
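The fitted model produced by `AutoMLTrain` travels to `AutoMLPredict` through an NVARCHAR column, so it is pickled and base64-encoded on the way out and decoded on the way back in. The round trip can be checked in isolation (using a plain dict as a stand-in for a fitted pipeline):

```python
import codecs
import pickle

# Stand-in for the fitted AutoML pipeline; any picklable object works.
model = {"kind": "mock fitted model", "coef": [1.0, 2.0]}

# Encode as in AutoMLTrain: pickle, then base64, then decode to a str
# so the result can be stored in an NVARCHAR(MAX) column.
pickled_model = codecs.encode(pickle.dumps(model), "base64").decode()

# Decode as in AutoMLPredict: reverse both steps to recover the object.
model_obj = pickle.loads(codecs.decode(pickled_model.encode(), "base64"))
```

The base64 step exists only to make the pickle bytes safe to store and pass around as text; any picklable estimator survives the same round trip.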
| github_jupyter |
```
#Note: You need to reset the kernel for the keras installation to take place
#Todo: Once the install completes, remove this line and reset the kernel: Menu > Kernel > Reset & Clear Output
!git clone https://github.com/fchollet/keras.git && cd keras && python setup.py install --user
import keras
from keras.applications.inception_resnet_v2 import InceptionResNetV2
from keras.preprocessing import image
from keras.engine import Layer
from keras.applications.inception_resnet_v2 import preprocess_input
from keras.layers import Conv2D, UpSampling2D, InputLayer, Conv2DTranspose, Input, Reshape, merge, concatenate, Activation, Dense, Dropout, Flatten
from keras.layers.normalization import BatchNormalization
from keras.callbacks import TensorBoard
from keras.models import Sequential, Model
from keras.layers.core import RepeatVector, Permute
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from skimage.color import rgb2lab, lab2rgb, rgb2gray, gray2rgb
from skimage.transform import resize
from skimage.io import imsave
import numpy as np
import os
import random
import tensorflow as tf
# Get images
X = []
for filename in os.listdir('colornet/'):
X.append(img_to_array(load_img('colornet/'+filename)))
X = np.array(X, dtype=float)
Xtrain = 1.0/255*X
#Load weights
inception = InceptionResNetV2(weights=None, include_top=True)
inception.load_weights('/data/inception_resnet_v2_weights_tf_dim_ordering_tf_kernels.h5')
inception.graph = tf.get_default_graph()
def conv_stack(data, filters, s):
output = Conv2D(filters, (3, 3), strides=s, activation='relu', padding='same')(data)
#output = BatchNormalization()(output)
return output
embed_input = Input(shape=(1000,))
#Encoder
encoder_input = Input(shape=(256, 256, 1,))
encoder_output = conv_stack(encoder_input, 64, 2)
encoder_output = conv_stack(encoder_output, 128, 1)
encoder_output = conv_stack(encoder_output, 128, 2)
encoder_output = conv_stack(encoder_output, 256, 1)
encoder_output = conv_stack(encoder_output, 256, 2)
encoder_output = conv_stack(encoder_output, 512, 1)
encoder_output = conv_stack(encoder_output, 512, 1)
encoder_output = conv_stack(encoder_output, 256, 1)
#Fusion
# encoder output at this point: (None, 32, 32, 256)
fusion_output = RepeatVector(32 * 32)(embed_input)
fusion_output = Reshape(([32, 32, 1000]))(fusion_output)
fusion_output = concatenate([fusion_output, encoder_output], axis=3)
fusion_output = Conv2D(256, (1, 1), activation='relu')(fusion_output)
#Decoder
decoder_output = conv_stack(fusion_output, 128, 1)
decoder_output = UpSampling2D((2, 2))(decoder_output)
decoder_output = conv_stack(decoder_output, 64, 1)
decoder_output = UpSampling2D((2, 2))(decoder_output)
decoder_output = conv_stack(decoder_output, 32, 1)
decoder_output = conv_stack(decoder_output, 16, 1)
decoder_output = Conv2D(2, (2, 2), activation='tanh', padding='same')(decoder_output)
decoder_output = UpSampling2D((2, 2))(decoder_output)
model = Model(inputs=[encoder_input, embed_input], outputs=decoder_output)
#Create embedding
def create_inception_embedding(grayscaled_rgb):
grayscaled_rgb_resized = []
for i in grayscaled_rgb:
i = resize(i, (299, 299, 3), mode='constant')
grayscaled_rgb_resized.append(i)
grayscaled_rgb_resized = np.array(grayscaled_rgb_resized)
grayscaled_rgb_resized = preprocess_input(grayscaled_rgb_resized)
with inception.graph.as_default():
embed = inception.predict(grayscaled_rgb_resized)
return embed
# Image transformer
datagen = ImageDataGenerator(
shear_range=0.2,
zoom_range=0.2,
rotation_range=20,
horizontal_flip=True)
#Generate training data
batch_size = 20
def image_a_b_gen(batch_size):
for batch in datagen.flow(Xtrain, batch_size=batch_size):
grayscaled_rgb = gray2rgb(rgb2gray(batch))
embed = create_inception_embedding(grayscaled_rgb)
lab_batch = rgb2lab(batch)
X_batch = lab_batch[:,:,:,0]
X_batch = X_batch.reshape(X_batch.shape+(1,))
Y_batch = lab_batch[:,:,:,1:] / 128
yield ([X_batch, embed], Y_batch)
#Train model
tensorboard = TensorBoard(log_dir="/output")
model.compile(optimizer='adam', loss='mse')
model.fit_generator(image_a_b_gen(batch_size), callbacks=[tensorboard], epochs=1000, steps_per_epoch=20)
# Save model
model_json = model.to_json()
with open("model.json", "w") as json_file:
json_file.write(model_json)
model.save_weights("color_tensorflow_real_mode.h5")
#Make predictions on validation images
color_me = []
for filename in os.listdir('/data/images/Test/'):
color_me.append(img_to_array(load_img('/data/images/Test/'+filename)))
color_me = np.array(color_me, dtype=float)
color_me_embed = create_inception_embedding(color_me)
color_me = rgb2lab(1.0/255*color_me)[:,:,:,0]
color_me = color_me.reshape(color_me.shape+(1,))
# Test model
output = model.predict([color_me, color_me_embed])
output = output * 128
# Output colorizations
for i in range(len(output)):
cur = np.zeros((256, 256, 3))
cur[:,:,0] = color_me[i][:,:,0]
cur[:,:,1:] = output[i]
imsave("result/img_"+str(i)+".png", lab2rgb(cur))
```
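The fusion step in the colorization model uses `RepeatVector(32 * 32)` plus a `Reshape` to tile the 1000-dimensional Inception embedding across every position of the 32×32 encoder grid before concatenating it with the 256 encoder channels. The same shape manipulation can be sketched in plain NumPy (shapes taken from the model definition above):

```python
import numpy as np

batch = 2
embed = np.random.rand(batch, 1000)               # Inception classification vector
encoder_out = np.random.rand(batch, 32, 32, 256)  # encoder feature map

# RepeatVector(32 * 32) + Reshape([32, 32, 1000]): copy the embedding
# to every spatial position of the 32x32 grid.
tiled = np.broadcast_to(embed[:, None, None, :], (batch, 32, 32, 1000))

# concatenate(..., axis=3): stack embedding channels onto encoder channels,
# giving 1000 + 256 = 1256 channels per position.
fused = np.concatenate([tiled, encoder_out], axis=3)
```

Every spatial position therefore sees the same global image-level context alongside its local encoder features, which is what the 1×1 convolution after the fusion then mixes back down to 256 channels.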
| github_jupyter |
# Depression Detection in Social Media Posts
#### Imports
```
import warnings
warnings.filterwarnings("ignore")
import ftfy
import matplotlib.pyplot as plt
import nltk
import numpy as np
import pandas as pd
import re
from math import exp
from numpy import sign
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
from gensim.models import KeyedVectors
from nltk.corpus import stopwords
from nltk import PorterStemmer
from keras.models import Model, Sequential
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.layers import Conv1D, Dense, Input, LSTM, Embedding, Dropout, Activation, MaxPooling1D
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
```
#### Constants
```
# Reproducibility
np.random.seed(1234)
DEPRES_NROWS = 3200 # number of rows to read from DEPRESSIVE_TWEETS_CSV
RANDOM_NROWS = 12000 # number of rows to read from RANDOM_TWEETS_CSV
MAX_SEQUENCE_LENGTH = 140 # Max tweet size
MAX_NB_WORDS = 20000
EMBEDDING_DIM = 300
TRAIN_SPLIT = 0.6
TEST_SPLIT = 0.2
LEARNING_RATE = 0.1
EPOCHS= 10
```
## Section 1: Load Data
Loading depressive tweets scraped from Twitter using [TWINT](https://github.com/haccer/twint) and random tweets from the Kaggle dataset [twitter_sentiment](https://www.kaggle.com/ywang311/twitter-sentiment/data).
#### File Paths
```
#DEPRESSIVE_TWEETS_CSV = 'depressive_tweets.csv'
DEPRESSIVE_TWEETS_CSV = 'depressive_tweets_processed.csv'
RANDOM_TWEETS_CSV = 'Sentiment Analysis Dataset 2.csv'
EMBEDDING_FILE = 'GoogleNews-vectors-negative300.bin.gz'
depressive_tweets_df = pd.read_csv(DEPRESSIVE_TWEETS_CSV, sep = '|', header = None, usecols = range(0,9), nrows = DEPRES_NROWS)
random_tweets_df = pd.read_csv(RANDOM_TWEETS_CSV, encoding = "ISO-8859-1", usecols = range(0,4), nrows = RANDOM_NROWS)
depressive_tweets_df.head()
random_tweets_df.head()
```
## Section 2: Data Processing
### Load Pretrained Word2Vec Model
The pretrained vectors for the Word2Vec model are from [here](https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit).
Using a Keyed Vectors file, we can get the embedding of any word by calling `.word_vec(word)` and we can get all the words in the model's vocabulary through `.vocab`.
```
word2vec = KeyedVectors.load_word2vec_format(EMBEDDING_FILE, binary=True)
```
### Preprocessing
Preprocessing the tweets in order to:
* Remove links and images
* Remove hashtags
* Remove @ mentions
* Remove emojis
* Remove stop words
* Remove punctuation
* Expand contractions, turning e.g. "what's" into "what is"
* Stem words so different inflections collapse to a common root (e.g. "running" -> "run")
```
# Expand Contraction
cList = {
"ain't": "am not",
"aren't": "are not",
"can't": "cannot",
"can't've": "cannot have",
"'cause": "because",
"could've": "could have",
"couldn't": "could not",
"couldn't've": "could not have",
"didn't": "did not",
"doesn't": "does not",
"don't": "do not",
"hadn't": "had not",
"hadn't've": "had not have",
"hasn't": "has not",
"haven't": "have not",
"he'd": "he would",
"he'd've": "he would have",
"he'll": "he will",
"he'll've": "he will have",
"he's": "he is",
"how'd": "how did",
"how'd'y": "how do you",
"how'll": "how will",
"how's": "how is",
"I'd": "I would",
"I'd've": "I would have",
"I'll": "I will",
"I'll've": "I will have",
"I'm": "I am",
"I've": "I have",
"isn't": "is not",
"it'd": "it had",
"it'd've": "it would have",
"it'll": "it will",
"it'll've": "it will have",
"it's": "it is",
"let's": "let us",
"ma'am": "madam",
"mayn't": "may not",
"might've": "might have",
"mightn't": "might not",
"mightn't've": "might not have",
"must've": "must have",
"mustn't": "must not",
"mustn't've": "must not have",
"needn't": "need not",
"needn't've": "need not have",
"o'clock": "of the clock",
"oughtn't": "ought not",
"oughtn't've": "ought not have",
"shan't": "shall not",
"sha'n't": "shall not",
"shan't've": "shall not have",
"she'd": "she would",
"she'd've": "she would have",
"she'll": "she will",
"she'll've": "she will have",
"she's": "she is",
"should've": "should have",
"shouldn't": "should not",
"shouldn't've": "should not have",
"so've": "so have",
"so's": "so is",
"that'd": "that would",
"that'd've": "that would have",
"that's": "that is",
"there'd": "there had",
"there'd've": "there would have",
"there's": "there is",
"they'd": "they would",
"they'd've": "they would have",
"they'll": "they will",
"they'll've": "they will have",
"they're": "they are",
"they've": "they have",
"to've": "to have",
"wasn't": "was not",
"we'd": "we had",
"we'd've": "we would have",
"we'll": "we will",
"we'll've": "we will have",
"we're": "we are",
"we've": "we have",
"weren't": "were not",
"what'll": "what will",
"what'll've": "what will have",
"what're": "what are",
"what's": "what is",
"what've": "what have",
"when's": "when is",
"when've": "when have",
"where'd": "where did",
"where's": "where is",
"where've": "where have",
"who'll": "who will",
"who'll've": "who will have",
"who's": "who is",
"who've": "who have",
"why's": "why is",
"why've": "why have",
"will've": "will have",
"won't": "will not",
"won't've": "will not have",
"would've": "would have",
"wouldn't": "would not",
"wouldn't've": "would not have",
"y'all": "you all",
"y'alls": "you alls",
"y'all'd": "you all would",
"y'all'd've": "you all would have",
"y'all're": "you all are",
"y'all've": "you all have",
"you'd": "you had",
"you'd've": "you would have",
"you'll": "you you will",
"you'll've": "you you will have",
"you're": "you are",
"you've": "you have"
}
c_re = re.compile('(%s)' % '|'.join(cList.keys()))
def expandContractions(text, c_re=c_re):
def replace(match):
return cList[match.group(0)]
return c_re.sub(replace, text)
def clean_tweets(tweets):
cleaned_tweets = []
for tweet in tweets:
tweet = str(tweet)
# if url links then dont append to avoid news articles
# also check tweet length, save those > 10 (length of word "depression")
if re.match("(\w+:\/\/\S+)", tweet) == None and len(tweet) > 10:
#remove hashtag, @mention, emoji and image URLs
tweet = ' '.join(re.sub("(@[A-Za-z0-9]+)|(\#[A-Za-z0-9]+)|(<Emoji:.*>)|(pic\.twitter\.com\/.*)", " ", tweet).split())
#fix weirdly encoded texts
tweet = ftfy.fix_text(tweet)
#expand contraction
tweet = expandContractions(tweet)
#remove punctuation
tweet = ' '.join(re.sub("([^0-9A-Za-z \t])", " ", tweet).split())
#stop words
stop_words = set(stopwords.words('english'))
word_tokens = nltk.word_tokenize(tweet)
filtered_sentence = [w for w in word_tokens if not w in stop_words]
tweet = ' '.join(filtered_sentence)
#stemming words
tweet = PorterStemmer().stem(tweet)
cleaned_tweets.append(tweet)
return cleaned_tweets
```
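As a quick sanity check, the contraction-expansion pattern used above can be exercised standalone with a trimmed-down mapping (a minimal re-implementation, not the full `cList`):

```python
import re

# Small subset of the contraction map defined above.
cList = {"can't": "cannot", "it's": "it is", "won't": "will not"}
c_re = re.compile('(%s)' % '|'.join(cList.keys()))

def expandContractions(text, c_re=c_re):
    # Replace every matched contraction with its expansion.
    return c_re.sub(lambda m: cList[m.group(0)], text)

expanded = expandContractions("I can't go because it's raining")
```

The single compiled alternation keeps the pass over each tweet to one `re.sub` call, which matters when cleaning thousands of tweets.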
Applying the preprocessing `clean_tweets` function to every element in the depressive tweets and random tweets data.
```
depressive_tweets_arr = [x for x in depressive_tweets_df[5]]
random_tweets_arr = [x for x in random_tweets_df['SentimentText']]
X_d = clean_tweets(depressive_tweets_arr)
X_r = clean_tweets(random_tweets_arr)
```
### Tokenizer
Using a Tokenizer to assign indices to words and filter out infrequent words. The tokenizer builds a map from every unique word to an assigned index; the `num_words` parameter restricts it to the top 20,000 most frequent words.
```
tokenizer = Tokenizer(num_words=MAX_NB_WORDS)
tokenizer.fit_on_texts(X_d + X_r)
```
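Conceptually, the tokenizer ranks words by frequency and keeps only the `num_words` most frequent, mapping each kept word to an index starting at 1. A minimal dictionary-based sketch (not the Keras implementation) behaves like this:

```python
from collections import Counter

def fit_tokenizer(texts, num_words):
    """Map the num_words most frequent words to indices 1..num_words."""
    counts = Counter(w for t in texts for w in t.lower().split())
    ranked = [w for w, _ in counts.most_common()]
    return {w: i + 1 for i, w in enumerate(ranked[:num_words])}

word_index = fit_tokenizer(["sad sad day", "good day"], num_words=2)
```

Index 0 is deliberately left unused, the same convention Keras follows so that 0 can later serve as the padding value.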
Applying the tokenizer to depressive tweets and random tweets data.
```
sequences_d = tokenizer.texts_to_sequences(X_d)
sequences_r = tokenizer.texts_to_sequences(X_r)
```
Number of unique words seen by the tokenizer. Only the 20,000 most frequent of these are used when converting texts to sequences.
```
word_index = tokenizer.word_index
print('Found %s unique tokens' % len(word_index))
```
Pad sequences all to the same length of 140 words.
```
data_d = pad_sequences(sequences_d, maxlen=MAX_SEQUENCE_LENGTH)
data_r = pad_sequences(sequences_r, maxlen=MAX_SEQUENCE_LENGTH)
print('Shape of data_d tensor:', data_d.shape)
print('Shape of data_r tensor:', data_r.shape)
```
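By default, Keras `pad_sequences` pre-pads shorter sequences with zeros and truncates longer ones from the front; a hand-rolled equivalent for a single sequence looks like:

```python
def pad_sequence(seq, maxlen):
    """Zero-pad on the left, or keep only the last maxlen elements."""
    if len(seq) >= maxlen:
        return list(seq)[-maxlen:]
    return [0] * (maxlen - len(seq)) + list(seq)

padded = pad_sequence([4, 8, 15], maxlen=5)
```

Left-padding keeps the most recent tokens adjacent to the end of the vector, which plays well with the LSTM reading the sequence in order.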
### Embedding Matrix
The embedding matrix is an `n x m` matrix where `n` is the number of words and `m` is the dimension of the embedding. In this case, `m = 300` and `n = 20000`. We take the minimum of the number of unique words in the tokenizer and `MAX_NB_WORDS`, in case there are fewer unique words than the specified maximum.
```
nb_words = min(MAX_NB_WORDS, len(word_index))
embedding_matrix = np.zeros((nb_words, EMBEDDING_DIM))
for (word, idx) in word_index.items():
if word in word2vec.vocab and idx < MAX_NB_WORDS:
embedding_matrix[idx] = word2vec.word_vec(word)
```
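Rows of the embedding matrix for words missing from the Word2Vec vocabulary stay at zero; only in-vocabulary words below the index cap receive their vector. The same fill pattern with a mock lookup (hypothetical vectors, real shape logic):

```python
import numpy as np

EMBEDDING_DIM = 4                      # 300 in the notebook; shrunk for the sketch
word_index = {"sad": 1, "day": 2, "skydiving": 3}
# Hypothetical lookup standing in for word2vec.word_vec(word);
# "skydiving" is deliberately out of vocabulary.
mock_vectors = {"sad": np.ones(EMBEDDING_DIM), "day": 2 * np.ones(EMBEDDING_DIM)}

nb_words = min(20000, len(word_index))
embedding_matrix = np.zeros((nb_words, EMBEDDING_DIM))
for word, idx in word_index.items():
    # Out-of-vocabulary or over-cap words keep their all-zero row.
    if word in mock_vectors and idx < nb_words:
        embedding_matrix[idx] = mock_vectors[word]
```

Row 0 stays zero as the padding row, and the zero rows for unknown words mean the frozen `Embedding` layer simply contributes nothing for them.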
### Splitting and Formatting Data
Assigning labels to the depressive tweets and random tweets data, and splitting the arrays into train (60%), test (20%), and validation (20%) sets. Combine depressive tweets and random tweets arrays and shuffle.
```
# Assigning labels to the depressive tweets and random tweets data
labels_d = np.array([1] * DEPRES_NROWS)
labels_r = np.array([0] * RANDOM_NROWS)
# Splitting the arrays into train (60%), test (20%), and validation (20%) sets
perm_d = np.random.permutation(len(data_d))
idx_train_d = perm_d[:int(len(data_d)*(TRAIN_SPLIT))]
idx_test_d = perm_d[int(len(data_d)*(TRAIN_SPLIT)):int(len(data_d)*(TRAIN_SPLIT+TEST_SPLIT))]
idx_val_d = perm_d[int(len(data_d)*(TRAIN_SPLIT+TEST_SPLIT)):]
perm_r = np.random.permutation(len(data_r))
idx_train_r = perm_r[:int(len(data_r)*(TRAIN_SPLIT))]
idx_test_r = perm_r[int(len(data_r)*(TRAIN_SPLIT)):int(len(data_r)*(TRAIN_SPLIT+TEST_SPLIT))]
idx_val_r = perm_r[int(len(data_r)*(TRAIN_SPLIT+TEST_SPLIT)):]
# Combine depressive tweets and random tweets arrays
data_train = np.concatenate((data_d[idx_train_d], data_r[idx_train_r]))
labels_train = np.concatenate((labels_d[idx_train_d], labels_r[idx_train_r]))
data_test = np.concatenate((data_d[idx_test_d], data_r[idx_test_r]))
labels_test = np.concatenate((labels_d[idx_test_d], labels_r[idx_test_r]))
data_val = np.concatenate((data_d[idx_val_d], data_r[idx_val_r]))
labels_val = np.concatenate((labels_d[idx_val_d], labels_r[idx_val_r]))
# Shuffling
perm_train = np.random.permutation(len(data_train))
data_train = data_train[perm_train]
labels_train = labels_train[perm_train]
perm_test = np.random.permutation(len(data_test))
data_test = data_test[perm_test]
labels_test = labels_test[perm_test]
perm_val = np.random.permutation(len(data_val))
data_val = data_val[perm_val]
labels_val = labels_val[perm_val]
```
## Section 3: Building the Model
### Building Model (LSTM + CNN)
The model takes a tweet as input and outputs a single number representing the probability that the tweet indicates depression. Each input sentence is first replaced by its word embeddings, which are then run through a convolutional layer. CNNs excel at learning spatial structure, so the convolutional layer extracts local structure from the sequential data before passing it to a standard LSTM layer. Finally, the output of the LSTM layer is fed into a dense layer for prediction.
```
model = Sequential()
# Embedded layer
model.add(Embedding(len(embedding_matrix), EMBEDDING_DIM, weights=[embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH, trainable=False))
# Convolutional Layer
model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.2))
# LSTM Layer
model.add(LSTM(300))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
```
### Compiling Model
```
model.compile(loss='binary_crossentropy', optimizer='nadam', metrics=['acc'])
print(model.summary())
```
## Section 4: Training the Model
The model is trained for `EPOCHS` epochs, and an `EarlyStopping` callback ends training if the validation loss does not improve within 3 epochs.
```
early_stop = EarlyStopping(monitor='val_loss', patience=3)
hist = model.fit(data_train, labels_train, \
validation_data=(data_val, labels_val), \
epochs=EPOCHS, batch_size=40, shuffle=True, \
callbacks=[early_stop])
```
### Results
Summarize history for accuracy
```
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
```
Summarize history for loss
```
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
```
Percentage accuracy of model
```
labels_pred = model.predict(data_test)
labels_pred = np.round(labels_pred.flatten())
accuracy = accuracy_score(labels_test, labels_pred)
print("Accuracy: %.2f%%" % (accuracy*100))
```
f1, precision, and recall scores
```
print(classification_report(labels_test, labels_pred))
```
## Section 5: Comparing the Model to Base Line
In order to evaluate the effectiveness of the LSTM + CNN model, a logistic regression model is trained with the same train data and the same number of epochs, and tested with the same test data.
### Logistic Regression Base Line Model
```
class LogReg:
"""
Class to represent a logistic regression model.
"""
def __init__(self, l_rate, epochs, n_features):
"""
Create a new model with certain parameters.
:param l_rate: Initial learning rate for model.
:param epochs: Number of epochs to train for.
:param n_features: Number of features.
"""
self.l_rate = l_rate
self.epochs = epochs
self.coef = [0.0] * n_features
self.bias = 0.0
def sigmoid(self, score, threshold=20.0):
"""
Prevent overflow of exp by capping activation at 20.
:param score: A real valued number to convert into a number between 0 and 1
"""
if abs(score) > threshold:
score = threshold * sign(score)
activation = exp(score)
return activation / (1.0 + activation)
def predict(self, features):
"""
Given an example's features and the coefficients, predicts the class.
:param features: List of real valued features for a single training example.
:return: Returns the predicted class (either 0 or 1).
"""
value = sum([features[i]*self.coef[i] for i in range(len(features))]) + self.bias
return self.sigmoid(value)
def sg_update(self, features, label):
"""
Computes the update to the weights based on a predicted example.
:param features: Features to train on.
:param label: Corresponding label for features.
"""
yhat = self.predict(features)
e = label - yhat
self.bias = self.bias + self.l_rate * e * yhat * (1-yhat)
for i in range(len(features)):
self.coef[i] = self.coef[i] + self.l_rate * e * yhat * (1-yhat) * features[i]
return
def train(self, X, y):
"""
Computes logistic regression coefficients using stochastic gradient descent.
:param X: Features to train on.
:param y: Corresponding label for each set of features.
:return: Returns a list of model weight coefficients where coef[0] is the bias.
"""
for epoch in range(self.epochs):
for features, label in zip(X, y):
self.sg_update(features, label)
return self.bias, self.coef
def get_accuracy(y_bar, y_pred):
"""
Computes what percent of the total testing data the model classified correctly.
:param y_bar: List of ground truth classes for each example.
:param y_pred: List of model predicted class for each example.
:return: Returns a real number between 0 and 1 for the model accuracy.
"""
correct = 0
for i in range(len(y_bar)):
if y_bar[i] == y_pred[i]:
correct += 1
accuracy = (correct / len(y_bar)) * 100.0
return accuracy
```
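The overflow guard in `sigmoid` clips the activation argument to ±20 before exponentiating, which keeps `exp` from overflowing while changing the output by less than $10^{-8}$. Re-implemented standalone, the behaviour at zero and at the cap can be checked directly:

```python
from math import exp

def capped_sigmoid(score, threshold=20.0):
    """Logistic function with the input clipped to [-threshold, threshold]."""
    if abs(score) > threshold:
        score = threshold * (1 if score > 0 else -1)
    activation = exp(score)
    return activation / (1.0 + activation)

mid = capped_sigmoid(0.0)    # exactly 0.5
huge = capped_sigmoid(1e6)   # identical to capped_sigmoid(20.0)
```

Without the clip, `exp(1e6)` raises `OverflowError`; with it, every score beyond ±20 maps to the same saturated probability.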
Training the logistic regression model
```
# Logistic Model
logreg = LogReg(LEARNING_RATE, EPOCHS, len(data_train[0]))
bias_logreg, weights_logreg = logreg.train(data_train, labels_train)
y_logistic = [round(logreg.predict(example)) for example in data_test]
```
Getting the accuracy of the logistic regression model predicting the test data
```
# Compare accuracies
accuracy_logistic = get_accuracy(y_logistic, labels_test)
print('Logistic Regression Accuracy: {:0.3f}'.format(accuracy_logistic))
```
| github_jupyter |
# Feldman and Cousins intervals with asymptotics.
This is a copy of `FC_interval_freq.ipynb` using the asymptotic formulae instead of toys.
```
import numpy as np
import matplotlib.pyplot as plt
import os
import time
import zfit
from zfit.loss import UnbinnedNLL
from zfit.minimize import Minuit
zfit.settings.set_seed(10)
from hepstats.hypotests.calculators import AsymptoticCalculator
from hepstats.hypotests import ConfidenceInterval
from hepstats.hypotests.parameters import POIarray
from hepstats.hypotests.exceptions import POIRangeError
from utils import one_minus_cl_plot, pltdist, plotfitresult
```
In this example we consider an experiment where the observable $x$ is simply the measured value of $\mu$ in an experiment with a Gaussian resolution with known width $\sigma = 1$. We will compute the confidence belt at 90% confidence level for the mean of the Gaussian $\mu$.
We define a sampler below for a Gaussian pdf with $\sigma = 1$ using the `zfit` library; the sampler allows us to generate samples for different values of $\mu$. 1000 entries are generated for each sample.
```
bounds = (-10, 10)
obs = zfit.Space('x', limits=bounds)
mean = zfit.Parameter("mean", 0)
sigma = zfit.Parameter("sigma", 1.0)
model = zfit.pdf.Gauss(obs=obs, mu=mean, sigma=sigma)
data = model.create_sampler(1000)
data.resample()
```
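With 1000 entries and $\sigma = 1$, the uncertainty on the fitted mean should come out close to $\sigma / \sqrt{N} \approx 0.0316$, which sets the scale of the $x = n \times \sigma_x$ scan points used later. A quick check with plain NumPy, independent of `zfit`:

```python
import numpy as np

rng = np.random.default_rng(10)
n, sigma = 1000, 1.0
sample = rng.normal(loc=0.0, scale=sigma, size=n)

x_hat = sample.mean()                  # estimate of mu
x_err_expected = sigma / np.sqrt(n)    # ~0.0316, the standard error of the mean
x_err_empirical = sample.std(ddof=1) / np.sqrt(n)
```

The Hesse error `x_err` extracted from the fit below should agree with this back-of-the-envelope value.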
Below we define the negative log-likelihood function needed to compute Feldman–Cousins intervals as described in [arXiv:1109.0714](https://arxiv.org/abs/1109.0714). The negative log-likelihood is minimised to compute the measured mean $x$ and its uncertainty $\sigma_x$.
```
# Create the negative log likelihood
nll = UnbinnedNLL(model=model, data=data)
# Instantiate a minuit minimizer
minimizer = Minuit(verbosity=0)
# minimisation of the loss function
minimum = minimizer.minimize(loss=nll)
minimum.hesse();
print(minimum)
x_err = minimum.params[mean]["minuit_hesse"]["error"]
```
To compute the confidence belt on $\mu$, 90% CL intervals have to be computed for several values of the measured mean $x$. Samples are generated for $\mu = n \times \sigma_x$ with $n = -6, -5, -4, ..., 3, 4, 5, 6$, and fitted to measure the mean $x_n$.
90 % CL intervals are evaluated for each $x_n$ for the two following cases, $\mu > 0$ and $\mu$ unbounded.
With `hepstats`, the intervals are obtained with the `ConfidenceInterval` object using a calculator. Here the calculator is the `AsymptoticCalculator`, which computes the intervals using asymptotic formulae (see [Asymptotic formulae for likelihood-based tests of new physics](https://arxiv.org/pdf/1007.1727.pdf)); an example of a 68% CL interval with the `AsymptoticCalculator` can be found [here](https://github.com/scikit-hep/hepstats/blob/master/notebooks/hypotests/confidenceinterval_asy_zfit.ipynb).
The option `qtilde = True` should be used if $\mu > 0$.
```
results = {}
for n in np.arange(-6, 7, 1.0):
x = n * x_err
if n not in results:
zfit.settings.set_seed(5)
data.resample(param_values={mean: x})
minimum = minimizer.minimize(loss=nll)
minimum.hesse();
results_n = {}
results_n["x"] = minimum.params[mean]["value"]
results_n["x_err"] = minimum.params[mean]["minuit_hesse"]["error"]
calculator = AsymptoticCalculator(minimum, minimizer)
x_min = results_n["x"] - results_n["x_err"]*3
x_max = results_n["x"] + results_n["x_err"]*3
if n < -1:
x_max = max(0.5 * results_n["x_err"], x_max)
poinull = POIarray(mean, np.linspace(x_min, x_max, 50))
results_n["calculator"] = calculator
results_n["poinull"] = poinull
else:
results_n = results[n]
calculator = results_n["calculator"]
poinull = results_n["poinull"]
if "mu_lower" not in results_n:
for qtilde in [True, False]:
while True:
try:
ci = ConfidenceInterval(calculator, poinull, qtilde=qtilde)
interval = ci.interval(alpha=0.05, printlevel=0)
break
except POIRangeError:
values = poinull.values
poinull = POIarray(mean, np.concatenate([values, [values[-1] + np.diff(values)[0]]]))
results_n["poinull"] = poinull
if qtilde:
results_n["mu_lower"] = interval["lower"]
results_n["mu_upper"] = interval["upper"]
else:
results_n["mu_lower_unbound"] = interval["lower"]
results_n["mu_upper_unbound"] = interval["upper"]
results[n] = results_n
```
The plots of the confidence belt of $\mu$ at 90% CL, as a function of the measured mean $x$ (in units of $\sigma_x$), are shown below for the bounded and unbounded cases.
```
f = plt.figure(figsize=(9, 8))
plt.plot([v["x"]/v["x_err"] for v in results.values()],
[v["mu_upper_unbound"]/v["x_err"] for v in results.values()], color="black", label="90 % CL, no boundaries")
plt.plot([v["x"]/v["x_err"] for v in results.values()],
[v["mu_lower_unbound"]/v["x_err"] for v in results.values()], color="black")
plt.plot([v["x"]/v["x_err"] for v in results.values()],
[v["mu_upper"]/v["x_err"] for v in results.values()], "--", color="crimson", label="90 % CL, $\mu > 0$")
plt.plot([v["x"]/v["x_err"] for v in results.values()],
[v["mu_lower"]/v["x_err"] for v in results.values()], "--", color="crimson")
plt.ylim(0.)
plt.legend(fontsize=15)
plt.ylabel("Mean $\mu$", fontsize=15)
plt.xlabel("Measured mean $x$", fontsize=15);
```
For the unbounded and the $\mu > 0$ cases the plot reproduces figures 3 and 10, respectively, of [A Unified Approach to the Classical Statistical Analysis of Small Signals, Gary J. Feldman, Robert D. Cousins](https://arxiv.org/pdf/physics/9711021.pdf).
| github_jupyter |
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import xarray as xr
import cartopy.crs as ccrs
import glob
import os
import scipy.stats
from matplotlib import cm
import seaborn as sns
import dask
import pickle
from datetime import datetime
import ast
from dask.distributed import Client, LocalCluster
if __name__ == "__main__":
cluster=LocalCluster(host="tcp://127.0.0.1:2459",dashboard_address="127.0.0.1:2461",n_workers=4)
client = Client(cluster)
models = [x.split('/')[-1] for x in glob.glob("/terra/data/cmip5/global/rcp85/*")]
dask.config.set(**{'array.slicing.split_large_chunks': False})
import warnings
warnings.simplefilter("ignore")
#annoying cftime serialization warning
dic = {}
for model in models:
try:
rcp85_files = sorted(glob.glob("/terra/data/cmip5/global/rcp85/"+str(model)+"/r1i1p1/mon/native/tas_*"))
rcp85 = xr.open_mfdataset(rcp85_files, decode_cf=True).sel(lat = -34, method = 'nearest').sel(lon = 18, method = 'nearest').tas
rcp85 = rcp85.sel(time = slice('2000','2250'))
hist_files = sorted(glob.glob("/terra/data/cmip5/global/historical/"+str(model)+"/r1i1p1/mon/native/tas_*"))
hist = xr.open_mfdataset(hist_files, decode_cf=True).sel(lat = -34, method = 'nearest').sel(lon = 18, method = 'nearest').tas
x = xr.concat([hist,rcp85],dim='time').load()
x = x.sortby(x.time)
x = x.resample(time='M').mean()
dic[model] = x - hist.sel(time=slice('1979','2005')).mean(dim='time')
except Exception:
if model == 'BNU-ESM': # no historical monthly data
rcp85_files = sorted(glob.glob("/terra/data/cmip5/global/rcp85/"+str(model)+"/r1i1p1/mon/native/tas_*"))
rcp85 = xr.open_mfdataset(rcp85_files, decode_cf=True).sel(lat = -34, method = 'nearest').sel(lon = 18, method = 'nearest').tas
rcp85 = rcp85.sel(time = slice('2000','2250'))
hist_files = sorted(glob.glob("/terra/data/cmip5/global/historical/"+str(model)+"/r1i1p1/day/native/tas_*"))
hist = xr.open_mfdataset(hist_files, decode_cf=True).sel(lat = -34, method = 'nearest').sel(lon = 18, method = 'nearest').tas
hist = hist.resample(time='M').mean()
x = xr.concat([hist,rcp85],dim='time').load()
x = x.sortby(x.time)
x = x.resample(time='M').mean()
dic[model] = x - hist.sel(time=slice('1979','2005')).mean(dim='time')
elif model == 'MPI-ESM-LR': # a problem with the later than 2100 data
rcp85_files = sorted(glob.glob("/terra/data/cmip5/global/rcp85/"+str(model)+"/r1i1p1/mon/native/tas_*"))[0]
rcp85 = xr.open_mfdataset(rcp85_files, decode_cf=True).sel(lat = -34, method = 'nearest').sel(lon = 18, method = 'nearest').tas
rcp85 = rcp85.sel(time = slice('2000','2250'))
hist_files = sorted(glob.glob("/terra/data/cmip5/global/historical/"+str(model)+"/r1i1p1/mon/native/tas_*"))
hist = xr.open_mfdataset(hist_files, decode_cf=True).sel(lat = -34, method = 'nearest').sel(lon = 18, method = 'nearest').tas
x = xr.concat([hist,rcp85],dim='time').load()
x = x.sortby(x.time)
x = x.resample(time='M').mean()
dic[model] = x - (x.sel(time=slice('1979','2005')).mean(dim='time'))
elif model == 'CNRM-CM5': # a problem with the later than 2100 data
rcp85_files = sorted(glob.glob("/terra/data/cmip5/global/rcp85/"+str(model)+"/r1i1p1/mon/native/tas_*"))[:2]
rcp85 = xr.open_mfdataset(rcp85_files, decode_cf=True).sel(lat = -34, method = 'nearest').sel(lon = 18, method = 'nearest').tas
rcp85 = rcp85.sel(time = slice('2000','2250'))
hist_files = sorted(glob.glob("/terra/data/cmip5/global/historical/"+str(model)+"/r1i1p1/mon/native/tas_*"))
hist = xr.open_mfdataset(hist_files, decode_cf=True).sel(lat = -34, method = 'nearest').sel(lon = 18, method = 'nearest').tas
x = xr.concat([hist,rcp85],dim='time').load()
x = x.sortby(x.time)
x = x.resample(time='M').mean()
dic[model] = x - (x.sel(time=slice('1979','2005')).mean(dim='time'))
#NOAA
x = xr.open_mfdataset('/home/pmarsh/NOAA_2deg/air.2m.mon.mean.nc', decode_cf=True).sel(lat = -34, method = 'nearest').sel(lon = 18, method = 'nearest').air
x = x.sortby(x.time)
x = x.resample(time='M').mean()
x = x.sel(time=slice('1940','2016'))
dic['NOAA'] = x - (x.sel(time=slice('1979','2005')).mean(dim='time'))
# ERA5 - 1hr - daily available but missing some data
x = xr.open_mfdataset(sorted(glob.glob('/terra/data/reanalysis/global/reanalysis/ECMWF/ERA5/1hr/native/tas_*')), decode_cf=True).sel(latitude = -34, method = 'nearest').sel(longitude = 18, method = 'nearest').tas
x = x.resample(time='M').mean()
x = x.sortby(x.time).load()
dic['ERA5'] = x - (x.sel(time=slice('1979','2005')).mean(dim='time'))
pickle.dump(dic, open( "monthly_tas_dic.p", "wb" ) )
client.close()
```
```
%load_ext watermark
%watermark -p torch,pytorch_lightning,torchvision,torchmetrics,matplotlib
%load_ext pycodestyle_magic
%flake8_on --ignore W291,W293,E703
```
<a href="https://pytorch.org"><img src="https://raw.githubusercontent.com/pytorch/pytorch/master/docs/source/_static/img/pytorch-logo-dark.svg" width="90"/></a> <a href="https://www.pytorchlightning.ai"><img src="https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/docs/source/_static/images/logo.svg" width="150"/></a>
# Model Zoo -- VGG16 Trained on CIFAR-10
This notebook implements the VGG16 convolutional network [1] and applies it to CIFAR-10 object classification.

### References
- [1] Simonyan, K., & Zisserman, A. (2014). [Very deep convolutional networks for large-scale image recognition](https://arxiv.org/abs/1409.1556). arXiv preprint arXiv:1409.1556.
## General settings and hyperparameters
- Here, we specify some general hyperparameter values and general settings
- Note that for small datasets, it is often better not to use multiple workers, as doing so can sometimes cause issues with too many open files in PyTorch. So, if you have problems with the data loader later, try setting `NUM_WORKERS = 0` instead.
```
BATCH_SIZE = 256
NUM_EPOCHS = 25
LEARNING_RATE = 0.001
NUM_WORKERS = 4
```
## Implementing a Neural Network using PyTorch Lightning's `LightningModule`
- In this section, we set up the main model architecture using the `LightningModule` from PyTorch Lightning.
- When using PyTorch Lightning, we can start with defining our neural network model in pure PyTorch, and then we use it in the `LightningModule` to get all the extra benefits that PyTorch Lightning provides.
- In this case, we define the VGG16 architecture ourselves in pure PyTorch below (Torchvision ships a VGG16 implementation, but writing the network out makes the architecture explicit):
```
import torch.nn as nn
class PyTorchVGG16(nn.Module):
def __init__(self, num_classes):
super().__init__()
# calculate same padding:
# (w - k + 2*p)/s + 1 = o
# => p = (s(o-1) - w + k)/2
self.block_1 = nn.Sequential(
nn.Conv2d(in_channels=3,
out_channels=64,
kernel_size=(3, 3),
stride=(1, 1),
# (1(32-1)- 32 + 3)/2 = 1
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=64,
out_channels=64,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_2 = nn.Sequential(
nn.Conv2d(in_channels=64,
out_channels=128,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=128,
out_channels=128,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_3 = nn.Sequential(
nn.Conv2d(in_channels=128,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=256,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=256,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_4 = nn.Sequential(
nn.Conv2d(in_channels=256,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_5 = nn.Sequential(
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.features = nn.Sequential(
self.block_1, self.block_2,
self.block_3, self.block_4,
self.block_5
)
self.classifier = nn.Sequential(
nn.Linear(512, 4096),
nn.ReLU(True),
nn.Dropout(p=0.5),
nn.Linear(4096, 4096),
nn.ReLU(True),
nn.Dropout(p=0.5),
nn.Linear(4096, num_classes),
)
# self.avgpool = nn.AdaptiveAvgPool2d((7, 7))
for m in self.modules():
if isinstance(m, torch.nn.Conv2d):
#n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
#m.weight.data.normal_(0, np.sqrt(2. / n))
m.weight.detach().normal_(0, 0.05)
if m.bias is not None:
m.bias.detach().zero_()
elif isinstance(m, torch.nn.Linear):
m.weight.detach().normal_(0, 0.05)
m.bias.detach().zero_()
def forward(self, x):
x = self.features(x)
# x = self.avgpool(x)
x = x.view(x.size(0), -1)
logits = self.classifier(x)
return logits
```
- Next, we can define our `LightningModule` as a wrapper around our PyTorch model:
```
import pytorch_lightning as pl
import torchmetrics
# LightningModule that receives a PyTorch model as input
class LightningModel(pl.LightningModule):
def __init__(self, model, learning_rate):
super().__init__()
self.learning_rate = learning_rate
# The inherited PyTorch module
self.model = model
# Save settings and hyperparameters to the log directory
# but skip the model parameters
self.save_hyperparameters(ignore=['model'])
# Set up attributes for computing the accuracy
self.train_acc = torchmetrics.Accuracy()
self.valid_acc = torchmetrics.Accuracy()
self.test_acc = torchmetrics.Accuracy()
# Defining the forward method is only necessary
# if you want to use a Trainer's .predict() method (optional)
def forward(self, x):
return self.model(x)
# A common forward step to compute the loss and labels
# this is used for training, validation, and testing below
def _shared_step(self, batch):
features, true_labels = batch
logits = self(features)
loss = torch.nn.functional.cross_entropy(logits, true_labels)
predicted_labels = torch.argmax(logits, dim=1)
return loss, true_labels, predicted_labels
def training_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.log("train_loss", loss)
# To account for Dropout behavior during evaluation
self.model.eval()
with torch.no_grad():
_, true_labels, predicted_labels = self._shared_step(batch)
self.train_acc.update(predicted_labels, true_labels)
self.log("train_acc", self.train_acc, on_epoch=True, on_step=False)
self.model.train()
return loss # this is passed to the optimizer for training
def validation_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.log("valid_loss", loss)
self.valid_acc(predicted_labels, true_labels)
self.log("valid_acc", self.valid_acc,
on_epoch=True, on_step=False, prog_bar=True)
def test_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.test_acc(predicted_labels, true_labels)
self.log("test_acc", self.test_acc, on_epoch=True, on_step=False)
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=self.learning_rate)
return optimizer
```
## Setting up the dataset
- In this section, we are going to set up our dataset.
### Inspecting the dataset
```
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
train_dataset = datasets.CIFAR10(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
train_loader = DataLoader(dataset=train_dataset,
batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS,
drop_last=True,
shuffle=True)
test_dataset = datasets.CIFAR10(root='./data',
train=False,
transform=transforms.ToTensor())
test_loader = DataLoader(dataset=test_dataset,
batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS,
drop_last=False,
shuffle=False)
from collections import Counter
train_counter = Counter()
for images, labels in train_loader:
train_counter.update(labels.tolist())
print('\nTraining label distribution:')
sorted(train_counter.items(), key=lambda pair: pair[0])
test_counter = Counter()
for images, labels in test_loader:
test_counter.update(labels.tolist())
print('\nTest label distribution:')
sorted(test_counter.items(), key=lambda pair: pair[0])
```
### A quick visual check
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import torchvision
for images, labels in train_loader:
break
plt.figure(figsize=(8, 8))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(torchvision.utils.make_grid(
images[:64],
padding=2,
normalize=True),
(1, 2, 0)))
plt.show()
```
### Performance baseline
- Especially for imbalanced datasets, it's quite useful to compute a performance baseline.
- In classification contexts, a useful baseline is to compute the accuracy for a scenario where the model always predicts the majority class -- you want your model to be better than that!
```
majority_class = test_counter.most_common(1)[0]
majority_class
```
- (To be fair, the classes in the test set are perfectly evenly distributed, so the majority class is an arbitrary choice in this case)
```
baseline_acc = majority_class[1] / sum(test_counter.values())
print('Accuracy when always predicting the majority class:')
print(f'{baseline_acc:.2f} ({baseline_acc*100:.2f}%)')
```
### Setting up a `DataModule`
- There are three main ways we can prepare the dataset for Lightning. We can
1. make the dataset part of the model;
2. set up the data loaders as usual and feed them to the fit method of a Lightning Trainer -- the Trainer is introduced in the next subsection;
3. create a LightningDataModule.
- Here, we are going to use approach 3, which is the most organized approach. The `LightningDataModule` consists of several self-explanatory methods as we can see below:
```
import os
from torch.utils.data.dataset import random_split
from torch.utils.data import DataLoader
from torchvision import transforms
class DataModule(pl.LightningDataModule):
def __init__(self, data_path='./'):
super().__init__()
self.data_path = data_path
def prepare_data(self):
datasets.CIFAR10(root=self.data_path,
download=True)
self.train_transform = torchvision.transforms.Compose([
# torchvision.transforms.Resize((70, 70)),
# torchvision.transforms.RandomCrop((64, 64)),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
self.test_transform = torchvision.transforms.Compose([
# torchvision.transforms.Resize((70, 70)),
# torchvision.transforms.CenterCrop((64, 64)),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
return
def setup(self, stage=None):
train = datasets.CIFAR10(root=self.data_path,
train=True,
transform=self.train_transform,
download=False)
self.test = datasets.CIFAR10(root=self.data_path,
train=False,
transform=self.test_transform,
download=False)
self.train, self.valid = random_split(train, lengths=[45000, 5000])
def train_dataloader(self):
train_loader = DataLoader(dataset=self.train,
batch_size=BATCH_SIZE,
drop_last=True,
shuffle=True,
num_workers=NUM_WORKERS)
return train_loader
def val_dataloader(self):
valid_loader = DataLoader(dataset=self.valid,
batch_size=BATCH_SIZE,
drop_last=False,
shuffle=False,
num_workers=NUM_WORKERS)
return valid_loader
def test_dataloader(self):
test_loader = DataLoader(dataset=self.test,
batch_size=BATCH_SIZE,
drop_last=False,
shuffle=False,
num_workers=NUM_WORKERS)
return test_loader
```
- Note that the `prepare_data` method is usually used for steps that only need to be executed once, for example, downloading the dataset; the `setup` method defines the dataset loading -- if you run your code in a distributed setting, this will be called on each node / GPU.
- Next, let's initialize the `DataModule`; we use a random seed for reproducibility (so that the dataset is shuffled the same way when we re-execute this code):
```
import torch
torch.manual_seed(1)
data_module = DataModule(data_path='./data')
```
## Training the model using the PyTorch Lightning Trainer class
- Next, we initialize our model.
- Also, we define a call back so that we can obtain the model with the best validation set performance after training.
- PyTorch Lightning offers [many advanced logging services](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html) like Weights & Biases. Here, we will keep things simple and use the `CSVLogger`:
```
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import CSVLogger
pytorch_model = PyTorchVGG16(num_classes=10)
lightning_model = LightningModel(
pytorch_model, learning_rate=LEARNING_RATE)
callbacks = [ModelCheckpoint(
save_top_k=1, mode='max', monitor="valid_acc")] # save top 1 model
logger = CSVLogger(save_dir="logs/", name="my-model")
```
- Now it's time to train our model:
```
import time
trainer = pl.Trainer(
max_epochs=NUM_EPOCHS,
callbacks=callbacks,
progress_bar_refresh_rate=50, # recommended for notebooks
accelerator="auto", # Uses GPUs or TPUs if available
devices="auto", # Uses all available GPUs/TPUs if applicable
logger=logger,
log_every_n_steps=100)
start_time = time.time()
trainer.fit(model=lightning_model, datamodule=data_module)
runtime = (time.time() - start_time)/60
print(f"Training took {runtime:.2f} min in total.")
```
## Evaluating the model
- After training, let's plot our training ACC and validation ACC using pandas, which, in turn, uses matplotlib for plotting (you may want to consider a [more advanced logger](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html) that does that for you):
```
import pandas as pd
metrics = pd.read_csv(f"{trainer.logger.log_dir}/metrics.csv")
aggreg_metrics = []
agg_col = "epoch"
for i, dfg in metrics.groupby(agg_col):
agg = dict(dfg.mean())
agg[agg_col] = i
aggreg_metrics.append(agg)
df_metrics = pd.DataFrame(aggreg_metrics)
df_metrics[["train_loss", "valid_loss"]].plot(
grid=True, legend=True, xlabel='Epoch', ylabel='Loss')
df_metrics[["train_acc", "valid_acc"]].plot(
grid=True, legend=True, xlabel='Epoch', ylabel='ACC')
```
- The `trainer` automatically saves the model with the best validation accuracy for us, which we can load from the checkpoint via the `ckpt_path='best'` argument; below we use the `trainer` instance to evaluate the best model on the test set:
```
trainer.test(model=lightning_model, datamodule=data_module, ckpt_path='best')
```
## Predicting labels of new data
- You can use the `trainer.predict` method on a new `DataLoader` or `DataModule` to apply the model to new data.
- Alternatively, you can also manually load the best model from a checkpoint as shown below:
```
path = trainer.checkpoint_callback.best_model_path
print(path)
lightning_model = LightningModel.load_from_checkpoint(
path, model=pytorch_model)
lightning_model.eval();
```
- Note that our PyTorch model, which is passed to the Lightning model, requires input arguments. However, this is automatically taken care of because we used `self.save_hyperparameters(ignore=['model'])` in our `LightningModel`'s `__init__` method, so only the model itself has to be passed in explicitly.
- Now, below is an example applying the model manually. Here, pretend that the `test_dataloader` is a new data loader.
```
test_dataloader = data_module.test_dataloader()
all_true_labels = []
all_predicted_labels = []
for batch in test_dataloader:
features, labels = batch
with torch.no_grad():
logits = lightning_model(features)
predicted_labels = torch.argmax(logits, dim=1)
all_predicted_labels.append(predicted_labels)
all_true_labels.append(labels)
all_predicted_labels = torch.cat(all_predicted_labels)
all_true_labels = torch.cat(all_true_labels)
all_predicted_labels[:5]
```
Just as an internal check, if the model was loaded correctly, the test accuracy below should be identical to the test accuracy we saw earlier in the previous section.
```
test_acc = torch.mean((all_predicted_labels == all_true_labels).float())
print(f'Test accuracy: {test_acc:.4f} ({test_acc*100:.2f}%)')
```
## Inspecting Failure Cases
- In practice, it is often informative to look at failure cases like wrong predictions for particular training instances as it can give us some insights into the model behavior and dataset.
- Inspecting failure cases can sometimes reveal interesting patterns and even highlight dataset and labeling issues.
```
# Append the folder that contains the
# helper_data.py, helper_plotting.py, and helper_evaluate.py
# files so we can import from them
import sys
sys.path.append('../pytorch_ipynb')
from helper_data import UnNormalize
from helper_plotting import show_examples
class_dict = {0: 'airplane',
1: 'automobile',
2: 'bird',
3: 'cat',
4: 'deer',
5: 'dog',
6: 'frog',
7: 'horse',
8: 'ship',
9: 'truck'}
# We normalized each channel during training; here
# we are reverting the normalization so that we
# can plot them as images
unnormalizer = UnNormalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
show_examples(
model=lightning_model,
data_loader=test_dataloader,
unnormalizer=unnormalizer,
class_dict=class_dict)
from torchmetrics import ConfusionMatrix
cmat = ConfusionMatrix(num_classes=len(class_dict))
for x, y in test_dataloader:
pred = lightning_model(x)
cmat(pred, y)
cmat_tensor = cmat.compute()
from helper_plotting import plot_confusion_matrix
plot_confusion_matrix(
cmat_tensor.numpy(),
class_names=class_dict.values())
plt.show()
```
## Single-image usage
```
%matplotlib inline
import matplotlib.pyplot as plt
```
- Assume we have a single image as shown below:
```
from PIL import Image
image = Image.open('data/cifar10_pngs/90_airplane.png')
plt.imshow(image, cmap='Greys')
plt.show()
```
- Note that we have to use the same image transformation that we used earlier in the `DataModule`.
- While we didn't apply any image augmentation, we could use the `to_tensor` function from the torchvision library; however, as a general template that provides flexibility for more complex transformation chains, let's use the `Compose` class for this:
```
transform = data_module.train_transform
image_chw = transform(image)
```
- Note that `ToTensor` returns the image in the CHW format. CHW refers to the dimensions and stands for channel, height, and width.
```
print(image_chw.shape)
```
- However, the PyTorch / PyTorch Lightning model expects images in NCHW format, where N stands for the number of images (e.g., in a batch).
- We can add the additional batch dimension via `unsqueeze` as shown below:
```
image_nchw = image_chw.unsqueeze(0)
print(image_nchw.shape)
```
- Now that we have the image in the right format, we can feed it to our classifier:
```
with torch.no_grad(): # since we don't need to backprop
logits = lightning_model(image_nchw)
probas = torch.softmax(logits, axis=1)
predicted_label = torch.argmax(probas)
int_to_str = {
0: 'airplane',
1: 'automobile',
2: 'bird',
3: 'cat',
4: 'deer',
5: 'dog',
6: 'frog',
7: 'horse',
8: 'ship',
9: 'truck'}
print(f'Predicted label: {int_to_str[predicted_label.item()]}')
print(f'Class-membership probability {probas[0][predicted_label]*100:.2f}%')
```
## Drawing Edgeworth Box
```
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
def Edgeworth_box(u1, u2, util1, util2, MRS1, MRS2, ttl, ttl_margin= 1, top=0.7, fname=None):
l1 = 0.00000001
x1 = np.arange(l1, u1, 0.01)
x2 = np.arange(l1, u2, 0.01)
X1, X2 = np.meshgrid(x1, x2)
V1 = util1(X1, X2)
V2 = util2(u1-X1, u2-X2)
x, y = contract_curve(u1,u2, MRS1,MRS2,num_indiff = 10)
xr = u1-x[::-1] # to use the same contours for Consumer 2.
yr = u2-y[::-1] # to use the same contours for Consumer 2.
clev1 = util1(x,y)
clev2 = util2(xr, yr)
Draw_Edgeworth_box(u1, u2, X1, X2, V1, V2, clev1,clev2, ttl, ttl_margin, top,contract1=x, contract2=y)
if fname != None:
plt.savefig(fname)
from scipy.optimize import fsolve
from scipy.optimize import fminbound
def contract_curve(u1, u2, MRS1, MRS2, num_indiff = 10):
xs = np.linspace(0,u1,num_indiff+2)
xs = xs[1:-1]
ys = []
for x in xs:
y = fminbound(lambda y: (MRS1(x,y) - MRS2(u1-x, u2-y))**2, 0, u2)
ys.append(min(max(float(y), 0), u2))  # np.asscalar is removed in modern NumPy
ys = np.asarray(ys)
return xs, ys
def Draw_Edgeworth_box(u1, u2,X1, X2, V1, V2, clev1,clev2, ttl=[], ttl_margin= 1, top=0.7,contract1=[], contract2=[]):
# u1: the total amount of good 1
# u2: the total amount of good 2
# V1: levels of utility for consumer 1
# V2: levels of utility for consumer 2
# clev1: levels of contours (for consumer 1) you are interested in
# clev2: levels of contours (for consumer 2) you are interested in
# ttl: a title
# ttl_margin: space added for the title
# top: a location where the graph starts (if top=1, the title and the graph will overlap)
xtcks = np.arange(0, u1+1)
ytcks = np.arange(0,u2+1)
# Adjusting the title is a bit annoying when we try to set xlabel on the top
if len(ttl)>0:
fig = plt.figure(figsize = (u1, u2/top))
ax1 = fig.add_subplot(1,1,1)
plt.subplots_adjust(top=top)
fig.suptitle(ttl)
#plt.title(ttl)
else:
fig = plt.figure(figsize = (u1, u2))
col1 = 'tab:red'
col2 = 'tab:blue'
plt.contour(X1, X2, V1,clev1, linewidths = 1, colors=col1, linestyles = 'dashed')
plt.contour(X1, X2, V2,clev2, linewidths = 1, colors = col2, linestyles = 'dashed')
plt.xlim([0,u1])
plt.ylim([0,u2])
plt.xlabel('$x_{1,1}$', color = col1, fontsize = 13)
plt.ylabel('$x_{1,2}$', color = col1, fontsize = 13)
plt.xticks(xtcks, color = col1)
plt.yticks(ytcks, color = col1)
xplt = np.linspace(0,u1,12)
yplt = np.linspace(0,u2,12)
if len(contract1)>0:
xplt[1:-1]=contract1
yplt[1:-1]=contract2
plt.plot(xplt,yplt, 'k--', alpha = 0.7)
ax1 = plt.gca()
ax2 = plt.twinx(ax1)
plt.ylabel('$x_{2,2}$', color=col2 , fontsize = 13)
plt.yticks(ytcks,ytcks[::-1], color = col2)
# It's a bit hacky, but the following looks like an easy way.
ax3 = plt.twiny(ax1)
plt.xlabel('$x_{2,1}$', color=col2, fontsize = 13)
plt.xticks(xtcks,xtcks[::-1], color = col2)
return fig
```
### Edgeworth Box under Homothetic Preferences
- ##### Q: Is the contract curve under homothetic preferences always the diagonal of the Edgeworth box?
- ##### A: If preferences are identical and strictly convex (i.e., strictly (quasi-)concave utility functions), yes.
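One way to see the "yes": homotheticity means the MRS depends only on the ratio of the two goods, and along the box diagonal both consumers face the same ratio. A short sketch of the argument (with identical preferences, so both consumers share the same function $g$):

```latex
MRS_i(x, y) = g_i\!\left(\frac{y}{x}\right) \quad \text{(homotheticity)}, \qquad
y = \frac{u_2}{u_1}\,x \;\Rightarrow\;
\frac{y}{x} \;=\; \frac{u_2 - y}{u_1 - x} \;=\; \frac{u_2}{u_1}
```

so with $g_1 = g_2 = g$, $MRS_1(x,y) = g(u_2/u_1) = MRS_2(u_1 - x,\, u_2 - y)$ at every diagonal point, which is exactly the tangency condition defining the contract curve; strict convexity rules out tangencies off the diagonal.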
##### Example with CES
(with the same weights on the two goods)
```
def CES(x,y,dlt):
return (x**dlt)/dlt + (y**dlt)/dlt
def Cobb_Douglas(x, y, al, bet):
return (x**al)*(y**bet)
def MRS_CES(x,y,dlt):
return (x**(dlt-1))/(y**(dlt-1))
def MRS_CD(x, y, al, bet):
return (al*y)/(bet*x)
dlt1 = 0.5
dlt2 = dlt1
Edgeworth_box(u1=6, u2=3,
util1 = lambda x,y: CES(x, y, dlt1),
util2 =lambda x,y: CES(x, y, dlt2),
MRS1 =lambda x, y: MRS_CES(x,y,dlt1),
MRS2 =lambda x, y: MRS_CES(x,y,dlt2),
ttl='$u_1=x^{\delta_1}/{\delta_1} + y^{\delta_1}/{\delta_1}$ : ${\delta_1}=$' + '{}\n'.format(dlt1)
+'$u_2=x^{\delta_2}/\delta_2 + y^{\delta_2}/\delta_2$ : ${\delta_2}=$' + '{}\n'.format(dlt2),
ttl_margin= 1, top=0.7, fname='Edgeworth_identical1.png')
dlt1 = 0.5
dlt2 = -3
Edgeworth_box(u1=6, u2=3,
util1 = lambda x,y: CES(x, y, dlt1),
util2 =lambda x,y: CES(x, y, dlt2),
MRS1 =lambda x, y: MRS_CES(x,y,dlt1),
MRS2 =lambda x, y: MRS_CES(x,y,dlt2),
ttl='$u_1=x^{\delta_1}/{\delta_1} + y^{\delta_1}/{\delta_1}$ : ${\delta_1}=$' + '{}\n'.format(dlt1)
+'$u_2=x^{\delta_2}/\delta_2 + y^{\delta_2}/\delta_2$ : ${\delta_2}=$' + '{}'.format(dlt2),
ttl_margin= 1, top=0.7, fname='Edgeworth_not_identical1.png')
```
In the graph above, the weights on the two goods are the same, but the degree of complementarity differs between the consumers (Consumer 2 perceives higher complementarity than Consumer 1).
##### Example with Cobb-Douglas
(with different weights on the two goods)
```
al = 0.3
bet = 0.7
Edgeworth_box(u1=5, u2=5,
util1 = lambda x,y: Cobb_Douglas(x, y, al,bet),
util2 =lambda x,y: Cobb_Douglas(x, y, al,bet),
MRS1 =lambda x, y: MRS_CD(x,y, al,bet),
MRS2 =lambda x, y: MRS_CD(x,y, al,bet),
ttl=r'$u_1=x^{\alpha_1}y^{\beta_1} : {\alpha_1}$'+ '={}, '.format(al) + r'$\beta_1$'+ '={}\n'.format(bet)
+r'$u_2=x^{\alpha_2}y^{\beta_2} : {\alpha_2}$'+ '={}, '.format(al) + r'$\beta_2$'+ '={}'.format(bet),
ttl_margin= 1, top=0.8, fname='Edgeworth_identical2.png')
```
Even if good 2 is more important, the contract curve is still the diagonal, since preferences are identical and homothetic (and strictly convex).
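This claim is easy to verify numerically. The sketch below re-defines a local `mrs_cd` (mirroring the `MRS_CD` helper above, so it is self-contained) and checks the tangency condition at several points on the diagonal:

```python
# Numerical check: with identical Cobb-Douglas preferences, every point on the
# box diagonal satisfies the tangency condition MRS1(x, y) == MRS2(u1-x, u2-y).
def mrs_cd(x, y, al, bet):
    return (al * y) / (bet * x)

u1, u2, al, bet = 5.0, 5.0, 0.3, 0.7
for x in (0.5, 1.0, 2.5, 4.0):
    y = (u2 / u1) * x  # point on the diagonal
    assert abs(mrs_cd(x, y, al, bet) - mrs_cd(u1 - x, u2 - y, al, bet)) < 1e-12
print("tangency holds along the diagonal")
```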
If we use different weights for the two consumers, then we obtain a different contract curve.
```
al = 0.3
bet = 0.7
asym = 0.4
Edgeworth_box(u1=5, u2=5,
util1 = lambda x,y: Cobb_Douglas(x, y, al,bet),
util2 =lambda x,y: Cobb_Douglas(x, y, al+asym,bet-asym),
MRS1 =lambda x, y: MRS_CD(x,y, al,bet),
MRS2 =lambda x, y: MRS_CD(x,y, al+asym,bet-asym),
ttl=r'$u_1=x^{\alpha_1}y^{\beta_1} : {\alpha_1}$'+ '={:02.1f}, '.format(al) + r'$\beta_1$'+ '={:02.1f}\n'.format(bet)
+r'$u_2=x^{\alpha_2}y^{\beta_2} : {\alpha_2}$'+ '={:02.1f}, '.format(al+asym) + r'$\beta_2$'+ '={:02.1f}'.format(bet-asym),
ttl_margin= 1, top=0.8, fname='Edgeworth_not_identical2.png')
```
#### NOTE:
The joblib dumps of the final `corpus_tf_idf.pkl` and `som.pkl` are not included in the zip file, as they were prohibitively large. This is NOT due to an incomplete implementation on our part, but to a peculiarity of the corpus assigned to us, which forces certain arrays to be unnecessarily cast to float64. Mr. Siolas has seen the issue and gave us permission to upload the pickles to a drive so that you can access them. He assured us that there will be no grading penalty. The file links:
* `corpus_tf_idf.pkl` : https://drive.google.com/open?id=1q5G1fRPwNBhNUzkWNTAqvAZCzY1B0tJF
* `som.pkl` : https://drive.google.com/open?id=1V5Je-RfpvQyCgm-F5UDGaPD88gXbdad8
For any question or problem with the links, please contact us. For the issue that arose, you may refer to Mr. Siolas.
# Neural Networks ECE NTUA Course 2019-20 ~ Team M.B.4
## Lab Assignment #2: Unsupervised Learning (Recommendation System & SOM)
### A. The Team
* Αβραμίδης Κλεάνθης ~ 03115117
* Κρατημένος Άγγελος ~ 03115025
* Πανίδης Κωνσταντίνος ~ 03113602
### Requested Imports
```
import pandas as pd, numpy as np, scipy as sp
import nltk, string, collections
import time, joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.tokenize import word_tokenize
import somoclu, matplotlib
%matplotlib inline
```
## Importing the Dataset
The dataset we will work with is based on the [Carnegie Mellon Movie Summary Corpus](http://www.cs.cmu.edu/~ark/personas/). It is a dataset of roughly 40,000 movie descriptions. Each movie's description consists of its title, one or more labels characterizing the movie's genre, and, finally, a synopsis of its plot. We first load the dataset (use this code as-is; you do not need the csv file) into the dataframe `df_data_1`:
```
dataset_url = "https://drive.google.com/uc?export=download&id=1PdkVDENX12tQliCk_HtUnAUbfxXvnWuG"
df_data_1 = pd.read_csv(dataset_url, sep='\t', header=None, quoting=3, error_bad_lines=False)
```
Each team will work on a unique subset of 5,000 movies (a different dataset for each team) as follows:
1. Each team can find [here](https://docs.google.com/spreadsheets/d/1oEr3yuPg22lmMeqDjFtWjJRzmGQ8N57YIuV-ZOvy3dM/edit?usp=sharing) its unique "Seed" number, from 1 to 78.
2. The data frame `df_data_2` has 78 rows (teams) and 5,000 columns. Each team corresponds to the row of the table given by its `team_seed_number`. This row contains 5,000 distinct numbers that correspond to movies in the original dataset.
3. In the next cell, change the variable `team_seed_number` to your team's Seed from the Google Sheet.
4. Run the code. This produces the titles, categories, catbins, summaries, and corpus unique to your team, which you will work with.
```
# set the seed that corresponds to your team
team_seed_number = 19
movie_seeds_url = "https://drive.google.com/uc?export=download&id=1RRoiOjhD0JB3l4oHNFOmPUqZHDphIdwL"
df_data_2 = pd.read_csv(movie_seeds_url, header=None, error_bad_lines=False)
# select the row that corresponds to your team
my_index = df_data_2.iloc[team_seed_number,:].values
titles = df_data_1.iloc[:, [2]].values[my_index] # movie titles (string)
categories = df_data_1.iloc[:, [3]].values[my_index] # movie categories (string)
bins = df_data_1.iloc[:, [4]]
catbins = bins[4].str.split(',', expand=True).values.astype(float)[my_index] # movie categories in binary form
summaries = df_data_1.iloc[:, [5]].values[my_index] # movie summaries (string)
corpus = summaries[:,0].tolist() # list form of summaries
```
- The array **titles** contains the movie titles. Example: 'Sid and Nancy'.
- The array **categories** contains the categories (genres) of each movie as a string. Example: '"Tragedy", "Indie", "Punk rock", "Addiction Drama", "Cult", "Musical", "Drama", "Biopic \[feature\]", "Romantic drama", "Romance Film", "Biographical film"'. Note that this is a comma-separated list of strings, each string being one category.
- The array **catbins** again holds the movie categories, but in binary form ([one hot encoding](https://hackernoon.com/what-is-one-hot-encoding-why-and-when-do-you-have-to-use-it-e3c6186d008f)). Its dimensions are 5,000 x 322 (one column per distinct category). If the movie belongs to a given genre the corresponding column takes the value 1, otherwise 0.
- The array **summaries** and the list **corpus** hold the movie summaries (corpus is simply summaries in list form). Each summary is a (usually long) string. Example: *'The film is based on the real story of a Soviet Internal Troops soldier who killed his entire unit as a result of Dedovschina. The plot unfolds mostly on board of the prisoner transport rail car guarded by a unit of paramilitary conscripts.'*
- We take as the **ID** of each movie its row number, i.e. its index in the list. Example: to print the summary of the movie with `ID=99` (the hundredth one) we write `print(corpus[99])`.
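As a hedged toy illustration of the one-hot scheme described above (the four category names here are invented; the real `catbins` matrix has 322 columns), a movie tagged 'Drama' and 'Comedy' would be encoded like this:

```python
import numpy as np

# Hypothetical category vocabulary; the actual dataset has 322 categories
all_categories = ["Action", "Comedy", "Drama", "Horror"]

def one_hot(movie_categories, vocabulary):
    """Binary vector with 1.0 at every category the movie belongs to."""
    return np.array([1.0 if c in movie_categories else 0.0 for c in vocabulary])

vec = one_hot(["Drama", "Comedy"], all_categories)
print(vec)  # [0. 1. 1. 0.]
```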
```
ID = 99
print(titles[ID])
print(categories[ID])
print(catbins[ID])
print(corpus[ID])
```
# Building a content-based movie recommender system
<img src="http://clture.org/wp-content/uploads/2015/12/Netflix-Streaming-End-of-Year-Posts.jpg" width="70%">
The first application you will develop is a content-based [recommender system](https://en.wikipedia.org/wiki/Recommender_system) for movies. Recommender systems aim to automatically suggest to the user items from a collection that, ideally, the user will find interesting. Recommender systems are categorized by how the recommended items are selected (filtered). The two main categories are collaborative filtering, where the system recommends items rated positively by users with a rating history similar to the user's, and content-based filtering, where the system recommends items whose content (with respect to some features) is similar to items the user has previously rated positively.
The recommender you will develop is **content-based**, built specifically on the movie summaries (corpus).
## Adding stop words found empirically to improve the recommendations
We added proper names and words frequent in movie descriptions (e.g. plot, story, film) that add no value to the content of a summary:
```
nltk.download("stopwords")
name_file = open("stopwords.txt",'r')
names = [line.split(',') for line in name_file.readlines()]
name_stopwords = names[0]
for i in range(len(name_stopwords)): name_stopwords[i]=name_stopwords[i].strip()
movie_words=["story","film","plot","about","movie",'000','mother','father','sister','brother','daughter','son','village',
             '10', '12', '15', '20','00', '01', '02', '04', '05', '06', '07', '08', '09', '100', '1000', '10th', '11',
             '120', '13', '13th', '14', '14th', '150', '15th', '16', '16th', '17', '18']  # note the comma after 'village'
my_stopwords = stopwords.words('english') + movie_words + name_stopwords
```
## Stemming & TF-IDF
We process the corpus further by reducing words to their stems, merging related word forms to improve efficiency. We then perform the required TF-IDF transformation:
```
def thorough_filter(words):
filtered_words = []
for word in words:
pun = []
for letter in word: pun.append(letter in string.punctuation)
if not all(pun): filtered_words.append(word)
return filtered_words
def preprocess_document(document):
words = nltk.word_tokenize(document.lower())
porter_stemmer = PorterStemmer()
stemmed_words = [porter_stemmer.stem(word) for word in words]
return (" ".join(stemmed_words))
# required downloads for the stemmer/lemmatizer/tokenizer
nltk.download('wordnet')
nltk.download('rslp')
nltk.download('punkt')
stemmed_corpus = [preprocess_document(corp) for corp in corpus]
vectorizer = TfidfVectorizer(max_df=0.2, min_df=0.01, analyzer='word', stop_words = my_stopwords, ngram_range=(1,1))
corpus_tf_idf = vectorizer.fit_transform(stemmed_corpus).toarray()
print("corpus after tf-idf",corpus_tf_idf.shape)
joblib.dump(corpus_tf_idf, 'corpus_tf_idf.pkl')
```
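As a hedged sketch of what `max_df`/`min_df` do (the toy documents below are invented and unrelated to the movie corpus): `max_df` prunes terms that appear in too large a fraction of documents, `min_df` drops very rare ones, and together they shrink the vocabulary:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

toy_corpus = [
    "the hero saves the city",
    "the hero fights the villain",
    "a quiet romance in the city",
    "a quiet drama about family",
]

# No pruning: every token (of 2+ characters) becomes a feature
full = TfidfVectorizer().fit(toy_corpus)

# Drop terms in more than half the documents ("the") and terms seen in fewer than 2
pruned = TfidfVectorizer(max_df=0.5, min_df=2).fit(toy_corpus)

print(sorted(full.vocabulary_))
print(sorted(pruned.vocabulary_))  # only mid-frequency terms survive
```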
The [TfidfVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html) call shown here **is already optimized**. Our choices of methods and parameters have **a significant effect on recommendation quality, on the dimensionality and volume of the data, and consequently on training times**.
## Implementing the recommender
The recommender you will deliver is a function `content_recommender` with two arguments, `target_movie` and `max_recommendations`. `target_movie` receives the ID of a target movie for which we want to find movies with similar content (summary); `max_recommendations` is how many to return.
Implement the function as follows:
- for the target movie, compute from `corpus_tf_idf` its [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) with every movie in your collection
- based on the cosine similarities you computed, build an array of movie indices (`ID`) sorted from largest to smallest similarity. Example: if the movie with index 1 has cosine similarities \[0.2 1 0.6\] with 3 movies (it has similarity 1 with itself), the sorted index array is \[1 2 0\].
- For the target movie print: id, title, summary, categories
- For the `max_recommendations` movies with the highest cosine similarity (in decreasing order, excluding the target movie itself, which has cosine similarity 1 with itself), print the recommendation rank (1 = closest, 2 = second closest, etc.), id, title, summary, categories
```
def movie_info(movie_id):
print(*titles[movie_id].flatten(),"~ ID:",movie_id)
print("Category: ",*categories[movie_id].flatten())
def get_distances(target_movie_id,corpus):
distances = np.zeros((corpus.shape[0]))
for i in range(corpus.shape[0]): distances[i]=sp.spatial.distance.cosine(corpus[target_movie_id],corpus[i])
return distances
def content_recommender(target_movie, max_recommendations):
    distances = get_distances(target_movie, corpus_tf_idf)
    similarity = np.argsort(distances)  # smallest distance first (similarity = 1 - distance)
    similarity = similarity[:max_recommendations + 1]
    for i in similarity:
        if i == target_movie:
            print("Target Movie:", end=" ")
            movie_info(i)
            print("\nRecommendations:\n")
        else:
            movie_info(i); print()
```
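The `get_distances` loop above calls `scipy.spatial.distance.cosine` once per movie; as a hedged alternative sketch (not the delivered implementation), the whole similarity column can be computed in a single vectorized call:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def get_similarities(target_movie_id, corpus):
    """Cosine similarity of one movie against every row of the tf-idf matrix."""
    return cosine_similarity(corpus[target_movie_id].reshape(1, -1), corpus).ravel()

# Toy 3-movie, 2-feature matrix: movie 1 is identical to movie 0
toy = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
sims = get_similarities(0, toy)
order = np.argsort(-sims)  # most similar first
print(sims, order)
```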
## Optimization
Once you have implemented `content_recommender`, use it to tune the `TfidfVectorizer`. Start by looking at what the system returns for random target movies and a small `max_recommendations` (2 or 3). If for some movies the system seems to return semantically close movies, note their `ID`s. Then try to tune the `TfidfVectorizer` for those specific `ID`s so that semantically close movies are returned for a larger `max_recommendations`. At the same time, as you tune the `TfidfVectorizer`, you should be getting good recommendations for a growing number of random movies. You can also improve the system by watching for the many phenomena that it mistakes for content similarity but that should not really be taken into account (see the [FAQ](https://docs.google.com/document/d/1-E4eQkVnTxa3Jb0HL9OAs11bugYRRZ7RNWpu7yh9G4s/edit?usp=sharing)). In parallel, another direction of optimization is to use the `TfidfVectorizer` parameters to reduce the dimensionality of the Vector Space Model up to the point where quality starts to suffer.
```
corpus_tf_idf = joblib.load('corpus_tf_idf.pkl')
content_recommender(120,5)
```
## Explaining the choices and qualitative interpretation
Describe how you arrived at your choices when tuning the `TfidfVectorizer`. Give 10 examples (IDs) that return good results up to `max_recommendations` (5 or more) and note the theme that links the movies.
### Tuning process
Following the suggestion in the assignment, we started modifying the TfidfVectorizer parameters first for 2 recommendations, then for 5.
We first observed that the max_df parameter of TfidfVectorizer removed no additional features for values above 0.4, while values below that degraded recommendation quality. We also tried ngram_range with the values (1,1) (unigrams only), (1,2) (unigrams and bigrams) and (1,3) (unigrams, bigrams and trigrams), but it did not improve the results, so it was left at (1,1). The TfidfVectorizer was therefore tuned by adjusting min_df and by adding stop words while observing the resulting recommendations. The 10 requested examples:
```
movie_list = [10,20,21,100,120,222,540,2020,2859,3130]
cats = ["Crime Fiction","Action/Adventure","Drama","Crime Fiction","Adventure",
"Thriller","Horror","Science Fiction","Drama","Science Fiction"]
for i in range(10):
content_recommender(movie_list[i],5)
print("Common category:",cats[i]," -------------------------------------------------------------\n")
```
# Topological and semantic mapping of the movies using a SOM
<img src="https://drive.google.com/uc?export=download&id=1R1R7Ds9UEfhjOY_fk_3wcTjsM0rI4WLl" width="60%">
## Building the dataset
In the second application we rely on the topological properties of Self Organizing Maps (SOM) to build a two-dimensional map (grid) on which all the movies of the team's collection are laid out in a way that is spatially coherent with respect to their content and, above all, their genre.
`build_final_set` first converts the sparse tf-idf output of `TfidfVectorizer()` into a dense representation (a [sparse representation](https://en.wikipedia.org/wiki/Sparse_matrix) stores values only for the non-zero elements). It then concatenates the dense `dense_tf_idf` representation and the binarized movie categories `catbins` as extra columns (features). Each movie is therefore represented in the Vector Space Model by its TF-IDF features plus its categories. Finally, it accepts an argument for how many movies to return, defaulting to all of them (5000). This is useful if you want to build smaller datasets so the SOM trains faster.
```
def build_final_set(doc_limit=5000, tf_idf_only=False):
# convert sparse tf_idf to dense tf_idf representation
dense_tf_idf = corpus_tf_idf[0:doc_limit,:]
if tf_idf_only:
# use only tf_idf
final_set = dense_tf_idf
else:
# append the binary categories features horizontaly to the (dense) tf_idf features
final_set = np.hstack((dense_tf_idf, catbins[0:doc_limit,:]))
# somoclu expects float32 data
return np.array(final_set, dtype=np.float32)
final_set = build_final_set()
```
We print the dimensions of our final dataset. Without TF-IDF optimization we would have about 50,000 features.
```
final_set.shape
```
Based on your experience preparing data for supervised learning, is there a preprocessing step that could be applied to this dataset?
>We could reduce the dimensionality with PCA:
```
pca = PCA(n_components=0.97)
pca_final_set = pca.fit_transform(final_set)
print(pca_final_set.shape)
print("Decrease of components:", round ((1-pca_final_set.shape[1]/final_set.shape[1])*100, 2 ),"%")
```
We observe that, keeping 97% of the feature variance, we can reduce the dimensionality by more than 40%.
## Training the SOM
We will work with the SOM library ["Somoclu"](http://somoclu.readthedocs.io/en/stable/index.html). First read somoclu's [function reference](http://somoclu.readthedocs.io/en/stable/reference.html). We will work with a planar map with a rectangular neuron grid and random initialization (all of these are defaults). You can try various map sizes, but as the number of neurons grows, so does training time. For training you do not need to go beyond 100 epochs. In general we can rely on the default parameters until we are able to visualize and qualitatively analyse the results. Start with a 10 x 10 map, 100 epochs of training and a subset of the movies (e.g. 2000). Use `time` to get an idea of training times. As a rough guide, with a well-tuned tf-idf encoding, small maps on little data (1000-2000 movies) take about a minute, while larger maps on all the data can take 10-15 minutes or more.
```
n_rows, n_columns = 30, 30
som = somoclu.Somoclu(n_columns, n_rows, compactsupport=False)
%time som.train(final_set,epochs=100)
```
Due to the problem with training Somoclu models mentioned at the module import, training was performed on Colab. We saved the resulting model there and load it here:
```
som = joblib.load("som.pkl")
```
## Best matching units
After each training run, store in a variable the best matching units (bmus) of every movie. The bmus tell us which neuron each movie belongs to. Caution: the neuron coordinate convention is (column, row), i.e. the opposite of Python's. Using [np.unique](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.unique.html) (a very useful function in this exercise), store the unique best matching units and their indices into the movies. Note that you may end up with fewer unique bmus than neurons, because some neurons may have no movies assigned to them. We will take as a neuron's number its row number in the array of unique bmus.
```
bmus, indices = np.unique(som.bmus,axis=0,return_index=True)
```
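A hedged toy example of what `np.unique(..., axis=0, return_index=True)` produces — the bmu coordinates here are invented, not the trained map's:

```python
import numpy as np

# Fake bmus: movies 0 and 2 share neuron (1, 0); movie 1 maps to neuron (0, 3)
fake_bmus = np.array([[1, 0], [0, 3], [1, 0]])

unique_bmus, first_indices = np.unique(fake_bmus, axis=0, return_index=True)
print(unique_bmus)    # each distinct neuron coordinate, sorted row-wise
print(first_indices)  # index of the first movie mapped to each neuron
```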
## Clustering
Typically, clustering on a SOM map is derived from the unified distance matrix (U-matrix): for each node we compute its average distance to its neighbouring nodes. If we colour the regions of the map where this value is low (small distance) blue and those where it is high (large distance) red, we can say that the blue regions form clusters and the red regions form boundaries between clusters.
Somoclu additionally lets us cluster the neurons with any scikit-learn clustering algorithm. In this exercise we will use k-Means. For your initial map try k=20 or 25. The two clustering approaches are different, so we expect the results to be close but not identical.
```
som.cluster(KMeans(n_clusters=30))
```
## Saving the SOM
Because the SOM is randomly initialized and the clustering is itself a stochastic process, the positions and labels of the neurons and clusters will differ every time you run the map, even with the same parameters. To save a specific som and clustering, use `joblib` again. After loading a SOM, remember to redo the bmus procedure.
```
joblib.dump(som,'som.pkl')
```
## Visualizing the U-matrix, the clustering and the cluster sizes
To print the U-matrix, use `view_umatrix` with arguments `bestmatches=True` and `figsize=(15, 15)` or `figsize=(20, 20)`. The different colours shown on the nodes represent the different clusters produced by k-Means. You can display the U-matrix legend with the `colorbar` argument. Do not print the sample labels; there are far too many of them.
```
som.view_umatrix(bestmatches=True, colorbar=True,figsize=(15, 15)); matplotlib.pyplot.show()
```
For a second, clearer visualization of the clustering, print the `clusters` variable directly.
```
print(som.clusters)
```
Finally, using `np.unique` again (with a different argument) and `np.argsort` (other implementations exist too), print the cluster labels (numbers from 0 to k-1) and the number of neurons in each cluster, in decreasing or increasing order of neuron count. This is essentially a tool for quickly spotting the large and the small clusters.
```
print("\nClusters sorted by increasing number of neurons: (cluster_index,num_neurons)")
values,counts = np.unique(som.clusters,return_counts=True)
sorted_counts = np.argsort(counts)
print(np.array([list(values[sorted_counts]),list(counts[sorted_counts])]))
```
## Semantic interpretation of the clusters
To study the topological properties of the SOM, and whether they have absorbed semantic information about the movies through the tf-idf vector representation and the categories, we need a criterion for qualitative inspection of the clusters. We will implement the following one: we take as argument a cluster number (label). For that cluster we find all the neurons assigned to it by k-Means. For all of those neurons we find all the movies assigned to them (for which they are bmus). For all of those movies we print, sorted, the aggregate statistics of all genres (categories) and their frequencies. If the cluster has good cohesion and specialization, some categories should have clearly higher frequency than the rest. We can then assign that category (or categories) as genre labels of the cluster.
You may implement this function however you like. One possible procedure could be the following:
1. We define a function `print_categories_stats` that takes a list of movie ids as input. We create an empty list of aggregate categories. Then, for each movie, we process the `categories` string as follows: we create a list by splitting the string appropriately with `split` and remove the whitespace around labels with `strip`. We append this list to the aggregate category list with `extend`. Finally we use `np.unique` again to count the frequency of unique category labels and sort with `np.argsort`. We print the categories and their frequencies, sorted. `np.ravel`, `np.nditer`, `np.array2string` and `zip` may also come in handy.
```
def print_categories_stats(ID_list):
total_categories = []
for i in ID_list:
cat = [category.strip(" ").strip('"') for category in categories[i][0].split(",")]
total_categories.extend(cat)
result,counts = np.unique(total_categories,return_counts=True)
sorted_counts = np.argsort(-counts)
final = [(result[n],counts[n]) for n in sorted_counts]
print("Overall Cluster Genres stats:")
print(final); return
```
2. We define our main function `print_cluster_neurons_movies_report`, which takes a cluster number as argument. Using `np.where` we can find the coordinates of the bmus that correspond to the cluster, and with `column_stack` build an array of bmus for that cluster. Mind the (column, row) order in the bmus array. For each bmu of this array we check whether it exists in the array of unique bmus we computed at the start, and if so, append the corresponding neuron index to a list. `np.rollaxis`, `np.append` and `np.asscalar` may also be useful. You will probably also need a similarity criterion between a bmu and a unique bmu from the original bmus array.
```
def where_is_same(a,b):
return np.where(np.all(a==b,axis=1))[0]
def print_cluster_neurons_movies_report(cluster):
    # np.where returns (rows, cols); reverse to the (column, row) bmu convention
    cluster_bmus = np.column_stack(np.where(som.clusters == cluster)[::-1])
    return cluster_bmus
```
3. We implement a helper function `neuron_movies_report`. It receives a set of neurons from `print_cluster_neurons_movies_report` and, via `indices`, builds a list of all the movies that belong to those neurons. Finally it calls `print_categories_stats` with this list to print the category statistics.
```
def neuron_movies_report(neurons):
id_list = []
index = [where_is_same(som.bmus,neuron) for neuron in neurons]
index = [list(i) for i in index if len(i)]
for i in index: id_list += i
return id_list
```
You can of course add any extra output that helps you. One useful output is how many neurons belong to the cluster, and how many (and which) of them have movies assigned. We will carry out the semantic interpretation of the map by calling `print_cluster_neurons_movies_report` with the number of a cluster of interest. Example output for one cluster (non-optimized map):
```
Overall Cluster Genres stats:
[('"Horror"', 86), ('"Science Fiction"', 24), ('"B-movie"', 16), ('"Monster movie"', 10), ('"Creature Film"', 10), ('"Indie"', 9), ('"Zombie Film"', 9), ('"Slasher"', 8), ('"World cinema"', 8), ('"Sci-Fi Horror"', 7), ('"Natural horror films"', 6), ('"Supernatural"', 6), ('"Thriller"', 6), ('"Cult"', 5), ('"Black-and-white"', 5), ('"Japanese Movies"', 4), ('"Short Film"', 3), ('"Drama"', 3), ('"Psychological thriller"', 3), ('"Crime Fiction"', 3), ('"Monster"', 3), ('"Comedy"', 2), ('"Western"', 2), ('"Horror Comedy"', 2), ('"Archaeology"', 2), ('"Alien Film"', 2), ('"Teen"', 2), ('"Mystery"', 2), ('"Adventure"', 2), ('"Comedy film"', 2), ('"Combat Films"', 1), ('"Chinese Movies"', 1), ('"Action/Adventure"', 1), ('"Gothic Film"', 1), ('"Costume drama"', 1), ('"Disaster"', 1), ('"Docudrama"', 1), ('"Film adaptation"', 1), ('"Film noir"', 1), ('"Parody"', 1), ('"Period piece"', 1), ('"Action"', 1)]
```
```
cluster = 7
cluster_neurons = print_cluster_neurons_movies_report(cluster)
id_list = neuron_movies_report(cluster_neurons)
print_categories_stats(id_list)
```
We see that cluster 7 gathers mainly drama movies, with the remaining categories considerably rarer.
## Tips for the SOM and the clustering
- For the clustering, a good U-matrix should show both blue-green regions (clusters) and red regions (boundaries). Observe the relationship between the number of movies in the final set, the grid size and the U-matrix quality.
- For the k of k-Means, try to roughly match the clusters of the U-matrix (as noted, they are different clustering methods). Too small a k will not respect the boundaries. Too large a k will create sub-clusters inside the clusters visible in the U-matrix. The latter is not necessarily bad, but it increases the number of clusters that have to be analysed semantically.
- On small maps with small final sets, try different parameters for training the SOM. Note any parameters that affect the clustering quality for your dataset so you can apply them to the large maps.
- Some topological features already show up on small maps. Others need larger maps. Try sizes of 20x20, 25x25 or even 30x30, adjusting k accordingly. As maps grow, the map's resolution grows, but so does the number of clusters to analyse.
## Analysis of the SOM's topological properties
After training and clustering you will have a map with topological properties with respect to the genres of the movies in your collection, something like the image at the start of Application 2 of this notebook (that image is for illustration only; it has nothing to do with our dataset and categories).
For the final SOM map you produce for your collection, analyse in markdown, with specific references to cluster numbers and their semantic interpretation, the following three topological properties of the SOM:
1. *Data with higher probability density in the input space tend to be mapped to more neurons in the reduced-dimensionality space. Give examples of frequent and less frequent movie categories. Use the category statistics of your collection and the number of nodes they characterize.*
```
def categories_in_neuron(neuron):
    ID_list = neuron_movies_report([neuron])
    total_categories = []
    for i in ID_list:
        cat = [category.strip(" ").strip('"') for category in categories[i][0].split(",")]
        total_categories.extend(cat)
    # return the distinct categories of the movies mapped to this neuron
    return np.unique(total_categories)
categories_in_neurons = np.asarray([categories_in_neuron(i) for i in range(20)])
def number_of_neurons_in_category(category):
i=0
for c in categories_in_neurons:
if category in c: i+=1
return i
# list of all categories
distinct_categories=[]
for i in range(5000):
cat = [category.strip(" ").strip('"') for category in categories[i][0].split(",")]
distinct_categories.extend(cat)
distinct_categories=np.asarray(distinct_categories)
distinct_categories=np.unique(distinct_categories)
# all categories and their number of neurons
ncategories=[]
for c in distinct_categories:
k=number_of_neurons_in_category(c)
ncategories.append([c,k])
ncategories = np.asarray(ncategories, dtype=object)
# sort numerically by neuron count (a plain string sort would order '10' before '2')
sorted_indices = np.argsort(ncategories[:, 1].astype(int))
ncategories = ncategories[sorted_indices]
print("Most popular categories and their number of neurons:\n", ncategories[::-1][:5])
print("\nLeast popular categories and their number of neurons:\n", ncategories[::-1][-5:])
```
It is clear that the more frequent and familiar categories have been assigned a larger number of neurons.
2. *Distant input patterns tend to be mapped far apart on the map. There are characteristic movie categories that, already on small maps, tend to be placed in different or isolated spots of the map.*
```
cluster = 16
cluster_neurons = print_cluster_neurons_movies_report(cluster)
id_list = neuron_movies_report(cluster_neurons)
print_categories_stats(id_list)
```
We see that for the two distant clusters 7 and 16, as shown in the cluster table above, the represented movie genre differs substantially (drama versus comedy).
3. *Nearby input patterns tend to be mapped close together on the map. On large maps, find movie genres and their nearby subgenres.*
```
cluster = 11
cluster_neurons = print_cluster_neurons_movies_report(cluster)
id_list = neuron_movies_report(cluster_neurons)
print_categories_stats(id_list)
```
We indeed observe that cluster 11, which neighbours 16, also leans towards the 'Comedy' category, while broadening the genre towards the related category of family movies. Obviously a 2-D placement respecting an absolute topology is not feasible, first because no absolute topology exists by definition for movie genres, even in many dimensions, and second because we are performing dimensionality reduction. Find large clusters and small clusters that lack clear characteristics.
*Find clusters of specific genres that seem to lack topological affinity with their surrounding regions. Suggest possible interpretations. Finally, find clusters that in your view are of particular interest in your team's collection and comment:*
```
cluster = 14
cluster_neurons = print_cluster_neurons_movies_report(cluster)
id_list = neuron_movies_report(cluster_neurons)
print_categories_stats(id_list)
```
Above we have an example of a cluster (14) of mainly documentaries and comedies that are not that related to 'Drama', yet lie next to it on the map (cluster 7). The interpretation we give for their topological affinity is the secondary presence of 'Action' and 'Family' movies in both clusters, which bridges the gap between them. Following question 1, an interesting cluster for us would be the one representing 'Crime fiction'. By inspection we confirmed that this is cluster 27, which however has neither full topological affinity with its neighbourhood nor a large extent. We believe this is due to the genre's tendency to fall under larger categories such as Drama, Thriller etc., categories that are topologically close to those clusters.
```
cluster = 27
cluster_neurons = print_cluster_neurons_movies_report(cluster)
id_list = neuron_movies_report(cluster_neurons)
print_categories_stats(id_list)
```
## Assignment submission
The zip file contains, besides this notebook, the code as a .py script and the `stopwords.txt` file that was required during tf-idf tuning. We also deliver an HTML version of the notebook, since a future re-run of the clustering will most likely produce new clusters that no longer match what we describe in this report.
| github_jupyter |
### Abstract
This is an example showing how to use the basic TensorFlow API to build a linear regression model.
This notebook is an exercise adapted from [a Medium.com blog post](https://medium.com/@saxenarohan97/intro-to-tensorflow-solving-a-simple-regression-problem-e87b42fd4845).
Note that recent versions of TensorFlow also provide higher-level APIs, such as LinearClassifier, with a scikit-learn-like machine-learning interface.
```
import tensorflow as tf
import numpy as np
from sklearn.datasets import load_boston
from sklearn.preprocessing import scale
from matplotlib import pyplot as plt
%matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 15,6
```
Split the data into training, validation and test sets.
```
# Retrieve the data
bunch = load_boston()
print('total data shape:', bunch.data.shape)
total_features = bunch.data[:, range(12)]
total_prices = bunch.data[:, [12]]
print('features shape:', total_features.shape, 'target shape:', total_prices.shape)
# new in 0.18 version
# total_features, total_prices = load_boston(True)
# Keep 300 samples for training
train_features = scale(total_features[:300])
train_prices = total_prices[:300]
print('training dataset:', len(train_features))
print('feature example:', train_features[0:1])
print('mean of feature 0:', np.asarray(train_features[:, 0]).mean())
# Keep 100 samples for validation
valid_features = scale(total_features[300:400])
valid_prices = total_prices[300:400]
print('validation dataset:', len(valid_features))
# Keep remaining samples as test set
test_features = scale(total_features[400:])
test_prices = total_prices[400:]
print('test dataset:', len(test_features))
```
#### Linear Regression Model
```
w = tf.Variable(tf.truncated_normal([12, 1], mean=0.0, stddev=1.0, dtype=tf.float64))
b = tf.Variable(tf.zeros(1, dtype = tf.float64))
def calc(x, y):
'''
linear regression model that return (prediction, L2_error)
'''
# Returns predictions and error
predictions = tf.add(b, tf.matmul(x, w))
error = tf.reduce_mean(tf.square(y - predictions))
return [ predictions, error ]
y, cost = calc(train_features, train_prices)
# augment the model with the regularisation
L1_regu_cost = tf.add(cost, tf.reduce_mean(tf.abs(w)))
L2_regu_cost = tf.add(cost, tf.reduce_mean(tf.square(w)))
def train(cost, learning_rate=0.025, epochs=300):
'''
run the cost computation graph with gradient descent optimizer.
'''
errors = [[], []]
init = tf.global_variables_initializer()
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
config = tf.ConfigProto()
config.gpu_options.allow_growth=True
sess = tf.Session(config=config)
with sess:
sess.run(init)
for i in range(epochs):
sess.run(optimizer)
errors[0].append(i+1)
errors[1].append(sess.run(cost))
# Get the parameters of the linear regression model.
print('weights:\n', sess.run(w))
print('bias:', sess.run(b))
valid_cost = calc(valid_features, valid_prices)[1]
print('Validation error =', sess.run(valid_cost), '\n')
test_cost = calc(test_features, test_prices)[1]
print('Test error =', sess.run(test_cost), '\n')
return errors
# with L1 regularisation, the testing error is slightly improved, i.e. 75 vs. 76
# similarly with L1 regularisation, the L2 regularisation improves the testing error to 75 as well.
epochs = 500
errors_lr_005 = train(cost, learning_rate=0.005, epochs=epochs)
errors_lr_025 = train(cost, learning_rate=0.025, epochs=epochs)
ax = plt.subplot(111)
plt.plot(errors_lr_005[1], color='green', label='learning rate 0.005')
plt.plot(errors_lr_025[1], color='red', label='learning rate 0.025')
#ax = plt.plot(errors[0], errors[1], 'r--')
plt.axis([0, epochs, 0, 200])
plt.title('Evolution of L2 errors along each epoch')
plt.xlabel('epoch')
plt.ylabel('L2 error')
_ = plt.legend(loc='best')
plt.show()
```
The **higher** the learning rate, the **faster** the model converges. But if the learning rate is too large, it can also prevent the model from converging at all.
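This trade-off can be sketched with a toy gradient descent on f(w) = w², in plain Python and independent of the TensorFlow model above. Each update multiplies w by (1 - 2·lr), so the iterates shrink toward the minimum only when that factor has magnitude below 1:

```python
def toy_gradient_descent(lr, epochs=50, w0=1.0):
    """Minimise f(w) = w**2; the gradient is 2*w."""
    w = w0
    for _ in range(epochs):
        w -= lr * 2 * w  # each step multiplies w by (1 - 2*lr)
    return w

# A moderate rate shrinks |w| towards the minimum at w = 0,
# while lr = 1.5 gives a step factor of -2, so |w| doubles every epoch.
print(abs(toy_gradient_descent(lr=0.1)))   # tiny
print(abs(toy_gradient_descent(lr=1.5)))   # astronomically large
```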
# [Module 2.1] Training with SageMaker Local Mode and Script Mode
Every notebook in this workshop uses the **<font color="red">conda_tensorflow2_p36</font>** kernel.
This notebook performs the following steps:
- 1. Basic environment setup
- 2. Converting the notebook code to SageMaker script-mode style
- 3. Training with SageMaker local mode
- 4. Training with SageMaker host mode
- 5. Saving the model artifact path
---
# 1. Basic Environment Setup
The packages in use are reloaded automatically at import time.
```
%load_ext autoreload
%autoreload 2
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = "sagemaker/DEMO-pytorch-cnn-cifar10"
role = sagemaker.get_execution_role()
import tensorflow as tf
print("tensorflow version: ", tf.__version__)
%store -r train_dir
%store -r validation_dir
%store -r eval_dir
%store -r data_dir
```
# 2. Converting the Notebook Code to SageMaker Script-Mode Style
- See how the scratch Keras code was changed for SageMaker.
- Refer to `1.2.Train_Keras_Local_Script_Mode.ipynb`
```
# !pygmentize src/cifar10_tf2_sm.py
```
# 3. Training with SageMaker Local Mode
Before starting training in earnest, we first debug with local mode. Because local mode pulls the container onto the local instance and starts training right away, without provisioning training instances, it lets you validate your code much faster.
The local mode of the Amazon SageMaker Python SDK lets you emulate CPU (single- and multi-instance) and GPU (single-instance) SageMaker training jobs by changing a single argument of the TensorFlow or MXNet estimator.
Local mode training requires docker-compose or nvidia-docker-compose (for GPU instances) to be installed. The code cell below installs and configures docker-compose or nvidia-docker-compose in this notebook environment.
Training in local mode also makes it easy to monitor metrics, such as GPU utilization, that tell you whether your code is making proper use of the available hardware.
### Running Training in Local Mode
- The following two lines direct the training to run in local mode:
```python
instance_type=instance_type, # specify local_gpu or local
session = sagemaker.LocalSession(), # use a local session
```
#### Determining instance_type by whether the local machine has a GPU
```
import os
import subprocess
instance_type = "local_gpu" # Assumes a GPU is available; use 'local' for CPU.
print("Instance type = " + instance_type)
```
When you call `estimator.fit()` to start the training job, the Amazon SageMaker TensorFlow container is downloaded from Amazon ECR to the local notebook instance.
Create a TensorFlow Estimator instance from the SageMaker Python SDK using the `sagemaker.tensorflow` class.
Hyperparameters and various other settings can be changed through its arguments.
See the [documentation](https://sagemaker.readthedocs.io/en/stable/using_tf.html#training-with-tensorflow-estimator) for details.
```
hyperparameters = {
'epochs' : 1,
'learning-rate' : 0.001,
'print-interval' : 100,
'train-batch-size': 256,
'eval-batch-size': 512,
'validation-batch-size': 512,
}
from sagemaker.tensorflow import TensorFlow
estimator = TensorFlow(base_job_name='cifar10',
entry_point='cifar10_tf2_sm.py',
source_dir='src',
role=role,
framework_version='2.4.1',
py_version='py37',
script_mode=True,
hyperparameters= hyperparameters,
train_instance_count=1,
train_instance_type= instance_type)
%%time
estimator.fit({'train': f'file://{train_dir}',
'validation': f'file://{validation_dir}',
'eval': f'file://{eval_dir}'})
```
# 4. Training with SageMaker Host Mode
### Upload the dataset to S3
```
dataset_location = sagemaker_session.upload_data(path=data_dir, key_prefix='data/DEMO-cifar10')
display(dataset_location)
hyperparameters = {
'epochs' : 20,
'learning-rate' : 0.001,
'print-interval' : 100,
'train-batch-size': 256,
'eval-batch-size': 512,
'validation-batch-size': 512,
}
from sagemaker.tensorflow import TensorFlow
instance_type='ml.p3.8xlarge'
sm_estimator = TensorFlow(base_job_name='cifar10',
entry_point='cifar10_tf2_sm.py',
source_dir='src',
role=role,
framework_version='2.4.1',
py_version='py37',
script_mode=True,
hyperparameters= hyperparameters,
train_instance_count=1,
train_instance_type= instance_type)
```
## Training in SageMaker Host Mode
- `sm_estimator.fit(inputs, wait=False)`
- The input data is provided as S3 paths via `inputs`.
- Setting `wait=False` runs the training job asynchronously.
- Check the progress with `sm_estimator.logs()` below.
```
%%time
sm_estimator.fit({'train':'{}/train'.format(dataset_location),
'validation':'{}/validation'.format(dataset_location),
'eval':'{}/eval'.format(dataset_location)}, wait=False)
sm_estimator.logs()
```
# 5. Saving the Model Artifact Path
- Store the S3 path of the model artifact so that it can be used later for inference.
```
tf2_script_artifact_path = sm_estimator.model_data
print("script_tf_artifact_path: ", tf2_script_artifact_path)
%store tf2_script_artifact_path
```
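`model_data` is an S3 URI of the form `s3://<bucket>/<job-name>/output/model.tar.gz`. If you later need the bucket and key separately, for example to download the artifact with boto3, the standard library can split the URI. The URI below is a made-up example, not an actual training output:

```python
from urllib.parse import urlparse

# Hypothetical artifact URI; in practice use sm_estimator.model_data
artifact_uri = "s3://my-bucket/cifar10-2021-01-01-00-00-00/output/model.tar.gz"

parsed = urlparse(artifact_uri)
bucket = parsed.netloc
key = parsed.path.lstrip("/")
print(bucket)  # my-bucket
print(key)     # cifar10-2021-01-01-00-00-00/output/model.tar.gz
```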
**This notebook is an exercise in the [AI Ethics](https://www.kaggle.com/learn/ai-ethics) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/ai-fairness).**
---
In the tutorial, you learned about different ways of measuring fairness of a machine learning model. In this exercise, you'll train a few models to approve (or deny) credit card applications and analyze fairness. Don't worry if you're new to coding: this exercise assumes no programming knowledge.
# Introduction
We work with a **synthetic** dataset of information submitted by credit card applicants.
To load and preview the data, run the next code cell. When the code finishes running, you should see a message saying the data was successfully loaded, along with a preview of the first five rows of the data.
```
# Set up feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.ethics.ex4 import *
import pandas as pd
from sklearn.model_selection import train_test_split
# Load the data, separate features from target
data = pd.read_csv("../input/synthetic-credit-card-approval/synthetic_credit_card_approval.csv")
X = data.drop(["Target"], axis=1)
y = data["Target"]
# Break into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, test_size=0.2, random_state=0)
# Preview the data
print("Data successfully loaded!\n")
X_train.head()
```
The dataset contains, for each applicant:
- income (in the `Income` column),
- the number of children (in the `Num_Children` column),
- whether the applicant owns a car (in the `Own_Car` column; the value is `1` if the applicant owns a car, and `0` otherwise), and
- whether the applicant owns a home (in the `Own_Housing` column; the value is `1` if the applicant owns a home, and `0` otherwise)
When evaluating fairness, we'll check how the model performs for users in different groups, as identified by the `Group` column:
- The `Group` column breaks the users into two groups (where each group corresponds to either `0` or `1`).
- For instance, you can think of the column as breaking the users into two different races, ethnicities, or gender groupings. If the column breaks users into different ethnicities, `0` could correspond to a non-Hispanic user, while `1` corresponds to a Hispanic user.
Run the next code cell without changes to train a simple model to approve or deny individuals for a credit card. The output shows the performance of the model.
```
from sklearn import tree
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
import matplotlib.pyplot as plt
# Train a model and make predictions
model_baseline = tree.DecisionTreeClassifier(random_state=0, max_depth=3)
model_baseline.fit(X_train, y_train)
preds_baseline = model_baseline.predict(X_test)
# Function to plot confusion matrix
def plot_confusion_matrix(estimator, X, y_true, y_pred, display_labels=["Deny", "Approve"],
                          include_values=True, xticks_rotation='horizontal', values_format='',
                          normalize=None, cmap=plt.cm.Blues):
    cm = confusion_matrix(y_true, y_pred, normalize=normalize)
    disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=display_labels)
    return cm, disp.plot(include_values=include_values, cmap=cmap, xticks_rotation=xticks_rotation,
                         values_format=values_format)

# Function to evaluate the fairness of the model
def get_stats(X, y, model, group_one, preds):
    y_zero, preds_zero, X_zero = y[group_one==False], preds[group_one==False], X[group_one==False]
    y_one, preds_one, X_one = y[group_one], preds[group_one], X[group_one]

    print("Total approvals:", preds.sum())
    print("Group A:", preds_zero.sum(), "({}% of approvals)".format(round(preds_zero.sum()/sum(preds)*100, 2)))
    print("Group B:", preds_one.sum(), "({}% of approvals)".format(round(preds_one.sum()/sum(preds)*100, 2)))
    print("\nOverall accuracy: {}%".format(round((preds==y).sum()/len(y)*100, 2)))
    print("Group A: {}%".format(round((preds_zero==y_zero).sum()/len(y_zero)*100, 2)))
    print("Group B: {}%".format(round((preds_one==y_one).sum()/len(y_one)*100, 2)))

    cm_zero, disp_zero = plot_confusion_matrix(model, X_zero, y_zero, preds_zero)
    disp_zero.ax_.set_title("Group A")
    cm_one, disp_one = plot_confusion_matrix(model, X_one, y_one, preds_one)
    disp_one.ax_.set_title("Group B")

    print("\nSensitivity / True positive rate:")
    print("Group A: {}%".format(round(cm_zero[1,1] / cm_zero[1].sum()*100, 2)))
    print("Group B: {}%".format(round(cm_one[1,1] / cm_one[1].sum()*100, 2)))
# Evaluate the model
get_stats(X_test, y_test, model_baseline, X_test["Group"]==1, preds_baseline)
```
The confusion matrices above show how the model performs on some test data. We also print additional information (calculated from the confusion matrices) to assess fairness of the model. For instance,
- The model approved 38246 people for a credit card. Of these individuals, 8028 belonged to Group A, and 30218 belonged to Group B.
- The model is 94.56% accurate for Group A, and 95.02% accurate for Group B. These percentages can be calculated directly from the confusion matrix; for instance, for Group A, the accuracy is (39723+7528)/(39723+500+2219+7528).
- The true positive rate (TPR) for Group A is 77.23%, and the TPR for Group B is 98.03%. These percentages can be calculated directly from the confusion matrix; for instance, for Group A, the TPR is 7528/(7528+2219).
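These figures can be reproduced by hand from the Group A confusion matrix quoted above:

```python
# Group A confusion-matrix counts from the output above:
# rows = true label (0 = deny, 1 = approve), columns = predicted label
cm_a = [[39723,  500],
        [ 2219, 7528]]

accuracy = (cm_a[0][0] + cm_a[1][1]) / sum(sum(row) for row in cm_a)
tpr = cm_a[1][1] / (cm_a[1][0] + cm_a[1][1])

print(round(accuracy * 100, 2))  # 94.56
print(round(tpr * 100, 2))       # 77.23
```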
# 1) Varieties of fairness
Consider three different types of fairness covered in the tutorial:
- **Demographic parity**: Which group has an unfair advantage, with more representation in the group of approved applicants? (Roughly 50% of applicants are from Group A, and 50% of applicants are from Group B.)
- **Equal accuracy**: Which group has an unfair advantage, where applicants are more likely to be correctly classified?
- **Equal opportunity**: Which group has an unfair advantage, with a higher true positive rate?
```
# Check your answer (Run this code cell to get credit!)
q_1.check()
```
Run the next code cell without changes to visualize the model.
```
def visualize_model(model, feature_names, class_names=["Deny", "Approve"], impurity=False):
    plot_list = tree.plot_tree(model, feature_names=feature_names, class_names=class_names, impurity=impurity)
    [process_plot_item(item) for item in plot_list]

def process_plot_item(item):
    split_string = item.get_text().split("\n")
    if split_string[0].startswith("samples"):
        item.set_text(split_string[-1])
    else:
        item.set_text(split_string[0])

plt.figure(figsize=(20, 6))
plot_list = visualize_model(model_baseline, feature_names=X_train.columns)
```
The flowchart shows how the model makes decisions:
- `Group <= 0.5` checks what group the applicant belongs to: if the applicant belongs to Group A, then `Group <= 0.5` is true.
- Entries like `Income <= 80210.5` check the applicant's income.
To follow the flow chart, we start at the top and trace a path depending on the details of the applicant. If the condition is true at a split, then we move down and to the left branch. If it is false, then we move to the right branch.
For instance, consider an applicant in Group B, who has an income of 75k. Then,
- We start at the top of the flow chart. The applicant has an income of 75k, so `Income <= 80210.5` is true, and we move to the left.
- Next, we check the income again. Since `Income <= 71909.5` is false, we move to the right.
- The last thing to check is what group the applicant belongs to. The applicant belongs to Group B, so `Group <= 0.5` is false, and we move to the right, where the model has decided to approve the applicant.
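To make the traversal concrete, here is a toy function that follows just the path traced above. The two income thresholds are read off the visualization, and branches not visited in this walkthrough are simply left out:

```python
def trace_path(income, group):
    """Return the sequence of branches taken for the walkthrough above."""
    path = []
    path.append("left" if income <= 80210.5 else "right")
    if path == ["left"]:
        path.append("left" if income <= 71909.5 else "right")
        if path[-1] == "right":
            # final split on group membership
            path.append("left" if group <= 0.5 else "right")
    return path

# The Group B applicant with 75k income ends at left -> right -> right
print(trace_path(income=75000, group=1))
```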
# 2) Understand the baseline model
Based on the visualization, how can you explain one source of unfairness in the model?
**Hint**: Consider the example applicant, but change the group membership from Group B to Group A (leaving all other characteristics the same). Is this slightly different applicant approved or denied by the model?
```
# Check your answer (Run this code cell to get credit!)
q_2.check()
```
Next, you decide to remove group membership from the training data and train a new model. Do you think this will make the model treat the groups more equally?
Run the next code cell to see how this new **group unaware** model performs.
```
# Create new dataset with gender removed
X_train_unaware = X_train.drop(["Group"],axis=1)
X_test_unaware = X_test.drop(["Group"],axis=1)
# Train new model on new dataset
model_unaware = tree.DecisionTreeClassifier(random_state=0, max_depth=3)
model_unaware.fit(X_train_unaware, y_train)
# Evaluate the model
preds_unaware = model_unaware.predict(X_test_unaware)
get_stats(X_test_unaware, y_test, model_unaware, X_test["Group"]==1, preds_unaware)
```
# 3) Varieties of fairness, part 2
How does this model compare to the first model you trained, when you consider **demographic parity**, **equal accuracy**, and **equal opportunity**? Once you have an answer, run the next code cell.
```
# Check your answer (Run this code cell to get credit!)
q_3.check()
```
You decide to train a third potential model, this time with the goal of having each group have even representation in the group of approved applicants. (This is an implementation of group thresholds, which you can optionally read more about [here](https://pair-code.github.io/what-if-tool/ai-fairness.html).)
Run the next code cell without changes to evaluate this new model.
```
# Change the value of zero_threshold to hit the objective
zero_threshold = 0.11
one_threshold = 0.99
# Evaluate the model
test_probs = model_unaware.predict_proba(X_test_unaware)[:,1]
preds_approval = (((test_probs>zero_threshold)*1)*[X_test["Group"]==0] + ((test_probs>one_threshold)*1)*[X_test["Group"]==1])[0]
get_stats(X_test, y_test, model_unaware, X_test["Group"]==1, preds_approval)
```
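The thresholding line in the cell above is dense; the underlying idea, applying a different cutoff to each group's predicted probability, can be sketched in isolation with made-up numbers:

```python
import numpy as np

# Hypothetical predicted probabilities and group labels
probs = np.array([0.15, 0.40, 0.80, 0.95, 0.05, 0.60])
group = np.array([0, 0, 0, 1, 1, 1])

# A low cutoff for Group A (0) and a high cutoff for Group B (1),
# mirroring zero_threshold / one_threshold in the cell above
cutoffs = np.where(group == 0, 0.11, 0.99)
approved = (probs > cutoffs).astype(int)
print(approved)  # [1 1 1 0 0 0]
```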
# 4) Varieties of fairness, part 3
How does this final model compare to the previous models, when you consider **demographic parity**, **equal accuracy**, and **equal opportunity**?
```
# Check your answer (Run this code cell to get credit!)
q_4.check()
```
This is only a short exercise to explore different types of fairness, and to illustrate the tradeoff that can occur when you optimize for one type of fairness over another. We have focused on model training here, but in practice, to really mitigate bias, or to make ML systems fair, we need to take a close look at every step in the process, from data collection to releasing a final product to users.
For instance, if you take a close look at the data, you'll notice that on average, individuals from Group B tend to have higher income than individuals from Group A, and are also more likely to own a home or a car. Knowing this will prove invaluable to deciding what fairness criterion you should use, and to inform ways to achieve fairness. (*For instance, it would likely be a bad approach to train the model for equal accuracy across groups without first removing the historical bias from the data.*)
In this course, we intentionally avoid taking an opinionated stance on how exactly to minimize bias and ensure fairness in specific projects. This is because the correct answers continue to evolve, since AI fairness is an active area of research. This lesson was a hands-on introduction to the topic, and you can continue your learning by reading blog posts from the [Partnership on AI](https://www.partnershiponai.org/research-lander/) or by following conferences like the [ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)](https://facctconference.org/).
# Keep going
Continue to **[learn how to use model cards](https://www.kaggle.com/var0101/model-cards)** to make machine learning models transparent to large audiences.
# Parsing Text and the LDA output
## a.1) Opening pdfs and extracting their text
Under the material for Lecture 3 I have added a folder called FOMC_pdf. This folder contains the transcripts of all the meetings that took place during the [Greenspan](https://en.wikipedia.org/wiki/Alan_Greenspan) era (August 11, 1987 to January 31st, 2006).
We will write some lines of code to parse those pdfs.
```
#load operating system module
import os
```
This module is used to perform operating-system tasks (such as opening a file, or listing the contents of a directory).
```
#Define the base directory containing the FOMC statements
base_directory = "../../../../collection/python/data/transcript_raw_text"
#Return a list containing the name of the files in the directory
raw_doc = os.listdir(base_directory)
#Sort the list in ascending order
filelist = sorted(raw_doc)[:-1]
filelist
```
To parse the text in the pdfs I will use the PyPDF2 module (but there are other ways to do it. See for example the Tika module).
Warning: Depending on your Python configuration, you might not be able to use PyPDF2 directly. I downloaded Python using the Anaconda distribution and needed to type the following directly in the terminal:
conda install -c conda-forge pypdf2
After doing this, I was able to import the PyPDF2 module.
## a.2) Organizing the information in a data frame
```
import pandas as pd
#load re to split the content of the pdfs by the occurrence of a pattern
import re
#Creates a Series containing the dates of the FOMC meetings
date = pd.Series(data=filelist).apply(lambda x: x[0:10])
print(date)
documents = []
for doc in filelist:
    with open("{}/{}".format(base_directory, doc), "r") as f:
        documents.append(f.read())

#Data frame that will collect every interjection
parsed_text = pd.DataFrame(columns=['Date', 'Speaker', 'content'])
for i in range(len(documents)):
    #Split the doc by interjections
    interjections = re.split('MR. |MS. |CHAIRMAN |VICE CHAIRMAN ', documents[i])
    #Temporary data frame
    temp_df = pd.DataFrame(columns=['Date', 'Speaker', 'content'], index=range(len(interjections)))
    for j in range(len(interjections)):
        #Replace page breaks (\n) with spaces
        interjection = interjections[j].replace('\n', ' ')
        temp_df['Date'].loc[j] = date[i]
        temp_df['Speaker'].loc[j] = interjection.split('.')[0]
        temp_df['content'].loc[j] = ''.join(interjection.split('.')[1:])
    parsed_text = pd.concat([parsed_text, temp_df], ignore_index=True)
parsed_text
```
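The `re.split` pattern does the heavy lifting here: it cuts the transcript wherever a speaker label appears, and the first word before the period in each piece becomes the speaker. A self-contained illustration on a made-up transcript snippet:

```python
import re

# Made-up transcript fragment, just to show how the split behaves.
# Note: the unescaped '.' in the pattern is a regex wildcard, as in the
# original pattern above.
sample = ("CHAIRMAN GREENSPAN. Good morning, everyone. "
          "MR. KOHN. Thank you, Mr. Chairman.")

interjections = re.split('MR. |MS. |CHAIRMAN |VICE CHAIRMAN ', sample)
print(interjections)
# ['', 'GREENSPAN. Good morning, everyone. ', 'KOHN. Thank you, Mr. Chairman.']

speaker = interjections[1].split('.')[0]
content = ''.join(interjections[1].split('.')[1:])
print(speaker)  # GREENSPAN
```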
We will focus only on Greenspan's interjections.
```
Greenspan_text = parsed_text.loc[parsed_text['Speaker'] == 'GREENSPAN']
Greenspan_text.index = range(sum(parsed_text['Speaker'] == 'GREENSPAN'))
Greenspan_text
```
## a.3) Bag of Words
```
Greenspan_corpus = list(Greenspan_text['content'])
len(Greenspan_corpus)
Greenspan_corpus[0]
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
term_doc_matrix = vectorizer.fit_transform(Greenspan_corpus[0:1]).todense()
vectorizer.get_feature_names()
term_doc_matrix
```
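Under the hood, `CountVectorizer` tokenizes each document and counts term occurrences (its default tokenizer differs slightly from the sketch below, e.g. it drops one-character tokens). Roughly the same idea in plain standard-library Python, on a made-up sentence:

```python
import re
from collections import Counter

doc = "The committee discussed inflation and the inflation outlook"
counts = Counter(re.findall(r'\w+', doc.lower()))

print(counts['inflation'])  # 2
print(counts['the'])        # 2
print(counts['outlook'])    # 1
```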
## a.4) Cloud of words
If you got the Anaconda distribution of Python, go to the terminal and type
conda install -c conda-forge wordcloud
Here is a good tutorial on how to generate words of clouds:
https://www.datacamp.com/community/tutorials/wordcloud-python
```
from wordcloud import WordCloud
wordcloud = WordCloud(background_color='white', font_step = 3, stopwords='None', relative_scaling=1).generate(Greenspan_text['content'].loc[0])
Greenspan_text['content'].loc[0]
Greenspan_text.content[0]
```
Display the generated image
```
import matplotlib.pyplot as plt
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.show()
```
Let's do it for the whole Greenspan interjections
```
text_aux = " ".join(interjection for interjection in Greenspan_text.content)
len(text_aux)
wordcloudG = WordCloud(background_color='white', font_step = 3, relative_scaling=1).generate(text_aux)
plt.imshow(wordcloudG, interpolation='bilinear')
plt.axis("off")
plt.show()
```
## a.5) LDA
### Tokenize
```
import nltk
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w+')
tokens_example = tokenizer.tokenize(Greenspan_text['content'].loc[0])
Greenspan_text['content'].loc[0]
tokens_example
```
### Remove Stop Words
```
from nltk.corpus import stopwords
nltk.download('stopwords')
stopwords = stopwords.words('english')
len(stopwords)
stopped_tokens = [i for i in tokens_example if not i in stopwords]
stopped_tokens
```
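Stop-word removal is just membership filtering against a list of very common words. A stdlib-only sketch with a tiny hand-picked stop list (the NLTK English list used above is far longer):

```python
import re

stop = {'the', 'of', 'in', 'and', 'to', 'a', 'is'}  # tiny illustrative list
text = "The growth of output in the second quarter is moderate"

tokens = re.findall(r'\w+', text.lower())
kept = [t for t in tokens if t not in stop]
print(kept)  # ['growth', 'output', 'second', 'quarter', 'moderate']
```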
### Stems
```
from nltk.stem.porter import PorterStemmer
# Create p_stemmer of class PorterStemmer
p_stemmer = PorterStemmer()
p_stemmer.*?
texts_G = [p_stemmer.stem(i) for i in stopped_tokens]
texts_G
```
### Loop to Tokenize, Remove Stop Words, Stem
```
texts = []
for i in range(0, len(Greenspan_text['content'])):
    tokens = tokenizer.tokenize(Greenspan_text['content'].loc[i])
    stopped_tokens = [j for j in tokens if not j in stopwords]
    texts.append([p_stemmer.stem(j) for j in stopped_tokens])
Greenspan_text['content'].loc[1]
texts
len(Greenspan_text['content'])
```
### LDA
conda install -c anaconda gensim
```
import gensim
from gensim import corpora, models
dictionary = corpora.Dictionary(texts)
dictionary.*?
corpus = [dictionary.doc2bow(text) for text in texts]
corpus[2]
ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=2, id2word = dictionary, passes=20)
for t in range(2):
    plt.figure()
    plt.imshow(WordCloud(background_color='white').fit_words(dict(ldamodel.show_topic(t, 200))),
               interpolation='bilinear')
    plt.axis('off')
    plt.show()
```
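`corpora.Dictionary` assigns each unique token an integer id, and `doc2bow` turns a document into sparse `(token_id, count)` pairs. That mapping can be mimicked with plain Python; the ids below are illustrative, since gensim assigns its own:

```python
from collections import Counter

texts = [['rate', 'rise', 'rate'], ['rise', 'inflation']]

# Assign an integer id to each unique token, in order of first appearance
token2id = {}
for text in texts:
    for tok in text:
        token2id.setdefault(tok, len(token2id))

def doc2bow(text):
    """Return sorted (token_id, count) pairs, like gensim's doc2bow."""
    return sorted((token2id[t], n) for t, n in Counter(text).items())

print(token2id)           # {'rate': 0, 'rise': 1, 'inflation': 2}
print(doc2bow(texts[0]))  # [(0, 2), (1, 1)]
```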
## 03 Intro to PyTorch
*special thanks to YSDA team for provided materials*
What comes today:
- Introduction to PyTorch
- Automatic gradient computation
- Logistic regression (it's a neural network, actually ;) )

__This notebook__ will teach you to use pytorch low-level core. You can install it [here](http://pytorch.org/).
__Pytorch feels__ different from other frameworks (like tensorflow/theano) on almost every level. TensorFlow makes your code live in two "worlds" simultaneously: symbolic graphs and actual tensors. First you declare a symbolic "recipe" of how to get from inputs to outputs, then feed it with actual minibatches of data. In pytorch, __there's only one world__: all tensors have a numeric value.
You compute outputs on the fly without pre-declaring anything. The code looks exactly as in pure numpy with one exception: pytorch computes gradients for you. And can run stuff on GPU. And has a number of pre-implemented building blocks for your neural nets. [And a few more things.](https://medium.com/towards-data-science/pytorch-vs-tensorflow-spotting-the-difference-25c75777377b)
Let's dive into it!
```
# !wget https://raw.githubusercontent.com/neychev/harbour_ml2020/master/day03_Linear_classification/notmnist.py
import numpy as np
import torch
print(torch.__version__)
# numpy world
x = np.arange(16).reshape(4,4)
print("X :\n%s\n" % x)
print("X.shape : %s\n" % (x.shape,))
print("add 5 :\n%s\n" % (x + 5))
print("X*X^T :\n%s\n" % np.dot(x,x.T))
print("mean over cols :\n%s\n" % (x.mean(axis=-1)))
print("cumsum of cols :\n%s\n" % (np.cumsum(x,axis=0)))
# pytorch world
x = np.arange(16).reshape(4,4)
x = torch.tensor(x, dtype=torch.float32) #or torch.arange(0,16).view(4,4)
print ("X :\n%s" % x)
print("X.shape : %s\n" % (x.shape,))
print ("add 5 :\n%s" % (x + 5))
print ("X*X^T :\n%s" % torch.matmul(x,x.transpose(1,0))) #short: x.mm(x.t())
print ("mean over cols :\n%s" % torch.mean(x,dim=-1))
print ("cumsum of cols :\n%s" % torch.cumsum(x,dim=0))
```
#### NumPy and Pytorch
As you can notice, pytorch allows you to hack stuff much the same way you did with numpy. This means that you can _see the numeric value of any tensor at any moment of time_. Debugging such code can be done by printing tensors or using any debug tool you want (e.g. [gdb](https://wiki.python.org/moin/DebuggingWithGdb)).
You could also notice a few new method names and a different API. So no, there's no compatibility with numpy [yet](https://github.com/pytorch/pytorch/issues/2228) and yes, you'll have to memorize all the names again. Get excited!

For example,
* If something takes a list/tuple of axes in numpy, you can expect it to take *args in pytorch
* `x.reshape([1,2,8]) -> x.view(1,2,8)`
* You should swap _axis_ for _dim_ in operations like mean or cumsum
* `x.sum(axis=-1) -> x.sum(dim=-1)`
* most mathematical operations are the same, but typing and shaping are different
* `x.astype('int64') -> x.type(torch.LongTensor)`
To help you acclimatize, there's a [table](https://github.com/torch/torch7/wiki/Torch-for-Numpy-users) covering most new things. There's also a neat [documentation page](http://pytorch.org/docs/master/).
Finally, if you're stuck with a technical problem, we recommend searching the [pytorch forums](https://discuss.pytorch.org/). Or just googling, which usually works just as efficiently.
If you feel like giving up, remember two things: __GPU__ and __free gradients__. Besides, you can always jump back to numpy with `x.numpy()`.
### Warmup: trigonometric knotwork
_inspired by [this post](https://www.quora.com/What-are-the-most-interesting-equation-plots)_
There are some simple mathematical functions with cool plots. For one, consider this:
$$ x(t) = t - 1.5 * cos( 15 t) $$
$$ y(t) = t - 1.5 * sin( 16 t) $$
```
import matplotlib.pyplot as plt
%matplotlib inline
t = torch.linspace(-10, 10, steps = 10000)
# compute x(t) and y(t) as defined above
x = <your_code_here>
y = <your_code_here>
plt.plot(x.numpy(), y.numpy())
```
if you're done early, try adjusting the formula and seeing how it affects the function
```
```
```
```
```
```
```
```
```
```
```
```
```
```
```
```
```
```
## Automatic gradients
Any self-respecting DL framework must do your backprop for you. Torch handles this with the `autograd` module.
The general pipeline looks like this:
* When creating a tensor, you mark it as `requires_grad`:
* __```torch.zeros(5, requires_grad=True)```__
* __```torch.tensor(np.arange(5), dtype=torch.float32, requires_grad=True)```__
* Define some differentiable `loss = arbitrary_function(a)`
* Call `loss.backward()`
* Gradients are now available as ```a.grad```
__Here's an example:__ let's fit a linear regression on Boston house prices
```
from sklearn.datasets import load_boston
boston = load_boston()
plt.scatter(boston.data[:, -1], boston.target)
from torch.autograd import Variable
w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
x = torch.tensor(boston.data[:,-1] / 10, dtype=torch.float32)
y = torch.tensor(boston.target, dtype=torch.float32)
y_pred = w * x + b
loss = torch.mean( (y_pred - y)**2 )
# propagate gradients
loss.backward()
```
The gradients are now stored in `.grad` of those variables that require them.
```
print("dL/dw = {}\n".format(w.grad))
print("dL/db = {}\n".format(b.grad))
```
If you compute gradients from multiple losses, the gradients will add up at the variables, therefore it's useful to __zero the gradients__ between iterations.
```
from IPython.display import clear_output
for i in range(100):
    y_pred = w * x + b
    loss = torch.mean((y_pred - y)**2)
    loss.backward()

    w.data -= 0.05 * w.grad.data
    b.data -= 0.05 * b.grad.data

    #zero gradients
    w.grad.data.zero_()
    b.grad.data.zero_()

    # the rest of code is just bells and whistles
    if (i+1) % 5 == 0:
        clear_output(True)
        plt.scatter(x.data.numpy(), y.data.numpy())
        plt.scatter(x.data.numpy(), y_pred.data.numpy(), color='orange', linewidth=5)
        plt.show()

        print("loss = ", loss.data.numpy())
        if loss.data.numpy() < 0.5:
            print("Done!")
            break
```
__Quest__: try implementing and writing some nonlinear regression. You can try quadratic features or some trigonometry, or a simple neural network. The only difference is that now you have more variables and a more complicated `y_pred`.
**Remember!**

When dealing with more complex stuff like neural network, it's best if you use tensors the way samurai uses his sword.
# High-level pytorch
So far we've been dealing with the low-level torch API. While it's absolutely vital for any custom losses or layers, building large neural nets with it is a bit clumsy.
Luckily, there's also a high-level torch interface with pre-defined layers, activations and training algorithms.
We'll cover them as we go through a simple image recognition problem: classifying letters into __"A"__ vs __"B"__.
```
from notmnist import load_notmnist
X_train, y_train, X_test, y_test = load_notmnist(letters='AB')
X_train, X_test = X_train.reshape([-1, 784]), X_test.reshape([-1, 784])
print("Train size = %i, test_size = %i"%(len(X_train),len(X_test)))
for i in [0, 1]:
    plt.subplot(1, 2, i + 1)
    plt.imshow(X_train[i].reshape([28, 28]))
    plt.title(str(y_train[i]))
```
Let's start with layers. The main abstraction here is __`torch.nn.Module`__
```
from torch import nn
import torch.nn.functional as F
print(nn.Module.__doc__)
```
There's a vast library of popular layers and architectures already built for ya'.
This is a binary classification problem, so we'll train a __Logistic Regression with sigmoid__.
$$P(y_i | X_i) = \sigma(W \cdot X_i + b) ={ 1 \over {1+e^{- [W \cdot X_i + b]}} }$$
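Before wiring this up in torch, note that the model itself is tiny: a dot product, a bias, and a sigmoid squashing the result into a probability. A numpy sketch with hypothetical weights and a single 2-feature input, just to show the shapes (the real model below learns 784 weights, one per pixel):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights, bias, and input
W = np.array([0.5, -0.25])
b = 0.1
x = np.array([2.0, 4.0])

p = sigmoid(W @ x + b)  # P(y=1 | x)
print(round(float(p), 3))  # 0.525
```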
```
# create a network that stacks layers on top of each other
model = nn.Sequential()
# add first "dense" layer with 784 input units and 1 output unit.
model.add_module('l1', nn.Linear(784, 1))
# add sigmoid activation to turn the output into a probability
# note: layer names must be unique
model.add_module('l2', nn.Sigmoid())
print("Weight shapes:", [w.shape for w in model.parameters()])
# create dummy data with 3 samples and 784 features
x = torch.tensor(X_train[:3], dtype=torch.float32)
y = torch.tensor(y_train[:3], dtype=torch.float32)
# compute outputs given inputs, both are variables
y_predicted = model(x)[:, 0]
y_predicted # display what we've got
```
Let's now define a loss function for our model.
The natural choice is to use binary crossentropy (aka logloss, negative llh):
$$ L = {1 \over N} \underset{X_i,y_i} \sum - [ y_i \cdot log P(y_i | X_i) + (1-y_i) \cdot log (1-P(y_i | X_i)) ]$$
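It can help to hand-check the formula on toy numbers with numpy before writing it in torch; this is not the exercise solution, just the arithmetic of the loss:

```python
import numpy as np

y = np.array([1.0, 0.0, 1.0])   # true labels
p = np.array([0.9, 0.2, 0.6])   # predicted P(y=1 | x)

crossentropy = -(y * np.log(p) + (1 - y) * np.log(1 - p))  # one value per sample
loss = crossentropy.mean()

print(crossentropy.round(3))  # [0.105 0.223 0.511]
print(round(float(loss), 3))  # 0.28
```

Note that a perfect prediction (`p` equal to `y`) would drive the loss to zero, which is what the asserts in the next cell rely on.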
```
crossentropy = ### YOUR CODE
loss = ### YOUR CODE
assert tuple(crossentropy.size()) == (3,), "Crossentropy must be a vector with element per sample"
assert tuple(loss.size()) == (1,), "Loss must be scalar. Did you forget the mean/sum?"
assert loss.data.numpy()[0] > 0, "Crossentropy must non-negative, zero only for perfect prediction"
assert loss.data.numpy()[0] <= np.log(3), "Loss is too large even for untrained model. Please double-check it."
```
__Note:__ you can also find many such functions in `torch.nn.functional`, just type __`F.<tab>`__.
__Torch optimizers__
When we trained Linear Regression above, we had to manually .zero_() gradients on both our variables. Imagine that code for a 50-layer network.
Again, to keep it from getting dirty, there's `torch.optim` module with pre-implemented algorithms:
```
opt = torch.optim.RMSprop(model.parameters(), lr=0.01)
# here's how it's used:
loss.backward() # add new gradients
opt.step() # change weights
opt.zero_grad() # clear gradients
# dispose of old variables to avoid bugs later
del x, y, y_predicted, loss, y_pred
```
### Putting it all together
```
# create network again just in case
model = nn.Sequential()
model.add_module('first', nn.Linear(784, 1))
model.add_module('second', nn.Sigmoid())
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
history = []
for i in range(100):
    # sample 256 random images
    ix = np.random.randint(0, len(X_train), 256)
    x_batch = torch.tensor(X_train[ix], dtype=torch.float32)
    y_batch = torch.tensor(y_train[ix], dtype=torch.float32)

    # predict probabilities
    y_predicted = ### YOUR CODE

    assert y_predicted.dim() == 1, "did you forget to select first column with [:, 0]"

    # compute loss, just like before
    loss = ### YOUR CODE

    # compute gradients
    ### YOUR CODE

    # Adam step
    ### YOUR CODE

    # clear gradients
    ### YOUR CODE

    history.append(loss.data.numpy())

    if i % 10 == 0:
        print("step #%i | mean loss = %.3f" % (i, np.mean(history[-10:])))
```
__Debugging tips:__
* make sure your model predicts probabilities correctly. Just print them and see what's inside.
* don't forget the _minus_ sign in the loss function! It's a mistake 99% of people make at some point.
* make sure you zero out gradients after each step. Seriously :)
* in general, pytorch's error messages are quite helpful, read 'em before you google 'em.
* if you see nan/inf, print what happens at each iteration to find out where exactly it occurs.
* if loss goes down and then turns nan midway through, try a smaller learning rate. (Our current loss formula is numerically unstable.)
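On that last tip: the instability comes from `log(0)` when predictions saturate at exactly 0 or 1. A common fix, sketched here in numpy, is to clip probabilities away from the endpoints before taking the log; torch's built-in loss functions do something similar internally:

```python
import numpy as np

def stable_bce(probs, targets, eps=1e-7):
    # clip so log() never sees exactly 0 or 1
    p = np.clip(np.asarray(probs, dtype=float), eps, 1 - eps)
    y = np.asarray(targets, dtype=float)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()

# a fully saturated wrong prediction: the naive formula gives inf,
# the clipped one stays large but finite
print(stable_bce([1.0], [0]))
```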
### Evaluation
Let's see how our model performs on test data
```
# use your model to predict classes (0 or 1) for all test samples
predicted_y_test = ### YOUR CODE
predicted_y_test = np.array(predicted_y_test > 0.5)
assert isinstance(predicted_y_test, np.ndarray), "please return np array, not %s" % type(predicted_y_test)
assert predicted_y_test.shape == y_test.shape, "please predict one class for each test sample"
assert np.in1d(predicted_y_test, y_test).all(), "please predict class indexes"
accuracy = np.mean(predicted_y_test == y_test)
print("Test accuracy: %.5f" % accuracy)
assert accuracy > 0.95, "try training longer"
print('Great job!')
```
### More about pytorch:
* Using torch on GPU and multi-GPU - [link](http://pytorch.org/docs/master/notes/cuda.html)
* More tutorials on pytorch - [link](http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html)
* Pytorch examples - a repo that implements many cool DL models in pytorch - [link](https://github.com/pytorch/examples)
* Practical pytorch - a repo that implements some... other cool DL models... yes, in pytorch - [link](https://github.com/spro/practical-pytorch)
* And some more - [link](https://www.reddit.com/r/pytorch/comments/6z0yeo/pytorch_and_pytorch_tricks_for_kaggle/)
<a href="https://colab.research.google.com/github/yukinaga/lecture_pytorch/blob/master/lecture4/cnn.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Implementing a CNN
We implement a convolutional neural network (CNN) in PyTorch.
A CNN itself can be built simply by adding convolutional layers, but this time we also implement data augmentation and dropout.
## CIFAR-10
We load CIFAR-10 using torchvision.datasets.
CIFAR-10 is a dataset of about 60,000 labeled images.
The following code loads CIFAR-10 and displays 25 randomly chosen images.
```
from torchvision.datasets import CIFAR10
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
import numpy as np
import matplotlib.pyplot as plt
cifar10_data = CIFAR10(root="./data",
train=False,download=True,
transform=transforms.ToTensor())
cifar10_classes = np.array(["airplane", "automobile", "bird", "cat", "deer",
"dog", "frog", "horse", "ship", "truck"])
print("Number of samples:", len(cifar10_data))
n_image = 25  # number of images to display
cifar10_loader = DataLoader(cifar10_data, batch_size=n_image, shuffle=True)
dataiter = iter(cifar10_loader)  # iterator
images, labels = next(dataiter)  # fetch the first batch (.next() was removed in recent PyTorch)
plt.figure(figsize=(10,10))  # figure size
for i in range(n_image):
    plt.subplot(5,5,i+1)
    plt.imshow(np.transpose(images[i], (1, 2, 0)))  # move channels to the last axis
    label = cifar10_classes[labels[i]]
    plt.title(label)
    plt.tick_params(labelbottom=False, labelleft=False, bottom=False, left=False)  # hide labels and ticks
plt.show()
```
## Data Augmentation
We perform data augmentation using torchvision.transforms.
This time, we apply rotations between -30 and 30 degrees and resizing by a factor of 0.8 to 1.2 to the CIFAR-10 images.
These transformations are applied at random to the original images each time a batch is drawn.
```
from torchvision.datasets import CIFAR10
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
import numpy as np
import matplotlib.pyplot as plt
transform = transforms.Compose([transforms.RandomAffine([-30, 30], scale=(0.8, 1.2)),  # rotation and resizing
transforms.ToTensor()])
cifar10_data = CIFAR10(root="./data",
train=False,download=True,
transform=transform)
cifar10_classes = np.array(["airplane", "automobile", "bird", "cat", "deer",
"dog", "frog", "horse", "ship", "truck"])
print("Number of samples:", len(cifar10_data))
n_image = 25  # number of images to display
cifar10_loader = DataLoader(cifar10_data, batch_size=n_image, shuffle=True)
dataiter = iter(cifar10_loader)  # iterator
images, labels = next(dataiter)  # fetch the first batch (.next() was removed in recent PyTorch)
plt.figure(figsize=(10,10))  # figure size
for i in range(n_image):
    plt.subplot(5,5,i+1)
    plt.imshow(np.transpose(images[i], (1, 2, 0)))  # move channels to the last axis
    label = cifar10_classes[labels[i]]
    plt.title(label)
    plt.tick_params(labelbottom=False, labelleft=False, bottom=False, left=False)  # hide labels and ticks
plt.show()
```
## Data Preprocessing
From here, we implement the CNN.
For data augmentation, we apply rotation, resizing, and horizontal flips.
We also standardize the inputs to mean 0 and standard deviation 1 so that training proceeds efficiently.
We set up DataLoaders for the training and test data separately; since we do not use mini-batches on the test data, its batch size is set to the number of samples in the test set.
```
from torchvision.datasets import CIFAR10
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
affine = transforms.RandomAffine([-15, 15], scale=(0.8, 1.2))  # rotation and resizing
flip = transforms.RandomHorizontalFlip(p=0.5)  # random horizontal flip
normalize = transforms.Normalize((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))  # standardization (note: mean 0 and std 1 here leave the values unchanged)
to_tensor = transforms.ToTensor()
transform_train = transforms.Compose([affine, flip, to_tensor, normalize])
transform_test = transforms.Compose([to_tensor, normalize])
cifar10_train = CIFAR10("./data", train=True, download=True, transform=transform_train)
cifar10_test = CIFAR10("./data", train=False, download=True, transform=transform_test)
# DataLoader settings
batch_size = 64
train_loader = DataLoader(cifar10_train, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(cifar10_test, batch_size=len(cifar10_test), shuffle=False)
```
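As a quick aside, standardization just means subtracting the per-channel mean and dividing by the per-channel standard deviation. A minimal numpy sketch, using a random array as a stand-in for a batch of image data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(64, 3, 32, 32))  # fake batch: (N, C, H, W)

mean = x.mean(axis=(0, 2, 3), keepdims=True)  # per-channel mean
std = x.std(axis=(0, 2, 3), keepdims=True)    # per-channel std
x_std = (x - mean) / std

print(x_std.mean(), x_std.std())  # approximately 0 and 1
```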
## Building the Model
We build the model as a class that inherits from `nn.Module`.
This time, we introduce dropout to suppress overfitting.
```
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)  # convolutional layer: (in channels, out channels, kernel size)
        self.pool = nn.MaxPool2d(2, 2)  # pooling layer: (kernel size, stride)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16*5*5, 256)  # fully connected layer
        self.dropout = nn.Dropout(p=0.5)  # dropout: (p = dropout rate)
        self.fc2 = nn.Linear(256, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16*5*5)
        x = F.relu(self.fc1(x))
        x = self.dropout(x)
        x = self.fc2(x)
        return x

net = Net()
net.cuda()  # move to GPU
print(net)
```
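The `16*5*5` in `fc1` comes from tracking the feature-map size through the network: for a convolution or pooling with kernel size k and stride s (no padding), the output side length is `(n - k)//s + 1`. A quick check in plain Python:

```python
def out_size(n, k, s=1):
    # output side length for kernel k, stride s, no padding
    return (n - k) // s + 1

n = 32                 # CIFAR-10 images are 32x32
n = out_size(n, 5)     # conv1, 5x5 kernel -> 28
n = out_size(n, 2, 2)  # 2x2 max pool      -> 14
n = out_size(n, 5)     # conv2, 5x5 kernel -> 10
n = out_size(n, 2, 2)  # 2x2 max pool      -> 5
print(n, 16 * n * n)   # 5, 400 -> matches nn.Linear(16*5*5, 256)
```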
## Training
We train the model.
Using the DataLoader, we draw mini-batches for training and evaluation.
This time, we do not use mini-batches for evaluation; the loss is computed on the entire test set at once.
Training takes a while, so select GPU under Edit → Notebook settings → Hardware accelerator.
```
from torch import optim
# cross-entropy loss function
loss_fnc = nn.CrossEntropyLoss()
# optimization algorithm
optimizer = optim.Adam(net.parameters())
# loss history
record_loss_train = []
record_loss_test = []
# training
x_test, t_test = next(iter(test_loader))
x_test, t_test = x_test.cuda(), t_test.cuda()
for i in range(20):  # train for 20 epochs
    net.train()  # training mode
    loss_train = 0
    for j, (x, t) in enumerate(train_loader):  # fetch a mini-batch (x, t)
        x, t = x.cuda(), t.cuda()  # move to GPU
        y = net(x)
        loss = loss_fnc(y, t)
        loss_train += loss.item()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    loss_train /= j+1
    record_loss_train.append(loss_train)
    net.eval()  # evaluation mode
    y_test = net(x_test)
    loss_test = loss_fnc(y_test, t_test).item()
    record_loss_test.append(loss_test)
    if i%1 == 0:
        print("Epoch:", i, "Loss_Train:", loss_train, "Loss_Test:", loss_test)
```
## Loss History
We plot how the loss evolved on the training and test data.
```
import matplotlib.pyplot as plt
plt.plot(range(len(record_loss_train)), record_loss_train, label="Train")
plt.plot(range(len(record_loss_test)), record_loss_test, label="Test")
plt.legend()
plt.xlabel("Epochs")
plt.ylabel("Error")
plt.show()
```
## Accuracy
To gauge the model's performance, we measure its accuracy on the test data.
```
correct = 0
total = 0
net.eval()  # evaluation mode
for i, (x, t) in enumerate(test_loader):
    x, t = x.cuda(), t.cuda()  # move to GPU
    y = net(x)
    correct += (y.argmax(1) == t).sum().item()
    total += len(x)
print("Accuracy:", str(correct/total*100) + "%")
```
## Prediction with the Trained Model
Let's use the trained model.
We feed it an image and confirm that the model works.
```
cifar10_loader = DataLoader(cifar10_test, batch_size=1, shuffle=True)
dataiter = iter(cifar10_loader)
images, labels = next(dataiter)  # fetch a single sample
plt.imshow(np.transpose(images[0], (1, 2, 0)))  # move channels to the last axis
plt.tick_params(labelbottom=False, labelleft=False, bottom=False, left=False)  # hide labels and ticks
plt.show()
net.eval()  # evaluation mode
x, t = images.cuda(), labels.cuda()  # move to GPU
y = net(x)
print("Ground truth:", cifar10_classes[labels[0]],
      "Prediction:", cifar10_classes[y.argmax().item()])
```

# YES BANK DATATHON
## Machine Learning Challenge Round 3 - EDA
### Data Description
The data consists of credit records of individuals with certain attributes.
```
import numpy as np
import pandas as pd
import seaborn as sns
%matplotlib inline
import matplotlib.pyplot as plt
train=pd.read_csv('Yes_Bank_Train.csv')
test=pd.read_csv('Yes_Bank_Test_int.csv')
train.info()
sub=pd.read_csv('sample_clusters.csv')
train.head()
train.describe(include='all').T
test.describe(include='all').T
```
Most of the variables are categorical. Let's take a look at the features and what they signify.
a. **serial number** : unique identification key
b. **account_info :** Categorized details of existing accounts of the individuals. The balance of money in account provided is stated by this variable
* A11 signifies 0 (excluding 0) or lesser amount credited to current checking account. (Amounts are in units of certain currency)
* A12 signifies greater than 0 (including 0) and lesser than 200 (excluding 200) units of currency
* A13 signifies amount greater than 200 (including 200) being recorded in the account
* A14 signifies no account details provided
c. **duration_month** : Duration in months for which the credit is existing
d. **credit_history** : This categorical variable signifies the credit history of the individual who has taken the loan
* A30 signifies that no previous loans have been taken or all loans taken have been paid back.
* A31 signifies that all loans from the current bank have been paid off. Loan information from other banks is not available.
* A32 signifies loans exist, but so far regular installments have been paid back in full.
* A33 signifies that significant delays have been seen in repayment of loan installments.
* A34 signifies other loans exist at the same bank. Irregular behaviour in repayment.
e. **purpose**: This variable signifies why the loan was taken
* A40 signifies that the loan is taken to buy a new car
* A41 signifies that the loan was taken to buy an old car
* A42 signifies that the loan is taken to buy furniture or equipment
* A43 signifies that the loan is taken to buy radio or TV
* A44 signifies that the loan is taken to buy domestic appliances
* A45 signifies that the loan is taken for repairing purposes
* A46 signifies that the loan is taken for education
* A47 signifies that the loan is taken for vacation
* A48 signifies that the loan is taken for re skilling
* A49 signifies that the loan is taken for business and establishment
* A410 signifies other purposes
f. **credit_amount**: The numerical variable signifies the amount credited to the individual (in units of a certain currency)(**TARGET**)
g. **savings_account**: This variable signifies details of the amount present in savings account of the individual:
* A61 signifies that less than 100 units (excluding 100) of currency is present
* A62 signifies that greater than 100 units (including 100) and less than 500 (excluding 500) units of currency is present
* A63 signifies that greater than 500 (including 500) and less than 1000 (excluding 1000) units of currency is present.
* A64 signifies that greater than 1000 (including 1000) units of currency is present.
* A65 signifies that no savings account details are present on record
h. **employment_s**: Categorical variable that signifies the employment status of each individual who has been allotted a loan
* A71 signifies that the individual is unemployed
* A72 signifies that the individual has been employed for less than a year
* A73 signifies that the individual has been employed for more than a year but less than four years
* A74 signifies that the individual has been employed more than four years but less than seven years
* A75 signifies that the individual has been employed for more than seven years
i. **poi**: This numerical variable signifies what percentage of disposable income is spent on loan interest amount.
j. **personal_status**: This categorical variable signifies the personal status of the individual
* A91 signifies that the individual is a separated or divorced male
* A92 signifies female individuals who are separated or divorced
* A93 signifies unmarried males
* A94 signifies married or widowed males
* A95 signifies single females
k. **gurantors**: Categorical variable which signifies if any other individual is involved with an individual loan case
* A101 signifies that only a single individual is involved in the loan application
* A102 signifies that one or more co-applicant is present in the loan application
* A103 signifies that a guarantor is present.
l. **resident_since**: Numerical variable that signifies for how many years the applicant has been a resident
m. **property_type**: This qualitative variable defines the property holding information of the individual
* A121 signifies that the individual holds real estate property
* A122 signifies that the individual holds a building society savings agreement or life insurance
* A123 signifies that the individual holds cars or other properties
* A124 signifies that property information is not available
n. **age**: Numerical variable that signifies age in number of years
o. **installment_type**: This variable signifies other installment types taken
* A141 signifies installment to bank
* A142 signifies installment to outlets or stores
* A143 signifies that no information is present
p. **housing_type**: This is a categorical variable that signifies which type of housing the applicant has.
* A151 signifies that the housing is rented
* A152 signifies that the housing is owned by the applicant
* A153 signifies that the housing is free of charge (no loan amount or expense on the housing)
q. **credits_no**: Numerical variable for number of credits taken by the person
r. **job_type**: Signifies the employment status of the person
* A171 signifies that the individual is unemployed or unskilled and is a non-resident
* A172 signifies that the individual is unskilled but is a resident
* A173 signifies that the individual is a skilled employee or official
* A174 signifies that the individual is involved in management or is self-employed or a highly qualified employee or officer
s. **liables**: Signifies number of persons dependent on the applicant
t. **telephone**: Signifies if the individual has a telephone or not
* A191 signifies that no telephonic records are present
* A192 signifies that a telephone is registered under the customer's name
u. **foreigner**: Signifies if the individual is a foreigner or not (considering the country of residence of the bank)
* A201 signifies that the individual is a foreigner
* A202 signifies that the individual is a resident
```
plt.figure(figsize=(12,9))
sns.countplot(train.account_info)
plt.figure(figsize=(12,9))
sns.pairplot(train.drop(['serial number','liables'],axis=1),hue='account_info')
plt.figure(figsize=(12,9))
sns.pairplot(train.drop(['serial number','liables'],axis=1),hue='credit_history')
plt.figure(figsize=(12,9))
sns.jointplot(train.age,train.credit_amount,kind='hex')
# sns.jointplot(train.poi,train.credit_amount,kind='scatter')
sns.lmplot('duration_month','credit_amount',train,hue='credit_history')
```
### One Hot Encoded
```
dftrain=pd.get_dummies(train,drop_first=True)
dftrain.head()
```
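`pd.get_dummies(..., drop_first=True)` expands each categorical column into indicator columns and drops the first level to avoid redundancy (the dropped level becomes the implicit baseline). A minimal sketch of the same idea without pandas, using toy category codes rather than the actual dataset:

```python
def one_hot(values, drop_first=True):
    # map each distinct category to an indicator vector
    levels = sorted(set(values))
    kept = levels[1:] if drop_first else levels
    return [[1 if v == lvl else 0 for lvl in kept] for v in values]

codes = ["A11", "A12", "A14", "A11"]
print(one_hot(codes))  # columns for A12 and A14 only; A11 is the dropped baseline
```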
**Correlation**
```
plt.figure(figsize=(15,10))
sns.heatmap(dftrain.corr())
plt.figure(figsize=(12,9))
plt.scatter(dftrain['serial number'],dftrain.credit_amount)
```
**Let's try a train/test split to see where we stand**
```
X,y=dftrain.drop(['serial number','credit_amount'],axis=1),dftrain.credit_amount
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1994)
```

# YES BANK DATATHON
## Machine Learning Challenge Round 3 - Prediction
### Ensemble
Taking **RMSE** as the Eval Metric
```
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error
rf=RandomForestRegressor(n_estimators=420)
rf.fit(X_train,y_train)
p=rf.predict(X_test)
print(np.sqrt(mean_squared_error(y_test,p)))
```
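RMSE here is just the square root of `mean_squared_error`. For reference, the same computation written out by hand on made-up values:

```python
import numpy as np

def rmse(y_true, y_pred):
    # root mean squared error: sqrt(mean((y - yhat)^2))
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

print(rmse([100, 200, 300], [110, 190, 300]))  # sqrt((100 + 100 + 0) / 3)
```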
Feature Importances
```
col=pd.DataFrame({'col':X.columns,'imp':rf.feature_importances_}).sort_values('imp',ascending=False)
col
```
Taking the top 45 features
```
main_col=col.col.values[:45]
```
One-hot encoding the test data as well
```
dftest=pd.get_dummies(test,drop_first=True)
dftest.head()
```
Using **Kfold**
```
X1=X[main_col]
dftest1=dftest[main_col]
from sklearn.metrics import mean_squared_error
err=[]
pdd=[]
from sklearn.model_selection import KFold
fold=KFold(n_splits=4,shuffle=True)
for train_index, test_index in fold.split(X1):
    X_train, X_test = X1.iloc[train_index], X1.iloc[test_index]
    y_train, y_test = y[train_index], y[test_index]
    rf = RandomForestRegressor(n_estimators=420, max_features=10)
    rf.fit(X_train, y_train)
    err.append(np.sqrt(mean_squared_error(y_test, rf.predict(X_test))))
    p = rf.predict(dftest1)
    pdd.append(p)
np.mean(err,axis=0)
pdd_mean=np.mean(pdd,axis=0)
pdd_mean
```
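The KFold loop above trains one model per fold and averages the per-fold test predictions. The index bookkeeping that KFold does can be sketched by hand with numpy alone (toy sizes, no shuffling):

```python
import numpy as np

def kfold_indices(n, n_splits):
    # split range(n) into n_splits folds; yield (train_idx, test_idx) pairs
    idx = np.arange(n)
    folds = np.array_split(idx, n_splits)
    for i, test_idx in enumerate(folds):
        train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train_idx, test_idx

splits = list(kfold_indices(10, 4))
print([len(te) for _, te in splits])  # fold sizes: [3, 3, 2, 2]
```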
### Neural Network
```
from sklearn.neural_network import MLPClassifier, MLPRegressor
mlp=MLPRegressor(hidden_layer_sizes=(120,30,), activation="relu", max_iter=500, random_state=8,solver='adam')
mlp.fit(X_train,y_train)
p=mlp.predict(X_test)
print(np.sqrt(mean_squared_error(y_test,p)))
mlp.fit(X,y)
pred=mlp.predict(dftest.drop('serial number',axis=1))
pred
```
Taking the average of the two models' predictions
```
main_p=(pdd_mean+pred)/2
sub=pd.DataFrame({'serial number':test['serial number'],'credit_amount':main_p})
sub.head()
sub.to_csv('stack_main.csv',index=False)
```
Week 5 Notebook: Building a Deep Learning Model
===============================================================
Now, we'll look at a deep learning model based on low-level track features.
```
import tensorflow.keras as keras
import numpy as np
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
import uproot
import tensorflow
import yaml
with open('definitions.yml') as file:
# The FullLoader parameter handles the conversion from YAML
# scalar values to Python the dictionary format
definitions = yaml.load(file, Loader=yaml.FullLoader)
features = definitions['features']
spectators = definitions['spectators']
labels = definitions['labels']
nfeatures = definitions['nfeatures']
nspectators = definitions['nspectators']
nlabels = definitions['nlabels']
ntracks = definitions['ntracks']
```
## Data Generators
A quick aside on data generators. As training on large datasets is a key component of many deep learning approaches (especially in high energy physics), and these datasets often no longer fit in memory, it is important to write a data generator that can fetch data automatically.
Here we modify one from: https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly
```
from DataGenerator import DataGenerator
help(DataGenerator)
# load training and validation generators
train_files = ['root://eospublic.cern.ch//eos/opendata/cms/datascience/HiggsToBBNtupleProducerTool/HiggsToBBNTuple_HiggsToBB_QCD_RunII_13TeV_MC/train/ntuple_merged_10.root']
val_files = ['root://eospublic.cern.ch//eos/opendata/cms/datascience/HiggsToBBNtupleProducerTool/HiggsToBBNTuple_HiggsToBB_QCD_RunII_13TeV_MC/train/ntuple_merged_11.root']
train_generator = DataGenerator(train_files, features, labels, spectators, batch_size=1024, n_dim=ntracks,
remove_mass_pt_window=False,
remove_unlabeled=True, max_entry=8000)
val_generator = DataGenerator(val_files, features, labels, spectators, batch_size=1024, n_dim=ntracks,
remove_mass_pt_window=False,
remove_unlabeled=True, max_entry=2000)
```
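For intuition (this is not the actual `DataGenerator` used above), the core idea of a data generator is to serve one batch at a time by index instead of loading everything at once. A minimal sketch with a small indexable class over in-memory arrays:

```python
import numpy as np

class ArrayBatcher:
    """Minimal batch-by-index generator over in-memory arrays."""
    def __init__(self, X, y, batch_size):
        self.X, self.y, self.batch_size = X, y, batch_size

    def __len__(self):
        # number of batches per epoch
        return int(np.ceil(len(self.X) / self.batch_size))

    def __getitem__(self, i):
        sl = slice(i * self.batch_size, (i + 1) * self.batch_size)
        return self.X[sl], self.y[sl]

X = np.zeros((2500, 60, 48))  # e.g. (samples, tracks, features); toy shapes
y = np.zeros((2500, 2))
gen = ArrayBatcher(X, y, batch_size=1024)
print(len(gen), gen[2][0].shape)  # 3 batches; the last one holds 452 samples
```

A real Keras generator would subclass `keras.utils.Sequence` (the same `__len__`/`__getitem__` protocol) and typically shuffle indices between epochs.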
## Test Data Generator
Note that the track array has a different "shape." There are also fewer samples than the requested `batch_size=1024` because we remove unlabeled samples.
```
X, y = train_generator[1]
print(X.shape)
print(y.shape)
```
This generator can be optimized further (e.g., by storing the data file locally). Keep in mind that I/O is often a bottleneck when training big networks.
## Fully Connected Neural Network Classifier
```
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, BatchNormalization, Flatten
import tensorflow.keras.backend as K
# define dense keras model
inputs = Input(shape=(ntracks, nfeatures,), name='input')
x = BatchNormalization(name='bn_1')(inputs)
x = Flatten(name='flatten_1')(x)
x = Dense(64, name='dense_1', activation='relu')(x)
x = Dense(32, name='dense_2', activation='relu')(x)
x = Dense(32, name='dense_3', activation='relu')(x)
outputs = Dense(nlabels, name='output', activation='softmax')(x)
keras_model_dense = Model(inputs=inputs, outputs=outputs)
keras_model_dense.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
print(keras_model_dense.summary())
# define callbacks
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
early_stopping = EarlyStopping(monitor='val_loss', patience=5)
reduce_lr = ReduceLROnPlateau(patience=5, factor=0.5)
model_checkpoint = ModelCheckpoint('keras_model_dense_best.h5', monitor='val_loss', save_best_only=True)
callbacks = [early_stopping, model_checkpoint, reduce_lr]
# fit keras model
history_dense = keras_model_dense.fit(train_generator,
validation_data=val_generator,
steps_per_epoch=len(train_generator),
validation_steps=len(val_generator),
max_queue_size=5,
epochs=20,
shuffle=False,
callbacks=callbacks,
verbose=0)
# reload best weights
keras_model_dense.load_weights('keras_model_dense_best.h5')
plt.figure()
plt.plot(history_dense.history['loss'], label='Loss')
plt.plot(history_dense.history['val_loss'], label='Val. loss')
plt.xlabel('Epoch')
plt.legend()
plt.show()
```
## Deep Sets Classifier
This model uses the `Dense` layer of Keras, but really it's more like the Deep Sets architecture applied to jets, the so-called particle-flow network approach{cite:p}`Komiske:2018cqr,NIPS2017_6931`.
We are applying the same fully connected neural network to each track.
Then the `GlobalAveragePooling1D` layer sums over the tracks (actually it takes the mean).
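For intuition: because the pooling averages over the track axis, the Deep Sets output is invariant to the ordering of the tracks. A numpy sketch of the pooled representation, with a random matrix standing in for the shared per-track `Dense` layer (not the trained network):

```python
import numpy as np

rng = np.random.default_rng(42)
tracks = rng.normal(size=(60, 48))  # one jet: (ntracks, nfeatures)
W = rng.normal(size=(48, 32))       # stand-in for the shared per-track Dense weights

def pooled(x):
    per_track = np.maximum(x @ W, 0.0)  # same mapping applied to every track (ReLU)
    return per_track.mean(axis=0)       # GlobalAveragePooling1D over tracks

shuffled = tracks[rng.permutation(60)]
print(np.allclose(pooled(tracks), pooled(shuffled)))  # True: track order doesn't matter
```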
```
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, BatchNormalization, GlobalAveragePooling1D
import tensorflow.keras.backend as K
# define Deep Sets model with Dense Keras layer
inputs = Input(shape=(ntracks, nfeatures,), name='input')
x = BatchNormalization(name='bn_1')(inputs)
x = Dense(64, name='dense_1', activation='relu')(x)
x = Dense(32, name='dense_2', activation='relu')(x)
x = Dense(32, name='dense_3', activation='relu')(x)
# sum over tracks
x = GlobalAveragePooling1D(name='pool_1')(x)
x = Dense(100, name='dense_4', activation='relu')(x)
outputs = Dense(nlabels, name='output', activation='softmax')(x)
keras_model_deepset = Model(inputs=inputs, outputs=outputs)
keras_model_deepset.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
print(keras_model_deepset.summary())
# define callbacks
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
early_stopping = EarlyStopping(monitor='val_loss', patience=5)
reduce_lr = ReduceLROnPlateau(patience=5, factor=0.5)
model_checkpoint = ModelCheckpoint('keras_model_deepset_best.h5', monitor='val_loss', save_best_only=True)
callbacks = [early_stopping, model_checkpoint, reduce_lr]
# fit keras model
history_deepset = keras_model_deepset.fit(train_generator,
validation_data=val_generator,
steps_per_epoch=len(train_generator),
validation_steps=len(val_generator),
max_queue_size=5,
epochs=20,
shuffle=False,
callbacks=callbacks,
verbose=0)
# reload best weights
keras_model_deepset.load_weights('keras_model_deepset_best.h5')
plt.figure()
plt.plot(history_deepset.history['loss'], label='Loss')
plt.plot(history_deepset.history['val_loss'], label='Val. loss')
plt.xlabel('Epoch')
plt.legend()
plt.show()
# load testing file
test_files = ['root://eospublic.cern.ch//eos/opendata/cms/datascience/HiggsToBBNtupleProducerTool/HiggsToBBNTuple_HiggsToBB_QCD_RunII_13TeV_MC/test/ntuple_merged_0.root']
test_generator = DataGenerator(test_files, features, labels, spectators, batch_size=1024, n_dim=ntracks,
remove_mass_pt_window=True,
remove_unlabeled=True)
# run model inference on test data set
predict_array_dense = []
predict_array_deepset = []
label_array_test = []
for t in test_generator:
    label_array_test.append(t[1])
    predict_array_dense.append(keras_model_dense.predict(t[0]))
    predict_array_deepset.append(keras_model_deepset.predict(t[0]))
predict_array_dense = np.concatenate(predict_array_dense, axis=0)
predict_array_deepset = np.concatenate(predict_array_deepset, axis=0)
label_array_test = np.concatenate(label_array_test, axis=0)
# create ROC curves
fpr_dense, tpr_dense, threshold_dense = roc_curve(label_array_test[:,1], predict_array_dense[:,1])
fpr_deepset, tpr_deepset, threshold_deepset = roc_curve(label_array_test[:,1], predict_array_deepset[:,1])
# plot ROC curves
plt.figure()
plt.plot(tpr_dense, fpr_dense, lw=2.5, label="Dense, AUC = {:.1f}%".format(auc(fpr_dense, tpr_dense)*100))
plt.plot(tpr_deepset, fpr_deepset, lw=2.5, label="Deep Sets, AUC = {:.1f}%".format(auc(fpr_deepset, tpr_deepset)*100))
plt.xlabel(r'True positive rate')
plt.ylabel(r'False positive rate')
plt.semilogy()
plt.ylim(0.001, 1)
plt.xlim(0, 1)
plt.grid(True)
plt.legend(loc='upper left')
plt.show()
```
We see that the more structurally aware Deep Sets model does better than a simple fully connected neural network approach.
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from tqdm import tqdm
from astropy.table import Table
import astropy.units as u
import os
# Using `batman` to create & fit fake transit
import batman
# Using astropy BLS and scipy curve_fit to fit transit
from astropy.timeseries import BoxLeastSquares
# Using emcee & corner to find and plot (e, w) distribution with MCMC
import emcee
import corner
# Using dynesty to do the same with nested sampling
import dynesty
import scipy.constants as c
# And importing `photoeccentric`
import photoeccentric as ph
%load_ext autoreload
%autoreload 2
# pandas display option
pd.set_option('display.float_format', lambda x: '%.5f' % x)
```
1. Choose 3 planets with e drawn from a Gaussian distribution with (0.5, 0.2)
2. Fit them using photoeccentric; find the fit e and w
3. Implement the Van Eylen equation to find the underlying e distribution
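One caveat for step 1 (a sketch, not part of `photoeccentric`): a Gaussian with mean 0.5 and width 0.2 can occasionally produce e < 0 or e ≥ 1, which is unphysical for a bound, transiting orbit. A simple guard is to redraw until the sample lands in [0, 1):

```python
import numpy as np

def draw_eccentricities(n, loc=0.5, scale=0.2, rng=None):
    # rejection-sample a Gaussian truncated to the physical range [0, 1)
    rng = np.random.default_rng(rng)
    out = []
    while len(out) < n:
        e = rng.normal(loc, scale)
        if 0.0 <= e < 1.0:
            out.append(e)
    return np.array(out)

es = draw_eccentricities(3, rng=0)
print(es)  # three physically valid eccentricities
```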
```
true_es = np.random.normal(loc=0.5, scale=0.2, size=3)
true_ws = np.random.uniform(low=-90, high=270, size=3)
true_es
nwalk = 64
nsteps = 1000
ndiscard = 500
arrlen = (nsteps-ndiscard)*nwalk
smass_kg = 1.9885e30 # Solar mass (kg)
srad_m = 696.34e6 # Solar radius (m)
muirhead_data = pd.read_csv("datafiles/Muirhead2013_isochrones/muirhead_data_incmissing.txt", sep=" ")
# ALL Kepler planets from exo archive
planets = pd.read_csv('datafiles/exoplanetarchive/cumulative_kois.csv')
# Take the Kepler planet archive entries for the planets in Muirhead et al. 2013 sample
spectplanets = pd.read_csv('spectplanets.csv')
# Kepler-Gaia Data
kpgaia = Table.read('datafiles/Kepler-Gaia/kepler_dr2_4arcsec.fits', format='fits').to_pandas();
# Kepler-Gaia data for only the objects in our sample
muirhead_gaia = pd.read_csv("muirhead_gaia.csv")
# Combined spectroscopy data + Gaia/Kepler data for our sample
muirhead_comb = pd.read_csv('muirhead_comb.csv')
# Only targets from table above with published luminosities from Gaia
muirhead_comb_lums = pd.read_csv('muirhead_comb_lums.csv')
# Kepler ID for Kepler-1582 b
kepid = 9710326
kepname = spectplanets.loc[spectplanets['kepid'] == kepid].kepler_name.values[0]
kp737b = muirhead_comb.loc[muirhead_comb['KIC'] == kepid]
KOI = 947
isodf = pd.read_csv("datafiles/isochrones/iso_lums_" + str(kepid) + ".csv")
mstar = isodf["mstar"].mean()
mstar_err = isodf["mstar"].std()
rstar = isodf["radius"].mean()
rstar_err = isodf["radius"].std()
rho_star, mass, radius = ph.find_density_dist_symmetric(mstar, mstar_err, rstar, rstar_err, arrlen)
period, period_uerr, period_lerr, rprs, rprs_uerr, rprs_lerr, a_arc, a_uerr_arc, a_lerr_arc, i, e_arc, w_arc = ph.planet_params_from_archive(spectplanets, kepname)
# We calculate a_rs to ensure that it's consistent with the spec/Gaia stellar density.
a_rs = ph.calc_a(period*86400.0, mstar*smass_kg, rstar*srad_m)
a_rs_err = np.mean((a_uerr_arc, a_lerr_arc))
print('Stellar mass (Msun): ', mstar, 'Stellar radius (Rsun): ', rstar)
print('Period (Days): ', period, 'Rp/Rs: ', rprs)
print('a/Rs: ', a_rs)
print('i (deg): ', i)
inc = 89.99
```
## First Planet
```
# 30 minute cadence
cadence = 0.02142857142857143
time = np.arange(-300, 300, cadence)
e
w
# Define e and w, calculate flux from transit model
e = true_es[0]
w = true_ws[0]
flux = ph.integratedlc(time, period, rprs, a_rs, e, i, w, 0.0)
# Adding some gaussian noise on the order of Kepler noise (by eyeball)
noise = np.random.normal(0,0.0001,len(time))
nflux = flux+noise
flux_err = np.array([0.0001]*len(nflux))
plt.errorbar(time, nflux, yerr=flux_err, fmt='o')
plt.xlabel('Time')
plt.ylabel('Flux')
plt.xlim(-0.5, 0.5)
plt.axvline(0.0, c='r', label='Transit midpoint')
plt.legend()
transitmpt = 0
midpoints = np.unique(np.sort(np.concatenate((np.arange(transitmpt, time[0], -period), np.arange(transitmpt, time[-1], period)))))
```
## Fitting the transit
```
# Remove Out of Transit Data
ttime = []
tflux = []
tflux_err = []
for i in range(len(midpoints)):
    m, b, t1bjd, t1, fnorm, fe1 = ph.do_linfit(time, nflux, flux_err, midpoints[i], 11, 5)
    ttime.append(t1bjd)
    tflux.append(fnorm)
    tflux_err.append(fe1)
ttime = np.array(ttime).flatten()
tflux = np.array(tflux).flatten()
tflux_err = np.array(tflux_err).flatten()
tflux = np.nan_to_num(tflux, nan=1.0)
tflux_err = np.nan_to_num(tflux_err, nan=np.nanmedian(tflux_err))
priortransform = [3., 27., 1., 0., 15., 64., 2., 88., 0.1, transitmpt]
nbuffer = 11
dres, perDists, rpDists, arsDists, incDists, t0Dist = ph.fit_keplc_dynesty(KOI, midpoints, ttime, tflux, tflux_err, priortransform, arrlen, nbuffer, spectplanets, muirhead_comb)
#perDists
np.savetxt('S1periods.csv', perDists, delimiter=',')
np.savetxt('S1rprs.csv', rpDists, delimiter=',')
np.savetxt('S1ars.csv', arsDists, delimiter=',')
np.savetxt('S1inc.csv', incDists, delimiter=',')
np.savetxt('S1t0.csv', t0Dist, delimiter=',')
t0Dists = t0Dist
per_f = ph.mode(perDists)
rprs_f = ph.mode(rpDists)
a_f = ph.mode(arsDists)
i_f = ph.mode(incDists)
t0_f = ph.mode(t0Dists)
# Create a light curve with the fit parameters
fit1 = ph.integratedlc_fitter(ttime, per_f, rprs_f, a_f, i_f, t0_f)
plt.errorbar(ttime, tflux, yerr=tflux_err, c='blue', alpha=0.5, label='Original LC')
plt.plot(ttime, fit1, c='red', alpha=1.0, label='Fit LC')
#plt.xlim(-0.1, 0.1)
plt.legend()
print('Stellar mass (Msun): ', mstar, 'Stellar radius (Rsun): ', rstar)
print('\n')
print('Input params:')
print('Rp/Rs: ', rprs)
print('a/Rs: ', a_rs)
print('i (deg): ', i)
print('\n')
print('Fit params:')
print('Rp/Rs: ', rprs_f)
print('a/Rs: ', a_f)
print('i (deg): ', i_f)
```
### Determining T14 and T23
```
pdist = perDists
rdist = rpDists
adist = arsDists
idist = incDists
t0dist = t0Dists
T14dist = ph.get_T14(pdist, rdist, adist, idist)
T14errs = ph.get_sigmas(T14dist)
T23dist = ph.get_T23(pdist, rdist, adist, idist)
T23errs = ph.get_sigmas(T23dist)
```
# Get $g$
```
gs, rho_c = ph.get_g_distribution(rho_star, pdist, rdist, T14dist, T23dist)
g_mean = ph.mode(gs)
g_sigma = np.mean(np.abs(ph.get_sigmas(gs)))
g_mean
g_sigma
#Guesses
w_guess = 0.0
e_guess = 0.0
solnx = (w_guess, e_guess)
pos = solnx + 1e-4 * np.random.randn(32, 2)
nwalkers, ndim = pos.shape
sampler = emcee.EnsembleSampler(nwalkers, ndim, ph.log_probability, args=(g_mean, g_sigma), threads=4)
sampler.run_mcmc(pos, 5000, progress=True);
labels = ["w", "e"]
flat_samples = sampler.get_chain(discard=100, thin=15, flat=True)
fig = corner.corner(flat_samples, labels=labels, title_kwargs={"fontsize": 12}, truths=[w, e], plot_contours=True)
```
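`ph.log_probability` is not defined in this notebook. Under the photoeccentric-effect formalism (Dawson & Johnson 2012), `g(e, w) = (1 + e sin w) / sqrt(1 - e^2)`, so a minimal sketch consistent with the sampler call above (parameter order `(w, e)`, matching `solnx`) might look like the following; the prior bounds are assumptions:

```python
import numpy as np

def g_model(w, e):
    """Photoeccentric g(e, w): ratio of the eccentric to circular transit speed."""
    return (1 + e * np.sin(np.radians(w))) / np.sqrt(1 - e**2)

def log_probability(theta, g_mean, g_sigma):
    """Gaussian likelihood on g, with assumed flat priors on (w, e)."""
    w, e = theta
    if not (0.0 <= e < 1.0) or not (-180.0 <= w <= 180.0):
        return -np.inf  # outside the assumed prior support
    resid = g_model(w, e) - g_mean
    return -0.5 * (resid / g_sigma) ** 2
```

A circular orbit (`e = 0`) gives `g = 1` exactly, so a measured `g_mean` far from 1 is what pushes the posterior away from circularity.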
## Second Planet
```
# 30 minute cadence
cadence = 0.02142857142857143
time = np.arange(-300, 300, cadence)
i
# Define e and w, calculate flux from transit model
e = true_es[1]
w = true_ws[1]
flux = ph.integratedlc(time, period, rprs, a_rs, e, i, w, 0.0)
inc
# Adding some gaussian noise on the order of Kepler noise (by eyeball)
noise = np.random.normal(0,0.0001,len(time))
nflux = flux+noise
flux_err = np.array([0.0001]*len(nflux))
plt.errorbar(time, nflux, yerr=flux_err, fmt='o')
plt.xlabel('Time')
plt.ylabel('Flux')
plt.xlim(-0.5, 0.5)
plt.axvline(0.0, c='r', label='Transit midpoint')
plt.legend()
transitmpt = 0
midpoints = np.unique(np.sort(np.concatenate((np.arange(transitmpt, time[0], -period), np.arange(transitmpt, time[-1], period)))))
```
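The `midpoints` expression above builds the grid of expected transit times by stepping backwards and forwards from `transitmpt` in steps of `period`. A small self-contained demonstration, with stand-in values for `period` and `time` (not the notebook's):

```python
import numpy as np

period = 10.0
time = np.arange(-25.0, 25.0, 0.02)
transitmpt = 0.0

# Step backwards to time[0] and forwards to time[-1], then merge, sort, deduplicate
back = np.arange(transitmpt, time[0], -period)   # 0, -10, -20
fwd = np.arange(transitmpt, time[-1], period)    # 0, 10, 20
midpoints = np.unique(np.sort(np.concatenate((back, fwd))))
print(midpoints)  # [-20. -10.   0.  10.  20.]
```

`np.unique` removes the duplicated `transitmpt` that appears in both half-grids (and sorts anyway, which makes the explicit `np.sort` redundant but harmless).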
## Fitting the transit
```
# Remove Out of Transit Data
ttime = []
tflux = []
tflux_err = []
# Loop index is n, not i, so the inclination variable i is not overwritten
for n in range(len(midpoints)):
    m, b, t1bjd, t1, fnorm, fe1 = ph.do_linfit(time, nflux, flux_err, midpoints[n], 11, 5)
    ttime.append(t1bjd)
    tflux.append(fnorm)
    tflux_err.append(fe1)
ttime = np.array(ttime).flatten()
tflux = np.array(tflux).flatten()
tflux_err = np.array(tflux_err).flatten()
tflux = np.nan_to_num(tflux, nan=1.0)
tflux_err = np.nan_to_num(tflux_err, nan=np.nanmedian(tflux_err))
priortransform = [3., 27., 1., 0., 15., 64., 2., 88., 0.1, transitmpt]
nbuffer = 11
ms, bs, timesBJD, timesPhase, fluxNorm, fluxErrs, perDists, rpDists, arsDists, incDists, t0Dist = ph.fit_keplc_dynesty(KOI, midpoints, ttime, tflux, tflux_err, priortransform, arrlen, nbuffer, spectplanets, muirhead_comb)
perDists
np.savetxt('S2periods.csv', perDists, delimiter=',')
np.savetxt('S2rprs.csv', rpDists, delimiter=',')
np.savetxt('S2ars.csv', arsDists, delimiter=',')
np.savetxt('S2inc.csv', incDists, delimiter=',')
np.savetxt('S2t0.csv', t0Dist, delimiter=',')
t0Dists = t0Dist
per_f = ph.mode(perDists)
rprs_f = ph.mode(rpDists)
a_f = ph.mode(arsDists)
i_f = ph.mode(incDists)
t0_f = ph.mode(t0Dists)
# Create a light curve with the fit parameters
fit1 = ph.integratedlc_fitter(ttime, per_f, rprs_f, a_f, i_f, t0_f)
plt.errorbar(ttime, tflux, yerr=tflux_err, c='blue', alpha=0.5, label='Original LC')
plt.plot(ttime, fit1, c='red', alpha=1.0, label='Fit LC')
#plt.xlim(-0.1, 0.1)
plt.legend()
print('Stellar mass (Msun): ', mstar, 'Stellar radius (Rsun): ', rstar)
print('\n')
print('Input params:')
print('Rp/Rs: ', rprs)
print('a/Rs: ', a_rs)
print('i (deg): ', i)
print('\n')
print('Fit params:')
print('Rp/Rs: ', rprs_f)
print('a/Rs: ', a_f)
print('i (deg): ', i_f)
```
### Determining T14 and T23
```
# Use this planet's posteriors (the names below otherwise still hold the first planet's draws)
pdist = perDists
rdist = rpDists
adist = arsDists
idist = incDists
T14dist = ph.get_T14(pdist, rdist, adist, idist)
T14errs = ph.get_sigmas(T14dist)
T23dist = ph.get_T23(pdist, rdist, adist, idist)
T23errs = ph.get_sigmas(T23dist)
```
# Get $g$
```
gs, rho_c = ph.get_g_distribution(rho_star, pdist, rdist, T14dist, T23dist)
g_mean = ph.mode(gs)
g_sigma = np.mean(np.abs(ph.get_sigmas(gs)))
g_mean
g_sigma
#Guesses
w_guess = 0.0
e_guess = 0.0
solnx = (w_guess, e_guess)
pos = solnx + 1e-4 * np.random.randn(32, 2)
nwalkers, ndim = pos.shape
sampler = emcee.EnsembleSampler(nwalkers, ndim, ph.log_probability, args=(g_mean, g_sigma), threads=4)
sampler.run_mcmc(pos, 5000, progress=True);
labels = ["w", "e"]
flat_samples = sampler.get_chain(discard=100, thin=15, flat=True)
fig = corner.corner(flat_samples, labels=labels, title_kwargs={"fontsize": 12}, truths=[w, e], plot_contours=True)
```
## Third Planet
```
# 30 minute cadence
cadence = 0.02142857142857143
time = np.arange(-300, 300, cadence)
# Define e and w, calculate flux from transit model
e = true_es[2]
w = true_ws[2]
flux = ph.integratedlc(time, period, rprs, a_rs, e, i, w, 0.0)
# Adding some gaussian noise on the order of Kepler noise (by eyeball)
noise = np.random.normal(0,0.0001,len(time))
nflux = flux+noise
flux_err = np.array([0.0001]*len(nflux))
plt.errorbar(time, nflux, yerr=flux_err, fmt='o')
plt.xlabel('Time')
plt.ylabel('Flux')
plt.xlim(-0.5, 0.5)
plt.axvline(0.0, c='r', label='Transit midpoint')
plt.legend()
transitmpt = 0
midpoints = np.unique(np.sort(np.concatenate((np.arange(transitmpt, time[0], -period), np.arange(transitmpt, time[-1], period)))))
```
## Fitting the transit
```
# Remove Out of Transit Data
ttime = []
tflux = []
tflux_err = []
# Loop index is n, not i, so the inclination variable i is not overwritten
for n in range(len(midpoints)):
    m, b, t1bjd, t1, fnorm, fe1 = ph.do_linfit(time, nflux, flux_err, midpoints[n], 11, 5)
    ttime.append(t1bjd)
    tflux.append(fnorm)
    tflux_err.append(fe1)
ttime = np.array(ttime).flatten()
tflux = np.array(tflux).flatten()
tflux_err = np.array(tflux_err).flatten()
tflux = np.nan_to_num(tflux, nan=1.0)
tflux_err = np.nan_to_num(tflux_err, nan=np.nanmedian(tflux_err))
priortransform = [3., 27., 1., 0., 15., 64., 2., 88., 0.1, transitmpt]
nbuffer = 11
ms, bs, timesBJD, timesPhase, fluxNorm, fluxErrs, perDists, rpDists, arsDists, incDists, t0Dist = ph.fit_keplc_dynesty(KOI, midpoints, ttime, tflux, tflux_err, priortransform, arrlen, nbuffer, spectplanets, muirhead_comb)
perDists
np.savetxt('S3periods.csv', perDists, delimiter=',')
np.savetxt('S3rprs.csv', rpDists, delimiter=',')
np.savetxt('S3ars.csv', arsDists, delimiter=',')
np.savetxt('S3inc.csv', incDists, delimiter=',')
np.savetxt('S3t0.csv', t0Dist, delimiter=',')
t0Dists = t0Dist
per_f = ph.mode(perDists)
rprs_f = ph.mode(rpDists)
a_f = ph.mode(arsDists)
i_f = ph.mode(incDists)
t0_f = ph.mode(t0Dists)
# Create a light curve with the fit parameters
fit1 = ph.integratedlc_fitter(ttime, per_f, rprs_f, a_f, i_f, t0_f)
plt.errorbar(ttime, tflux, yerr=tflux_err, c='blue', alpha=0.5, label='Original LC')
plt.plot(ttime, fit1, c='red', alpha=1.0, label='Fit LC')
#plt.xlim(-0.1, 0.1)
plt.legend()
print('Stellar mass (Msun): ', mstar, 'Stellar radius (Rsun): ', rstar)
print('\n')
print('Input params:')
print('Rp/Rs: ', rprs)
print('a/Rs: ', a_rs)
print('i (deg): ', i)
print('\n')
print('Fit params:')
print('Rp/Rs: ', rprs_f)
print('a/Rs: ', a_f)
print('i (deg): ', i_f)
```
### Determining T14 and T23
```
# Use this planet's posteriors (the names below otherwise still hold an earlier planet's draws)
pdist = perDists
rdist = rpDists
adist = arsDists
idist = incDists
T14dist = ph.get_T14(pdist, rdist, adist, idist)
T14errs = ph.get_sigmas(T14dist)
T23dist = ph.get_T23(pdist, rdist, adist, idist)
T23errs = ph.get_sigmas(T23dist)
```
# Get $g$
```
gs, rho_c = ph.get_g_distribution(rho_star, pdist, rdist, T14dist, T23dist)
g_mean = ph.mode(gs)
g_sigma = np.mean(np.abs(ph.get_sigmas(gs)))
g_mean
g_sigma
#Guesses
w_guess = 0.0
e_guess = 0.0
solnx = (w_guess, e_guess)
pos = solnx + 1e-4 * np.random.randn(32, 2)
nwalkers, ndim = pos.shape
sampler = emcee.EnsembleSampler(nwalkers, ndim, ph.log_probability, args=(g_mean, g_sigma), threads=4)
sampler.run_mcmc(pos, 5000, progress=True);
labels = ["w", "e"]
flat_samples = sampler.get_chain(discard=100, thin=15, flat=True)
fig = corner.corner(flat_samples, labels=labels, title_kwargs={"fontsize": 12}, truths=[w, e], plot_contours=True)
```
## Probability Grid
```
mumesh = np.linspace(0, 1, 100)
sigmesh = np.linspace(0.01, 0.3, 100)
mus, sigmas = np.meshgrid(mumesh, sigmesh)
# Draw 100 values from each e distribution
fit_es = [np.random.normal(loc=0.2, scale=0.05, size=100), np.random.normal(loc=0.3, scale=0.05, size=100), np.random.normal(loc=0.4, scale=0.05, size=100)]
fit_ws = [np.random.normal(loc=90, scale=10, size=100), np.random.normal(loc=-90, scale=10, size=100), np.random.normal(loc=0.0, scale=10, size=100)]
import scipy
# for each planet
# Planet 1: true_es[0]
pethetasum1 = np.zeros((100,100))
# Calculating p(obs|theta) for 10,000 grid points, for N posterior values for 1 planet
for n1 in tqdm(range(len(mus))): # For each grid point x
    for n2 in range(len(mus[0])): # For each grid point y
        mu_test = mus[n1][n2]
        sig_test = sigmas[n1][n2] # x, y of grid point
        for N in range(len(fit_es[0])): # For each posterior value (out of 100)
            pethetasum1[n1][n2] += scipy.stats.norm.pdf(fit_es[0][N], loc=mu_test, scale=sig_test)
fig, ax = plt.subplots(figsize=(6,6))
ax.imshow(pethetasum1, extent=[0, 1, 0.01, 0.3], aspect = 'auto')
ax.set_xlabel('mean eccentricity')
ax.set_ylabel('sigma')
# Planet 2: true_es[1]
pethetasum2 = np.zeros((100,100))
# Calculating p(obs|theta) for 10,000 grid points, for N posterior values for 1 planet
for n1 in tqdm(range(len(mus))): # For each grid point x
    for n2 in range(len(mus[0])): # For each grid point y
        mu_test = mus[n1][n2]
        sig_test = sigmas[n1][n2] # x, y of grid point
        for N in range(len(fit_es[1])): # For each posterior value (out of 100)
            pethetasum2[n1][n2] += scipy.stats.norm.pdf(fit_es[1][N], loc=mu_test, scale=sig_test)
fig, ax = plt.subplots(figsize=(6,6))
ax.imshow(pethetasum2, extent=[0, 1, 0.01, 0.3], aspect = 'auto')
ax.set_xlabel('mean eccentricity')
ax.set_ylabel('sigma')
# Planet 3: true_es[2]
pethetasum3 = np.zeros((100,100))
# Calculating p(obs|theta) for 10,000 grid points, for N posterior values for 1 planet
for n1 in tqdm(range(len(mus))): # For each grid point x
    for n2 in range(len(mus[0])): # For each grid point y
        mu_test = mus[n1][n2]
        sig_test = sigmas[n1][n2] # x, y of grid point
        for N in range(len(fit_es[2])): # For each posterior value (out of 100)
            pethetasum3[n1][n2] += scipy.stats.norm.pdf(fit_es[2][N], loc=mu_test, scale=sig_test)
fig, ax = plt.subplots(figsize=(6,6))
ax.imshow(pethetasum3, extent=[0, 1, 0.01, 0.3], aspect = 'auto')
ax.set_xlabel('mean eccentricity')
ax.set_ylabel('sigma')
P = pethetasum1*pethetasum2*pethetasum3
P = P/np.sqrt(100*100)
fig, ax = plt.subplots(figsize=(6,6))
ax.imshow(P, extent=[0, 1, 0.01, 0.3], aspect = 'auto')
ax.set_xlabel('mean eccentricity')
ax.set_ylabel('sigma')
```
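The triple loops above cost grid × grid × samples PDF evaluations in pure Python. The same sums can be computed with a single broadcast call; a sketch using stand-in posterior draws (not the notebook's `fit_es` arrays):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mumesh = np.linspace(0, 1, 100)
sigmesh = np.linspace(0.01, 0.3, 100)
mus, sigmas = np.meshgrid(mumesh, sigmesh)

samples = rng.normal(loc=0.2, scale=0.05, size=100)  # stand-in for one planet's e draws

# Broadcast the (100, 100, 1) grid against the (100,) samples, then sum over samples
pethetasum = stats.norm.pdf(samples, loc=mus[..., None], scale=sigmas[..., None]).sum(axis=-1)
assert pethetasum.shape == (100, 100)
```

This produces the same array as the nested loops, typically orders of magnitude faster.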
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Using the SavedModel format
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/beta/guide/saved_model"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/guide/saved_model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/guide/saved_model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/r2/guide/saved_model.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
A SavedModel contains a complete TensorFlow program, including weights and computation. It does not require the original model building code to run, which makes it useful for sharing or deploying (with [TFLite](https://tensorflow.org/lite), [TensorFlow.js](https://js.tensorflow.org/), [TensorFlow Serving](https://www.tensorflow.org/tfx/serving/tutorials/Serving_REST_simple), or [TFHub](https://tensorflow.org/hub)).
If you have code for a model in Python and want to load weights into it, see the [guide to training checkpoints](./checkpoints.ipynb).
For a quick introduction, this section exports a pre-trained Keras model and serves image classification requests with it. The rest of the guide will fill in details and discuss other ways to create SavedModels.
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
    %tensorflow_version 2.x # Colab only.
except Exception:
    pass
import tensorflow as tf
from matplotlib import pyplot as plt
import numpy as np
file = tf.keras.utils.get_file(
"grace_hopper.jpg",
"https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg")
img = tf.keras.preprocessing.image.load_img(file, target_size=[224, 224])
plt.imshow(img)
plt.axis('off')
x = tf.keras.preprocessing.image.img_to_array(img)
x = tf.keras.applications.mobilenet.preprocess_input(
x[tf.newaxis,...])
```
We'll use an image of Grace Hopper as a running example, and a Keras pre-trained image classification model since it's easy to use. Custom models work too, and are covered in detail later.
```
#tf.keras.applications.vgg19.decode_predictions
labels_path = tf.keras.utils.get_file('ImageNetLabels.txt','https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())
pretrained_model = tf.keras.applications.MobileNet()
result_before_save = pretrained_model(x)
print()
decoded = imagenet_labels[np.argsort(result_before_save)[0,::-1][:5]+1]
print("Result before saving:\n", decoded)
```
The top prediction for this image is "military uniform".
```
tf.saved_model.save(pretrained_model, "/tmp/mobilenet/1/")
```
The save-path follows a convention used by TensorFlow Serving where the last path component (`1/` here) is a version number for your model - it allows tools like TensorFlow Serving to reason about relative freshness.
SavedModels have named functions called signatures. Keras models export their forward pass under the `serving_default` signature key. The [SavedModel command line interface](#saved_model_cli) is useful for inspecting SavedModels on disk:
```
!saved_model_cli show --dir /tmp/mobilenet/1 --tag_set serve --signature_def serving_default
```
We can load the SavedModel back into Python with `tf.saved_model.load` and see how Admiral Hopper's image is classified.
```
loaded = tf.saved_model.load("/tmp/mobilenet/1/")
print(list(loaded.signatures.keys())) # ["serving_default"]
```
Imported signatures always return dictionaries.
```
infer = loaded.signatures["serving_default"]
print(infer.structured_outputs)
```
Running inference from the SavedModel gives the same result as the original model.
```
labeling = infer(tf.constant(x))[pretrained_model.output_names[0]]
decoded = imagenet_labels[np.argsort(labeling)[0,::-1][:5]+1]
print("Result after saving and loading:\n", decoded)
```
## Serving the model
SavedModels are usable from Python, but production environments typically use a dedicated service for inference. This is easy to set up from a SavedModel using TensorFlow Serving.
See the [TensorFlow Serving REST tutorial](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/tutorials/Serving_REST_simple.ipynb) for more details about serving, including instructions for installing `tensorflow_model_server` in a notebook or on your local machine. As a quick sketch, to serve the `mobilenet` model exported above just point the model server at the SavedModel directory:
```bash
nohup tensorflow_model_server \
--rest_api_port=8501 \
--model_name=mobilenet \
--model_base_path="/tmp/mobilenet" >server.log 2>&1
```
Then send a request.
```python
!pip install requests
import json
import numpy
import requests
data = json.dumps({"signature_name": "serving_default",
"instances": x.tolist()})
headers = {"content-type": "application/json"}
json_response = requests.post('http://localhost:8501/v1/models/mobilenet:predict',
data=data, headers=headers)
predictions = numpy.array(json.loads(json_response.text)["predictions"])
```
The resulting `predictions` are identical to the results from Python.
### SavedModel format
A SavedModel is a directory containing serialized signatures and the state needed to run them, including variable values and vocabularies.
```
!ls /tmp/mobilenet/1 # assets saved_model.pb variables
```
The `saved_model.pb` file contains a set of named signatures, each identifying a function.
SavedModels may contain multiple sets of signatures (multiple MetaGraphs, identified with the `tag_set` argument to `saved_model_cli`), but this is rare. APIs which create multiple sets of signatures include [`tf.Estimator.experimental_export_all_saved_models`](https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator#experimental_export_all_saved_models) and in TensorFlow 1.x `tf.saved_model.Builder`.
```
!saved_model_cli show --dir /tmp/mobilenet/1 --tag_set serve
```
The `variables` directory contains a standard training checkpoint (see the [guide to training checkpoints](./checkpoints.ipynb)).
```
!ls /tmp/mobilenet/1/variables
```
The `assets` directory contains files used by the TensorFlow graph, for example text files used to initialize vocabulary tables. It is unused in this example.
SavedModels may have an `assets.extra` directory for any files not used by the TensorFlow graph, for example information for consumers about what to do with the SavedModel. TensorFlow itself does not use this directory.
### Exporting custom models
In the first section, `tf.saved_model.save` automatically determined a signature for the `tf.keras.Model` object. This worked because Keras `Model` objects have an unambiguous method to export and known input shapes. `tf.saved_model.save` works just as well with low-level model building APIs, but you will need to indicate which function to use as a signature if you're planning to serve a model.
```
class CustomModule(tf.Module):
    def __init__(self):
        super(CustomModule, self).__init__()
        self.v = tf.Variable(1.)

    @tf.function
    def __call__(self, x):
        return x * self.v

    @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
    def mutate(self, new_v):
        self.v.assign(new_v)

module = CustomModule()
```
This module has two methods decorated with `tf.function`. While these functions will be included in the SavedModel and available if the SavedModel is reloaded via `tf.saved_model.load` into a Python program, without an explicitly declared serving signature, tools like TensorFlow Serving and `saved_model_cli` cannot access them.
`module.mutate` has an `input_signature`, so there is already enough information to save its computation graph in the SavedModel. `__call__` has no signature, so it needs to be called (traced) before saving.
```
module(tf.constant(0.))
tf.saved_model.save(module, "/tmp/module_no_signatures")
```
For functions without an `input_signature`, any input shapes used before saving will be available after loading. Since we called `__call__` with just a scalar, it will accept only scalar values.
```
imported = tf.saved_model.load("/tmp/module_no_signatures")
assert 3. == imported(tf.constant(3.)).numpy()
imported.mutate(tf.constant(2.))
assert 6. == imported(tf.constant(3.)).numpy()
```
The function will not accept new shapes like vectors.
```python
imported(tf.constant([3.]))
```
<pre>
ValueError: Could not find matching function to call for canonicalized inputs ((<tf.Tensor 'args_0:0' shape=(1,) dtype=float32>,), {}). Only existing signatures are [((TensorSpec(shape=(), dtype=tf.float32, name=u'x'),), {})].
</pre>
`get_concrete_function` lets you add input shapes to a function without calling it. It takes `tf.TensorSpec` objects in place of `Tensor` arguments, indicating the shapes and dtypes of inputs. Shapes can either be `None`, indicating that any shape is acceptable, or a list of axis sizes. If an axis size is `None` then any size is acceptable for that axis. `tf.TensorSpecs` can also have names, which default to the function's argument keywords ("x" here).
```
module.__call__.get_concrete_function(x=tf.TensorSpec([None], tf.float32))
tf.saved_model.save(module, "/tmp/module_no_signatures")
imported = tf.saved_model.load("/tmp/module_no_signatures")
assert [3.] == imported(tf.constant([3.])).numpy()
```
Functions and variables attached to objects like `tf.keras.Model` and `tf.Module` are available on import, but many Python types and attributes are lost. The Python program itself is not saved in the SavedModel.
We didn't identify any of the functions we exported as a signature, so the SavedModel has none.
```
!saved_model_cli show --dir /tmp/module_no_signatures --tag_set serve
```
## Identifying a signature to export
To indicate that a function should be a signature, specify the `signatures` argument when saving.
```
call = module.__call__.get_concrete_function(tf.TensorSpec(None, tf.float32))
tf.saved_model.save(module, "/tmp/module_with_signature", signatures=call)
```
Notice that we first converted the `tf.function` to a `ConcreteFunction` with `get_concrete_function`. This is necessary because the function was created without a fixed `input_signature`, and so did not have a definite set of `Tensor` inputs associated with it.
```
!saved_model_cli show --dir /tmp/module_with_signature --tag_set serve --signature_def serving_default
imported = tf.saved_model.load("/tmp/module_with_signature")
signature = imported.signatures["serving_default"]
assert [3.] == signature(x=tf.constant([3.]))["output_0"].numpy()
imported.mutate(tf.constant(2.))
assert [6.] == signature(x=tf.constant([3.]))["output_0"].numpy()
assert 2. == imported.v.numpy()
```
We exported a single signature, and its key defaulted to "serving_default". To export multiple signatures, pass a dictionary.
```
@tf.function(input_signature=[tf.TensorSpec([], tf.string)])
def parse_string(string_input):
    return imported(tf.strings.to_number(string_input))
signatures = {"serving_default": parse_string,
"from_float": imported.signatures["serving_default"]}
tf.saved_model.save(imported, "/tmp/module_with_multiple_signatures", signatures)
!saved_model_cli show --dir /tmp/module_with_multiple_signatures --tag_set serve
```
`saved_model_cli` can also run SavedModels directly from the command line.
```
!saved_model_cli run --dir /tmp/module_with_multiple_signatures --tag_set serve --signature_def serving_default --input_exprs="string_input='3.'"
!saved_model_cli run --dir /tmp/module_with_multiple_signatures --tag_set serve --signature_def from_float --input_exprs="x=3."
```
## Fine-tuning imported models
Variable objects are available, and we can backprop through imported functions.
```
optimizer = tf.optimizers.SGD(0.05)
def train_step():
    with tf.GradientTape() as tape:
        loss = (10. - imported(tf.constant(2.))) ** 2
    variables = tape.watched_variables()
    grads = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(grads, variables))
    return loss

for _ in range(10):
    # "v" approaches 5, "loss" approaches 0
    print("loss={:.2f} v={:.2f}".format(train_step(), imported.v.numpy()))
```
## Control flow in SavedModels
Anything that can go in a `tf.function` can go in a SavedModel. With [AutoGraph](./autograph.ipynb) this includes conditional logic which depends on Tensors, specified with regular Python control flow.
```
@tf.function(input_signature=[tf.TensorSpec([], tf.int32)])
def control_flow(x):
    if x < 0:
        tf.print("Invalid!")
    else:
        tf.print(x % 3)
to_export = tf.Module()
to_export.control_flow = control_flow
tf.saved_model.save(to_export, "/tmp/control_flow")
imported = tf.saved_model.load("/tmp/control_flow")
imported.control_flow(tf.constant(-1)) # Invalid!
imported.control_flow(tf.constant(2)) # 2
imported.control_flow(tf.constant(3)) # 0
```
## SavedModels from Estimators
Estimators export SavedModels through [`tf.Estimator.export_saved_model`](https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator#export_saved_model). See the [guide to Estimator](https://www.tensorflow.org/guide/estimators) for details.
```
input_column = tf.feature_column.numeric_column("x")
estimator = tf.estimator.LinearClassifier(feature_columns=[input_column])
def input_fn():
    return tf.data.Dataset.from_tensor_slices(
        ({"x": [1., 2., 3., 4.]}, [1, 1, 0, 0])).repeat(200).shuffle(64).batch(16)
estimator.train(input_fn)
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
tf.feature_column.make_parse_example_spec([input_column]))
export_path = estimator.export_saved_model(
"/tmp/from_estimator/", serving_input_fn)
```
This SavedModel accepts serialized `tf.Example` protocol buffers, which are useful for serving. But we can also load it with `tf.saved_model.load` and run it from Python.
```
imported = tf.saved_model.load(export_path)
def predict(x):
    example = tf.train.Example()
    example.features.feature["x"].float_list.value.extend([x])
    return imported.signatures["predict"](
        examples=tf.constant([example.SerializeToString()]))
print(predict(1.5))
print(predict(3.5))
```
`tf.estimator.export.build_raw_serving_input_receiver_fn` allows you to create input functions which take raw tensors rather than `tf.train.Example`s.
## Load a SavedModel in C++
The C++ version of the SavedModel [loader](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/cc/saved_model/loader.h) provides an API to load a SavedModel from a path, while allowing SessionOptions and RunOptions. You have to specify the tags associated with the graph to be loaded. The loaded version of SavedModel is referred to as SavedModelBundle and contains the MetaGraphDef and the session within which it is loaded.
```C++
const string export_dir = ...
SavedModelBundle bundle;
...
LoadSavedModel(session_options, run_options, export_dir, {kSavedModelTagTrain},
&bundle);
```
<a id="saved_model_cli"></a>
## Details of the SavedModel command line interface
You can use the SavedModel Command Line Interface (CLI) to inspect and
execute a SavedModel.
For example, you can use the CLI to inspect the model's `SignatureDef`s.
The CLI enables you to quickly confirm that the input
Tensor dtype and shape match the model. Moreover, if you
want to test your model, you can use the CLI to do a sanity check by
passing in sample inputs in various formats (for example, Python
expressions) and then fetching the output.
### Install the SavedModel CLI
Broadly speaking, you can install TensorFlow in either of the following
two ways:
* By installing a pre-built TensorFlow binary.
* By building TensorFlow from source code.
If you installed TensorFlow through a pre-built TensorFlow binary,
then the SavedModel CLI is already installed on your system
at pathname `bin/saved_model_cli`.
If you built TensorFlow from source code, you must run the following
additional command to build `saved_model_cli`:
```
$ bazel build tensorflow/python/tools:saved_model_cli
```
### Overview of commands
The SavedModel CLI supports the following two commands on a
`MetaGraphDef` in a SavedModel:
* `show`, which shows the computations available from a `MetaGraphDef` in a SavedModel.
* `run`, which runs a computation on a `MetaGraphDef`.
### `show` command
A SavedModel contains one or more `MetaGraphDef`s, identified by their tag-sets.
To serve a model, you might wonder what kind of `SignatureDef`s are in each model, and what their inputs and outputs are. The `show` command lets you examine the contents of the SavedModel in hierarchical order. Here's the syntax:
```
usage: saved_model_cli show [-h] --dir DIR [--all]
[--tag_set TAG_SET] [--signature_def SIGNATURE_DEF_KEY]
```
For example, the following command shows all available
MetaGraphDef tag-sets in the SavedModel:
```
$ saved_model_cli show --dir /tmp/saved_model_dir
The given SavedModel contains the following tag-sets:
serve
serve, gpu
```
The following command shows all available `SignatureDef` keys in
a `MetaGraphDef`:
```
$ saved_model_cli show --dir /tmp/saved_model_dir --tag_set serve
The given SavedModel `MetaGraphDef` contains `SignatureDefs` with the
following keys:
SignatureDef key: "classify_x2_to_y3"
SignatureDef key: "classify_x_to_y"
SignatureDef key: "regress_x2_to_y3"
SignatureDef key: "regress_x_to_y"
SignatureDef key: "regress_x_to_y2"
SignatureDef key: "serving_default"
```
If a `MetaGraphDef` has *multiple* tags in the tag-set, you must specify
all tags, each tag separated by a comma. For example:
<pre>
$ saved_model_cli show --dir /tmp/saved_model_dir --tag_set serve,gpu
</pre>
To show the `TensorInfo` for all inputs and outputs of a specific `SignatureDef`, pass the `SignatureDef` key to the `--signature_def` option. This is very useful when you want to know the tensor key, dtype, and shape of the input tensors for executing the computation graph later. For example:
```
$ saved_model_cli show --dir \
/tmp/saved_model_dir --tag_set serve --signature_def serving_default
The given SavedModel SignatureDef contains the following input(s):
inputs['x'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: x:0
The given SavedModel SignatureDef contains the following output(s):
outputs['y'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: y:0
Method name is: tensorflow/serving/predict
```
To show all available information in the SavedModel, use the `--all` option.
For example:
<pre>
$ saved_model_cli show --dir /tmp/saved_model_dir --all
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['classify_x2_to_y3']:
The given SavedModel SignatureDef contains the following input(s):
inputs['inputs'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: x2:0
The given SavedModel SignatureDef contains the following output(s):
outputs['scores'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: y3:0
Method name is: tensorflow/serving/classify
...
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['x'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: x:0
The given SavedModel SignatureDef contains the following output(s):
outputs['y'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: y:0
Method name is: tensorflow/serving/predict
</pre>
### `run` command
Invoke the `run` command to run a graph computation, passing
inputs and then displaying (and optionally saving) the outputs.
Here's the syntax:
```
usage: saved_model_cli run [-h] --dir DIR --tag_set TAG_SET --signature_def
SIGNATURE_DEF_KEY [--inputs INPUTS]
[--input_exprs INPUT_EXPRS]
[--input_examples INPUT_EXAMPLES] [--outdir OUTDIR]
[--overwrite] [--tf_debug]
```
The `run` command provides the following three ways to pass inputs to the model:
* The `--inputs` option enables you to pass numpy ndarrays in files.
* The `--input_exprs` option enables you to pass Python expressions.
* The `--input_examples` option enables you to pass `tf.train.Example` data.
#### `--inputs`
To pass input data in files, specify the `--inputs` option, which takes the
following general format:
```bsh
--inputs <INPUTS>
```
where *INPUTS* is either of the following formats:
* `<input_key>=<filename>`
* `<input_key>=<filename>[<variable_name>]`
You may pass multiple *INPUTS*, separated by semicolons.
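The `<input_key>=<filename>[<variable_name>]` grammar is easy to mishandle. Purely as an illustration (this is a hypothetical helper, not part of `saved_model_cli`), a semicolon-separated `--inputs` string could be parsed like this:

```python
import re

# Hypothetical parser for the --inputs grammar described above
_INPUT_RE = re.compile(r"^(?P<key>[^=]+)=(?P<filename>[^\[\]]+)(?:\[(?P<var>[^\[\]]+)\])?$")

def parse_inputs(inputs_str):
    """Split a semicolon-separated --inputs string into (key, filename, variable_name)."""
    parsed = []
    for part in inputs_str.split(";"):
        m = _INPUT_RE.match(part.strip())
        if m is None:
            raise ValueError(f"malformed input: {part!r}")
        parsed.append((m.group("key"), m.group("filename"), m.group("var")))
    return parsed

print(parse_inputs("x=data.npy;y=arrs.npz[flux]"))
# [('x', 'data.npy', None), ('y', 'arrs.npz', 'flux')]
```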
`saved_model_cli` uses `numpy.load` to load the *filename*.
The *filename* may be in any of the following formats:
* `.npy`
* `.npz`
* pickle format
A `.npy` file always contains a numpy ndarray. Therefore, when loading from
a `.npy` file, the content will be directly assigned to the specified input
tensor. If you specify a *variable_name* with that `.npy` file, the
*variable_name* will be ignored and a warning will be issued.
When loading from a `.npz` (zip) file, you may optionally specify a
*variable_name* to identify the variable within the zip file to load for
the input tensor key. If you don't specify a *variable_name*, the SavedModel
CLI will check that only one file is included in the zip file and load it
for the specified input tensor key.
When loading from a pickle file, if no *variable_name* is specified in the
square brackets, whatever is inside the pickle file will be passed to the
specified input tensor key. Otherwise, the SavedModel CLI will assume a
dictionary is stored in the pickle file and will use the value corresponding
to the *variable_name*.
#### `--input_exprs`
To pass inputs through Python expressions, specify the `--input_exprs` option.
This can be useful when you don't have data files on hand, but still want to
sanity-check the model with some simple inputs that match the dtype and shape
of the model's `SignatureDef`s.
For example:
```bash
<input_key>=[[1],[2],[3]]
```
In addition to Python expressions, you may also pass numpy functions. For
example:
```bash
<input_key>=np.ones((32,32,3))
```
(Note that the `numpy` module is already available to you as `np`.)
#### `--input_examples`
To pass `tf.train.Example` as inputs, specify the `--input_examples` option.
For each input key, it takes a list of dictionaries, where each dictionary is
an instance of `tf.train.Example`. The dictionary keys are the feature names
and the values are the value lists for each feature.
For example:
```bash
<input_key>=[{"age":[22,24],"education":["BS","MS"]}]
```
#### Save output
By default, the SavedModel CLI writes output to stdout. If a directory is
passed to the `--outdir` option, the outputs will be saved as `.npy` files
named after the output tensor keys under the given directory.
Use `--overwrite` to overwrite existing output files.
# Import Dependencies
```
import warnings
warnings.filterwarnings('ignore')
import keras
import matplotlib.pyplot as plt
```
## Define Types
```
from typing import Tuple
ImageShape = Tuple[int, int]
GrayScaleImageShape = Tuple[int, int, int]
```
# MNIST Sandbox Baseline Example
This sandbox example is meant mostly to establish a few baselines for model performance to compare against, and also to get the basic Keras neural network architecture set up. I split the training and testing data and then one-hot encode the targets (one column per target, so ten columns after encoding).
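The one-hot encoding step can be illustrated with a minimal NumPy version of what `keras.utils.to_categorical` does (a sketch for intuition, not a replacement for the Keras helper):

```python
import numpy as np

def one_hot(y, num_classes):
    """Return a (len(y), num_classes) matrix with a single 1 per row."""
    out = np.zeros((len(y), num_classes), dtype=np.float32)
    out[np.arange(len(y)), y] = 1.0
    return out

# each digit label becomes a row with a 1 in the matching column
encoded = one_hot(np.array([3, 0, 1]), 10)
```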
```
from keras.datasets import mnist
import matplotlib.pyplot as plt
from typing import Tuple
import numpy as np
Dataset = Tuple[np.ndarray, np.ndarray]
#download mnist data and split into train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print(f"The shape of X_train is {X_train.shape}")
print(f"The shape of y_train is {y_train.shape}")
print(f"The shape of X_test is {X_test.shape}")
print(f"The shape of y_test is {y_test.shape} - some example targets: {y_test[:5]}")
mnist_image_shape: ImageShape = X_train.shape[1:]
print(mnist_image_shape)
from keras.utils import to_categorical
OneHotEncodedTarget = np.ndarray
Categories = int
encoded_y_train: OneHotEncodedTarget = to_categorical(y_train)
encoded_y_test: OneHotEncodedTarget = to_categorical(y_test)
print(f"One-hot encoding y_train {y_train.shape} -> {encoded_y_train.shape}")
print(f"One-hot encoding y_test {y_test.shape} -> {encoded_y_test.shape}")
K: Categories = encoded_y_test.shape[1]
```
# Vanilla CNN Implementation
Build a vanilla CNN with two convolutional layers of 64 and 32 filters respectively, each with a `3 x 3` kernel. The feature maps are then flattened and fed into a final softmax dense layer for classification.
```
from keras.models import Sequential, Model
from keras.layers import Dense, Conv2D, Flatten, Input
from tensorflow.python.framework.ops import Tensor
import warnings
warnings.filterwarnings('ignore')
# define model architecture and hyperparameters
NUM_FILTERS_L1 = 64
NUM_FILTERS_L2 = 32
KERNEL_SIZE = 3
# the images are 28 x 28 (pixel size) x 1 (grayscale - if RGB, then 3)
input_dims: GrayScaleImageShape = (28,28,1)
def build_vanilla_cnn(filters_layer1:int, filters_layer2:int, kernel_size:int, input_dims: GrayScaleImageShape)-> Model:
inputs: Tensor = Input(shape=input_dims)
x: Tensor = Conv2D(filters=filters_layer1, kernel_size=kernel_size, activation='relu')(inputs)
x: Tensor = Conv2D(filters=filters_layer2, kernel_size=kernel_size, activation='relu')(x)
x: Tensor = Flatten()(x)
predictions = Dense(K, activation="softmax")(x)
print(predictions)
#compile model using accuracy to measure model performance
model: Model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='adam', loss="categorical_crossentropy", metrics=['accuracy'])
return model
model: Model = build_vanilla_cnn(NUM_FILTERS_L1, NUM_FILTERS_L2, KERNEL_SIZE, input_dims)
```
## Helper Function to Expand Tensor Dimensions By 1
```
X_train.reshape((60000,1,28,28))
def expand_tensor_shape(X_train: np.ndarray)-> np.ndarray:
new_shape: Tuple = X_train.shape + (1,)
# new_tensor = X_train.reshape(new_shape).reshape((-1,1,28,28))
new_tensor = X_train.reshape(new_shape)
print(f"Expanding shape from {X_train.shape} to {new_tensor.shape}")
return new_tensor
X_train_expanded: np.ndarray = expand_tensor_shape(X_train)
X_test_expanded: np.ndarray = expand_tensor_shape(X_test)
# train model and retrieve history
# from keras.callbacks import History
# history: History = model.fit(X_train_expanded, encoded_y_train,
# validation_data=(X_test_expanded, encoded_y_test), epochs=2, batch_size=2058)
```
## Global Average Pooling Layer
With Keras' default channels-last layout, the output shape of a convolutional layer is `batch size x height x width x number of filters`. The GAP layer takes the average over the height/width axes and returns a vector of length equal to the number of filters.
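In NumPy terms (channels-last, with hypothetical shapes), global average pooling is just a mean over the spatial axes:

```python
import numpy as np

# feature maps: (batch, height, width, filters), channels-last
feature_maps = np.random.rand(2, 5, 5, 8).astype(np.float32)

# GAP averages over height and width, leaving (batch, filters)
gap = feature_maps.mean(axis=(1, 2))
```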
```
from keras import backend as K
from keras.models import Model
from keras.layers import Dense, Conv2D, Flatten, Input, MaxPool2D, Layer, Lambda
from tensorflow.python.framework.ops import Tensor
# sanity check: channels-first view of the expanded training tensor
np.reshape(X_train_expanded, (-1,1,28,28)).shape
def global_average_pooling(x: Layer):
return K.mean(x, axis = (2,3))
def global_average_pooling_shape(input_shape):
# return the dimensions corresponding with batch size and number of filters
return (input_shape[0], input_shape[-1])
def build_global_average_pooling_layer(function, output_shape):
    # the parameter is named `function`, not `pooling_function`
    return Lambda(function, output_shape=output_shape)
inputs: Tensor = Input(shape=(28,28,1))
x: Tensor = Conv2D(filters=32, kernel_size=5, activation='relu')(inputs)
# x: Tensor = MaxPool2D()(x)
# x: Tensor = Conv2D(filters=64, kernel_size=5, activation='relu')(x)
x: Tensor = Lambda(lambda x: K.mean(x, axis=(1,2)), output_shape=global_average_pooling_shape)(x)
# x: Tensor = Dense(128, activation="relu")(x)
predictions: Tensor = Dense(10, activation="softmax")(x)
model: Model = Model(inputs=inputs, outputs=predictions)
model.summary()
model.compile(optimizer='adam', loss="categorical_crossentropy", metrics=['accuracy'])
from keras.callbacks import History
history: History = model.fit(X_train_expanded, encoded_y_train,
validation_data=(X_test_expanded, encoded_y_test), epochs=100, batch_size=5126)
```
## Save the Class Activation Model Weights
```
import cv2
from keras.layers import Layer, Lambda
def global_average_pooling(x: Layer):
return K.mean(x, axis = (2,3))
def global_average_pooling_shape(input_shape):
# return only the first two dimensions (batch size and number of filters)
return input_shape[0:2]
def build_global_average_pooling_layer(function, output_shape):
    return Lambda(function, output_shape=output_shape)
def get_output_layer(model, layer_name):
# get the symbolic outputs of each "key" layer (we gave them unique names).
layer_dict = dict([(layer.name, layer) for layer in model.layers])
layer = layer_dict[layer_name]
return layer
# persist mode
save_filepath: str = "basic_cam.h5"
model.save(save_filepath)
first_image = X_train[5]
first_image = first_image.reshape(28,28,1)
img = np.array(first_image).reshape(1, 28, 28, 1)
img.shape
plt.imshow(img.reshape((28,28)))
#img = np.array([np.transpose(np.float32(first_image), (2, 0, 1))])
```
## Load Basic Model
(Since the model files are so large, they cannot be pushed to GitHub; email me for a copy of the `.h5` model files.)
```
from keras.models import load_model
model = load_model("basic_cam.h5")
dense_10_layer: Layer = model.layers[-1]
dense_10_weights = dense_10_layer.get_weights()[0]
print(f"Dense 10 weights: {dense_10_weights.shape}")
dense_128_layer: Layer = model.layers[-2]
dense_128_weights = dense_128_layer.get_weights()[0]
print(f"Dense 128 weights: {dense_128_weights.shape}")
```
## Map the Final Class Activation Map Back to the Original Input Shapes and Visualize
```
import keras.backend as K
class_weights = model.layers[-1].get_weights()[0]
final_conv_layer = get_output_layer(model, "conv2d_1")
get_output = K.function([model.layers[0].input], [final_conv_layer.output, model.layers[-1].output])
[conv_outputs, predictions] = get_output([img])
conv_outputs = conv_outputs[0,:,:,:]
print(conv_outputs.shape)
print(class_weights.shape)
def make_cam(conv_outputs, class_weights, original_shape, target_class):
cam = np.zeros(dtype=np.float32, shape = conv_outputs.shape[0:2])
for i, w in enumerate(class_weights[:, target_class]):
cam += w * conv_outputs[:,:,i]
cam /= np.max(cam)
return cv2.resize(cam, (28, 28))
def make_heatmap(cam):
heatmap = cv2.applyColorMap(np.uint8(255*cam), cv2.COLORMAP_JET)
heatmap[np.where(cam < 0.1)] = 0
return heatmap
cam = make_cam(conv_outputs, class_weights, original_shape=(28,28), target_class=2)
false_cam = make_cam(conv_outputs, class_weights, original_shape=(28,28), target_class=4)
false2_cam = make_cam(conv_outputs, class_weights, original_shape=(28,28), target_class=5)
heatmap = make_heatmap(cam)
false_heatmap = make_heatmap(false_cam)
false2_heatmap = make_heatmap(false2_cam)
new_img = heatmap*0.5 + img
final_img = new_img.reshape((28,28,3))
# f, axarr = plt.subplots(2,1)
# axarr[0,0].imshow(heatmap)
# axarr[0,1].imshow(img.reshape(28,28))
imgs = [heatmap, img.reshape(28,28)]
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=(15, 15))
axes[0].imshow(heatmap)
axes[0].set_title("Activation Map for 2")
axes[1].imshow(false_heatmap)
axes[1].set_title("Activation Map for 4")
axes[2].imshow(false2_heatmap)
axes[2].set_title("Activation Map for 5")
axes[3].imshow(img.reshape((28,28)))
axes[3].set_title("True Image")
import matplotlib.pyplot as plt
plt.imshow(img.reshape((28,28)))
cam /= np.max(cam)
import keras.backend as K
from tensorflow.python.framework.ops import Tensor
dense_weights = model.layers[-2].get_weights()[0]
softmax_weights = model.layers[-1].get_weights()[0]
dense_weights.shape
softmax_weights.shape
final_conv_layer = get_output_layer(model, "conv2d_28")
final_conv_layer.output
import keras.backend as K
from tensorflow.python.framework.ops import Tensor
class_weights: np.ndarray = model.layers[-1].get_weights()[0] # class weights is of shape 32 x 10 (number of filter outputs x classes)
print(f"Class weights is shape {class_weights.shape}")
final_conv_layer: Conv2D = get_output_layer(model, "conv2d_28")
input_tensor: Tensor = model.layers[0].input
final_conv_layer_output: Tensor = final_conv_layer.output
model_class_weights: Tensor = model.layers[-1].output
# K.function is a function factory that accepts arbitrary input layers and outputs arbitrary output layers
get_output = K.function([input_tensor], [final_conv_layer_output, model_class_weights])
[conv_outputs, predictions] = get_output([img])
print("Conv2D output shape:", conv_outputs.shape) # should match the shape of the outputs from the Conv2D layer
print("Predictions:", predictions.shape)
np.argmax(predictions)
conv_outputs = conv_outputs[0,:,:,:]
# [conv_outputs, predictions] = get_output([img])
# conv_outputs = conv_outputs[0, :, :, :]
class_weights.shape
# Create the class activation map
class_activation_map = np.zeros(dtype=np.float32, shape=conv_outputs.shape[1:3])
class_activation_map.shape
#Reshape to the network input shape (3, w, h).
img = np.array([np.transpose(np.float32(original_img), (2, 0, 1))])
#Get the 512 input weights to the softmax.
class_weights = model.layers[-1].get_weights()[0]
final_conv_layer = get_output_layer(model, "conv5_3")
get_output = K.function([model.layers[0].input], \
[final_conv_layer.output,
model.layers[-1].output])
[conv_outputs, predictions] = get_output([img])
conv_outputs = conv_outputs[0, :, :, :]
#Create the class activation map.
cam = np.zeros(dtype = np.float32, shape = conv_outputs.shape[1:3])
target_class = 1
for i, w in enumerate(class_weights[:, target_class]):
cam += w * conv_outputs[i, :, :]
```
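The CAM loop above is a weighted sum of feature maps over the filter axis; it is equivalent to a single tensor contraction. A small NumPy sketch (shapes are hypothetical) checking that the two forms agree:

```python
import numpy as np

h, w, n_filters, n_classes = 6, 6, 4, 10
conv_outputs = np.random.rand(h, w, n_filters).astype(np.float32)
class_weights = np.random.rand(n_filters, n_classes).astype(np.float32)
target_class = 3

# loop form, as in the notebook (channels-last indexing)
cam_loop = np.zeros((h, w), dtype=np.float32)
for i, wgt in enumerate(class_weights[:, target_class]):
    cam_loop += wgt * conv_outputs[:, :, i]

# vectorized form: contract the filter axis against the class column
cam_vec = np.tensordot(conv_outputs, class_weights[:, target_class],
                       axes=([2], [0]))
```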
# Everything Below This Section Is Doodling
```
image_path =
original_img = cv2.imread(image_path, 1)
width, height, _ = original_img.shape
def build_vanilla_cnn(filters_layer1:int, filters_layer2:int, kernel_size:int, input_dims: GrayScaleImageShape)-> Model:
inputs: Tensor = Input(shape=input_dims)
x: Tensor = Conv2D(filters=filters_layer1, kernel_size=kernel_size, activation='relu')(inputs)
x: Tensor = Conv2D(filters=filters_layer2, kernel_size=kernel_size, activation='relu')(x)
    x: Tensor = build_global_average_pooling_layer(global_average_pooling, global_average_pooling_shape)(x)
predictions = Dense(K, activation="softmax")(x)
print(predictions)
#compile model using accuracy to measure model performance
model: Model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='adam', loss="categorical_crossentropy", metrics=['accuracy'])
return model
from keras.layers import merge
def build_model(input_dim):
inputs = Input(shape=input_dim)
# ATTENTION PART STARTS HERE
attention_probs = Dense(input_dim, activation='softmax', name='attention_vec')(inputs)
attention_mul = merge([inputs, attention_probs], output_shape=32, name='attention_mul', mode='mul')
# ATTENTION PART FINISHES HERE
attention_mul = Dense(64)(attention_mul)
output = Dense(1, activation='sigmoid')(attention_mul)
model = Model(input=[inputs], output=output)
return model
inputs = Input(shape=input_dims)
attention_probs = Dense(input_dims, activation='softmax', name='attention_vec')(inputs)
```
## Compile and Fit Model
```
X_train.reshape((60000,1,28,28))
def expand_tensor_shape(X_train: np.ndarray)-> np.ndarray:
new_shape: Tuple = X_train.shape + (1,)
print(f"Expanding shape from {X_train.shape} to {new_shape}")
return X_train.reshape(new_shape)
X_train_expanded: np.ndarray = expand_tensor_shape(X_train)
X_test_expanded: np.ndarray = expand_tensor_shape(X_test)
```
# FEI Face Dataset
```
from PIL.JpegImagePlugin import JpegImageFile
from keras.preprocessing.image import load_img
image: JpegImageFile = load_img('1-01.jpg')
```
# Creating a Sentiment Analysis Web App
## Using PyTorch and SageMaker
_Deep Learning Nanodegree Program | Deployment_
---
Now that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model which will predict the sentiment of the entered review.
## Instructions
Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.
> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited, typically by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
## General Outline
Recall the general outline for SageMaker projects using a notebook instance.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
For this project, you will be following the steps in the general outline with some modifications.
First, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward.
In addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. In addition, your newly deployed model will be used in the sentiment analysis web app.
## Step 1: Downloading the data
As in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/).
> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.
```
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
```
## Step 2: Preparing and Processing the data
Also, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set.
```
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
```
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.
```
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
```
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly.
```
print(train_X[100])
print(train_y[100])
```
The first step in processing the reviews is to remove any HTML tags that appear. In addition, we wish to tokenize and stem our input, so that words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis.
```
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem each word
return words
```
The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set.
```
# TODO: Apply review_to_words to a review (train_X[100] or any other review)
review_to_words(train_X[100])
```
**Question:** Above we mentioned that the `review_to_words` method removes HTML formatting and tokenizes and stems the words found in a review, for example converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do to the input?
**Answer:**
1. remove html elements
2. stemming
3. stopwords removal
4. lower case all the words
The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition it caches the results. This is because performing this processing step can take a long time. This way if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time.
```
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
```
## Transform the data
In the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.
Since we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews.
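The pad/truncate rule can be sketched in a few lines (a simplified stand-in for the `convert_and_pad` helper implemented below; `0` is the 'no word' label, and `pad=6` here just keeps the example short):

```python
def pad_or_truncate(seq, pad=500, no_word=0):
    """Fix a sequence's length: truncate long sequences, pad short ones."""
    return seq[:pad] + [no_word] * max(0, pad - len(seq))

short = pad_or_truncate([5, 7, 9], pad=6)        # padded out to length 6
long = pad_or_truncate(list(range(10)), pad=6)   # truncated to length 6
```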
### (TODO) Create a word dictionary
To begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.
> **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'.
```
import numpy as np
def build_dict(data, vocab_size = 5000):
"""Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
# TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
# sentence is a list of words.
word_count = {} # A dict storing the words that appear in the reviews along with how often they occur
for review in data:
for word in review:
if word in word_count:
word_count[word] += 1
else:
word_count[word] = 1
# TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and
# sorted_words[-1] is the least frequently appearing word.
sorted_dict = sorted(word_count.items(), key = lambda x:x[1], reverse = True)
sorted_words = [w for w, v in sorted_dict]
sorted_dict = None
word_dict = {} # This is what we are building, a dictionary that translates words into integers
for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the 'no word'
word_dict[word] = idx + 2 # 'infrequent' labels
return word_dict
word_dict = build_dict(train_X)
```
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it make sense that these words appear frequently in the training set?
**Answer:** `['movi', 'film', 'one', 'like', 'time']`
```
# TODO: Use this space to determine the five most frequently appearing words in the training set.
import numpy as np
def most_freq(data, vocab_size = 5000):
"""Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
# TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a
# sentence is a list of words.
word_count = {} # A dict storing the words that appear in the reviews along with how often they occur
for review in data:
for word in review:
if word in word_count:
word_count[word] += 1
else:
word_count[word] = 1
# TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and
# sorted_words[-1] is the least frequently appearing word.
sorted_dict = sorted(word_count.items(), key = lambda x:x[1], reverse = True)
sorted_words = [w for w, v in sorted_dict]
sorted_dict = None
return sorted_words[:5]
most_freq(train_X)
```
### Save `word_dict`
Later on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use.
```
data_dir = '../data/pytorch' # The folder we will use for storing data
if not os.path.exists(data_dir): # Make sure that the folder exists
os.makedirs(data_dir)
with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
pickle.dump(word_dict, f)
```
### Transform the reviews
Now that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`.
```
def convert_and_pad(word_dict, sentence, pad=500):
NOWORD = 0 # We will use 0 to represent the 'no word' category
INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
working_sentence = [NOWORD] * pad
for word_index, word in enumerate(sentence[:pad]):
if word in word_dict:
working_sentence[word_index] = word_dict[word]
else:
working_sentence[word_index] = INFREQ
return working_sentence, min(len(sentence), pad)
def convert_and_pad_data(word_dict, data, pad=500):
result = []
lengths = []
for sentence in data:
converted, leng = convert_and_pad(word_dict, sentence, pad)
result.append(converted)
lengths.append(leng)
return np.array(result), np.array(lengths)
train_X, train_X_len = convert_and_pad_data(word_dict, train_X)
test_X, test_X_len = convert_and_pad_data(word_dict, test_X)
```
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processed. Does this look reasonable? What is the length of a review in the training set?
```
# Use this cell to examine one of the processed reviews to make sure everything is working as intended.
print(train_X[100], train_X_len[100])
```
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Might this be a problem? Why or why not?
**Answer:** It's standard to process both the training and testing sets in the same way, and only the training data is used to build the dictionary.
## Step 3: Upload the data to S3
As in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on.
### Save the processed training dataset locally
It is important to note the format of the data that we are saving as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review.
```
import pandas as pd
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
.to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
```
### Uploading the training data
Next, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model.
```
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
```
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory.
## Step 4: Build and Train the PyTorch Model
In the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects
- Model Artifacts,
- Training Code, and
- Inference Code,
each of which interact with one another. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon with the added benefit of being able to include our own custom code.
We will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below.
```
!pygmentize train/model.py
```
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.
First we will load a small portion of the training dataset to use as a sample. It would be very time consuming to try to train the model completely in the notebook, as we do not have access to a GPU and the compute instance we are using is not particularly powerful. However, we can work with a small subset of the data to get a feel for how our training script is behaving.
```
import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()
# Build the dataset
train_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)
# Build the dataloader
train_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50)
```
### (TODO) Writing the training method
Next we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later.
```
def train(model, train_loader, epochs, optimizer, loss_fn, device):
    for epoch in range(1, epochs + 1):
        model.train()
        total_loss = 0
        for batch in train_loader:
            batch_X, batch_y = batch

            batch_X = batch_X.to(device)
            batch_y = batch_y.to(device)

            # TODO: Complete this train method to train the model provided.
            optimizer.zero_grad()
            out = model(batch_X)
            loss = loss_fn(out, batch_y)
            loss.backward()
            optimizer.step()

            total_loss += loss.item()
        print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader)))
```
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose.
```
import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device)
```
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run.
### (TODO) Training the model
When a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.
**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.
The way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train/train.py` file.
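The mechanism can be sketched with `argparse`; the flag names below mirror the `epochs` and `hidden_dim` hyperparameters we pass in the next cell, though the exact parsing in `train/train.py` may differ:

```python
import argparse

parser = argparse.ArgumentParser()

# SageMaker turns each entry of the hyperparameters dict into a --key value
# command line argument when it launches the training script.
parser.add_argument('--epochs', type=int, default=10)
parser.add_argument('--hidden_dim', type=int, default=100)

# Simulate the argument list SageMaker would construct for our job.
args = parser.parse_args(['--epochs', '10', '--hidden_dim', '200'])
print(args.epochs, args.hidden_dim)  # 10 200
```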
```
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
                    source_dir="train",
                    role=role,
                    framework_version='0.4.0',
                    train_instance_count=1,
                    train_instance_type='ml.p2.xlarge',
                    hyperparameters={
                        'epochs': 10,
                        'hidden_dim': 200,
                    })
estimator.fit({'training': input_data})
```
## Step 5: Testing the model
As mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly.
## Step 6: Deploy the model for testing
Now that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this.
There is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made.
**NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard ( ie, `if __name__ == '__main__':` )
Since we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is.
**NOTE:** When deploying a model you are asking SageMaker to launch a compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running.
In other words **If you are no longer using a deployed endpoint, shut it down!**
**TODO:** Deploy the trained model.
```
# TODO: Deploy the trained model
predictor = estimator.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
```
## Step 7: Use the model for testing
Once deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is.
```
test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)
# We split the data into chunks and send each chunk separately, accumulating the results.
def predict(data, rows=512):
    split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
    predictions = np.array([])
    for array in split_array:
        predictions = np.append(predictions, predictor.predict(array))
    return predictions
predictions = predict(test_X.values)
predictions = [round(num) for num in predictions]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
```
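To see why the chunking in `predict` works, here is the same logic run offline against a stand-in predictor (the `StubPredictor` below is purely hypothetical and just mimics the `predict` interface of a deployed endpoint):

```python
import numpy as np

class StubPredictor:
    # Pretend endpoint: "predicts" 1.0 when a row's sum is positive.
    def predict(self, array):
        return (array.sum(axis=1) > 0).astype(float)

predictor = StubPredictor()

def predict(data, rows=512):
    # Split into roughly rows-sized chunks and accumulate the results.
    split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
    predictions = np.array([])
    for array in split_array:
        predictions = np.append(predictions, predictor.predict(array))
    return predictions

data = np.vstack([np.ones((600, 3)), -np.ones((600, 3))])
out = predict(data)
print(out.shape)  # (1200,): one prediction per row, regardless of chunking
```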
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis?
**Answer:** The RNN gives better accuracy, which makes sense for the following reasons:
1. An RNN learns from earlier inputs in the sequence and builds up predictive power.
2. Sentiment analysis is heavily context dependent, and an RNN is better suited to capturing context.
3. The order of the words in a review matters, which an RNN can exploit.
### (TODO) More testing
We now have a trained, deployed model to which we can send processed reviews and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model.
```
test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'
```
The question we now need to answer is, how do we send this review to our model?
Recall that in the first section of this notebook we did a fair amount of data processing on the IMDb dataset. In particular, we did two specific things to the provided reviews.
- Removed any html tags and stemmed the input
- Encoded the review as a sequence of integers using `word_dict`
In order to process the review we will need to repeat these two steps.
**TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`.
```
# TODO: Convert test_review into a form usable by the model and save the results in test_data
test_data = review_to_words(test_review)
test_data = [np.array(convert_and_pad(word_dict, test_data)[0])]
```
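For reference, here is a minimal sketch of what `convert_and_pad` might look like (the actual helper was defined in section one of this notebook; the index choices of 0 for padding and 1 for out-of-vocabulary words are assumptions of this sketch):

```python
def convert_and_pad(word_dict, sentence, pad=500):
    NOWORD = 0  # assumed index representing padding / 'no word'
    INFREQ = 1  # assumed index for words not present in word_dict
    working = [NOWORD] * pad
    for i, word in enumerate(sentence[:pad]):
        working[i] = word_dict.get(word, INFREQ)
    return working, min(len(sentence), pad)

word_dict = {'good': 2, 'movie': 3}
padded, length = convert_and_pad(word_dict, ['good', 'movie', 'zebra'], pad=5)
print(padded, length)  # [2, 3, 1, 0, 0] 3
```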
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review.
```
predictor.predict(test_data)
```
Since the return value of our model is close to `1`, we can be reasonably confident that the model considers the review we submitted to be positive.
### Delete the endpoint
Of course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it.
```
estimator.delete_endpoint()
```
## Step 6 (again): Deploy the model for the web app
Now that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.
As we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.
We will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.
When deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use.
- `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model.
- `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code.
- `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint.
- `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.
For the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize.
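As a rough sketch (not the provided implementation), plain-text versions of `input_fn` and `output_fn` might look like this:

```python
def input_fn(serialized_input_data, content_type='text/plain'):
    # Deserialize the raw request body into a Python string.
    if content_type != 'text/plain':
        raise ValueError('Unsupported content type: ' + content_type)
    return serialized_input_data.decode('utf-8')

def output_fn(prediction_output, accept='text/plain'):
    # Serialize the single numeric prediction back into plain text.
    return str(prediction_output), accept

review = input_fn(b'The best film of the year', 'text/plain')
body, content_type = output_fn(0.94)
print(review)
print(body, content_type)
```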
### (TODO) Writing inference code
Before writing our custom inference code, we will begin by taking a look at the code which has been provided.
```
!pygmentize serve/predict.py
```
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple and your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.
**TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file.
### Deploying the model
Now that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container.
**NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string, so we need to construct a simple wrapper around the `RealTimePredictor` class to accommodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to send image data.
```
from sagemaker.predictor import RealTimePredictor
from sagemaker.pytorch import PyTorchModel
class StringPredictor(RealTimePredictor):
    def __init__(self, endpoint_name, sagemaker_session):
        super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')

model = PyTorchModel(model_data=estimator.model_data,
                     role=role,
                     framework_version='0.4.0',
                     entry_point='predict.py',
                     source_dir='serve',
                     predictor_cls=StringPredictor)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
```
### Testing the model
Now that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and the first `250` negative reviews, sending them to the endpoint, and collecting the results. The reason for only sending some of the data is that the time it takes our model to process the input and perform inference is quite long, so testing the entire dataset would be prohibitively slow.
```
import glob
def test_reviews(data_dir='../data/aclImdb', stop=250):
    results = []
    ground = []

    # We make sure to test both positive and negative reviews
    for sentiment in ['pos', 'neg']:
        path = os.path.join(data_dir, 'test', sentiment, '*.txt')
        files = glob.glob(path)
        files_read = 0

        print('Starting ', sentiment, ' files')

        # Iterate through the files and send them to the predictor
        for f in files:
            with open(f) as review:
                # First, we store the ground truth (was the review positive or negative)
                if sentiment == 'pos':
                    ground.append(1)
                else:
                    ground.append(0)
                # Read in the review and convert to 'utf-8' for transmission via HTTP
                review_input = review.read().encode('utf-8')
                # Send the review to the predictor and store the results
                results.append(float(predictor.predict(review_input)))

            # Sending reviews to our endpoint one at a time takes a while so we
            # only send a small number of reviews
            files_read += 1
            if files_read == stop:
                break

    return ground, results
ground, results = test_reviews()
from sklearn.metrics import accuracy_score
accuracy_score(ground, results)
```
As an additional test, we can try sending the `test_review` that we looked at earlier.
```
predictor.predict(test_review)
```
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back.
## Step 7 (again): Use the model for the web app
> **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console.
So far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are set up currently makes that not possible since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which included access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services.
<img src="Web App Diagram.svg">
The diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return.
In the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and receive data from a SageMaker endpoint.
Lastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function.
### Setting up a Lambda function
The first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result.
#### Part A: Create an IAM Role for the Lambda function
Since we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function.
Using the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**.
In the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**.
Lastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**.
#### Part B: Create a Lambda function
Now it is time to actually create the Lambda function.
Using the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**.
On the next page you will see some information about the Lambda function you've just created. If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below.
```python
# We need to use the low-level library to interact with SageMaker since the SageMaker API
# is not available natively through Lambda.
import boto3
def lambda_handler(event, context):

    # The SageMaker runtime is what allows us to invoke the endpoint that we've created.
    runtime = boto3.Session().client('sagemaker-runtime')

    # Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given
    response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**', # The name of the endpoint we created
                                       ContentType = 'text/plain',             # The data format that is expected
                                       Body = event['body'])                   # The actual review

    # The response is an HTTP response whose body contains the result of our inference
    result = response['Body'].read().decode('utf-8')

    return {
        'statusCode' : 200,
        'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },
        'body' : result
    }
```
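Before saving the function in AWS, you can sanity-check the handler's flow locally by injecting a stub in place of the boto3 runtime client. Everything below other than the handler body is a test harness, and the `FakeRuntime` only mimics the response shape of `invoke_endpoint`:

```python
import io

class FakeRuntime:
    # Stub that returns '1' (positive) whenever the review contains 'best'.
    def invoke_endpoint(self, EndpointName, ContentType, Body):
        sentiment = '1' if 'best' in Body else '0'
        return {'Body': io.BytesIO(sentiment.encode('utf-8'))}

def lambda_handler(event, context, runtime=FakeRuntime()):
    # Same flow as the Lambda function above, with the client injected so it
    # can run without boto3 or a live endpoint.
    response = runtime.invoke_endpoint(EndpointName='**ENDPOINT NAME HERE**',
                                       ContentType='text/plain',
                                       Body=event['body'])
    result = response['Body'].read().decode('utf-8')
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'text/plain', 'Access-Control-Allow-Origin': '*'},
        'body': result
    }

print(lambda_handler({'body': 'The best film ever'}, None))
```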
Once you have copied and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below.
```
predictor.endpoint
```
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function.
### Setting up API Gateway
Now that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.
Using AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.
On the next page, make sure that **New API** is selected and give the new api a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.
Now we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.
Select the **Actions** dropdown menu and click **Create Method**. A new blank method will be created, select its dropdown menu and select **POST**, then click on the check mark beside it.
For the integration point, make sure that **Lambda Function** is selected and click on the **Use Lambda Proxy integration**. This option makes sure that the data that is sent to the API is then sent directly to the Lambda function with no processing. It also means that the return value must be a proper response object as it will also not be processed by API Gateway.
Type the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.
The last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`.
You have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**.
## Step 4: Deploying our web app
Now that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static html file which can make use of the public api you created earlier.
In the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\*\*REPLACE WITH PUBLIC API URL\*\***. Replace this string with the url that you wrote down in the last step and then save the file.
Now, if you open `index.html` on your local computer, your browser will behave as a local web server and you can use the provided site to interact with your SageMaker model.
If you'd like to go further, you can host this html file anywhere you'd like, for example using github or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!
> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill.
**TODO:** Make sure that you include the edited `index.html` file in your project submission.
Now that your web app is working, try playing around with it and see how well it works.
**Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review?
**Answer:** I tried something simple, 'This is a good movie', and the result was positive.
I also tried a movie review from the internet: "The questions are fascinating, the performances are great and the emotions are jacked up to the point of discomfort. This is certainly the best work of Plaza's film career."
It was classified as positive as well, so the web app appears to be working fine.
### Delete the endpoint
Remember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running so if you forget and leave it on you could end up with an unexpectedly large bill.
```
predictor.delete_endpoint()
```
\title{Digital Latches with myHDL}
\author{Steven K Armour}
\maketitle
# Refs
@book{brown_vranesic_2014, place={New York, NY}, edition={3}, title={Fundamentals of digital logic with Verilog design}, publisher={McGraw-Hill}, author={Brown, Stephen and Vranesic, Zvonko G}, year={2014} },
@book{lameres_2017, title={Introduction to logic circuits & logic design with Verilog}, publisher={springer}, author={LaMeres, Brock J}, year={2017} }
# Acknowledgments
Author of **myHDL** [Jan Decaluwe](http://www.myhdl.org/users/jandecaluwe.html) and the author of the **myHDL Peeker** [XESS Corp.](https://github.com/xesscorp/myhdlpeek)
[**Draw.io**](https://www.draw.io/)
**Xilinx**
# Python Libraries Utilized
```
import numpy as np
import pandas as pd
from sympy import *
init_printing()
from myhdl import *
from myhdlpeek import *
import random
#python file of convince tools. Should be located with this notebook
from sympy_myhdl_tools import *
```
# Latches vs Flip-Flops
Latches and flip-flops are both bistable logic circuit topologies: once loaded with a state, they hold that state until it is upset by a new state or a reset command. The difference between the two is that flip-flops are clock-controlled devices built upon latches, whereas latches are not clock dependent.
# SR-Latch
## Symbol and Internals
The symbol for an SR-Latch and one representation of its internals are shown below
<img style="float: center;" src="SRLatchSymbolInternal.jpg">
## Definition
## State Diagram
## myHDL SR-Latch Gate and Testing
Help needed: getting this latch working via combinational gate circuits currently fails; myHDL raises an `AlwaysCombError` when an output signal is used as an argument driving its own next-state output.
## myHDL SR-Latch Behavioral and Testing
```
def SRLatch(S_in, rst, Q_out, Qn_out):
    @always_comb
    def logic():
        if S_in and rst == 0:
            Q_out.next = 1
            Qn_out.next = 0
        elif S_in == 0 and rst:
            Q_out.next = 0
            Qn_out.next = 1
        elif S_in and rst:
            Q_out.next = 0
            Qn_out.next = 0
    return logic
S_in, rst, Q_out, Qn_out=[Signal(bool(0)) for _ in range(4)]
Peeker.clear()
Peeker(S_in, 'S_in'); Peeker(rst, 'rst')
Peeker(Q_out, 'Q_out'); Peeker(Qn_out, 'Qn_out')
DUT=SRLatch(S_in=S_in, rst=rst, Q_out=Q_out, Qn_out=Qn_out)
inputs=[S_in, rst]
sim=Simulation(DUT, Combo_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=2*2**len(inputs), tock=True,
                   title='SRLatch Behavioral simulation',
                   caption=f'after clock cycle {2**len(inputs)-1} ->random input')
MakeDFfromPeeker(Peeker.to_wavejson(start_time=0, stop_time=2**len(inputs) -1))
```
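As a quick cross-check outside of myHDL, the same behavioral truth table can be modeled in plain Python, including the hold state where neither input is asserted:

```python
def sr_latch(S, R, Q_prev, Qn_prev):
    # Mirrors the myHDL behavioral description above, plus an explicit
    # hold case and the disallowed S=R=1 input combination.
    if S and not R:
        return 1, 0              # set
    elif not S and R:
        return 0, 1              # reset
    elif S and R:
        return 0, 0              # disallowed inputs force both outputs low
    return Q_prev, Qn_prev       # hold the previous state

Q, Qn = sr_latch(1, 0, 0, 1)     # set the latch
print(Q, Qn)                     # 1 0
Q, Qn = sr_latch(0, 0, Q, Qn)    # release both inputs: state is held
print(Q, Qn)                     # 1 0
```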
## myHDL SR-Latch Behavioral HDL Synthesis
```
toVerilog(SRLatch, S_in, rst, Q_out, Qn_out)
#toVHDL(SRLatch, S_in, rst, Q_out, Qn_out)
_=VerilogTextReader('SRLatch')
```
The following shows the **Xilinx** _Vivado 2016.1_ RTL schematic generated from the synthesized Verilog of our behavioral SRLatch. We can see that the synthesized version is quite abstracted from the gate-level internals shown above.
<img style="float: center;" src="SRLatchBehaviroalRTLSch.PNG">
# Gated SR-Latch
## myHDL SR-Latch Behavioral and Testing
```
def GSRLatch(S_in, rst, ena, Q_out, Qn_out):
    @always_comb
    def logic():
        if ena:
            if S_in and rst == 0:
                Q_out.next = 1
                Qn_out.next = 0
            elif S_in == 0 and rst:
                Q_out.next = 0
                Qn_out.next = 1
            elif S_in and rst:
                Q_out.next = 0
                Qn_out.next = 0
        else:
            pass
    return logic
S_in, rst, ena, Q_out, Qn_out=[Signal(bool(0)) for _ in range(5)]
Peeker.clear()
Peeker(S_in, 'S_in'); Peeker(rst, 'rst'); Peeker(ena, 'ena')
Peeker(Q_out, 'Q_out'); Peeker(Qn_out, 'Qn_out')
DUT=GSRLatch(S_in=S_in, rst=rst, ena=ena, Q_out=Q_out, Qn_out=Qn_out)
inputs=[S_in, rst, ena]
sim=Simulation(DUT, Combo_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=2*2**len(inputs), tock=True,
                   title='GSRLatch Behavioral simulation',
                   caption=f'after clock cycle {2**len(inputs)-1} ->random input')
MakeDFfromPeeker(Peeker.to_wavejson(start_time=0, stop_time=2**len(inputs) -1))
```
## myHDL SR-Latch Behavioral HDL Synthesis
```
toVerilog(GSRLatch, S_in, rst, ena, Q_out, Qn_out)
#toVHDL(GSRLatch, S_in, rst,ena, Q_out, Qn_out)
_=VerilogTextReader('GSRLatch')
```
The following shows the **Xilinx** _Vivado 2016.1_ RTL schematic generated from the synthesized Verilog of our behavioral Gated SRLatch. We can see that the synthesized version is quite abstracted from the gate-level representation.
<img style="float: center;" src="GSRLatchBehaviroalRTLSch.PNG">
# D-Latch
## myHDL Behavioral D-Latch and Testing
```
def DLatch(D_in, ena, Q_out, Qn_out):
    # Normally Qn_out is not specified, since a NOT gate is so easily implemented
    @always_comb
    def logic():
        if ena:
            Q_out.next = D_in
            Qn_out.next = not D_in
    return logic
D_in, ena, Q_out, Qn_out=[Signal(bool(0)) for _ in range(4)]
Peeker.clear()
Peeker(D_in, 'D_in'); Peeker(ena, 'ena')
Peeker(Q_out, 'Q_out'); Peeker(Qn_out, 'Qn_out')
DUT=DLatch(D_in=D_in, ena=ena, Q_out=Q_out, Qn_out=Qn_out)
inputs=[D_in, ena]
sim=Simulation(DUT, Combo_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=2*2**len(inputs), tock=True,
                   title='DLatch Behavioral simulation',
                   caption=f'after clock cycle {2**len(inputs)-1} ->random input')
MakeDFfromPeeker(Peeker.to_wavejson(start_time=0, stop_time=2**len(inputs) -1))
```
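The D latch's transparent/hold behavior can likewise be cross-checked with a plain Python model (not part of the myHDL flow):

```python
def d_latch(D, ena, Q_prev):
    # Transparent while ena is high, opaque (holding Q) while ena is low.
    Q = D if ena else Q_prev
    return Q, int(not Q)

print(d_latch(1, 1, 0))  # (1, 0): ena high, so Q follows D
print(d_latch(0, 0, 1))  # (1, 0): ena low, so the old Q is held
```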
## myHDL DLatch Behavioral HDL Synthesis
```
toVerilog(DLatch, D_in, ena, Q_out, Qn_out)
#toVHDL(DLatch,D_in, ena, Q_out, Qn_out)
_=VerilogTextReader('DLatch')
```
The following shows the **Xilinx** _Vivado 2016.1_ RTL schematic generated from the Verilog of our myHDL DLatch with an explicit $\bar{Q}$. Note that because $\bar{Q}$ is not normally declared in HDL code, Vivado produced two RTL D latches and used a NOT gate to account for the negated output
<img style="float: center;" src="DLatchBehavioralRTLSch.PNG">
# Examples
```
%%capture
# { display-mode: 'form' }
# @title PyTTI-Tools [EzMode]: VQGAN
# @markdown ## Setup
# @markdown This may take a few minutes.
## 1. Install stuff
try:
    import pytti
except ImportError:
    !pip install kornia pytorch-lightning transformers
    !pip install jupyter loguru einops PyGLM ftfy regex tqdm hydra-core exrex
    !pip install seaborn adjustText bunch matplotlib-label-lines
    !pip install --upgrade gdown
    !pip install --upgrade git+https://github.com/pytti-tools/AdaBins.git
    !pip install --upgrade git+https://github.com/pytti-tools/GMA.git
    !pip install --upgrade git+https://github.com/pytti-tools/taming-transformers.git
    !pip install --upgrade git+https://github.com/openai/CLIP.git
    !pip install --upgrade git+https://github.com/pytti-tools/pytti-core.git
# These are notebook specific
!pip install --upgrade natsort
try:
import mmc
except:
# install mmc
!git clone https://github.com/dmarx/Multi-Modal-Comparators
!pip install poetry
!cd Multi-Modal-Comparators; poetry build
!cd Multi-Modal-Comparators; pip install dist/mmc*.whl
!python Multi-Modal-Comparators/src/mmc/napm_installs/__init__.py
from natsort import natsorted
from omegaconf import OmegaConf
from pathlib import Path
import mmc.loaders
!python -m pytti.warmup
notebook_params = {}
def get_output_paths():
    outv = [str(p.resolve()) for p in Path('outputs/').glob('**/*.png')]
    #outv.sort()
    outv = natsorted(outv)
    return outv
resume = True # @param {type:"boolean"}
if resume:
    inits = get_output_paths()
    if inits:
        notebook_params.update({
            'init_image': inits[-1],
        })
# @markdown ## Basic Settings
prompts = "a photograph of albert einstein" # @param {type:"string"}
height = 512 # @param {type:"integer"}
width = 512 # @param {type:"integer"}
cell_params = {
    "scenes": prompts,
    "height": height,
    "width": width,
}
notebook_params.update(cell_params)
# @markdown ## Advanced Settings
vqgan_model = "coco" # @param ["coco","sflickr","imagenet","wikiart","openimages"]
cell_params = {
    "vqgan_model": vqgan_model,
}
notebook_params.update(cell_params)
invariants = """
## Invariant settings ##
steps_per_frame: 50
steps_per_scene: 500
pixel_size: 1
image_model: VQGAN
use_mmc: true
mmc_models:
  - architecture: clip
    publisher: openai
    id: ViT-B/16
"""
from omegaconf import OmegaConf
from pathlib import Path
cfg_invariants = OmegaConf.create(invariants)
nb_cfg = OmegaConf.create(notebook_params)
conf = OmegaConf.merge(cfg_invariants, nb_cfg)
with open("config/conf/this_run.yaml", "w") as f:
    outstr = "# @package _global_\n"
    outstr += OmegaConf.to_yaml(conf)
    print(outstr)
    f.write(outstr)
#Path("config/conf/ezmode/").mkdir(parents=True, exist_ok=True)
## Do the run
! python -m pytti.workhorse conf=this_run
# @title Show Outputs
from IPython.display import Image, display
outputs = list(Path('outputs/').glob('**/*.png'))
outputs.sort()
im_path = str(outputs[-1])
Image(im_path, height=height, width=width)
# @markdown compile images into a video of the generative process
from PIL import Image as pilImage
from subprocess import Popen, PIPE
from tqdm.notebook import tqdm
fps = 12 # @param {type:'number'}
fpaths = get_output_paths()
frames = []
for filename in tqdm(fpaths):
    frames.append(pilImage.open(filename))
cmd_in = ['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-']
cmd_out = ['-vcodec', 'libx264', '-r', str(fps), '-pix_fmt', 'yuv420p', '-crf', '1', '-preset', 'veryslow', f'output.mp4']
cmd = cmd_in + cmd_out
p = Popen(cmd, stdin=PIPE)
for im in tqdm(frames):
    im.save(p.stdin, 'PNG')
p.stdin.close()
print("Encoding video...")
p.wait()
print("Video saved to output.mp4.")
```
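The notebook resumes from the newest frame, which is why it uses `natsorted` rather than a plain `sort()`: lexicographic order puts `frame_10.png` before `frame_2.png`. A stdlib-only sketch of the same natural-sort idea (with hypothetical file names) looks like this:

```python
import re

def natural_key(s):
    # Split into digit and non-digit runs; compare the digit runs numerically.
    return [int(t) if t.isdigit() else t for t in re.split(r'(\d+)', s)]

paths = ["frame_2.png", "frame_10.png", "frame_1.png"]
print(sorted(paths))                   # lexicographic: frame_1, frame_10, frame_2
print(sorted(paths, key=natural_key))  # natural: frame_1, frame_2, frame_10
```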