# pandas String Operations
Obviously, besides numeric values, a lot of the data we handle is string-typed, and that part of the data clearly matters a great deal too, so in this section we'll talk a bit about string handling in pandas.
```
%matplotlib inline
%config ZMQInteractiveShell.ast_node_interactivity='all'
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('ggplot')  # 'display.mpl_style' was removed from pandas; use a matplotlib style instead
plt.rcParams['figure.figsize'] = (15, 3)
plt.rcParams['font.family'] = 'sans-serif'
```
We've already seen how pandas is totally in its element with numeric data; between you and me, it's just as fierce with strings.<br>
Let's read in some weather data.
```
weather_2012 = pd.read_csv('./data/weather_2012.csv', parse_dates=True, index_col='Date/Time')
weather_2012[:5]
```
# 5.1 String operations
From the data above you can see there is a 'Weather' column. Here we'll assume that only entries containing "Snow" count as snowy days.
pandas' str accessor provides a series of handy functions, such as contains used here; for more examples see [here](http://pandas.pydata.org/pandas-docs/stable/basics.html#vectorized-string-methods).
```
weather_description = weather_2012['Weather']
is_snowing = weather_description.str.contains('Snow')
```
See, what contains returns is actually a boolean Series of per-row results.
```
# returns a boolean Series
is_snowing[:5]
```
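Besides `contains`, the `str` accessor exposes many other vectorized string methods (`lower`, `replace`, `split`, ...). A minimal sketch on a toy Series (not the weather data):

```python
import pandas as pd

s = pd.Series(["Snow", "Mostly Cloudy", "Snow Showers"])

# Vectorized string methods operate element-wise
print(s.str.lower())             # lowercase each entry
print(s.str.contains("Snow"))    # boolean Series: True, False, True
print(s.str.split().str[0])      # first word of each entry
```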
You think terminally lazy me is going to check these one by one?! Too young, too simple! One function call and it's all plotted!!!
```
# just like that!!!
is_snowing.plot()
```
# 5.2 Median temperature
If we want the median temperature for each month, there's a really useful function we can call: `resample()`.
```
weather_2012['Temp (C)'].resample('M').median().plot(kind='bar')
```
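`resample` works on a datetime index; in current pandas the aggregation is chained as a method (`.resample('M').median()`) rather than passed via the old `how=` keyword. A self-contained sketch on synthetic data:

```python
import pandas as pd
import numpy as np

# 90 daily readings starting Jan 1, 2012 (Jan, Feb, Mar)
idx = pd.date_range("2012-01-01", periods=90, freq="D")
temps = pd.Series(np.linspace(-10, 10, 90), index=idx)

# Downsample daily values to one median per month
try:
    monthly = temps.resample("M").median()   # 'M' = month-end, as used in this notebook
except ValueError:
    monthly = temps.resample("ME").median()  # newer pandas renamed the alias to 'ME'
print(monthly)  # three values: one each for Jan, Feb, Mar
```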
Matches expectations, right? July and August are the hottest.
Keep in mind that boolean `True` and `False` aren't the handiest for arithmetic — although deep down they really are just 1 and 0 — so shall we convert them to float and do some math?
```
is_snowing.astype(float)[:10]
```
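In fact `True`/`False` already behave as 1/0 in numeric contexts, so the mean of a boolean Series is directly the fraction of `True` values; `astype(float)` just makes that explicit. A toy check:

```python
import pandas as pd

flags = pd.Series([True, False, True, True])
print(flags.mean())                # 0.75 -- fraction of True values
print(flags.astype(float).sum())   # 3.0  -- count of True values
```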
Then we cleverly use `resample` to get the fraction of snowy observations in each month (why does this feel so pointless — we already know which months snow the most, right...).
```
is_snowing.astype(float).resample('M').mean()
is_snowing.astype(float).resample('M').mean().plot(kind='bar')
```
So, as you can see, December is the snowiest month in Canada. And you can spot some other patterns too: snow starts abruptly in November, and then the snowy season stretches on and on — the probability gradually shrinks, but it may not stop until April or May.
# 5.3 Plotting temperature and snowiness
Let's put temperature and snow probability together as two columns of a DataFrame, then draw a plot.
```
temperature = weather_2012['Temp (C)'].resample('M').median()
is_snowing = weather_2012['Weather'].str.contains('Snow')
snowiness = is_snowing.astype(float).resample('M').mean()
# give the columns names
temperature.name = "Temperature"
snowiness.name = "Snowiness"
```
### Combining them with concat
Use `concat` to join these two Series side by side as the columns of a new DataFrame.
```
stats = pd.concat([temperature, snowiness], axis=1)
stats
stats.plot(kind='bar')
```
And you go: what the heck?! Where are the purple snow-probability bars?!<br>
Yep, dear — the two quantities are on completely different scales, so they need to be plotted separately.
```
stats.plot(kind='bar', subplots=True, figsize=(15, 10))
```
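`pd.concat` with `axis=1` aligns the inputs on their index and uses each Series' `name` as the column label — that's why we set `name` above. A toy sketch:

```python
import pandas as pd

temp = pd.Series([1.0, 2.0], index=["jan", "feb"], name="Temperature")
snow = pd.Series([0.9, 0.4], index=["jan", "feb"], name="Snowiness")

# Align on the shared index; Series names become column labels
toy_stats = pd.concat([temp, snow], axis=1)
print(toy_stats.columns.tolist())  # ['Temperature', 'Snowiness']
print(toy_stats.shape)             # (2, 2)
```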
# Summary
```
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Make the graphs a bit prettier, and bigger
plt.style.use('ggplot')  # 'display.mpl_style' was removed from pandas
plt.rcParams['figure.figsize'] = (15, 5)
plt.rcParams['font.family'] = 'sans-serif'
# This is necessary to show lots of columns in older pandas versions
pd.set_option('display.width', 5000)
pd.set_option('display.max_columns', 60)
# Load data
weather_2012 = pd.read_csv('./data/weather_2012.csv',
                           parse_dates=True,
                           index_col='Date/Time')
# Data Preprocessing
temperature = weather_2012['Temp (C)'].resample('M').median()  # resample monthly, take the median
is_snowing = weather_2012['Weather'].str.contains('Snow')      # does each string contain "Snow"?
snowiness = is_snowing.astype(float).resample('M').mean()      # resample monthly, take the mean
# Name the Series so concat uses the names as column labels
temperature.name = "Temperature"
snowiness.name = "Snowiness"
# Concatenate into one DataFrame
stats = pd.concat([temperature, snowiness], axis=1)
# Plot
stats.plot(kind='bar', subplots=True, figsize=(15, 10))
```
# Machine Learning - AA2AR
**By Jakke Neiro & Andrei Roibu**
## 1. Importing All Required Dependencies
This script imports all the required dependencies for running the different functions and the code. Also, by using the _run_ command, the various notebooks are imported into the main notebook.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import interp
import glob, os
from sklearn.metrics import accuracy_score, roc_curve, roc_auc_score, auc, confusion_matrix, classification_report, log_loss
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn import svm, datasets, tree
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.preprocessing import StandardScaler
```
To take advantage of the speed-ups provided by GPUs, this code has been modified to run on Google Colab notebooks. To do so, the user needs to set the *google_colab_used* parameter to **True**; for usage on a local machine, this needs to be set to **False**.
If used on a Google Colab notebook, the user will need to follow the instructions to generate and then copy an authorisation code from a generated link.
```
google_colab_used = True
if google_colab_used:
    # Load the Drive helper and mount
    from google.colab import drive
    # This will prompt for authorization.
    drive.mount('/content/drive')
    data_drive = '/content/drive/My Drive/JA-ML/data'
    os.chdir(data_drive)
else:
    os.chdir("./data")
```
## 2. Data Pre-Processing
This section imports all the required datasets as pandas dataframes and concatenates them, then pre-processes them by eliminating all non-numerical data and all columns whose values are constant. This script also creates the input dataset and the labelled output dataset.
```
def data_preprocessing():
    '''
    This reads all the input datasets, pre-processes them and then generates the input dataset and the labelled dataset.

    Args:
        None
    Returns:
        X (ndarray): A 2D array containing the input processed data
        y (ndarray): A 1D array containing a list of labels, with 1 corresponding to "active" and 0 corresponding to "dummy"
    '''
    df_list = []
    y = np.array([])
    for file in glob.glob("aa2ar*.csv"):
        df = pd.read_csv(file, header=0)
        rows = df.shape[0]  # number of samples (rows) in this file
        if "actives" in file:
            y_df = np.ones(rows)
        else:
            y_df = np.zeros(rows)
        y = np.concatenate((y, y_df), axis=0)
        df_list.append(df)
    global_df = pd.concat(df_list, axis=0, ignore_index=True)
    global_df = global_df._get_numeric_data()  # removes any non-numeric data
    global_df = global_df.loc[:, (global_df != global_df.iloc[0]).any()]  # drop columns whose values never change
    X_headers = list(global_df.columns.values)  # column names, kept for reference
    X = global_df.values
    return X, y

X, y = data_preprocessing()

def data_split(X, y, random_state=42):
    '''
    This function takes the original datasets and splits them into training and testing datasets. For consistency, the function employs an 80-20 split for the train and test sets.

    Args:
        X (ndarray): A 2D array containing the input processed data
        y (ndarray): A 1D array containing a list of labels, with 1 corresponding to "active" and 0 corresponding to "dummy"
        random_state (int): seed for the random number generator; if not provided, defaults to 42
    Returns:
        X_train (ndarray): 2D array of input dataset used for training
        X_test (ndarray): 2D array of input dataset used for testing
        y_train (ndarray): 1D array of train labels
        y_test (ndarray): 1D array of test labels
    '''
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=random_state)
    return X_train, X_test, y_train, y_test

X_train, X_test, y_train, y_test = data_split(X, y)
```
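The two pre-processing steps above (keep numeric columns, drop constant columns) and the 80-20 split can be sketched on a toy frame; `select_dtypes` is the public equivalent of the private `_get_numeric_data` used here:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

toy = pd.DataFrame({
    "name": ["a", "b", "c", "d"],   # non-numeric -> dropped
    "const": [1, 1, 1, 1],          # constant -> dropped
    "x1": [0.1, 0.2, 0.3, 0.4],
    "x2": [5, 3, 2, 1],
})

num = toy.select_dtypes(include="number")     # keep numeric columns
num = num.loc[:, (num != num.iloc[0]).any()]  # drop constant columns
print(num.columns.tolist())  # ['x1', 'x2']

# The same 80-20 split used in data_split()
X_tr, X_te = train_test_split(num.values, test_size=0.2, random_state=42)
print(len(X_tr), len(X_te))  # 3 1
```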
## 3. Model Evaluation
This section produces the ROC plot, as well as several other performance metrics, including the classifier scores, the log-loss for each classifier, the confusion matrix and the classification report including the F1 score. The F1 score can be interpreted as a weighted average of precision and recall, reaching its best value at 1 and its worst at 0.
```
def ROC_plotting(title, y_test, y_score):
    '''
    This function generates the ROC plot for a given model.

    Args:
        title (string): String representing the name of the model.
        y_test (ndarray): 1D array of test labels
        y_score (ndarray): 1D array of model-predicted scores
    Returns:
        ROC Plot
    '''
    fpr, tpr, _ = roc_curve(y_test, y_score)
    roc_auc = auc(fpr, tpr)
    plt.figure()
    lw = 2  # linewidth
    plt.plot(fpr, tpr, color='darkorange', lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
    plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.05])
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title(title)
    plt.legend(loc="lower right")
    plt.show()

def performance_evaluation(X_train, X_test, y_train, y_test, predicted_train, predicted_test, y_score, title="model"):
    '''
    This function prints the results of the different classifiers, as well as several performance metrics.

    Args:
        X_train (ndarray): 2D array of input dataset used for training
        X_test (ndarray): 2D array of input dataset used for testing
        y_train (ndarray): 1D array of train labels
        y_test (ndarray): 1D array of test labels
        predicted_train (ndarray): 1D array of model-predicted labels for the train dataset
        predicted_test (ndarray): 1D array of model-predicted labels for the test dataset
        title (string): the classifier name
    Returns:
        ROC Plot
    '''
    print("For the ", title, " classifier:")
    print("Training set score: %f" % accuracy_score(y_train, predicted_train))
    print("Training log-loss: %f" % log_loss(y_train, predicted_train))
    print("Training set confusion matrix:")
    print(confusion_matrix(y_train, predicted_train))
    print("Training set classification report:")
    print(classification_report(y_train, predicted_train))
    print("Test set score: %f" % accuracy_score(y_test, predicted_test))
    print("Test log-loss: %f" % log_loss(y_test, predicted_test))
    print("Test set confusion matrix:")
    print(confusion_matrix(y_test, predicted_test))
    print("Test set classification report:")
    print(classification_report(y_test, predicted_test))
    ROC_plotting("ROC for " + title, y_test, y_score)
    fpr, tpr, _ = roc_curve(y_test, y_score)
    roc_auc = auc(fpr, tpr)
    print("AUC:" + str(roc_auc))

def model_evaluation(function_name, X_train, X_test, y_train, y_test, title):
    '''
    This function evaluates the proposed model, printing the results of the classifier as well as several performance metrics.

    Args:
        function_name (function): the function describing the employed model
        X_train (ndarray): 2D array of input dataset used for training
        X_test (ndarray): 2D array of input dataset used for testing
        y_train (ndarray): 1D array of train labels
        y_test (ndarray): 1D array of test labels
        title (string): the classifier name
    Returns:
        ROC Plot
    '''
    if title == 'Neural Network':
        y_predicted_train, y_predicted_test, y_score = neural_network(X_train, X_test, y_train, y_test)
    else:
        y_predicted_train, y_predicted_test, y_score = function_name(X_train, y_train, X_test)
    performance_evaluation(X_train, X_test, y_train, y_test, y_predicted_train, y_predicted_test, y_score, title)

def multiple_model_evaluation(function_name, X, y, title):
    '''
    This function takes the proposed model and original datasets and evaluates the proposed model by splitting the datasets randomly 5 times.

    Args:
        function_name (function): the function describing the employed model
        X (ndarray): A 2D array containing the input processed data
        y (ndarray): A 1D array containing a list of labels, with 1 corresponding to "active" and 0 corresponding to "dummy"
        title (string): the classifier name
    '''
    random_states = [1, 10, 25, 42, 56]
    test_set_scores = []
    test_log_losses = []
    roc_aucs = []
    for random_state in random_states:
        X_train, X_test, y_train, y_test = data_split(X, y, random_state=random_state)
        if title == 'Neural Network':
            y_predicted_train, y_predicted_test, y_score = neural_network(X_train, X_test, y_train, y_test)
        else:
            y_predicted_train, y_predicted_test, y_score = function_name(X_train, y_train, X_test)
        test_set_scores.append(accuracy_score(y_test, y_predicted_test))
        test_log_losses.append(log_loss(y_test, y_predicted_test))
        fpr, tpr, _ = roc_curve(y_test, y_score)
        roc_aucs.append(auc(fpr, tpr))
    # Average over the lists of per-split results, not over the last run only
    print("The average test set score for ", title, "is: ", str(np.mean(test_set_scores)))
    print("The average test log-loss for ", title, "is: ", str(np.mean(test_log_losses)))
    print("The average AUC for ", title, "is: ", str(np.mean(roc_aucs)))
```
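The `roc_curve`/`auc` machinery used above can be exercised on a tiny hand-made example; a perfectly ranked score vector gives an AUC of 1.0:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

y_true_toy = np.array([0, 0, 1, 1])
y_score_toy = np.array([0.1, 0.2, 0.8, 0.9])  # every positive scores above every negative

# fpr/tpr trace out the ROC curve; auc integrates under it
fpr_toy, tpr_toy, _ = roc_curve(y_true_toy, y_score_toy)
print(auc(fpr_toy, tpr_toy))  # 1.0 for a perfect ranking
```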
## 4. Logistic regression, linear and quadratic discriminant analysis
### 4.1. Logistic regression
Logistic regression (logit regression, log-linear classifier) is a generalized linear model used for classification that uses a logistic link function to model the outcome of a binary response variable $\mathbf{y}$ using a single or multiple predictors $\mathbf{X}$. Mathematically, logistic regression primarily computes the probability of each value of the response variable given a value of the predictor, and this probability is then used for predicting the most probable outcome. Logistic regression has several advantages: it is easy to implement, it is efficient to train, and it does not strictly require input features to be scaled (though scaling often helps optimisation). However, logistic regression can only produce a linear decision boundary. Therefore, with a complex dataset such as ours, we do not expect it to perform particularly well.
```
def LogReg(X_train, y_train, X_test):
    """Classification using logistic regression

    Args:
        X_train: Predictor or feature values used for training
        y_train: Response values used for training
        X_test: Predictor or feature values used for predicting the response values using the classifier
    Returns:
        y_predicted_train: The predicted response values for the training dataset
        y_predicted_test: The predicted response values for the test dataset
        y_score: The predicted class-1 probabilities for the test dataset
    """
    # Standardise features: fit the scaler on the training data only
    scaler = StandardScaler()
    scaler.fit(X_train)
    X_train = scaler.transform(X_train)
    X_test = scaler.transform(X_test)
    # Define and train the model
    classifier = LogisticRegression(max_iter=500).fit(X_train, y_train)
    # Predict the response values using the test predictor data
    y_predicted_test = classifier.predict(X_test)
    y_predicted_train = classifier.predict(X_train)
    y_score = classifier.predict_proba(X_test)[:, 1]
    return y_predicted_train, y_predicted_test, y_score

model_evaluation(LogReg, X_train, X_test, y_train, y_test, title='Logistic Regression')
multiple_model_evaluation(LogReg, X, y, title='Logistic Regression')
```
### 4.2. Linear discriminant analysis
LDA employs Bayes' theorem to fit a Gaussian density to each class of data. The classes are assumed to have the same covariance matrix. This generates a linear decision boundary.
```
def LDA(X_train, y_train, X_test):
    """Classification using LDA

    Args:
        X_train: Predictor or feature values used for training
        y_train: Response values used for training
        X_test: Predictor or feature values used for predicting the response values using the classifier
    Returns:
        y_predicted_train: The predicted response values for the training dataset
        y_predicted_test: The predicted response values for the test dataset
        y_score: The predicted class-1 probabilities for the test dataset
    """
    classifier = LinearDiscriminantAnalysis()
    classifier = classifier.fit(X_train, y_train)
    y_predicted_test = classifier.predict(X_test)
    y_predicted_train = classifier.predict(X_train)
    y_score = classifier.predict_proba(X_test)[:, 1]
    return y_predicted_train, y_predicted_test, y_score

model_evaluation(LDA, X_train, X_test, y_train, y_test, title='Linear Discriminant')
multiple_model_evaluation(LDA, X, y, title='Linear Discriminant')
```
### 4.3. Quadratic discriminant analysis
QDA is similar to LDA, however it employs a quadratic decision boundary, rather than a linear one.
```
def QDA(X_train, y_train, X_test):
    """Classification using QDA

    Args:
        X_train: Predictor or feature values used for training
        y_train: Response values used for training
        X_test: Predictor or feature values used for predicting the response values using the classifier
    Returns:
        y_predicted_train: The predicted response values for the training dataset
        y_predicted_test: The predicted response values for the test dataset
        y_score: The predicted class-1 probabilities for the test dataset
    """
    classifier = QuadraticDiscriminantAnalysis()
    classifier = classifier.fit(X_train, y_train)
    y_predicted_test = classifier.predict(X_test)
    y_predicted_train = classifier.predict(X_train)
    y_score = classifier.predict_proba(X_test)[:, 1]
    return y_predicted_train, y_predicted_test, y_score

model_evaluation(QDA, X_train, X_test, y_train, y_test, title='Quadratic Discriminant')
multiple_model_evaluation(QDA, X, y, title='Quadratic Discriminant')
```
## 5. Decision trees and random forest
### 5.1. Single decision tree
Decision trees are a non-parametric learning method used for both classification and regression. The advantages of decision trees are that they are easy to understand and they can be used for a broad range of data. However, the main disadvantages are that a single decision tree is easily overfitted, and hence even small perturbations in the data might result in a markedly different classifier. This problem is tackled by generating several decision trees and combining them into the final classifier. Here, we first train a single decision tree before looking into more sophisticated ensemble methods.
We fit a single decision tree with default parameters and predict the values of $\mathbf{y}$ based on the test data.
```
def DecisionTree(X_train, y_train, X_test):
    """Classification using Decision Tree

    Args:
        X_train: Predictor or feature values used for training
        y_train: Response values used for training
        X_test: Predictor or feature values used for predicting the response values using the classifier
    Returns:
        y_predicted_train: The predicted response values for the training dataset
        y_predicted_test: The predicted response values for the test dataset
        y_score: The predicted class-1 probabilities for the test dataset
    """
    classifier = tree.DecisionTreeClassifier()
    classifier = classifier.fit(X_train, y_train)
    y_predicted_test = classifier.predict(X_test)
    y_predicted_train = classifier.predict(X_train)
    y_score = classifier.predict_proba(X_test)[:, 1]
    return y_predicted_train, y_predicted_test, y_score

model_evaluation(DecisionTree, X_train, X_test, y_train, y_test, title='Decision Tree')
multiple_model_evaluation(DecisionTree, X, y, title='Decision Tree')
```
### 5.2. Random forest
A random forest is an ensemble of decision trees, each trained on a bootstrap sample of the data and considering a random subset of features at each split; the forest predicts by majority vote of its trees. Averaging many decorrelated trees reduces the variance, and hence the overfitting, of a single tree.
```
def RandomForest(X_train, y_train, X_test):
    """Classification using Random Forest

    Args:
        X_train: Predictor or feature values used for training
        y_train: Response values used for training
        X_test: Predictor or feature values used for predicting the response values using the classifier
    Returns:
        y_predicted_train: The predicted response values for the training dataset
        y_predicted_test: The predicted response values for the test dataset
        y_score: The predicted class-1 probabilities for the test dataset
    """
    rf_classifier = RandomForestClassifier(n_estimators=200)
    rf_classifier = rf_classifier.fit(X_train, y_train)
    y_predicted_test = rf_classifier.predict(X_test)
    y_predicted_train = rf_classifier.predict(X_train)
    y_score = rf_classifier.predict_proba(X_test)[:, 1]
    return y_predicted_train, y_predicted_test, y_score

model_evaluation(RandomForest, X_train, X_test, y_train, y_test, title='Random Forest')
multiple_model_evaluation(RandomForest, X, y, title='Random Forest')
```
## 6. Neural Network
A neural network, also known as a multi-layer perceptron, is a supervised learning algorithm that learns a function from a set of features and targets. A neural network can learn a non-linear function approximator, allowing classification of data. Between the input and output layers there is a set of non-linear hidden layers. The advantages of a neural network are its ability to learn non-linear models and to perform learning in real-time. However, a NN can suffer from varying validation accuracy induced by random weight initialization, has a large number of hyperparameters which require tuning, and is sensitive to feature scaling.
The neural_network function below makes use of the built-in MLPClassifier, which implements a multi-layer perceptron (MLP) algorithm trained using backpropagation.
As MLPs are sensitive to feature scaling, the data is scaled using the built-in StandardScaler for standardization. The same scaling is applied to the test set for meaningful results.
Most of the MLPClassifier's parameters were left at their defaults. However, several were modified in order to enhance performance. Firstly, the solver was set to _adam_, which refers to a stochastic gradient-based optimizer, the alpha regularization parameter was set to 1e-5, the number of hidden layers was set to 2, each with 70 neurons (numbers determined through experimentation), and `max_iter` was set to 1500.
```
def neural_network(X_train, X_test, y_train, y_test):
    '''
    This function takes in the input datasets, creates a neural network, trains and then tests it.
    Written by AndreiRoibu

    Args:
        X_train (ndarray): 2D array of input dataset used for training
        X_test (ndarray): 2D array of input dataset used for testing
        y_train (ndarray): 1D array of train labels
        y_test (ndarray): 1D array of test labels
    Returns:
        predicted_train (ndarray): 1D array of model-predicted labels for the train dataset
        predicted_test (ndarray): 1D array of model-predicted labels for the test dataset
        y_score (ndarray): predicted class-1 probabilities for the test dataset
    '''
    # MLPs are sensitive to feature scaling, so standardise using the training data only
    scaler = StandardScaler()
    scaler.fit(X_train)
    X_train = scaler.transform(X_train)
    X_test = scaler.transform(X_test)
    classifier = MLPClassifier(solver='adam', alpha=1e-5, hidden_layer_sizes=(70, 70), random_state=1, max_iter=1500)
    classifier.fit(X_train, y_train)
    predicted_train = classifier.predict(X_train)
    predicted_test = classifier.predict(X_test)
    y_score = classifier.predict_proba(X_test)[:, 1]
    return predicted_train, predicted_test, y_score

model_evaluation(neural_network, X_train, X_test, y_train, y_test, title='Neural Network')
multiple_model_evaluation(neural_network, X, y, title='Neural Network')
```
# Lecture 8: p-hacking and Multiple Comparisons
[J. Nathan Matias](https://github.com/natematias)
[SOC412](https://natematias.com/courses/soc412/), February 2019
In Lecture 8, we discussed Stephanie Lee's story about [Brian Wansink](https://www.buzzfeednews.com/article/stephaniemlee/brian-wansink-cornell-p-hacking#.btypwrDwe5), a food researcher who was found guilty of multiple kinds of research misconduct, including "p-hacking," where researchers keep looking for an answer until they find one. In this lecture, we will discuss what p-hacking is and what researchers can do to protect against it in our own work.
This example uses the [DeclareDesign](http://declaredesign.org/) library, which supports the simulation and evaluation of experiment designs. We will be using DeclareDesign to help with designing experiments in this class.
What can you do in your research to protect yourself against the risk of p-hacking or against reductions in the credibility of your research if people accuse you of p-hacking?
* Conduct a **power analysis** to choose a sample size that is large enough to observe the effect you're looking for (see below)
* If you have multiple statistical tests in each experiment, [adjust your analysis for multiple comparisons](https://egap.org/methods-guides/10-things-you-need-know-about-multiple-comparisons).
* [Pre-register](https://cos.io/prereg/) your study, being clear about whether your research is exploratory or confirmatory, and committing in advance to the statistical tests you're using to analyze the results
* Use cross-validation with training and holdout samples to take an exploratory + confirmatory approach (requires a much larger sample size, typically greater than 2x)
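The multiple-comparisons adjustment mentioned above is simple to sketch; the most conservative version is the Bonferroni correction, which compares each of the $m$ p-values against $\alpha/m$ (shown in Python for illustration, since it is just a few lines):

```python
# Bonferroni correction: control the family-wise error rate across m tests
def bonferroni_reject(p_values, alpha=0.05):
    m = len(p_values)
    return [p < alpha / m for p in p_values]

print(bonferroni_reject([0.001, 0.01, 0.03, 0.20]))
# [True, True, False, False] -- only p-values below 0.05/4 = 0.0125 survive
```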
# Load Libraries
```
options("scipen"=9, "digits"=4)
library(dplyr)
library(MASS)
library(ggplot2)
library(rlang)
library(corrplot)
library(Hmisc)
library(tidyverse)
library(viridis)
library(fabricatr)
library(DeclareDesign)
## Installed DeclareDesign 0.13 using the following command:
# install.packages("DeclareDesign", dependencies = TRUE,
# repos = c("http://R.declaredesign.org", "https://cloud.r-project.org"))
options(repr.plot.width=7, repr.plot.height=4)
set.seed(03456920)
sessionInfo()
```
# What is a p-value?
A p-value (which can be calculated differently for different kinds of statistical tests) is the probability, computed under the null hypothesis, of observing a result at least as extreme as the one obtained. When testing differences in means, we are usually testing the null hypothesis of no difference between the two distributions. In that case, the p-value is the probability of observing a difference between the sample means that is at least as extreme as the one observed.
You can think of the p-value as the probability represented by the area under the following t distribution of all of the possible outcomes for a given difference between means if the null hypothesis is true:

### Illustrating The Null Hypothesis
In the following case, I generate 1,000 pairs of normal samples with exactly the same mean and standard deviation, and then plot the differences between the sample means:
```
### GENERATE n.samples simulations at n.sample.size observations
### using normal distributions at the specified means
### and record the difference in means and the p value of the observations
#
# `@diff.df: the dataframe to pass in
# `@n.sample.size: the sample sizes to draw from a normal distribution
generate.n.samples <- function(diff.df, n.sample.size = 500){
    for(i in seq(nrow(diff.df))){
        row = diff.df[i,]
        a.dist = rnorm(n.sample.size, mean = row$a.mean, sd = row$a.sd)
        b.dist = rnorm(n.sample.size, mean = row$b.mean, sd = row$b.sd)
        t <- t.test(a.dist, b.dist)
        diff.df[i,]$p.value <- t$p.value
        diff.df[i,]$mean.diff <- mean(b.dist) - mean(a.dist)
    }
    diff.df
}
#expand.grid
n.samples = 1000
null.hypothesis.df = data.frame(a.mean = 1, a.sd = 1,
b.mean = 1, b.sd = 1,
id=seq(n.samples),
mean.diff = NA,
p.value = NA)
null.hypothesis.df <- generate.n.samples(null.hypothesis.df, 200)
ggplot(null.hypothesis.df, aes(mean.diff)) +
geom_histogram(binwidth=0.01) +
xlim(-1.2,1.2) +
ggtitle("Simulated Differences in means under the null hypothesis")
ggplot(null.hypothesis.df, aes(mean.diff, p.value, color=factor(p.value < 0.05))) +
geom_point() +
geom_hline(yintercept = 0.05) +
ggtitle("Simulated p values under the null hypothesis")
print("How often is the p-value < 0.05?")
summary(null.hypothesis.df$p.value < 0.05)
```
### Illustrating A Difference in Means (first with a small sample size)
```
#expand.grid
small.sample.diff.df = data.frame(a.mean = 1, a.sd = 1,
b.mean = 1.2, b.sd = 1,
id=seq(n.samples),
mean.diff = NA,
p.value = NA)
small.sample.diff.df <- generate.n.samples(small.sample.diff.df, 20)
ggplot(small.sample.diff.df, aes(mean.diff)) +
geom_histogram(binwidth=0.01) +
xlim(-1.2,1.2) +
ggtitle("Simulated Differences in means under a diff in means of 0.2 (n = 20)")
ggplot(small.sample.diff.df, aes(mean.diff, p.value, color=factor(p.value < 0.05))) +
geom_point() +
geom_hline(yintercept = 0.05) +
ggtitle("Simulated p values under a diff in means of 0.2 (n = 20)")
print("How often is the p-value < 0.05?")
summary(small.sample.diff.df$p.value < 0.05)
print("How often is the p-value < 0.05 while the estimate is < 0 (wrong sign)?")
nrow(subset(small.sample.diff.df, mean.diff < 0 & p.value < 0.05))
print("How often is the p-value >= 0.05 when the estimate is 0.2 or greater (false negative)?")
print(sprintf("%1.2f percent",
              nrow(subset(small.sample.diff.df, mean.diff >= 0.2 & p.value >= 0.05)) /
              nrow(small.sample.diff.df)*100))
print("What is the smallest positive, statistically-significant result?")
sprintf("%1.2f, which is greater than the true difference of 0.2",
min(subset(small.sample.diff.df, mean.diff>0 & p.value < 0.05)$mean.diff))
print("If we only published statistically-significant results, what would we think the true effect would be?")
sprintf("%1.2f, which is greater than the true difference of 0.2",
mean(subset(small.sample.diff.df, p.value < 0.05)$mean.diff))
print("If we published all experiment results, what would we think the true effect would be?")
sprintf("%1.2f, which is very close to the true difference of 0.2",
mean(small.sample.diff.df$mean.diff))
```
### Illustrating A Difference in Means (with a larger sample size)
```
#expand.grid
larger.sample.diff.df = data.frame(a.mean = 1, a.sd = 1,
b.mean = 1.2, b.sd = 1,
id=seq(n.samples),
mean.diff = NA,
p.value = NA)
larger.sample.diff.df <- generate.n.samples(larger.sample.diff.df, 200)
ggplot(larger.sample.diff.df, aes(mean.diff)) +
geom_histogram(binwidth=0.01) +
xlim(-1.2,1.2) +
ggtitle("Simulated Differences in means under a diff in means of 0.2 (n = 200)")
ggplot(larger.sample.diff.df, aes(mean.diff, p.value, color=factor(p.value < 0.05))) +
geom_point() +
geom_hline(yintercept = 0.05) +
ggtitle("Simulated p values under a diff in means of 0.2 (n = 200)")
print("If we only published statistically-significant results, what would we think the true effect would be?")
sprintf("%1.2f, which is greater than the true difference of 0.2",
mean(subset(larger.sample.diff.df, p.value < 0.05)$mean.diff))
print("How often is the p-value < 0.05?")
sprintf("%1.2f percent",
nrow(subset(larger.sample.diff.df,p.value < 0.05)) / nrow(larger.sample.diff.df)*100)
```
### Illustrating a Difference in Means (with an adequately large sample size)
```
adequate.sample.diff.df = data.frame(a.mean = 1, a.sd = 1,
b.mean = 1.2, b.sd = 1,
id=seq(n.samples),
mean.diff = NA,
p.value = NA)
adequate.sample.diff.df <- generate.n.samples(adequate.sample.diff.df, 400)
ggplot(adequate.sample.diff.df, aes(mean.diff, p.value, color=factor(p.value < 0.05))) +
geom_point() +
geom_hline(yintercept = 0.05) +
ggtitle("Simulated p values under a diff in means of 0.2 (n = 400)")
print("How often is the p-value < 0.05?")
sprintf("%1.2f percent",
nrow(subset(adequate.sample.diff.df,p.value < 0.05)) / nrow(adequate.sample.diff.df)*100)
print("If we only published statistically-significant results, what would we think the true effect would be?")
sprintf("%1.2f, which is greater than the true difference of 0.2",
mean(subset(adequate.sample.diff.df, p.value < 0.05)$mean.diff))
```
# The Problem of Multiple Comparisons
In the above example, I demonstrated that across 1,000 simulated samples under the null hypothesis and a decision rule of p < 0.05, roughly 5% of the results are statistically significant. The same is true for a single experiment with multiple outcome variables.
```
## Generate n normally distributed outcome variables with no difference on average
#
#` @num.samples: sample size for the dataframe
#` @num.columns: how many outcome variables to observe
#` @common.mean: the mean of the outcomes
#` @common.sd: the standard deviation of the outcomes
generate.n.outcomes.null <- function(num.samples, num.columns, common.mean, common.sd){
    df <- data.frame(id = seq(num.samples))
    for(i in seq(num.columns)){
        df[paste('row.', i, sep = "")] <- rnorm(num.samples, mean = common.mean, sd = common.sd)
    }
    df
}
```
### With 10 outcome variables, if we look for correlations between every pair of outcomes, we expect to see 5% false positives on average under the null hypothesis.
```
set.seed(487)
## generate the data
null.10.obs <- generate.n.outcomes.null(100, 10, 1, 3)
null.10.obs$id <- NULL
null.correlations <- cor(null.10.obs, method="pearson")
null.pvalues <- cor.mtest(null.10.obs, conf.level = 0.95, method="pearson")$p
corrplot(cor(null.10.obs, method="pearson"), sig.level = 0.05, p.mat = null.pvalues)
```
### With multiple comparisons, increasing the sample size does not make the problem go away. Here, we use a sample of 10000 instead of 100
```
null.10.obs.large <- generate.n.outcomes.null(10000, 10, 1, 3)
null.10.obs.large$id <- NULL
null.correlations <- cor(null.10.obs.large, method="pearson")
null.pvalues <- cor.mtest(null.10.obs.large, conf.level = 0.95, method="pearson")$p
corrplot(cor(null.10.obs.large, method="pearson"), sig.level = 0.05, p.mat = null.pvalues)
```
# Power Analysis
A power analysis is a process for deciding what sample size to use based on the chance of observing the minimum effect you are looking for in your study. This power analysis uses [DeclareDesign](http://declaredesign.org/). Another option is the [egap Power Analysis page.](https://egap.org/content/power-analysis-simulations-r)
(we will discuss this in further detail in a subsequent class)
```
mean.a <- 0
effect.b <- 0.1
sample.size <- 500
design <-
    declare_population(
        N = sample.size
    ) +
    declare_potential_outcomes(
        YA_Z_0 = rnorm(n = N, mean = mean.a, sd = 1),
        YA_Z_1 = rnorm(n = N, mean = mean.a + effect.b, sd = 1)
    ) +
    declare_assignment(num_arms = 2,
                       conditions = c("0", "1")) +
    declare_estimand(ate_YA_1_0 = effect.b) +
    declare_reveal(outcome_variables = c("YA")) +
    declare_estimator(YA ~ Z, estimand = "ate_YA_1_0")
design
diagnose_design(design, sims=500, bootstrap_sims=500)
```
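A rough Python analogue of the declared design (the simulation counts and variable names here are my own illustrative choices, not DeclareDesign output): simulate the two-arm experiment many times and record how often a t-test rejects at the 5% level; that rejection rate under the true effect is the power.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mean_a, effect_b, sample_size, sims = 0.0, 0.1, 500, 2000

rejections = 0
for _ in range(sims):
    z = rng.integers(0, 2, size=sample_size)      # random assignment to the two arms
    y = rng.normal(mean_a + effect_b * z, 1.0)    # outcomes with sd = 1, effect 0.1
    t, p = stats.ttest_ind(y[z == 1], y[z == 0])  # difference-in-means test
    rejections += p < 0.05
power = rejections / sims
print(power)
```

With an effect of 0.1 sd and 500 subjects the estimated power comes out low (around 0.2), which is exactly the kind of warning a power analysis is meant to deliver before running the study.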
---
```
import pandas as pd
```
# Classification
We'll take a tour of the methods for classification in sklearn. First let's load a toy dataset to use:
```
from sklearn.datasets import load_breast_cancer
breast = load_breast_cancer()
```
Let's take a look
```
# Convert it to a dataframe for better visuals
df = pd.DataFrame(breast.data)
df.columns = breast.feature_names
df
```
And now look at the targets
```
print(breast.target_names)
breast.target
```
## Classification Trees
Using the scikit-learn models is basically the same as in Julia's ScikitLearn.jl
```
from sklearn.tree import DecisionTreeClassifier
cart = DecisionTreeClassifier(max_depth=2, min_samples_leaf=140)
cart.fit(breast.data, breast.target)
```
Here's a helper function to plot the trees.
# Installing Graphviz (tedious)
## Windows
1. Download graphviz from https://graphviz.gitlab.io/_pages/Download/Download_windows.html
2. Install it by running the .msi file
3. Set the path variable:
(a) Go to Control Panel > System and Security > System > Advanced System Settings > Environment Variables > Path > Edit
(b) Add 'C:\Program Files (x86)\Graphviz2.38\bin'
4. Run `conda install graphviz`
5. Run `conda install python-graphviz`
## macOS and Linux
1. Run `brew install graphviz` (install `brew` from https://docs.brew.sh/Installation if you don't have it)
2. Run `conda install graphviz`
3. Run `conda install python-graphviz`
```
import graphviz
import sklearn.tree
def visualize_tree(sktree):
dot_data = sklearn.tree.export_graphviz(sktree, out_file=None,
filled=True, rounded=True,
special_characters=False,
feature_names=df.columns)
return graphviz.Source(dot_data)
visualize_tree(cart)
```
We can get the label predictions with the `.predict` method
```
labels = cart.predict(breast.data)
labels
```
And similarly the predicted probabilities with `.predict_proba`
```
probs = cart.predict_proba(breast.data)
probs
```
Just like in Julia, the probabilities are returned for each class
```
probs.shape
```
We can extract the second column of the probs by slicing, just like how we did it in Julia
```
probs = cart.predict_proba(breast.data)[:,1]
probs
```
To evaluate the model, we can use functions from `sklearn.metrics`
```
from sklearn.metrics import roc_auc_score, accuracy_score, confusion_matrix
roc_auc_score(breast.target, probs)
accuracy_score(breast.target, labels)
confusion_matrix(breast.target, labels)
from lazypredict.Supervised import LazyClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
data = load_breast_cancer()
X = data.data
y= data.target
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=.5,random_state =123)
clf = LazyClassifier(verbose=0,ignore_warnings=True, custom_metric=None)
models,predictions = clf.fit(X_train, X_test, y_train, y_test)
models
```
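All of the scores above are computed in-sample, which flatters every model. A hedged sketch of the usual out-of-sample check follows (the split fraction and `random_state` are arbitrary choices, and `min_samples_leaf=50` is loosened from 140 so the smaller training fold can still split):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

breast = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    breast.data, breast.target, test_size=0.3, random_state=42)

cart = DecisionTreeClassifier(max_depth=2, min_samples_leaf=50, random_state=0)
cart.fit(X_train, y_train)                      # fit on the training fold only

# score on data the model never saw
test_acc = accuracy_score(y_test, cart.predict(X_test))
test_auc = roc_auc_score(y_test, cart.predict_proba(X_test)[:, 1])
print(test_acc, test_auc)
```

Held-out scores like these, not the in-sample numbers, are what you should compare when choosing between models.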
## Random Forests and Boosting
We use random forests and boosting in the same way as CART
```
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators=100)
forest.fit(breast.data, breast.target)
labels = forest.predict(breast.data)
probs = forest.predict_proba(breast.data)[:,1]
print(roc_auc_score(breast.target, probs))
print(accuracy_score(breast.target, labels))
confusion_matrix(breast.target, labels)
from sklearn.ensemble import GradientBoostingClassifier
boost = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
boost.fit(breast.data, breast.target)
labels = boost.predict(breast.data)
probs = boost.predict_proba(breast.data)[:,1]
print(roc_auc_score(breast.target, probs))
print(accuracy_score(breast.target, labels))
confusion_matrix(breast.target, labels)
#!pip install xgboost
from xgboost import XGBClassifier
boost2 = XGBClassifier()
boost2.fit(breast.data, breast.target)
labels = boost2.predict(breast.data)
probs = boost2.predict_proba(breast.data)[:,1]
print(roc_auc_score(breast.target, probs))
print(accuracy_score(breast.target, labels))
confusion_matrix(breast.target, labels)
```
## Neural Networks
```
from sklearn.neural_network import MLPClassifier
mlp = MLPClassifier(max_iter=1000)
mlp.fit(breast.data, breast.target)
labels = mlp.predict(breast.data)
probs = mlp.predict_proba(breast.data)[:,1]
print(roc_auc_score(breast.target, probs))
print(accuracy_score(breast.target, labels))
confusion_matrix(breast.target, labels)
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import Pipeline
# load dataset
X = breast.data
Y = breast.target
# convert integers to dummy variables (i.e. one hot encoded)
dummy_y = np_utils.to_categorical(Y)
# define baseline model
def baseline_model():
# create model
model = Sequential()
model.add(Dense(8, input_dim=30, activation='relu'))
model.add(Dense(2, activation='softmax'))
# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
estimator = KerasClassifier(build_fn=baseline_model, epochs=10, batch_size=16, verbose=1)
kfold = KFold(n_splits=5, shuffle=True)
results = cross_val_score(estimator, X, dummy_y, cv=kfold)
print("Baseline: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
```
## Logistic Regression
We can also access logistic regression from sklearn
```
from sklearn.linear_model import LogisticRegression
logit = LogisticRegression(solver='liblinear')
logit.fit(breast.data, breast.target)
labels = logit.predict(breast.data)
probs = logit.predict_proba(breast.data)[:,1]
print(roc_auc_score(breast.target, probs))
print(accuracy_score(breast.target, labels))
confusion_matrix(breast.target, labels)
```
The sklearn implementation has options for regularization in logistic regression. You can choose between L1 and L2 regularization:


Note that this regularization is ad hoc and **not equivalent to robustness**. For a robust logistic regression, follow the approach from 15.680.
You control the regularization with the `penalty` and `C` hyperparameters. We can see that our model above used L2 regularization with $C=1$.
### Exercise
Try out unregularized logistic regression as well as L1 regularization. Which of the three options seems best? What if you try changing $C$?
```
# No regularization
logit = LogisticRegression(C=1e10, solver='liblinear')
logit.fit(breast.data, breast.target)
labels = logit.predict(breast.data)
probs = logit.predict_proba(breast.data)[:,1]
print(roc_auc_score(breast.target, probs))
print(accuracy_score(breast.target, labels))
confusion_matrix(breast.target, labels)
# L1 regularization
logit = LogisticRegression(C=100, penalty='l1', solver='liblinear')
logit.fit(breast.data, breast.target)
labels = logit.predict(breast.data)
probs = logit.predict_proba(breast.data)[:,1]
print(roc_auc_score(breast.target, probs))
print(accuracy_score(breast.target, labels))
confusion_matrix(breast.target, labels)
```
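One hedged way into the exercise: compare several values of `C` with 5-fold cross-validation rather than in-sample scores (the grid of `C` values below is an arbitrary illustration, not a prescription):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

breast = load_breast_cancer()
results = {}
for C in [0.01, 1, 100, 1e10]:          # small C = strong penalty; huge C ~ unregularized
    logit = LogisticRegression(C=C, penalty='l2', solver='liblinear')
    scores = cross_val_score(logit, breast.data, breast.target, cv=5)
    results[C] = scores.mean()          # mean 5-fold accuracy for this C
print(results)
```

Cross-validated accuracy is a fairer basis than training accuracy for picking `C`, since regularization only pays off out of sample.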
# Regression
Now let's take a look at regression in sklearn. Again we can start by loading up a dataset.
```
from sklearn.datasets import load_boston  # note: load_boston was deprecated in scikit-learn 1.0 and removed in 1.2
boston = load_boston()
print(boston.DESCR)
```
Take a look at the X
```
df = pd.DataFrame(boston.data)
df.columns = boston.feature_names
df
boston.target
```
## Regression Trees
We use regression trees in the same way as classification
```
from sklearn.tree import DecisionTreeRegressor
cart = DecisionTreeRegressor(max_depth=2, min_samples_leaf=5)
cart.fit(boston.data, boston.target)
visualize_tree(cart)
```
Like for classification, we get the predicted labels out with the `.predict` method
```
preds = cart.predict(boston.data)
preds
```
There are functions provided by `sklearn.metrics` to evaluate the predictions
```
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
print(mean_absolute_error(boston.target, preds))
print(mean_squared_error(boston.target, preds))
print(r2_score(boston.target, preds))
```
## Random Forests and Boosting
Random forests and boosting for regression work the same as in classification, except we use the `Regressor` version rather than `Classifier`.
### Exercise
Test and compare the (in-sample) performance of random forests and boosting on the Boston data with some sensible parameters.
```
from sklearn.ensemble import RandomForestRegressor
forest = RandomForestRegressor(n_estimators=100)
forest.fit(boston.data, boston.target)
preds = forest.predict(boston.data)
print(mean_absolute_error(boston.target, preds))
print(mean_squared_error(boston.target, preds))
print(r2_score(boston.target, preds))
from sklearn.ensemble import GradientBoostingRegressor
boost = GradientBoostingRegressor(n_estimators=100, learning_rate=0.2)
boost.fit(boston.data, boston.target)
preds = boost.predict(boston.data)
print(mean_absolute_error(boston.target, preds))
print(mean_squared_error(boston.target, preds))
print(r2_score(boston.target, preds))
from xgboost import XGBRegressor
boost2 = XGBRegressor()
boost2.fit(boston.data, boston.target)
preds = boost2.predict(boston.data)
print(mean_absolute_error(boston.target, preds))
print(mean_squared_error(boston.target, preds))
print(r2_score(boston.target, preds))
```
## Neural Networks
```
from sklearn.neural_network import MLPRegressor
mlp = MLPRegressor(max_iter=1000)
mlp.fit(boston.data, boston.target)
preds = mlp.predict(boston.data)
print(mean_absolute_error(boston.target, preds))
print(mean_squared_error(boston.target, preds))
print(r2_score(boston.target, preds))
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
from keras.utils import np_utils
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import Pipeline
# load dataset
X = boston.data
Y = boston.target
# define baseline model
def baseline_model():
# create model
model = Sequential()
model.add(Dense(13, input_dim=X.shape[1], kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))
# Compile model
model.compile(loss='mean_squared_error', optimizer='adam')
return model
estimator = KerasRegressor(build_fn=baseline_model, epochs=10, batch_size=16, verbose=1)
kfold = KFold(n_splits=5, shuffle=True)
results = cross_val_score(estimator, X, Y, cv=kfold)
print("Mean Squared Error: %.2f (%.2f)" % (abs(results.mean()), results.std()))
```
## Linear Regression Models
There are a large collection of linear regression models in sklearn. Let's start with a simple ordinary linear regression
```
from sklearn.linear_model import LinearRegression
linear = LinearRegression()
linear.fit(boston.data, boston.target)
preds = linear.predict(boston.data)
print(mean_absolute_error(boston.target, preds))
print(mean_squared_error(boston.target, preds))
print(r2_score(boston.target, preds))
```
We can also take a look at the betas:
```
linear.coef_
```
We can use regularized models as well. Here is ridge regression:
```
from sklearn.linear_model import Ridge
ridge = Ridge(alpha=10)
ridge.fit(boston.data, boston.target)
preds = ridge.predict(boston.data)
print(mean_absolute_error(boston.target, preds))
print(mean_squared_error(boston.target, preds))
print(r2_score(boston.target, preds))
ridge.coef_
```
And here is lasso
```
from sklearn.linear_model import Lasso
lasso = Lasso(alpha=1)
lasso.fit(boston.data, boston.target)
preds = lasso.predict(boston.data)
print(mean_absolute_error(boston.target, preds))
print(mean_squared_error(boston.target, preds))
print(r2_score(boston.target, preds))
lasso.coef_
```
There are many other linear regression models available. See the [linear model documentation](http://scikit-learn.org/stable/modules/linear_model.html) for more.
### Exercise
The elastic net is another linear regression method that combines ridge and lasso regularization. Try running it on this dataset, referring to the documentation as needed to learn how to use it and control the hyperparameters.
```
from sklearn.linear_model import ElasticNet
elastic = ElasticNet(alpha=1, l1_ratio=.7)
elastic.fit(boston.data, boston.target)
preds = elastic.predict(boston.data)
print(mean_absolute_error(boston.target, preds))
print(mean_squared_error(boston.target, preds))
print(r2_score(boston.target, preds))
elastic.coef_
?DecisionTreeClassifier
```
---
# Multiple Linear Regression with sklearn - Exercise Solution
You are given a real estate dataset.
Real estate is one of those examples that every regression course goes through, as it is extremely easy to understand and there is (almost always) a certain causal relationship to be found.
The data is located in the file: 'real_estate_price_size_year.csv'.
You are expected to create a multiple linear regression (similar to the one in the lecture), using the new data.
Apart from that, please:
- Display the intercept and coefficient(s)
- Find the R-squared and Adjusted R-squared
- Compare the R-squared and the Adjusted R-squared
- Compare the R-squared of this regression and the simple linear regression where only 'size' was used
- Using the model make a prediction about an apartment with size 750 sq.ft. from 2009
- Find the univariate (or multivariate if you wish - see the article) p-values of the two variables. What can you say about them?
- Create a summary table with your findings
In this exercise, the dependent variable is 'price', while the independent variables are 'size' and 'year'.
Good luck!
## Import the relevant libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from sklearn.linear_model import LinearRegression
```
## Load the data
```
data = pd.read_csv('real_estate_price_size_year.csv')
data.head()
data.describe()
```
## Create the regression
### Declare the dependent and the independent variables
```
x = data[['size','year']]
y = data['price']
```
### Regression
```
reg = LinearRegression()
reg.fit(x,y)
```
### Find the intercept
```
reg.intercept_
```
### Find the coefficients
```
reg.coef_
```
### Calculate the R-squared
```
reg.score(x,y)
```
### Calculate the Adjusted R-squared
```
# Let's use the handy function we created
def adj_r2(x,y):
r2 = reg.score(x,y)
n = x.shape[0]
p = x.shape[1]
adjusted_r2 = 1-(1-r2)*(n-1)/(n-p-1)
return adjusted_r2
adj_r2(x,y)
```
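The function above implements the standard adjusted R-squared formula, where $n$ is the number of observations and $p$ the number of predictors:

$$\bar{R}^2 = 1 - (1 - R^2)\,\frac{n-1}{n-p-1}$$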
### Compare the R-squared and the Adjusted R-squared
It seems that the R-squared is only slightly larger than the Adjusted R-squared, implying that we were not penalized much for the inclusion of 2 independent variables.
### Compare the Adjusted R-squared with the R-squared of the simple linear regression
Comparing the Adjusted R-squared with the R-squared of the simple linear regression (when only 'size' was used - a couple of lectures ago), we realize that 'Year' is not bringing too much value to the result.
### Making predictions
Find the predicted price of an apartment that has a size of 750 sq.ft. from 2009.
```
reg.predict([[750,2009]])
```
### Calculate the univariate p-values of the variables
```
from sklearn.feature_selection import f_regression
f_regression(x,y)
p_values = f_regression(x,y)[1]
p_values
p_values.round(3)
```
### Create a summary table with your findings
```
reg_summary = pd.DataFrame(data = x.columns.values, columns=['Features'])
reg_summary ['Coefficients'] = reg.coef_
reg_summary ['p-values'] = p_values.round(3)
reg_summary
```
It seems that 'Year' is not even significant, therefore we should remove it from the model.
---
# ENGR 1330 Computational Thinking with Data Science
Last GitHub Commit Date: 14 February 2021
## Lesson 8 The Pandas module
- About Pandas
- How to install
- Anaconda
- JupyterHub/Lab (on Linux)
- JupyterHub/Lab (on MacOS)
- JupyterHub/Lab (on Windoze)
- The Dataframe
- Primitives
- Using Pandas
- Create, Modify, Delete dataframes
- Slice Dataframes
- Conditional Selection
- Synthetic Programming (Symbolic Function Application)
- Files
- Access Files from a remote Web Server
- Get file contents
- Get the actual file
- Adaptations for encrypted servers (future semester)
---
### Special Script Blocks
```
%%html
<!--Script block to left align Markdown Tables-->
<style>
table {margin-left: 0 !important;}
</style>
```
---
## Objectives
1. To understand the **dataframe abstraction** as implemented in the Pandas library(module).
1. To be able to access and manipulate data within a dataframe
2. To be able to obtain basic statistical measures of data within a dataframe
2. Read/Write from/to files
1. MS Excel-type files (.xls,.xlsx,.csv) (LibreOffice files use the MS .xml standard)
2. Ordinary ASCII (.txt) files
3. Access files directly from a URL (advanced concept)
1. Using a wget-type function
2. Using a curl-type function
3. Using API keys (future versions)
### Pandas:
Pandas is the core library for dataframe manipulation in Python. It provides a high-performance, easy-to-use tabular data structure (the `DataFrame`) and tools for working with such structures. The library’s name is derived from the term ‘Panel Data’.
If you are curious about Pandas, this cheat sheet is recommended: [https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf](https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf)
#### Data Structure
The Primary data structure is called a dataframe. It is an **abstraction** where data are represented as a 2-dimensional mutable and heterogenous tabular data structure; much like a Worksheet in MS Excel. The structure itself is popular among statisticians and data scientists and business executives.
According to the marketing department
*"Pandas Provides rich data structures and functions designed to make working with data fast, easy, and expressive. It is useful in data manipulation, cleaning, and analysis; Pandas excels in performance and productivity "*
---
# The Dataframe
A data table is called a `DataFrame` in pandas (and other programming environments too). The figure below from [https://pandas.pydata.org/docs/getting_started/index.html](https://pandas.pydata.org/docs/getting_started/index.html) illustrates a dataframe model:

Each **column** (and each extracted **row**) in a dataframe is a `Series`; the header row and the index column are special.
Like MS Excel we can query the dataframe to find the contents of a particular `cell` using its **row name** and **column name**, or operate on entire **rows** and **columns**
To use pandas, we need to import the module.
## Computational Thinking Concepts
The CT concepts expressed within Pandas include:
- `Decomposition` : Data interpretation, manipulation, and analysis of Pandas dataframes is an act of decomposition -- although the dataframes can be quite complex.
- `Abstraction` : The dataframe is a data representation abstraction that allows for placeholder operations, later substituted with specific contents for a problem; enhances reuse and readability. We leverage the principle of algebraic replacement using these abstractions.
- `Algorithms` : Data interpretation, manipulation, and analysis of dataframes are generally implemented as part of a supervisory algorithm.
## Module Set-Up
In principle, Pandas should be available in a default Anaconda install
- You should not have to do any extra installation steps to install the library in Python
- You do have to **import** the library in your scripts
How to check
- Simply open a code cell and run `import pandas`; if the notebook does not protest (i.e. no pink block of error), you are good to go.
```
import pandas
```
If you do get an error, that means that you will have to install using `conda` or `pip`; you are on-your-own here! On the **content server** the process is:
1. Open a new terminal from the launcher
2. Change to root user `su` then enter the root password
3. `sudo -H /opt/jupyterhub/bin/python3 -m pip install pandas`
4. Wait until the install is complete; for security, user `compthink` is not in the `sudo` group
5. Verify the install by trying to execute `import pandas` as above.
The process above will be similar on a Macintosh, or on Windows if you did not use an Anaconda distribution. Best is to have a successful Anaconda install, or go to the [GoodJobUntilMyOrgansGetHarvested](https://apply.mysubwaycareer.com/us/en/).
If you have to do this kind of install, you will have to do some reading, some references I find useful are:
1. https://jupyterlab.readthedocs.io/en/stable/user/extensions.html
2. https://www.pugetsystems.com/labs/hpc/Note-How-To-Install-JupyterHub-on-a-Local-Server-1673/#InstallJupyterHub
3. https://jupyterhub.readthedocs.io/en/stable/installation-guide-hard.html (This is the approach on the content server which has a functioning JupyterHub)
### Dataframe-type Structure using primitive python
First let's construct a dataframe-like object using python primitives.
We will construct 3 lists, one for row names, one for column names, and one for the content.
```
import numpy
mytabular = numpy.random.randint(1,100,(5,4))
myrowname = ['A','B','C','D','E']
mycolname = ['W','X','Y','Z']
mytable = [['' for jcol in range(len(mycolname)+1)] for irow in range(len(myrowname)+1)] #non-null destination matrix, note the implied loop construction
```
The above builds a placeholder named `mytable` for the pseudo-dataframe.
Next we populate the table, using a for loop to write the column names in the first row, row names in the first column, and the table fill for the rest of the table.
```
for irow in range(1,len(myrowname)+1): # write the row names
mytable[irow][0]=myrowname[irow-1]
for jcol in range(1,len(mycolname)+1): # write the column names
mytable[0][jcol]=mycolname[jcol-1]
for irow in range(1,len(myrowname)+1): # fill the table (note the nested loop)
for jcol in range(1,len(mycolname)+1):
mytable[irow][jcol]=mytabular[irow-1][jcol-1]
```
Now lets print the table out by row and we see we have a very dataframe-like structure
```
for irow in range(0,len(myrowname)+1):
print(mytable[irow][0:len(mycolname)+1])
```
We can also query by row
```
print(mytable[3][0:len(mycolname)+1])
```
Or by column
```
for irow in range(0,len(myrowname)+1): #cannot use implied loop in a column slice
print(mytable[irow][2])
```
Or by row+column index; sort of looks like a spreadsheet syntax.
```
print(' ',mytable[0][3])
print(mytable[3][0],mytable[3][3])
```
# Now we shall create a proper dataframe
We will now do the same using pandas
```
mydf = pandas.DataFrame(numpy.random.randint(1,100,(5,4)), ['A','B','C','D','E'], ['W','X','Y','Z'])
mydf
```
We can also turn our table into a dataframe, notice how the constructor adds header row and index column
```
mydf1 = pandas.DataFrame(mytable)
mydf1
```
To get proper behavior, we can just reuse our original objects
```
mydf2 = pandas.DataFrame(mytabular,myrowname,mycolname)
mydf2
```
Why are `mydf` and `mydf2` different?
### Getting the shape of dataframes
The shape method, which is available after the dataframe is constructed, will return the row and column rank (count) of a dataframe.
```
mydf.shape
mydf1.shape
mydf2.shape
```
### Appending new columns
To append a column simply assign a value to a new column name to the dataframe
```
mydf['new']= 'NA'
mydf
```
## Appending new rows
This is sometimes a bit trickier but here is one way:
- create a copy of a row, give it a new name.
- concatenate it back into the dataframe.
```
newrow = mydf.loc[['E']].rename(index={"E": "X"}) # create a single row, rename the index
newtable = pandas.concat([mydf,newrow]) # concatenate the row to bottom of df - note the syntax
newtable
```
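Another way to append a row (sketched on a rebuilt copy of the frame, since the label being assigned must be new) is to assign a list directly to a fresh index label with `.loc`:

```python
import numpy
import pandas

# rebuild the small frame used above
mydf = pandas.DataFrame(numpy.random.randint(1, 100, (5, 4)),
                        ['A', 'B', 'C', 'D', 'E'], ['W', 'X', 'Y', 'Z'])
mydf.loc['X'] = [1, 2, 3, 4]   # assigning to a new index label appends a row
print(mydf.shape)
```

This mutates the frame in place, whereas the `concat` approach above builds a new object.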
### Removing Rows and Columns
To remove a column is straightforward, we use the drop method
```
newtable.drop('new', axis=1, inplace = True)
newtable
```
To remove a row, you really have got to want to; easiest is probably to create a new dataframe with the row removed
```
newtable = newtable.loc[['A','B','D','E','X']] # select all rows except C
newtable
# or just use drop with axis specify
newtable.drop('X', axis=0, inplace = True)
newtable
```
# Indexing
We have already been indexing, but a few examples follow:
```
newtable['X'] #Selecting a single column
newtable[['X','W']] #Selecting multiple columns
newtable.loc['E'] #Selecting rows based on label via loc[ ] indexer
newtable
newtable.loc[['E','D','B']] #Selecting multiple rows based on label via loc[ ] indexer
newtable.loc[['B','E','D'],['X','Y']] #Selecting elements via both rows and columns via loc[ ] indexer
```
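The `loc[ ]` indexer works by label; its positional counterpart `iloc[ ]` works by integer position. A small sketch on a frame built with known values:

```python
import numpy
import pandas

newtable = pandas.DataFrame(numpy.arange(20).reshape(5, 4),
                            ['A', 'B', 'C', 'D', 'E'], ['W', 'X', 'Y', 'Z'])
first_row = newtable.iloc[0]        # first row by position, regardless of its label
block = newtable.iloc[1:3, 0:2]     # rows 1-2 and columns 0-1, by position
print(first_row.name, block.shape)
```

Label-based `loc` and position-based `iloc` coexist; mixing them up is a common source of off-by-one surprises.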
# Conditional Selection
```
mydf = pandas.DataFrame({'col1':[1,2,3,4,5,6,7,8],
'col2':[444,555,666,444,666,111,222,222],
'col3':['orange','apple','grape','mango','jackfruit','watermelon','banana','peach']})
mydf
#What fruit corresponds to the number 555 in ‘col2’?
mydf[mydf['col2']==555]['col3']
#What fruit corresponds to the minimum number in ‘col2’?
mydf[mydf['col2']==mydf['col2'].min()]['col3']
```
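Conditions can also be combined, using `&` (and) and `|` (or); each condition needs its own parentheses:

```python
import pandas

mydf = pandas.DataFrame({'col1': [1, 2, 3, 4, 5, 6, 7, 8],
                         'col2': [444, 555, 666, 444, 666, 111, 222, 222],
                         'col3': ['orange', 'apple', 'grape', 'mango',
                                  'jackfruit', 'watermelon', 'banana', 'peach']})
# rows where col1 > 2 AND col2 equals 444
both = mydf[(mydf['col1'] > 2) & (mydf['col2'] == 444)]
# rows where col2 is 111 OR 555
either = mydf[(mydf['col2'] == 111) | (mydf['col2'] == 555)]
print(both['col3'].tolist(), either['col3'].tolist())
```

Note that plain Python `and`/`or` will not work here; the element-wise operators `&`/`|` are required.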
# Descriptor Functions
```
#Creating a dataframe from a dictionary
mydf = pandas.DataFrame({'col1':[1,2,3,4,5,6,7,8],
'col2':[444,555,666,444,666,111,222,222],
'col3':['orange','apple','grape','mango','jackfruit','watermelon','banana','peach']})
mydf
```
### `head` method
Returns the first few rows, useful to infer structure
```
#Returns only the first five rows
mydf.head()
```
### `info` method
Returns the data model (data column count, names, data types)
```
#Info about the dataframe
mydf.info()
```
### `describe` method
Returns summary statistics of each numeric column: the count, mean, and standard deviation, plus the minimum, the quartiles (25%, 50%, 75%), and the maximum.
Again useful to understand the structure of the columns.
```
#Statistics of the dataframe
mydf.describe()
```
### Counting and Sum methods
There are also methods for counts and sums by specific columns
```
mydf['col2'].sum() #Sum of a specified column
```
The `unique` method returns a list of unique values (filters out duplicates in the list, underlying dataframe is preserved)
```
mydf['col2'].unique() #Returns the list of unique values along the indexed column
```
The `nunique` method returns a count of unique values
```
mydf['col2'].nunique() #Returns the total number of unique values along the indexed column
```
The `value_counts()` method returns the count of each unique value (kind of like a histogram, but each value is the bin)
```
mydf['col2'].value_counts() #Returns the number of occurrences of each unique value
```
## Using functions in dataframes - symbolic apply
The power of **Pandas** is an ability to apply a function to each element of a dataframe series (or a whole frame) by a technique called symbolic (or synthetic programming) application of the function.
This employs principles of **pattern matching**, **abstraction**, and **algorithm development**; a holy trinity of Computational Thinking.
It's somewhat complicated but quite handy, best shown by an example:
```
def times2(x): # A prototype function to scalar multiply an object x by 2
return(x*2)
print(mydf)
print('Apply the times2 function to col2')
mydf['reallynew'] = mydf['col2'].apply(times2) #Symbolically apply the function to each element of column col2; the result is a new Series, assigned here to a new column
mydf
```
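A named prototype function is not required; an anonymous `lambda` performs the same symbolic apply (sketched on a small two-column frame):

```python
import pandas

mydf = pandas.DataFrame({'col1': [1, 2, 3, 4],
                         'col2': [444, 555, 666, 444]})
mydf['doubled'] = mydf['col2'].apply(lambda x: x * 2)       # scalar multiply each element
mydf['strlen'] = mydf['col2'].apply(lambda x: len(str(x)))  # digits in each value
print(mydf['doubled'].tolist())
```

The `lambda` form is handy for one-off transformations; a named function is better when the logic is reused or needs a docstring.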
## Sorts
```
mydf.sort_values('col2', ascending = True) #Sorting based on columns
mydf.sort_values('col3', ascending = True) #Lexicographic sort
```
# Aggregating (Grouping Values) dataframe contents
```
#Creating a dataframe from a dictionary
data = {
'key' : ['A', 'B', 'C', 'A', 'B', 'C'],
'data1' : [1, 2, 3, 4, 5, 6],
'data2' : [10, 11, 12, 13, 14, 15],
'data3' : [20, 21, 22, 13, 24, 25]
}
mydf1 = pandas.DataFrame(data)
mydf1
# Grouping and summing values in all the columns based on the column 'key'
mydf1.groupby('key').sum()
# Grouping and summing values in the selected columns based on the column 'key'
mydf1.groupby('key')[['data1', 'data2']].sum()
```
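`groupby` is not limited to `sum`; the `.agg()` method applies several aggregations at once (sketched here on the same dictionary-built data):

```python
import pandas

data = {'key': ['A', 'B', 'C', 'A', 'B', 'C'],
        'data1': [1, 2, 3, 4, 5, 6],
        'data2': [10, 11, 12, 13, 14, 15]}
mydf1 = pandas.DataFrame(data)
# several aggregations of data1 in one pass, one row per key
summary = mydf1.groupby('key')['data1'].agg(['sum', 'mean', 'max'])
print(summary)
```

The result is a small dataframe indexed by `key`, with one column per aggregation.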
# Filtering out missing values
Filtering and *cleaning* are often used to describe the process where data are removed -- cynically, sometimes data that do not support a narrative (typically for maintenance-of-profit applications). If the data are actually missing, that is a common situation where cleaning is clearly justified.
```
#Creating a dataframe from a dictionary
df = pandas.DataFrame({'col1':[1,2,3,4,None,6,7,None],
'col2':[444,555,None,444,666,111,None,222],
'col3':['orange','apple','grape','mango','jackfruit','watermelon','banana','peach']})
df
```
Below we drop any row that contains a `NaN` code.
```
df_dropped = df.dropna()
df_dropped
```
Below we replace `NaN` codes with some value, in this case 0
```
df_filled1 = df.fillna(0)
df_filled1
```
Below we replace `NaN` codes with some value, in this case the mean value of of the column in which the missing value code resides.
```
df_filled2 = df.fillna(df.mean(numeric_only=True)) # numeric_only skips the string column col3
df_filled2
```
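`fillna` also accepts a dictionary, so each column can receive its own fill value (sketched on a two-column version of the frame above; the fill choices are illustrative):

```python
import pandas

df = pandas.DataFrame({'col1': [1, 2, 3, 4, None, 6, 7, None],
                       'col2': [444, 555, None, 444, 666, 111, None, 222]})
# fill col1 with a constant, col2 with its own median
df_filled3 = df.fillna({'col1': 0, 'col2': df['col2'].median()})
print(df_filled3['col1'].tolist())
```

Per-column fills are usually more defensible than a single global constant, since columns rarely share units or scales.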
---
## Reading a File into a Dataframe
Pandas has methods to read common file types, such as `csv`,`xlsx`, and `json`.
Ordinary text files are also quite manageable.
On a machine you control you can write script to retrieve files from the internet and process them.
```
readfilecsv = pandas.read_csv('CSV_ReadingFile.csv') #Reading a .csv file
print(readfilecsv)
```
Similar to reading and writing .csv files, you can also read and write .xlsx files as below (useful to know this)
```
readfileexcel = pandas.read_excel('Excel_ReadingFile.xlsx', sheet_name='Sheet1', engine='openpyxl') #Reading a .xlsx file
print(readfileexcel)
```
# Writing a dataframe to file
```
#Creating and writing to a .csv file
readfilecsv = pandas.read_csv('CSV_ReadingFile.csv')
readfilecsv.to_csv('CSV_WritingFile1.csv')
readfilecsv = pandas.read_csv('CSV_WritingFile1.csv')
print(readfilecsv)
#Creating and writing to a .csv file by excluding row labels
readfilecsv = pandas.read_csv('CSV_ReadingFile.csv')
readfilecsv.to_csv('CSV_WritingFile2.csv', index = False)
readfilecsv = pandas.read_csv('CSV_WritingFile2.csv')
print(readfilecsv)
#Creating and writing to a .xlsx file
readfileexcel = pandas.read_excel('Excel_ReadingFile.xlsx', sheet_name='Sheet1')
readfileexcel.to_excel('Excel_WritingFile.xlsx', sheet_name='MySheet', index = False)
readfileexcel = pandas.read_excel('Excel_WritingFile.xlsx', sheet_name='MySheet')
print(readfileexcel)
```
---
## Downloading files from websites (optional)
This section shows how to get files from a remote computer. There are several ways to get the files, most importantly you need the FQDN to the file.
### Method 1: Get the actual file from a remote web server (unencrypted)
> - You know the FQDN to the file it will be in structure of "http://server-name/.../filename.ext"
> - The server is running ordinary (unencrypted) web services, i.e. `http://...`
We will need a module to interface with the remote server. Here we will use ``requests`` , so first we load the module
> You may need to install the module into your anaconda environment using the anaconda power shell, on my computer the commands are:
> - sudo -H /opt/jupyterhub/bin/python3 -m pip install requests
>
> Or:
> - sudo -H /opt/conda/envs/python/bin/python -m pip install requests
>
> You will have to do some reading, but with any luck something similar will work for you.
```
import requests # Module to process http/https requests
```
Now we will generate a ``GET`` request to the remote http server. I chose to do so using a variable to store the remote URL so I can reuse code in future projects. The ``GET`` request (an http/https method) is generated with the requests method ``get`` and assigned to an object named ``rget`` -- the name is arbitrary. Next we extract the file from the ``rget`` object and write it to a local file with the name of the remote file - essentially automating the download process. Then we import the ``pandas`` module.
```
remote_url="http://54.243.252.9/engr-1330-webroot/MyJupyterNotebooks/42-DataScience-EvaporationAnalysis/all_quads_gross_evaporation.csv" # set the url
rget = requests.get(remote_url, allow_redirects=True) # get the remote resource, follow imbedded links
open('all_quads_gross_evaporation.csv','wb').write(rget.content) # extract from the remote the contents, assign to a local file same name
import pandas as pd # Module to process dataframes (not absolutely needed but somewhat easier than using primatives, and gives graphing tools)
# verify file exists
! pwd
! ls -la
```
Now we can read the file contents and check its structure, before proceeding.
```
#evapdf = pd.read_csv("all_quads_gross_evaporation.csv",parse_dates=["YYYY-MM"]) # Read the file as a .CSV assign to a dataframe evapdf
evapdf = pd.read_csv("all_quads_gross_evaporation.csv")
evapdf.head() # check structure
```
Structure looks like a spreadsheet as expected; let's plot the time series for cell '911'
```
evapdf.plot.line(x='YYYY-MM',y='911') # Plot quadrant 911 evaporation time series
```
### Method 3: Get the actual file from an encrypted server
This section is saved for future semesters
### Method 1: Get data from a file on a remote server (unencrypted)
This section shows how to obtain data files from public URLs.
Prerequisites:
- You know the FQDN to the file; it will have the structure "http://server-name/.../filename.ext"
- The server is running ordinary (unencrypted) web services, i.e. `http://...`
#### Web Developer Notes
If you want to distribute files (web developers) the files need to be in the server webroot, but can be deep into the hierarchical structure.
Here we will do an example with a file that contains topographic data in XYZ format, without header information.
The first few lines of the remote file look like:
74.90959724 93.21251922 0
75.17907367 64.40278759 0
94.9935575 93.07951286 0
95.26234119 64.60091165 0
54.04976655 64.21159095 0
54.52914363 35.06934342 0
75.44993558 34.93079513 0
75.09317373 5.462959114 0
74.87357468 10.43130083 0
74.86249082 15.72938748 0
And importantly it is tab delimited.
The module to manipulate URLs in Python is called ``urllib``.
Google search to learn more; here we are using only a small component without exception trapping.
```
#Step 1: import needed modules to interact with the internet
from urllib.request import urlopen # import a method that will connect to a url and read file contents
import pandas #import pandas
```
This next code fragment sets a string called ``remote_url``; it is just a variable, and its name can be anything that honors Python naming rules.
Then the ``urllib`` function ``urlopen`` is employed with the read and decode methods, and the result is stored in an object named ``elevationXYZ``.
```
#Step 2: make the connection to the remote file (actually its implementing "bash curl -O http://fqdn/path ...")
remote_url = 'http://www.rtfmps.com/share_files/pip-corner-sumps.txt' #
elevationXYZ = urlopen(remote_url).read().decode().split() # Gets the file contents as a single vector of whitespace-separated tokens; the file is not retained locally
```
At this point the object exists as a single vector with hundreds of elements. We now need to structure the content. Here, using Python primitives and knowing how the data are supposed to look, we prepare variables to receive the structured results.
```
#Step 3: Python primitives to structure the data, or use fancy modules (probably easy in numpy)
howmany = len(elevationXYZ) # how long is the vector?
nrow = int(howmany/3)       # three values (x, y, z) per row
xyz = [[0 for j in range(3)] for i in range(nrow)] # null space to receive the structured data
```
Now that everything is ready, we can extract from the object the values we want into ``xyz``
```
#Step4 Now will build xyz as a matrix with 3 columns
index = 0
for irow in range(0,nrow):
xyz[irow][0]=float(elevationXYZ[index])
xyz[irow][1]=float(elevationXYZ[index+1])
xyz[irow][2]=float(elevationXYZ[index+2])
index += 3 #increment the index
```
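As the Step 3 comment hints, ``numpy`` can replace the whole loop. A minimal sketch, assuming the same whitespace-split token list (the toy ``tokens`` values below are hypothetical stand-ins for ``elevationXYZ``):

```python
import numpy as np

# Hypothetical token list standing in for elevationXYZ (x, y, z triples)
tokens = ['74.9', '93.2', '0', '75.1', '64.4', '0']

# Convert every token to float and reshape into rows of three columns
xyz = np.array(tokens, dtype=float).reshape(-1, 3)
print(xyz.shape)  # one row per (x, y, z) triple
```

The `-1` lets numpy infer the row count from the data length, so the same line works for any file whose token count is a multiple of three.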
``xyz`` is now a 3-column float array and can be treated as a data frame.
Here we use a ``pandas`` method to build the dataframe.
```
df = pandas.DataFrame(xyz)
```
Get some info, yep three columns (ordered triples to be precise!)
```
df.info()
```
And some summary statistics (meaningless for these data), but we have now taken data from the internet and prepared it for analysis.
```
df.describe()
```
And let's look at the first few rows
```
df.head()
```
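Because the dataframe was built from a plain list, its columns are just 0, 1, 2; naming them makes later code easier to read. A small sketch (the names X, Y, Z and the toy values are my choice, not from the original file):

```python
import pandas

# Toy ordered triples standing in for the downloaded xyz data
xyz = [[74.9, 93.2, 0.0], [75.1, 64.4, 0.0]]

# Name the columns at construction time
df = pandas.DataFrame(xyz, columns=['X', 'Y', 'Z'])
print(df.columns.tolist())
```

The same effect on an existing frame is `df.columns = ['X', 'Y', 'Z']`.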
---
```
# Preamble script block to identify host, user, and kernel
import sys
! hostname
! whoami
print(sys.executable)
print(sys.version)
print(sys.version_info)
```
---
```
#import dependencies
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.ticker import StrMethodFormatter
import numpy as np
from config import username,password
from sqlalchemy import create_engine
#create engine
engine = create_engine(f'postgresql://{username}:{password}@localhost:5432/employees')
connection = engine.connect()
employees = pd.read_sql('select * from employees', connection)
employees
```
# Create a histogram to visualize the most common salary ranges for employees.
```
#display salaries table
salaries = pd.read_sql('select * from salaries', connection)
salaries.head()
#finding the maximum value of salary
max_salary = salaries["salary"].max()
max_salary
#finding the minimum value of salary
min_salary = salaries["salary"].min()
min_salary
#calculate gap for 6 bins
bin_value = (max_salary - min_salary)/5
bin_value
#each bin spans $18K (the bin_value computed above), matching bins_list below
salary_df = salaries["salary"]
salary_df
#create histogram for salary
#set bins_list
bins_list = [40000,58000,76000,94000,112000,130000]
ax = salary_df.hist(bins=bins_list)
plt.grid(axis='y', alpha=0.75)
plt.xlabel('Salary Range',fontsize=15)
plt.ylabel('Frequency',fontsize=15)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.title('Most Common Salary Range',fontsize=15)
plt.show()
```
# Create a bar chart of average salary by title.
```
#display salaries table
salaries = pd.read_sql('select * from salaries', connection)
salaries.head()
#display titles table
titles = pd.read_sql('select * from titles', connection)
titles.head()
#display employees table
employees= pd.read_sql('select * from employees', connection)
employees.head()
#Merge salaries table and employees table
combined_df = pd.merge(salaries, employees, on="emp_no")
combined_df
#rename emp_title_id to title_id
combined_df = combined_df.rename(columns={"emp_title_id": "title_id"})
#display combined df
combined_df.head()
#combine combined df with titles table
combined_df = pd.merge(combined_df, titles, on="title_id")
combined_df
#create a dataframe group by title
salary_by_title = combined_df.groupby("title").mean(numeric_only=True)
salary_by_title
#drop emp_no column
salary_by_title = salary_by_title.drop(columns = "emp_no")
salary_by_title
# Reset Index
salary_by_title = salary_by_title.reset_index()
salary_by_title
# x_axis, y_axis & Tick Locations
x_axis = salary_by_title["title"]
ticks = np.arange(len(x_axis))
y_axis = salary_by_title["salary"]
# Create Bar Chart Based on Above Data
plt.bar(x_axis, y_axis, align="center", alpha=0.5)
# Create Ticks for Bar Chart's x_axis
plt.xticks(ticks, x_axis, rotation="vertical")
# Set Labels & Title
plt.ylabel("Salaries")
plt.xlabel("Titles")
plt.title("Average Salary by Title")
# Show plot
plt.show()
```
# Analysis
1. Most employees are paid within the salary range of $40,000 to $60,000.
2. The average salary by title falls between $45,000 and $60,000.
3. The histogram shows that some salaries are paid well beyond the average salary.
---
# Measles Incidence in Altair
This is an example of reproducing the Wall Street Journal's famous [Measles Incidence Plot](http://graphics.wsj.com/infectious-diseases-and-vaccines/#b02g20t20w15) in Python using [Altair](http://github.com/ellisonbg/altair/).
## The Data
We'll start by downloading the data. Fortunately, others have made the data available in an easily digestible form; a github search revealed the dataset in CSV format here:
```
import pandas as pd
url = 'https://raw.githubusercontent.com/blmoore/blogR/master/data/measles_incidence.csv'
data = pd.read_csv(url, skiprows=2, na_values='-')
data.head()
```
## Data Munging with Pandas
This data needs to be cleaned-up a bit; we can do this with the Pandas library.
We first need to aggregate the incidence data by year:
```
annual = data.drop('WEEK', axis=1).groupby('YEAR').sum()
annual.head()
```
Next, because Altair is built to handle data where each row corresponds to a single sample, we will stack the data, re-labeling the columns for clarity:
```
measles = annual.reset_index()
measles = measles.melt('YEAR', var_name='state', value_name='incidence')
measles.head()
```
## Initial Visualization
Now we can use Altair's syntax for generating a heat map:
```
import altair as alt
alt.Chart(measles).mark_rect().encode(
x='YEAR:O',
y='state:N',
color='incidence'
).properties(
width=600,
height=400
)
```
## Adjusting Aesthetics
All operative components of the visualization appear above; we now just have to adjust the aesthetic features to reproduce the original plot.
Altair allows a wide range of flexibility for such adjustments, including size and color of markings, axis labels and titles, and more.
Here is the data visualized again with a number of these adjustments:
```
# Define a custom colormap using hex codes & HTML color names
colormap = alt.Scale(domain=[0, 100, 200, 300, 1000, 3000],
range=['#F0F8FF', 'cornflowerblue', 'mediumseagreen', '#FFEE00', 'darkorange', 'firebrick'],
type='sqrt')
alt.Chart(measles).mark_rect().encode(
alt.X('YEAR:O', axis=alt.Axis(title=None, ticks=False)),
alt.Y('state:N', axis=alt.Axis(title=None, ticks=False)),
alt.Color('incidence:Q', sort='ascending', scale=colormap, legend=None)
).properties(
width=800,
height=500
)
```
The result clearly shows the impact of the measles vaccine introduced in the mid-1960s.
## Layering & Selections
Here is another view of the data, using layering and selections to allow zooming in.
```
hover = alt.selection_single(on='mouseover', nearest=True, fields=['state'], empty='none')
line = alt.Chart().mark_line().encode(
alt.X('YEAR:Q',
scale=alt.Scale(zero=False),
axis=alt.Axis(format='f', title='year')
),
alt.Y('incidence:Q', axis=alt.Axis(title='measles incidence')),
detail='state:N',
opacity=alt.condition(hover, alt.value(1.0), alt.value(0.1))
).properties(
width=800,
height=300
)
point = line.mark_point().encode(
opacity=alt.value(0.0)
).properties(
selection=hover
)
mean = alt.Chart().mark_line().encode(
x=alt.X('YEAR:Q', scale=alt.Scale(zero=False)),
y='mean(incidence):Q',
color=alt.value('black')
)
text = alt.Chart().mark_text(align='right').encode(
x='min(YEAR):Q',
y='mean(incidence):Q',
text='state:N',
detail='state:N',
opacity=alt.condition(hover, alt.value(1.0), alt.value(0.0))
)
alt.layer(point, line, mean, text, data=measles).interactive(bind_y=False)
```
---
# Sparkify Project Workspace
This workspace contains a tiny subset (128MB) of the full dataset available (12GB). Feel free to use this workspace to build your project, or to explore a smaller subset with Spark before deploying your cluster on the cloud. Instructions for setting up your Spark cluster is included in the last lesson of the Extracurricular Spark Course content.
You can follow the steps below to guide your data analysis and model building portion of this project.
```
# import libraries
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from pyspark.sql import SparkSession
from pyspark.ml.feature import RegexTokenizer, VectorAssembler, Normalizer, StandardScaler
from pyspark.sql.functions import isnan, when, count, col
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType
# create a Spark session
spark = SparkSession \
.builder \
.appName("Data Wrangling") \
.getOrCreate()
# to check spark WebUI: http://localhost:4040/jobs/ or call directly 'spark' to get a link
spark
```
# Load and Clean Dataset
In this workspace, the mini-dataset file is `mini_sparkify_event_data.json`. Load and clean the dataset, checking for invalid or missing data - for example, records without userids or sessionids.
```
pwd
path = '/home/freemo/Projects/largeData/mini_sparkify_event_data.json'
df = spark.read.json(path)   # read the event log from the path defined above
df.take(1)
print((df.count(), len(df.columns)))
df.show(5)
spark.conf.set('spark.sql.repl.eagerEval.enabled', True)
df
df.show(1, vertical=True)
df.printSchema()
df.select([count(when(col(c).isNull(), c)).alias(c) for c in df.columns]).show()
df.select([count(when(isnan(c) | col(c).isNull(), c)).alias(c) for c in df.columns]).show()
# missing values in userID
df.select([count(when(isnan('userID'),True))]).show()
# missing values in sessionID
df.select([count(when(isnan('sessionID'),True))]).show()
df.select([count(when(isnan('userID') | col('userID').isNull() , True))]).show()
df.select([count(when(isnan('sessionID') | col('sessionID').isNull() , True))]).show()
df.distinct().show()
```
# Exploratory Data Analysis
When you're working with the full dataset, perform EDA by loading a small subset of the data and doing basic manipulations within Spark. In this workspace, you are already provided a small subset of data you can explore.
### Define Churn
Once you've done some preliminary analysis, create a column `Churn` to use as the label for your model. I suggest using the `Cancellation Confirmation` events to define your churn, which happen for both paid and free users. As a bonus task, you can also look into the `Downgrade` events.
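The labeling idea can be sketched outside Spark first. Here is a minimal pandas version with a hypothetical toy event log (the column names `userId` and `page` are assumptions about the dataset; a Spark version would use `withColumn` plus a join or window instead):

```python
import pandas as pd

# Toy event log standing in for the Sparkify data (hypothetical values)
events = pd.DataFrame({
    'userId': ['1', '1', '2', '2', '3'],
    'page':   ['NextSong', 'Cancellation Confirmation', 'NextSong', 'NextSong', 'Home'],
})

# A user churned if they ever hit the Cancellation Confirmation page
churned = set(events.loc[events['page'] == 'Cancellation Confirmation', 'userId'])

# Flag every event row for churned users with a 0/1 Churn label
events['Churn'] = events['userId'].isin(churned).astype(int)
print(events.groupby('userId')['Churn'].max())
```

Only user 1 is flagged here; the same per-user max gives one label per user for modeling.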
### Explore Data
Once you've defined churn, perform some exploratory data analysis to observe the behavior for users who stayed vs users who churned. You can start by exploring aggregates on these two groups of users, observing how much of a specific action they experienced per a certain time unit or number of songs played.
# Feature Engineering
Once you've familiarized yourself with the data, build out the features you find promising to train your model on. To work with the full dataset, you can follow the following steps.
- Write a script to extract the necessary features from the smaller subset of data
- Ensure that your script is scalable, using the best practices discussed in Lesson 3
- Try your script on the full data set, debugging your script if necessary
If you are working in the classroom workspace, you can just extract features based on the small subset of data contained here. Be sure to transfer over this work to the larger dataset when you work on your Spark cluster.
# Modeling
Split the full dataset into train, test, and validation sets. Test out several of the machine learning methods you learned. Evaluate the accuracy of the various models, tuning parameters as necessary. Determine your winning model based on test accuracy and report results on the validation set. Since the churned users are a fairly small subset, I suggest using F1 score as the metric to optimize.
# Final Steps
Clean up your code, adding comments and renaming variables to make the code easier to read and maintain. Refer to the Spark Project Overview page and Data Scientist Capstone Project Rubric to make sure you are including all components of the capstone project and meet all expectations. Remember, this includes thorough documentation in a README file in a Github repository, as well as a web app or blog post.
---
```
# Install package
%pip install --upgrade portfoliotools
from portfoliotools.screener.stock_screener import StockScreener
from portfoliotools.screener.utility.util import get_ticker_list, getHistoricStockPrices, get_nse_index_list, get_port_ret_vol_sr
from portfoliotools.screener.stock_screener import PortfolioStrategy, ADF_Reg_Model
from tqdm import tqdm
import pandas as pd
import numpy as np
import seaborn as sns
import plotly.graph_objects as go
import matplotlib.pyplot as plt
from IPython.display import display_html
from pandas.plotting import register_matplotlib_converters
from plotly.subplots import make_subplots
from datetime import datetime
import warnings
warnings.filterwarnings("ignore")
register_matplotlib_converters()
%matplotlib inline
sns.set()
pd.options.display.max_columns = None
pd.options.display.max_rows = None
```
### <font color = 'red'>USER INPUT</font>
```
tickers = get_ticker_list()
asset_list = [ticker['Ticker'] for ticker in tickers]
#asset_list.remove("GAIL")
#asset_list = ['ICICIBANK', 'HDFC', 'HDFCBANK', 'INFY', 'RELIANCE', 'ASIANPAINT', 'TCS', 'MARUTI', 'TATAMOTORS', 'CIPLA']
asset_list = ["ABBOTINDIA", "ACC", "ADANIENT", "ADANIGREEN", "ADANITRANS", "ALKEM", "AMBUJACEM", "APOLLOHOSP", "AUROPHARMA", "DMART", "BAJAJHLDNG", "BANDHANBNK", "BERGEPAINT", "BIOCON", "BOSCHLTD", "CADILAHC", "COLPAL", "DABUR", "DLF", "GAIL", "GODREJCP", "HAVELLS", "HDFCAMC", "HINDPETRO", "ICICIGI", "ICICIPRULI", "IGL", "INDUSTOWER", "NAUKRI", "INDIGO", "JUBLFOOD", "LTI", "LUPIN", "MARICO", "MOTHERSUMI", "MRF", "MUTHOOTFIN", "NMDC", "PETRONET", "PIDILITIND", "PEL", "PGHH", "PNB", "SBICARD", "SIEMENS", "TORNTPHARM", "UBL", "MCDOWELL-N", "VEDL", "YESBANK"]
cob = None #datetime(2020,2,1) # COB Date
```
### <font color = 'blue'>Portfolio Strategy Tools</font>
```
strat = PortfolioStrategy(asset_list, period = 1000, cob = cob)
```
#### Correlation Matrix
```
fig = strat.plot_correlation_matrix()
fig.show()
```
#### Correlated Stocks
```
corr_pair = strat.get_correlation_pair()
print("Highly Correlated:")
display_html(corr_pair[corr_pair['Correlation'] > .95])
print("\nInversely Correlated:")
display_html(corr_pair[corr_pair['Correlation'] < -.90])
```
#### <font color = 'black'>Efficient Frontier</font>
```
market_portfolio = strat.get_efficient_market_portfolio(plot = True, num_ports = 500, show_frontier =True) #plot =True to plot Frontier
```
#### Market Portfolio
```
market_portfolio = market_portfolio.T
market_portfolio = market_portfolio[market_portfolio['MP'] != 0.00]
plt.pie(market_portfolio.iloc[:-3,0], labels=market_portfolio.index.tolist()[:-3])
market_portfolio
```
**Ticker Performance**
```
strat.calcStat(format_result = True)
```
---
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
from __future__ import print_function
import math
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import time
from pydrake.solvers.mathematicalprogram import MathematicalProgram, Solve
from pydrake.solvers.ipopt import IpoptSolver
mp = MathematicalProgram()
xy = mp.NewContinuousVariables(2, "xy")
#def constraint(xy):
# return np.array([xy[0]*xy[0] + 2.0*xy[1]*xy[1]])
#constraint_bounds = (np.array([0.]), np.array([1.]))
#mp.AddConstraint(constraint, constraint_bounds[0], constraint_bounds[1], xy)
def constraint(xy):
theta = 1.0
return np.array([[np.cos(theta), -np.sin(theta)],
[np.sin(theta), np.cos(theta)]]).dot(
np.array([xy[0], xy[1]]))
constraint_bounds = (np.array([-0.5, -0.5]), np.array([0.5, 0.5]))
mp.AddConstraint(constraint, constraint_bounds[0], constraint_bounds[1], xy)
def cost(xy):
return xy[0]*1.0 + xy[1]*1.0
mp.AddCost(cost, xy)
#solver = IpoptSolver()
#result = solver.Solve(mp, None, None)
result = Solve(mp)
xystar = result.GetSolution()
print("Successful: ", result.is_success())
print("Solver: ", result.get_solver_id().name())
print("xystar: ", xystar)
# Demo of pulling costs / constraints from MathematicalProgram
# and evaluating them / getting gradients.
from pydrake.forwarddiff import gradient, jacobian
costs = mp.GetAllCosts()
total_cost_gradient = np.zeros(xystar.shape)
for cost in costs:
print("Cost: ", cost)
print("Eval at xystar: ", cost.evaluator().Eval(xystar))
grad = gradient(cost.evaluator().Eval, xystar)
print("Gradient at xystar: ", grad)
total_cost_gradient += grad
constraints = mp.GetAllConstraints()
total_constraint_gradient = np.zeros(xystar.shape)
for constraint in constraints:
print("Constraint: ", constraint)
val = constraint.evaluator().Eval(xystar)
print("Eval at xystar: ", val)
jac = jacobian(constraint.evaluator().Eval, xystar)
print("Gradient at xystar: ", jac)
total_constraint_gradient -= (val <= constraint_bounds[0] + 1E-6).dot(jac)
total_constraint_gradient += (val >= constraint_bounds[1] - 1E-6).dot(jac)
if np.any(total_cost_gradient):
total_cost_gradient /= np.linalg.norm(total_cost_gradient)
if np.any(total_constraint_gradient):
total_constraint_gradient /= np.linalg.norm(total_constraint_gradient)
print("Total cost grad dir: ", total_cost_gradient)
print("Total constraint grad dir: ", total_constraint_gradient)
# Draw feasible region
x_bounds = [-2., 2.]
y_bounds = [-2., 2.]
n_pts = [200, 300]
X, Y = np.meshgrid(np.linspace(x_bounds[0], x_bounds[1], n_pts[0]),
np.linspace(y_bounds[0], y_bounds[1], n_pts[1]),
indexing="ij")
vals = np.ones(n_pts)
for constraint in mp.GetAllConstraints():
for i in range(n_pts[0]):
for j in range(n_pts[1]):
vals_here = constraint.evaluator().Eval(np.array([X[i, j], Y[i, j]]))
vals[i, j] = (
np.all(vals_here >= constraint.evaluator().lower_bound()) and
np.all(vals_here <= constraint.evaluator().upper_bound())
)
plt.imshow(vals, extent=x_bounds+y_bounds)
arrow_cost = plt.arrow(
xystar[0], xystar[1],
total_cost_gradient[0]/2., total_cost_gradient[1]/2.,
width=0.05, color="g")
arrow_constraint = plt.arrow(
xystar[0], xystar[1],
total_constraint_gradient[0]/2., total_constraint_gradient[1]/2.,
width=0.05, color="r")
plt.legend([arrow_cost, arrow_constraint, ], ["Cost Increase Dir", "Constraint Violation Dir"]);
```
---
# Bayes's Theorem
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
In the previous chapter, we derived Bayes's Theorem:
$$P(A|B) = \frac{P(A) P(B|A)}{P(B)}$$
As an example, we used data from the General Social Survey and Bayes's Theorem to compute conditional probabilities.
But since we had the complete dataset, we didn't really need Bayes's Theorem.
It was easy enough to compute the left side of the equation directly, and no easier to compute the right side.
But often we don't have a complete dataset, and in that case Bayes's Theorem is more useful. In this chapter, we'll use it to solve several more challenging problems related to conditional probability.
## The Cookie Problem
We'll start with a thinly disguised version of an [urn problem](https://en.wikipedia.org/wiki/Urn_problem):
> Suppose there are two bowls of cookies.
>
> * Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies.
>
> * Bowl 2 contains 20 vanilla cookies and 20 chocolate cookies.
>
> Now suppose you choose one of the bowls at random and, without looking, choose a cookie at random. If the cookie is vanilla, what is the probability that it came from Bowl 1?
What we want is the conditional probability that we chose from Bowl 1 given that we got a vanilla cookie, $P(B_1 | V)$.
But what we get from the statement of the problem is:
* The conditional probability of getting a vanilla cookie, given that we chose from Bowl 1, $P(V | B_1)$ and
* The conditional probability of getting a vanilla cookie, given that we chose from Bowl 2, $P(V | B_2)$.
Bayes's Theorem tells us how they are related:
$$P(B_1|V) = \frac{P(B_1)~P(V|B_1)}{P(V)}$$
The term on the left is what we want. The terms on the right are:
- $P(B_1)$, the probability that we chose Bowl 1,
unconditioned by what kind of cookie we got.
Since the problem says we chose a bowl at random,
we assume $P(B_1) = 1/2$.
- $P(V|B_1)$, the probability of getting a vanilla cookie
from Bowl 1, which is 3/4.
- $P(V)$, the probability of drawing a vanilla cookie from
either bowl.
To compute $P(V)$, we can use the law of total probability:
$$P(V) = P(B_1)~P(V|B_1) ~+~ P(B_2)~P(V|B_2)$$
Plugging in the numbers from the statement of the problem, we have
$$P(V) = (1/2)~(3/4) ~+~ (1/2)~(1/2) = 5/8$$
We can also compute this result directly, like this:
* Since we had an equal chance of choosing either bowl and the bowls contain the same number of cookies, we had the same chance of choosing any cookie.
* Between the two bowls there are 50 vanilla and 30 chocolate cookies, so $P(V) = 5/8$.
Finally, we can apply Bayes's Theorem to compute the posterior probability of Bowl 1:
$$P(B_1|V) = (1/2)~(3/4)~/~(5/8) = 3/5$$
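That arithmetic is easy to check exactly with `Fraction` (a quick sanity check, not part of the original text):

```python
from fractions import Fraction

prior = Fraction(1, 2)       # P(B1): we chose a bowl at random
likelihood = Fraction(3, 4)  # P(V|B1): 30 of 40 cookies in Bowl 1 are vanilla
prob_data = Fraction(5, 8)   # P(V): from the law of total probability

# Bayes's Theorem
posterior = prior * likelihood / prob_data
print(posterior)  # 3/5
```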
This example demonstrates one use of Bayes's theorem: it provides a
way to get from $P(B|A)$ to $P(A|B)$.
This strategy is useful in cases like this where it is easier to compute the terms on the right side than the term on the left.
## Diachronic Bayes
There is another way to think of Bayes's theorem: it gives us a way to
update the probability of a hypothesis, $H$, given some body of data, $D$.
**This interpretation is "diachronic", which means "related to change over time"; in this case, the probability of the hypotheses changes as we see new data.**
Rewriting Bayes's theorem with $H$ and $D$ yields:
$$P(H|D) = \frac{P(H)~P(D|H)}{P(D)}$$
In this interpretation, each term has a name:
- $P(H)$ is the probability of the hypothesis before we see the data, called the prior probability, or just **prior**.
- $P(H|D)$ is the probability of the hypothesis after we see the data, called the **posterior**.
- $P(D|H)$ is the probability of the data under the hypothesis, called the **likelihood**.
- $P(D)$ is the **total probability of the data**, under any hypothesis.
Sometimes we can compute the prior based on background information. For example, the cookie problem specifies that we choose a bowl at random with equal probability.
In other cases the prior is subjective; that is, reasonable people might disagree, either because they use different background information or because they interpret the same information differently.
The likelihood is usually the easiest part to compute. In the cookie
problem, we are given the number of cookies in each bowl, so we can compute the probability of the data under each hypothesis.
Computing the total probability of the data can be tricky.
It is supposed to be the probability of seeing the data under any hypothesis at all, but it can be hard to nail down what that means.
Most often we simplify things by specifying a set of hypotheses that
are:
* Mutually exclusive, which means that only one of them can be true, and
* Collectively exhaustive, which means one of them must be true.
When these conditions apply, we can compute $P(D)$ using the law of total probability. For example, with two hypotheses, $H_1$ and $H_2$:
$$P(D) = P(H_1)~P(D|H_1) + P(H_2)~P(D|H_2)$$
And more generally, with any number of hypotheses:
$$P(D) = \sum_i P(H_i)~P(D|H_i)$$
The process in this section, using data and a prior probability to compute a posterior probability, is called a **Bayesian update**.
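The update is easy to write out in plain Python before the Bayes tables of the next section; a minimal sketch using the cookie-problem numbers:

```python
from fractions import Fraction

priors = [Fraction(1, 2), Fraction(1, 2)]       # P(H1), P(H2)
likelihoods = [Fraction(3, 4), Fraction(1, 2)]  # P(D|H1), P(D|H2)

# Law of total probability: P(D) = sum_i P(Hi) P(D|Hi)
prob_data = sum(p * l for p, l in zip(priors, likelihoods))

# Bayesian update: posterior_i = P(Hi) P(D|Hi) / P(D)
posteriors = [p * l / prob_data for p, l in zip(priors, likelihoods)]
print(prob_data, posteriors)  # 5/8 [Fraction(3, 5), Fraction(2, 5)]
```

The posteriors necessarily sum to 1, because each product was divided by their total.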
## Bayes Tables
A convenient tool for doing a Bayesian update is a Bayes table.
You can write a Bayes table on paper or use a spreadsheet, but in this section I'll use a Pandas `DataFrame`.
First I'll make an empty `DataFrame` **with one row for each hypothesis**:
```
import pandas as pd
table = pd.DataFrame(index=['Bowl 1', 'Bowl 2'])
```
Now I'll add a column to represent the priors:
```
table['prior'] = 1/2, 1/2
table
```
And a column for the likelihoods:
```
table['likelihood'] = 3/4, 1/2
table
```
Here we see a difference from the previous method: we compute likelihoods for both hypotheses, not just Bowl 1:
* The chance of getting a vanilla cookie from Bowl 1 is 3/4.
* The chance of getting a vanilla cookie from Bowl 2 is 1/2.
You might notice that the likelihoods don't add up to 1. That's OK; each of them is a probability conditioned on a different hypothesis.
There's no reason they should add up to 1 and no problem if they don't.
The next step is similar to what we did with Bayes's Theorem; we multiply the priors by the likelihoods:
```
table['unnorm'] = table['prior'] * table['likelihood']
table
```
I call the result `unnorm` because these values are the **"unnormalized posteriors"**. Each of them is the product of a prior and a likelihood:
$$P(B_i)~P(D|B_i)$$
which is the numerator of Bayes's Theorem.
If we add them up, we have
$$P(B_1)~P(D|B_1) + P(B_2)~P(D|B_2)$$
which is the denominator of Bayes's Theorem, $P(D)$.
So we can compute the total probability of the data like this:
```
prob_data = table['unnorm'].sum()
prob_data
```
Notice that we get 5/8, which is what we got by computing $P(D)$ directly.
And we can compute the posterior probabilities like this:
```
table['posterior'] = table['unnorm'] / prob_data
table
```
The posterior probability for Bowl 1 is 0.6, which is what we got using Bayes's Theorem explicitly.
As a bonus, we also get the posterior probability of Bowl 2, which is 0.4.
When we add up the unnormalized posteriors and divide through, we force the posteriors to add up to 1. **This process is called "normalization"**, which is why the total probability of the data is also called the "normalizing constant".
## The Dice Problem
A Bayes table can also solve problems with more than two hypotheses. For example:
> Suppose I have a box with a 6-sided die, an 8-sided die, and a 12-sided die. I choose one of the dice at random, roll it, and report that the outcome is a 1. What is the probability that I chose the 6-sided die?
In this example, there are three hypotheses with equal prior
probabilities. The data is my report that the outcome is a 1.
If I chose the 6-sided die, the probability of the data is
1/6. If I chose the 8-sided die, the probability is 1/8, and if I chose the 12-sided die, it's 1/12.
Here's a Bayes table that uses integers to represent the hypotheses:
```
table2 = pd.DataFrame(index=[6, 8, 12])
```
I'll use fractions to represent the prior probabilities and the likelihoods. That way they don't get rounded off to floating-point numbers.
```
from fractions import Fraction
table2['prior'] = Fraction(1, 3)
table2['likelihood'] = Fraction(1, 6), Fraction(1, 8), Fraction(1, 12)
table2
```
Once you have priors and likelihoods, the remaining steps are always the same, so I'll put them in a function:
```
def update(table):
"""Compute the posterior probabilities."""
table['unnorm'] = table['prior'] * table['likelihood']
prob_data = table['unnorm'].sum()
table['posterior'] = table['unnorm'] / prob_data
return prob_data
```
And call it like this.
```
prob_data = update(table2)
```
Here is the final Bayes table:
```
table2
```
The posterior probability of the 6-sided die is 4/9, which is a little more than the probabilities for the other dice, 3/9 and 2/9.
Intuitively, the 6-sided die is the most likely because it had the highest likelihood of producing the outcome we saw.
## The Monty Hall Problem
Next we'll use a Bayes table to solve one of the most contentious problems in probability.
The Monty Hall problem is based on a game show called *Let's Make a Deal*. If you are a contestant on the show, here's how the game works:
* The host, Monty Hall, shows you three closed doors -- numbered 1, 2, and 3 -- and tells you that there is a prize behind each door.
* One prize is valuable (traditionally a car), the other two are less valuable (traditionally goats).
* The object of the game is to guess which door has the car. If you guess right, you get to keep the car.
Suppose you pick Door 1. Before opening the door you chose, Monty opens Door 3 and reveals a goat. Then Monty offers you the option to stick with your original choice or switch to the remaining unopened door.
To maximize your chance of winning the car, should you stick with Door 1 or switch to Door 2?
To answer this question, we have to make some assumptions about the behavior of the host:
1. Monty always opens a door and offers you the option to switch.
2. He never opens the door you picked or the door with the car.
3. If you choose the door with the car, he chooses one of the other
doors at random.
Under these assumptions, you are better off switching.
If you stick, you win $1/3$ of the time. If you switch, you win $2/3$ of the time.
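If you'd rather see these numbers come out of a simulation than take them on faith, here is a brute-force sketch under the three assumptions above (it assumes the player always starts by picking Door 1):

```python
import random

random.seed(0)

def play(switch):
    """One round under the stated assumptions: the player picks Door 1,
    Monty opens a door that is neither the pick nor the car."""
    car = random.choice([1, 2, 3])
    pick = 1
    monty = random.choice([d for d in (1, 2, 3) if d != pick and d != car])
    if switch:
        pick = next(d for d in (1, 2, 3) if d != pick and d != monty)
    return pick == car

n = 100_000
stick_rate = sum(play(False) for _ in range(n)) / n
switch_rate = sum(play(True) for _ in range(n)) / n
print(stick_rate, switch_rate)
```

The stick rate hovers near 1/3 and the switch rate near 2/3.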
If you have not encountered this problem before, you might find that
answer surprising. You would not be alone; many people have the strong
intuition that it doesn't matter if you stick or switch. There are two
doors left, they reason, so the chance that the car is behind Door 1 is 50%. But that is wrong.
To see why, it can help to use a Bayes table. We start with three
hypotheses: the car might be behind Door 1, 2, or 3. According to the
statement of the problem, the prior probability for each door is 1/3.
```
table3 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
table3['prior'] = Fraction(1, 3)
table3
```
The data is that Monty opened Door 3 and revealed a goat. So let's
consider the probability of the data under each hypothesis:
* If the car is behind Door 1, Monty chooses Door 2 or 3 at random, so the probability he opens Door 3 is $1/2$.
* If the car is behind Door 2, Monty has to open Door 3, so the probability of the data under this hypothesis is 1.
* If the car is behind Door 3, Monty does not open it, so the probability of the data under this hypothesis is 0.
Here are the likelihoods.
```
table3['likelihood'] = Fraction(1, 2), 1, 0
table3
```
Now that we have priors and likelihoods, we can use `update` to compute the posterior probabilities.
```
update(table3)
table3
```
After Monty opens Door 3, the posterior probability of Door 1 is $1/3$;
the posterior probability of Door 2 is $2/3$.
So you are better off switching from Door 1 to Door 2.
As this example shows, our intuition for probability is not always
reliable.
Bayes's Theorem can help by providing a divide-and-conquer strategy:
1. First, write down the hypotheses and the data.
2. Next, figure out the prior probabilities.
3. Finally, compute the likelihood of the data under each hypothesis.
The Bayes table does the rest.
## Summary
In this chapter we solved the Cookie Problem using Bayes's theorem explicitly and using a Bayes table.
There's no real difference between these methods, but the Bayes table can make it easier to compute the total probability of the data, especially for problems with more than two hypotheses.
Then we solved the Dice Problem, which we will see again in the next chapter, and the Monty Hall problem, which you might hope you never see again.
If the Monty Hall problem makes your head hurt, you are not alone. But I think it demonstrates the power of Bayes's Theorem as a divide-and-conquer strategy for solving tricky problems. And I hope it provides some insight into *why* the answer is what it is.
When Monty opens a door, **he provides information we can use to update our belief about the location of the car**. Part of the information is obvious. If he opens Door 3, we know the car is not behind Door 3. But part of the information is more subtle. Opening Door 3 is more likely if the car is behind Door 2, and less likely if it is behind Door 1. So the data is evidence in favor of Door 2. We will come back to this notion of evidence in future chapters.
In the next chapter we'll extend the Cookie Problem and the Dice Problem, and take the next step from basic probability to Bayesian statistics.
But first, you might want to work on the exercises.
## Exercises
**Exercise:** Suppose you have two coins in a box.
One is a normal coin with heads on one side and tails on the other, and one is a trick coin with heads on both sides. You choose a coin at random and see that one of the sides is heads.
What is the probability that you chose the trick coin?
```
# Solution goes here
coins = pd.DataFrame(index=['Tricky', 'Normal'])
coins['prior'] = Fraction(1, 2)
coins['likelihood'] = 1, Fraction(1, 2)
update(coins)
coins.loc['Tricky', 'posterior']
```
<div style="font-size: 14px;font-family:Time" >
**My Answer was right!** ✅
</div>
**Exercise:** Suppose you meet someone and learn that they have two children.
You ask if either child is a girl and they say yes.
What is the probability that both children are girls?
Hint: Start with four equally likely hypotheses.
```
# Solution goes here
```
<div style="font-size: 14px;font-family:Time" >
**My Answer was wrong!** ❎ :
I didn't understand the problem correctly.
</div>
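One possible solution sketch, rebuilding the Bayes table inline rather than reusing the `update` helper: following the hint, the four hypotheses are the sex orderings GG, GB, BG, BB of the two children, and "at least one girl" rules out only BB.

```python
import pandas as pd
from fractions import Fraction

# Four equally likely orderings of (older, younger)
girls = pd.DataFrame(index=['GG', 'GB', 'BG', 'BB'])
girls['prior'] = Fraction(1, 4)
# "At least one girl" is certain under GG, GB, BG and impossible under BB
girls['likelihood'] = 1, 1, 1, 0
girls['unnorm'] = girls['prior'] * girls['likelihood']
girls['posterior'] = girls['unnorm'] / girls['unnorm'].sum()
print(girls.loc['GG', 'posterior'])
```

The posterior for two girls is 1/3, not 1/2, because GB and BG are distinct hypotheses.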
**Exercise:** There are many variations of the [Monty Hall problem](https://en.wikipedia.org/wiki/Monty_Hall_problem).
For example, suppose Monty always chooses Door 2 if he can, and
only chooses Door 3 if he has to (because the car is behind Door 2).
If you choose Door 1 and Monty opens Door 2, what is the probability the car is behind Door 3?
If you choose Door 1 and Monty opens Door 3, what is the probability the car is behind Door 2?
```
# Solution goes here
monty_1 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
monty_1['prior'] = Fraction(1, 3)
monty_1['likelihood'] = 1, 0, 1
update(monty_1)
monty_1.loc['Door 3', 'posterior']
```
<div style="font-size: 14px;font-family:Time" >
**My Answer was right!** ✅
</div>
```
# Solution goes here
monty_2 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])
monty_2['prior'] = Fraction(1, 3)
monty_2['likelihood'] = 0, 1, 0
update(monty_2)
monty_2.loc['Door 2', 'posterior']
monty_2
```
<div style="font-size: 14px;font-family:Time" >
**My Answer was right!** ✅
</div>
**Exercise:** M&M's are small candy-coated chocolates that come in a variety of colors.
Mars, Inc., which makes M&M's, changes the mixture of colors from time to time.
In 1995, they introduced blue M&M's.
* In 1994, the color mix in a bag of plain M&M's was 30\% Brown, 20\% Yellow, 20\% Red, 10\% Green, 10\% Orange, 10\% Tan.
* In 1996, it was 24\% Blue, 20\% Green, 16\% Orange, 14\% Yellow, 13\% Red, 13\% Brown.
Suppose a friend of mine has two bags of M&M's, and he tells me
that one is from 1994 and one from 1996. He won't tell me which is
which, but he gives me one M&M from each bag. One is yellow and
one is green. What is the probability that the yellow one came
from the 1994 bag?
Hint: The trick to this question is to define the hypotheses and the data carefully.
```
# Solution goes here
MandM = pd.DataFrame(index=['From 1994', 'From 1996'])
MandM['prior'] = Fraction(1, 2)
MandM['likelihood'] = Fraction(2, 10), Fraction(13, 100)
update(MandM)
MandM.loc['From 1994', 'posterior']
MandM
```
<div style="font-size: 14px;font-family:Time" >
**My Answer was wrong!** ❎ :
I should pay attention to the fact that when we calculate the probability that the yellow one came from 1994, the green one must have come from 1996,
so the likelihoods need to be recalculated.
</div>
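Following the note above, here is one corrected sketch: each hypothesis must account for both draws, the yellow from one bag and the green from the other.

```python
import pandas as pd
from fractions import Fraction

# Hypotheses name the bag the yellow M&M came from; each likelihood
# covers BOTH draws: the yellow from that bag and the green from the other.
mm = pd.DataFrame(index=['yellow from 1994', 'yellow from 1996'])
mm['prior'] = Fraction(1, 2)
mm['likelihood'] = (Fraction(20, 100) * Fraction(20, 100),   # P(yellow|94) * P(green|96)
                    Fraction(14, 100) * Fraction(10, 100))   # P(yellow|96) * P(green|94)
mm['unnorm'] = mm['prior'] * mm['likelihood']
mm['posterior'] = mm['unnorm'] / mm['unnorm'].sum()
print(mm.loc['yellow from 1994', 'posterior'])
```

The posterior works out to 20/27, a little under 0.75.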
|
github_jupyter
|
# Using the Prediction Model
## Environment
```
import getpass
import json
import os
import sys
import time
import pandas as pd
from tqdm import tqdm_notebook as tqdm
from seffnet.constants import (
DEFAULT_EMBEDDINGS_PATH, DEFAULT_GRAPH_PATH,
DEFAULT_MAPPING_PATH, DEFAULT_PREDICTIVE_MODEL_PATH,
RESOURCES
)
from seffnet.literature import query_europe_pmc
print(sys.version)
print(time.asctime())
print(getpass.getuser())
```
# Loading the Data
```
from seffnet.default_predictor import predictor
print(f"""Loaded default predictor using paths:
embeddings: {DEFAULT_EMBEDDINGS_PATH}
graph: {DEFAULT_GRAPH_PATH}
model: {DEFAULT_PREDICTIVE_MODEL_PATH}
mapping: {DEFAULT_MAPPING_PATH}
""")
```
# Examples of different kinds of predictions with literature evidence
## side effect - target association
```
r = predictor.find_new_relation(
source_name='EGFR_HUMAN',
target_name='Papulopustular rash',
)
print(json.dumps(r, indent=2))
#PMID: 18165622
r = predictor.find_new_relation(
source_id='9451', # Histamine receptor H1
target_id='331', # Drowsiness
)
print(json.dumps(r, indent=2))
#PMID: 26626077
r = predictor.find_new_relation(
source_id='9325', # SC6A2
target_id='56', # Tachycardia
)
print(json.dumps(r, indent=2))
#PMID: 30952858
r = predictor.find_new_relation(
source_id='8670', # ACES_HUMAN
target_id='309', # Bradycardia
)
print(json.dumps(r, indent=2))
#PMID: 30952858
```
## drug- side effect association
```
r = predictor.find_new_relation(
source_id='3534', # diazepam
target_id='670', # Libido decreased
)
print(json.dumps(r, indent=2))
#PMID: 29888057
r = predictor.find_new_relation(
source_id='1148', # Cytarabine
target_id='1149', # Anaemia megaloblastic
)
print(json.dumps(r, indent=2))
# PMID: 23157436
```
## drug-target association
```
r = predictor.find_new_relation(
source_id='14672', # Sertindole
target_id='9350', # CHRM1 receptor
)
print(json.dumps(r, indent=2))
# PMID: 29942259
```
# Example of predicting relations using node2vec model and embeddings
```
def get_predictions_df(curie, results_type=None):
results = predictor.find_new_relations(
node_curie=curie,
results_type=results_type,
k=50,
)
results_df = pd.DataFrame(results['predictions'])
results_df = results_df[['node_id', 'namespace', 'identifier', 'name', 'lor', 'novel']]
return results['query'], results_df
query, df = get_predictions_df('pubchem.compound:2159', 'phenotype')
print(json.dumps(query, indent=2))
df
query, df = get_predictions_df('pubchem.compound:4585', 'phenotype')
print(json.dumps(query, indent=2))
df
query, df = get_predictions_df('uniprot:P08172', 'phenotype')
print(json.dumps(query, indent=2))
df
query, df = get_predictions_df('uniprot:P08588', 'phenotype')
print(json.dumps(query, indent=2))
df
query, df = get_predictions_df('uniprot:P22303', 'phenotype')
print(json.dumps(query, indent=2))
df
query, df = get_predictions_df('uniprot:Q9UBN7', 'chemical')
print(json.dumps(query, indent=2))
df
query, df = get_predictions_df("umls:C0030567", 'chemical')
print(json.dumps(query, indent=2))
df
results = []
for ind, row in df.iterrows():
pmcid = []
lit = query_europe_pmc(
query_entity=row['name'],
target_entities=[
'umls:C0030567'
],
)
    i = 0
    for x in lit:
        if i > 7:
            pmcid.append('... etc.')
            lit.close()
            break
        pmcid.append(x['pmcid'])
        i += 1
    results.append((len(pmcid), pmcid))
df['co-occurrence'] = results
df
df.to_csv(os.path.join(RESOURCES, 'parkinsons-chemicals.tsv'), sep='\t')
query, df = get_predictions_df('umls:C0242422', 'chemical')
print(json.dumps(query, indent=2))
df
query, df = get_predictions_df('pubchem.compound:5095', 'phenotype')
print(json.dumps(query, indent=2))
df
#PMID: 29241812
r = predictor.find_new_relation(
source_id='2071', #Amantadine
target_id='2248', #Parkinson's disease
)
print(json.dumps(r, indent=2))
#PMID: 21654146
r = predictor.find_new_relation(
source_id='5346', #Ropinirole
target_id='1348', #Restless legs syndrome
)
print(json.dumps(r, indent=2))
#PMID: 21654146
r = predictor.find_new_relation(
source_id='3627', #Disulfiram
target_id='2318', #Malignant melanoma
)
print(json.dumps(r, indent=2))
#PMID: 21654146
r = predictor.find_new_relation(
source_id='17528', #Brigatinib
target_id='5148', #Colorectal cancer
)
print(json.dumps(r, indent=2))
#PMID: 31410188
r = predictor.find_new_relation(
source_id='6995', #dasatinib
target_id='1179', #Diffuse large B-cell lymphoma
)
print(json.dumps(r, indent=2))
#PMID: 31383760
r = predictor.find_new_relation(
source_id='5265', #ribavirin
target_id='947', #Candida infection
)
print(json.dumps(r, indent=2))
#PMID: 31307986
```
<a href="https://colab.research.google.com/github/dnhirapara/049_DarshikHirapara/blob/main/lab2/Lab_02_Data_Preprocessing.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive')
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import LabelEncoder
cardata_df = pd.read_csv('/content/drive/MyDrive/ML_Labs/lab2/exercise-car-data.csv', index_col=[0])
print("\nData :\n",cardata_df)
print("\nData statistics\n",cardata_df.describe())
cardata_df.dropna(how='all',inplace=True)
print(cardata_df.dtypes)
# All rows, all columns except last
new_X = cardata_df.iloc[:, :-1].values
# Only last column
new_Y = cardata_df.iloc[:, -1].values
#FuelType
new_X[:,3]=new_X[:,3].astype('str')
le = LabelEncoder()
new_X[ : ,3] = le.fit_transform(new_X[ : ,3])
print("\n\nInput before imputation : \n\n", new_X)
str_to_num_dictionary={"zero":0,"one":1,"two":2,"three":3,"four":4,"five":5,"six":6,"seven":7,"eight":8,"nine":9,"ten":10}
# 3b. Imputation (Replacing null values with mean value of that attribute)
#for col-3
for i in range(new_X[:,3].size):
#KM
if new_X[i,2]=="??":
new_X[i,2]=np.nan
#HP
if new_X[i,4]=="????":
new_X[i,4]=np.nan
#Doors
temp_str = str(new_X[i,8])
if temp_str.isnumeric():
new_X[i,8]=int(temp_str)
else:
new_X[i,8]=str_to_num_dictionary[temp_str]
# Using Imputer function to replace NaN values with mean of that parameter value
imputer = SimpleImputer(missing_values = np.nan,strategy = "mean")
mode_imputer = SimpleImputer(missing_values = np.nan,strategy = "most_frequent")
# Fitting the data, function learns the stats
the_imputer = imputer.fit(new_X[:, 0:3])
# fit_transform() will execute those stats on the input ie. X[:, 1:3]
new_X[:, 0:3] = the_imputer.transform(new_X[:, 0:3])
# Fitting the data, function learns the stats
the_mode_imputer = mode_imputer.fit(new_X[:, 3:4])
new_X[:, 3:4] = the_mode_imputer.transform(new_X[:, 3:4])
# Fitting the data, function learns the stats
the_imputer = imputer.fit(new_X[:, 4:5])
new_X[:, 4:5] = the_imputer.transform(new_X[:, 4:5])
# Fitting the data, function learns the stats
the_mode_imputer = mode_imputer.fit(new_X[:, 5:6])
new_X[:, 5:6] = the_mode_imputer.transform(new_X[:, 5:6])
# filling the missing value with mean
print("\n\nNew Input with Mean Value for NaN : \n\n", new_X)
new_data_df = pd.DataFrame(new_X,columns=cardata_df.columns[:-1])
new_data_df = new_data_df.astype(float)
new_data_df.dtypes
#feature selection
corr = new_data_df.corr()
print(corr.head())
sns.heatmap(corr)
columns = np.full((len(new_data_df.columns),), True, dtype=bool)
for i in range(corr.shape[0]):
for j in range(i+1, corr.shape[0]):
if corr.iloc[i,j] >= 0.9:
if columns[j]:
columns[j] = False
selected_columns = new_data_df.columns[columns]
print(selected_columns)
new_data_df = new_data_df[selected_columns]
# Step 5a : Perform scaling and standardization
new_X = new_data_df.iloc[:, :-1].values
scaler = MinMaxScaler()
std = StandardScaler()
new_X[:,0:3] = std.fit_transform(scaler.fit_transform(new_X[:,0:3]))
new_X[:,4:5] = std.fit_transform(scaler.fit_transform(new_X[:,4:5]))
new_X[:,7:9] = std.fit_transform(scaler.fit_transform(new_X[:,7:9]))
print("Dataset after preprocessing\n\n",new_data_df)
```
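The correlation-threshold feature selection used above can be seen in isolation on a small synthetic frame (the column names `a`, `b`, `c` are made up for this sketch, not taken from the car data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
a = rng.normal(size=200)
# 'b' is almost a copy of 'a'; 'c' is independent noise
toy = pd.DataFrame({'a': a,
                    'b': a + 0.01 * rng.normal(size=200),
                    'c': rng.normal(size=200)})

corr = toy.corr()
keep = np.full(len(toy.columns), True)
for i in range(corr.shape[0]):
    for j in range(i + 1, corr.shape[0]):
        if corr.iloc[i, j] >= 0.9 and keep[j]:
            keep[j] = False  # drop the later column of a highly correlated pair

print(list(toy.columns[keep]))
```

Since `b` is nearly identical to `a`, only `a` and `c` survive the 0.9 threshold.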
```
# ATTENTION: Please do not alter any of the provided code in the exercise. Only add your own code where indicated
# ATTENTION: Please do not add or remove any cells in the exercise. The grader will check specific cells based on the cell position.
# ATTENTION: Please use the provided epoch values when training.
import csv
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from os import getcwd
def get_data(filename):
# You will need to write code that will read the file passed
# into this function. The first line contains the column headers
# so you should ignore it
    # Each successive line contains 785 comma separated values between 0 and 255
# The first value is the label
# The rest are the pixel values for that picture
# The function will return 2 np.array types. One with all the labels
# One with all the images
#
# Tips:
# If you read a full line (as 'row') then row[0] has the label
# and row[1:785] has the 784 pixel values
# Take a look at np.array_split to turn the 784 pixels into 28x28
# You are reading in strings, but need the values to be floats
# Check out np.array().astype for a conversion
with open(filename) as training_file:
# Your code starts here
labels = []
images = []
training_file.readline()
while True:
row = training_file.readline()
if not row:
break
row = row.split(',')
row = np.array([float(x) for x in row])
labels.append(row[0])
images.append(row[1:].reshape((28, 28)))
images = np.array(images)
labels = np.array(labels)
# Your code ends here
return images, labels
path_sign_mnist_train = f"{getcwd()}/../tmp2/sign_mnist_train.csv"
path_sign_mnist_test = f"{getcwd()}/../tmp2/sign_mnist_test.csv"
training_images, training_labels = get_data(path_sign_mnist_train)
testing_images, testing_labels = get_data(path_sign_mnist_test)
# Keep these
print(training_images.shape)
print(training_labels.shape)
print(testing_images.shape)
print(testing_labels.shape)
# Their output should be:
# (27455, 28, 28)
# (27455,)
# (7172, 28, 28)
# (7172,)
# In this section you will have to add another dimension to the data
# So, for example, if your array is (10000, 28, 28)
# You will need to make it (10000, 28, 28, 1)
# Hint: np.expand_dims
training_images = np.expand_dims(training_images, 3)
testing_images = np.expand_dims(testing_images, 3)
# Create an ImageDataGenerator and do Image Augmentation
train_datagen = ImageDataGenerator(
rescale=1.0/255.0,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest'
)
validation_datagen = ImageDataGenerator(
rescale=1.0/255.0)
train_generator = train_datagen.flow(training_images,
training_labels,
batch_size=64)
validation_generator = validation_datagen.flow(testing_images,
testing_labels,
batch_size=64)
# Keep These
print(training_images.shape)
print(testing_images.shape)
# Their output should be:
# (27455, 28, 28, 1)
# (7172, 28, 28, 1)
# Define the model
# Use no more than 2 Conv2D and 2 MaxPooling2D
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(25, activation='softmax')
])
# Compile Model.
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['acc'])
# Train the Model
history = model.fit_generator(train_generator, epochs=2, steps_per_epoch=training_images.shape[0]/32,
validation_data=validation_generator, validation_steps=testing_images.shape[0]/32)
model.evaluate(testing_images, testing_labels, verbose=0)
# Plot the chart for accuracy and loss on both training and validation
%matplotlib inline
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'r', label='Training Loss')
plt.plot(epochs, val_loss, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
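The `np.expand_dims` step in the cell above adds the trailing channel axis that `Conv2D` expects for grayscale images; a minimal standalone shape check:

```python
import numpy as np

# A batch of 10 fake 28x28 grayscale images
batch = np.zeros((10, 28, 28))
# Conv2D expects (batch, height, width, channels), so append a channel axis
batch = np.expand_dims(batch, 3)
print(batch.shape)
```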
# Submission Instructions
```
# Now click the 'Submit Assignment' button above.
```
# When you're done or would like to take a break, please run the two cells below to save your work and close the Notebook. This will free up resources for your fellow learners.
```
%%javascript
<!-- Save the notebook -->
IPython.notebook.save_checkpoint();
%%javascript
IPython.notebook.session.delete();
window.onbeforeunload = null
setTimeout(function() { window.close(); }, 1000);
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Automated Machine Learning
_**Forecasting with grouping using Pipelines**_
## Contents
1. [Introduction](#Introduction)
2. [Setup](#Setup)
3. [Data](#Data)
4. [Compute](#Compute)
5. [AutoMLConfig](#AutoMLConfig)
6. [Pipeline](#Pipeline)
7. [Train](#Train)
8. [Test](#Test)
## Introduction
In this example we use Automated ML and Pipelines to train, select, and operationalize forecasting models for multiple time-series.
If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first if you haven't already to establish your connection to the AzureML Workspace.
In this notebook you will learn how to:
* Create an Experiment in an existing Workspace.
* Configure AutoML using AutoMLConfig.
* Use our helper script to generate pipeline steps to split, train, and deploy the models.
* Explore the results.
* Test the models.
It is advised that you ensure your cluster has at least one node per group.
An Enterprise workspace is required for this notebook. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade)
## Setup
As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
```
import json
import logging
import warnings
import numpy as np
import pandas as pd
import azureml.core
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
```
Accessing the Azure ML workspace requires authentication with Azure.
The default authentication is interactive authentication using the default tenant. Executing the ws = Workspace.from_config() line in the cell below will prompt for authentication the first time that it is run.
If you have multiple Azure tenants, you can specify the tenant by replacing the ws = Workspace.from_config() line in the cell below with the following:
```
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')
ws = Workspace.from_config(auth = auth)
```
If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the ws = Workspace.from_config() line in the cell below with the following:
```
from azureml.core.authentication import ServicePrincipalAuthentication
auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')
ws = Workspace.from_config(auth = auth)
```
For more details, see aka.ms/aml-notebook-auth
```
ws = Workspace.from_config()
ds = ws.get_default_datastore()
# choose a name for the run history container in the workspace
experiment_name = 'automl-grouping-oj'
# project folder
project_folder = './sample_projects/{}'.format(experiment_name)
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
```
## Data
Upload data to your default datastore and then load it as a `TabularDataset`
```
from azureml.core.dataset import Dataset
# upload training and test data to your default datastore
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='groupdata', overwrite=True, show_progress=True)
# load data from your datastore
data = Dataset.Tabular.from_delimited_files(path=ds.path('groupdata/dominicks_OJ_2_5_8_train.csv'))
data_test = Dataset.Tabular.from_delimited_files(path=ds.path('groupdata/dominicks_OJ_2_5_8_test.csv'))
data.take(5).to_pandas_dataframe()
```
## Compute
#### Create or Attach existing AmlCompute
You will need to create a compute target for your automated ML run. In this tutorial, you create AmlCompute as your training compute resource.
#### Creation of AmlCompute takes approximately 5 minutes.
If the AmlCompute with that name is already in your workspace this code will skip the creation process.
As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
```
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
# Choose a name for your cluster.
amlcompute_cluster_name = "cpu-cluster-11"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 6)
# Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
# For a more detailed view of current AmlCompute status, use get_status().
```
## AutoMLConfig
#### Create a base AutoMLConfig
This configuration will be used for all the groups in the pipeline.
```
target_column = 'Quantity'
time_column_name = 'WeekStarting'
grain_column_names = ['Brand']
group_column_names = ['Store']
max_horizon = 20
automl_settings = {
"iteration_timeout_minutes" : 5,
"experiment_timeout_minutes" : 15,
"primary_metric" : 'normalized_mean_absolute_error',
"time_column_name": time_column_name,
"grain_column_names": grain_column_names,
"max_horizon": max_horizon,
"drop_column_names": ['logQuantity'],
"max_concurrent_iterations": 2,
"max_cores_per_iteration": -1
}
base_configuration = AutoMLConfig(task = 'forecasting',
path = project_folder,
n_cross_validations=3,
**automl_settings
)
```
## Pipeline
We've written a script to generate the individual pipeline steps used to create each automl step. Calling this script will return a list of PipelineSteps that will train multiple groups concurrently and then deploy these models.
This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade).
### Call the method to build pipeline steps
`build_pipeline_steps()` takes as input:
* **automlconfig**: This is the configuration used for every automl step
* **df**: This is the dataset to be used for training
* **target_column**: This is the target column of the dataset
* **compute_target**: The compute to be used for training
* **deploy**: The option to deploy the models after training; if set to true, an extra step will be added to deploy a webservice with all the models (default is `True`)
* **service_name**: The service name for the model query endpoint
* **time_column_name**: The time column of the data
```
from azureml.core.webservice import Webservice
from azureml.exceptions import WebserviceException
service_name = 'grouped-model'
try:
# if you want to get existing service below is the command
# since aci name needs to be unique in subscription deleting existing aci if any
# we use aci_service_name to create azure aci
service = Webservice(ws, name=service_name)
if service:
service.delete()
except WebserviceException as e:
pass
from build import build_pipeline_steps
steps = build_pipeline_steps(
base_configuration,
data,
target_column,
compute_target,
group_column_names=group_column_names,
deploy=True,
service_name=service_name,
time_column_name=time_column_name
)
```
## Train
Use the list of steps generated from above to build the pipeline and submit it to your compute for remote training.
```
from azureml.pipeline.core import Pipeline
pipeline = Pipeline(
description="A pipeline with one model per data group using Automated ML.",
workspace=ws,
steps=steps)
pipeline_run = experiment.submit(pipeline)
from azureml.widgets import RunDetails
RunDetails(pipeline_run).show()
pipeline_run.wait_for_completion(show_output=False)
```
## Test
Now we can use the holdout set to test our models and ensure our web-service is running as expected.
```
from azureml.core.webservice import AciWebservice
service = AciWebservice(ws, service_name)
X_test = data_test.to_pandas_dataframe()
# Drop the column we are trying to predict (target column)
x_pred = X_test.drop(target_column, inplace=False, axis=1)
x_pred.head()
# Get Predictions
test_sample = X_test.drop(target_column, inplace=False, axis=1).to_json()
predictions = service.run(input_data=test_sample)
print(predictions)
# Convert predictions from JSON to DataFrame
pred_dict =json.loads(predictions)
X_pred = pd.read_json(pred_dict['predictions'])
X_pred.head()
# Fix the index
PRED = 'pred_target'
X_pred[time_column_name] = pd.to_datetime(X_pred[time_column_name], unit='ms')
X_pred.set_index([time_column_name] + grain_column_names, inplace=True, drop=True)
X_pred.rename({'_automl_target_col': PRED}, inplace=True, axis=1)
# Drop all but the target column and index
X_pred.drop(list(set(X_pred.columns.values).difference({PRED})), axis=1, inplace=True)
X_test[time_column_name] = pd.to_datetime(X_test[time_column_name])
X_test.set_index([time_column_name] + grain_column_names, inplace=True, drop=True)
# Merge predictions with raw features
pred_test = X_test.merge(X_pred, left_index=True, right_index=True)
pred_test.head()
from sklearn.metrics import mean_absolute_error, mean_squared_error
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Remove NA and values where actual is close to zero
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
return np.mean(APE)
def get_metrics(actuals, preds):
return pd.Series(
{
"RMSE": np.sqrt(mean_squared_error(actuals, preds)),
"NormRMSE": np.sqrt(mean_squared_error(actuals, preds))/np.abs(actuals.max()-actuals.min()),
"MAE": mean_absolute_error(actuals, preds),
"MAPE": MAPE(actuals, preds)},
)
get_metrics(pred_test[target_column].values, pred_test[PRED].values)
```
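A standalone sanity check of the `MAPE` helper (the function is duplicated here so the snippet runs on its own): pairs with NaNs or near-zero actuals are dropped before averaging.

```python
import numpy as np

def MAPE(actual, pred):
    """Mean absolute percentage error; drops NaNs and near-zero actuals."""
    not_na = ~(np.isnan(actual) | np.isnan(pred))
    not_zero = ~np.isclose(actual, 0.0)
    actual_safe = actual[not_na & not_zero]
    pred_safe = pred[not_na & not_zero]
    return np.mean(100 * np.abs((actual_safe - pred_safe) / actual_safe))

actual = np.array([100.0, 200.0, 0.0, np.nan])
pred = np.array([110.0, 180.0, 5.0, 1.0])
# Only the first two pairs survive the filters: 10% and 10% error
print(MAPE(actual, pred))
```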
# ex05-Filtering a Query with WHERE
Sometimes, you'll want to return only the rows of a query where one or more columns meet certain criteria. This can be done with a WHERE statement. The WHERE clause is an optional clause of the SELECT statement. It appears after the FROM clause, as in the following statement:
>SELECT column_list FROM table_name WHERE search_condition;
```
%load_ext sql
```
### 1. Connect to the given database of demo.db3
```
%sql sqlite:///data/demo.db3
```
If you do not remember the tables in the demo data, you can always use the following command to query them. Here we select the table of watershed_yearly as an example.
```
%sql SELECT name FROM sqlite_master WHERE type='table'
```
### 2. Retrieving data with WHERE
Take the table of ***rch*** as an example.
#### 2.1 Check the table columns first.
```
%sql SELECT * From rch LIMIT 5
```
#### 2.2 Check the number of rows
There should be 8280 rows. This can be done with the SQLite ***COUNT*** function. We will touch on other SQLite functions over the next few notebooks.
```
%sql SELECT COUNT(*) as nrow From rch
```
#### 2.3 Use WHERE to retrieve data
Let’s say we are interested in records for only the year 1981. Using a WHERE is pretty straightforward for a simple criterion like this.
```
%sql SELECT RCH, YR, MO, FLOW_INcms, FLOW_OUTcms From rch WHERE YR=1981
```
#### 2.4 use *AND* to further filter data
There are 23 RCHs. We are only interested in the 10th RCH. We can add another filter condition with an ***AND*** statement.
```
%sql SELECT RCH, YR, MO, FLOW_INcms, FLOW_OUTcms From rch WHERE YR=1981 AND RCH=10
```
#### 2.5 More combinations of filters
We can also filter the data with the ***!=*** or ***<>*** operators, for example to get all years except 1981.
```
%sql SELECT RCH, YR, MO, FLOW_INcms, FLOW_OUTcms From rch WHERE YR<>1981 and RCH=10 and MO=6
```
We can further filter the data to specific months using the ***OR*** statement. For example, say we'd like to check the data in months 3, 6, 9 and 12. Note that we have to wrap the OR conditions in ***()*** so that they are treated as a single condition. You can try it!
```
%sql SELECT RCH, YR, MO, FLOW_INcms, FLOW_OUTcms From rch WHERE YR>2009 and RCH=10 and (MO=3 or MO=6 or MO=9 or MO=12)
```
Or we can simplify the above filter using the ***IN*** statement.
```
%sql SELECT RCH, YR, MO, FLOW_INcms, FLOW_OUTcms From rch WHERE YR>2009 and RCH=10 and MO in (3, 6, 9, 12)
```
Or select the months that are ***NOT IN*** (3, 6, 9, 12):
```
%sql SELECT RCH, YR, MO, FLOW_INcms, FLOW_OUTcms From rch WHERE YR>2009 and RCH=10 and MO NOT IN (3,6,9,12)
```
#### 2.6 Filter with math operators
For example, we could use the modulus operator (%) to filter the MOs.
```
%sql SELECT RCH, YR, MO, FLOW_INcms, FLOW_OUTcms From rch WHERE YR>2009 and RCH=10 and MO % 3 = 0
```
### Summary
In the WHERE clause, we can use combinations of ***NOT, IN, <>, !=, >=, >, <, <=, AND, OR, ()*** and even some math operators (such as %, *, /, +, -) to retrieve the data we want easily and efficiently.
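As a sketch, several of these operators can be checked together with Python's built-in `sqlite3` module on an invented table; note that the ***IN*** filter and the modulus filter pick out the same quarter-end months:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rch (RCH INTEGER, YR INTEGER, MO INTEGER)")
conn.executemany("INSERT INTO rch VALUES (?, ?, ?)",
                 [(10, 2010, m) for m in range(1, 13)])  # one row per month

# IN keeps the listed months; MO % 3 = 0 selects the same quarter-end months
in_rows = conn.execute(
    "SELECT MO FROM rch WHERE YR > 2009 AND RCH = 10 AND MO IN (3, 6, 9, 12)").fetchall()
mod_rows = conn.execute(
    "SELECT MO FROM rch WHERE YR > 2009 AND RCH = 10 AND MO % 3 = 0").fetchall()
print(in_rows == mod_rows)  # True: both select months 3, 6, 9 and 12
```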
```
!pip3 install qiskit
import qiskit
constant_index_dictionary = {}
constant_index_dictionary['0000'] = [0, 2]
constant_index_dictionary['0001'] = [2, 3]
constant_index_dictionary['0010'] = [0, 1]
constant_index_dictionary['0011'] = [1, 3]
constant_index_dictionary['0100'] = [2, 3]
constant_index_dictionary['0101'] = [1, 2]
constant_index_dictionary['0110'] = [0, 2]
constant_index_dictionary['0111'] = [0, 2]
constant_index_dictionary['1000'] = [0, 3]
constant_index_dictionary['1001'] = [0, 1]
constant_index_dictionary['1010'] = [1, 2]
constant_index_dictionary['1011'] = [0, 3]
constant_index_dictionary['1100'] = [1, 3]
constant_index_dictionary['1101'] = [2, 3]
constant_index_dictionary['1110'] = [1, 3]
constant_index_dictionary['1111'] = [0, 1]
import qiskit
import numpy as np
import time
CLASSICAL_REGISTER_LENGTH = 5
QUANTUM_REGISTER_LENGTH = 5
circuit_building_start_time = time.time()
simulator = qiskit.Aer.get_backend('qasm_simulator')
classical_register = qiskit.ClassicalRegister(CLASSICAL_REGISTER_LENGTH)
quantum_register = qiskit.QuantumRegister(QUANTUM_REGISTER_LENGTH)
circuit = qiskit.QuantumCircuit(quantum_register, classical_register)
circuit_building_end_time = time.time()
AND_gate_auxillary_qubit = QUANTUM_REGISTER_LENGTH - 1 # last qubit as the auxillary qubit
'''
Applies quantum AND operation to specified pair of qubits, stores the operation in AND_gate_auxillary_qubit,
and stores the result in a classical register
@PARAMS:
qubit1: position of the first qubit
qubit2: position of the second qubit
qubit1_one: whether the first qubit is NOT
qubit2_one: whether the second qubit is NOT
classical_register_position: the classical register position to store the measurement of AND_gate_auxillary_qubit
'''
def AND_2_qubit(qubit1, qubit2, qubit1_one, qubit2_one, classical_register_position):
if(qubit1_one):
circuit.x(quantum_register[qubit1])
if(qubit2_one):
circuit.x(quantum_register[qubit2])
circuit.ccx(quantum_register[qubit1], quantum_register[qubit2], quantum_register[AND_gate_auxillary_qubit])
circuit.measure(quantum_register[AND_gate_auxillary_qubit], classical_register[classical_register_position])
if(qubit1_one):
circuit.x(quantum_register[qubit1])
if(qubit2_one):
circuit.x(quantum_register[qubit2])
circuit.reset(quantum_register[AND_gate_auxillary_qubit])
'''
Applies the AND gate operation on a list of n qubits
@PARAMS:
qubit_list: list of qubits to perform the operation on
qubit_one_list: whether each of those qubits is NOT
@RETURN:
result of the n-qubit AND operation
'''
def AND_n_qubits(qubit_list, qubit_one_list):
length = len(qubit_list)
if(length != len(qubit_one_list)):
print("Incorrect dimensions")
return
classical_register_index = 0 # where to store pairwise AND operation results
# handling odd number of qubits by preprocessing the last qubit
if(length % 2 != 0):
if(qubit_one_list[length - 1] == 1):
circuit.x(quantum_register[qubit_list[length-1]])
circuit.cx(quantum_register[qubit_list[length - 1]], quantum_register[AND_gate_auxillary_qubit])
circuit.measure(quantum_register[AND_gate_auxillary_qubit], classical_register[classical_register_index])
circuit.reset(quantum_register[AND_gate_auxillary_qubit])
classical_register_index = classical_register_index + 1
if(qubit_one_list[length - 1] == 1):
circuit.x(quantum_register[qubit_list[length-1]])
length = length - 1
for index in range(length - 1, 0, -2):
AND_2_qubit(qubit_list[index], qubit_list[index - 1], qubit_one_list[index], qubit_one_list[index - 1], classical_register_index)
classical_register_index = classical_register_index + 1
job = qiskit.execute(circuit, simulator, shots=1)
result = job.result()
counts = str(result.get_counts())
counts = counts[counts.find('\'') + 1:]
counts = counts[:counts.find('\'')]
output = 1
for index in range(0, classical_register_index, 1):
output = output & int(counts[CLASSICAL_REGISTER_LENGTH - 1 - index])
return output
def controlled_n_qubit_h(qubit_list, qubit_one_list):
output = AND_n_qubits(qubit_list, qubit_one_list)
if(output == 1):
circuit.h(quantum_register[AND_gate_auxillary_qubit])
circuit.measure(quantum_register[AND_gate_auxillary_qubit], classical_register[0])
circuit.reset(quantum_register[AND_gate_auxillary_qubit])
job = qiskit.execute(circuit, simulator, shots=1)
result = job.result()
counts = str(result.get_counts())
counts = counts[counts.find('\'') + 1:]
counts = counts[:counts.find('\'')]
return int(counts[len(counts) - 1])
return 0
'''
the main circuit for the following truth table:
A, B, C, D = binary representation input state for the robot
P, Q, R, S = binary representation of the output state from the robot
New circuit in register...
'''
def main_circuit(STEPS, initial_state):
signature = ""
state = initial_state
step = 0
while (step < STEPS):
dont_care_list = constant_index_dictionary[state]
input_state = state
state = ""
P = controlled_n_qubit_h([0, 1, 3], [1, 1, 0]) | controlled_n_qubit_h([1, 2], [0, 1]) | controlled_n_qubit_h([0, 2, 3], [0, 0, 1]) | AND_n_qubits([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 3], [1, 1, 1]) | AND_n_qubits([1, 2, 3], [1, 1, 1])
Q = controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 1, 1]) | controlled_n_qubit_h([0, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([1, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 0, 1, 0]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 1, 0, 0]) | AND_n_qubits([0, 1, 3], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 1, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 0, 1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 1, 0])
R = controlled_n_qubit_h([0, 1, 2], [1, 1, 0]) | controlled_n_qubit_h([0, 1, 2], [0, 0, 0]) | controlled_n_qubit_h([0, 1, 3], [0, 1, 0]) | controlled_n_qubit_h([0, 2, 3], [0, 1, 1]) | AND_n_qubits([0, 1], [1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 1])
S = controlled_n_qubit_h([1, 2, 3], [1, 0, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 2], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 0])
state = state + str(P) + str(Q) + str(R) + str(S)
y = int(input_state, 2)^int(state,2)
y = bin(y)[2:].zfill(len(state))
# print("" + str(y) + " is the XOR string")
hamming_distance = len(y.replace('0', ""))
# print(input_state + " " + state + " " + str(hamming_distance))
step = step + hamming_distance
hidden_state = ""
for j in range(len(state)):
if(j in dont_care_list):
hidden_state = hidden_state + "x"
else:
hidden_state = hidden_state + state[j]
# print(state + " " + hidden_state)
signature = signature + hidden_state
for _ in range(len(circuit.data)):
circuit.data.pop(0)
if(P == 1):
circuit.x(quantum_register[0])
if(Q == 1):
circuit.x(quantum_register[1])
if(R == 1):
circuit.x(quantum_register[2])
if(S == 1):
circuit.x(quantum_register[3])
print("End state: " + str(P) + str(Q) + str(R) + str(S) )
print("Signature: " + signature)
def initialise_starting_state(P, Q, R, S):
if(P == 1):
circuit.x(quantum_register[0])
if(Q == 1):
circuit.x(quantum_register[1])
if(R == 1):
circuit.x(quantum_register[2])
if(S == 1):
circuit.x(quantum_register[3])
print("Message: " + str(P) + str(Q) + str(R) + str(S))
def measure_time():
total_time = 0
for i in range(100):
start_time = time.time()
# output = AND_n_qubits([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
output = controlled_n_qubit_h([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
print(str(i) + " " + str(output))
end_time = time.time()
total_time = total_time + (end_time - start_time)
print("Average time: " + str(total_time/100))
start_time = time.time()
initialise_starting_state(1, 0, 1, 1) # message to be signed
STEPS = 20 # security parameter: length of the walk
main_circuit(STEPS, '1011')
# measure_time()
end_time = time.time()
print("Run in time " + str(end_time - start_time))
print(circuit_building_end_time - circuit_building_start_time)
def recipient_initialise_starting_state(P, Q, R, S):
if(P == "1"):
circuit.x(quantum_register[0])
if(Q == "1"):
circuit.x(quantum_register[1])
if(R == "1"):
circuit.x(quantum_register[2])
if(S == "1"):
circuit.x(quantum_register[3])
print("Message: " + str(P) + str(Q) + str(R) + str(S))
def recipient(message, signature, end_state):
STEPS = len(signature)/len(end_state)
STEPS = int(STEPS)
index = 0
recipient_initialise_starting_state(message[0], message[1], message[2], message[3])
state = message
recreated_signature = ""
for _ in range(STEPS):
dont_care_list = constant_index_dictionary[state]
state = ""
P = controlled_n_qubit_h([0, 1, 3], [1, 1, 0]) | controlled_n_qubit_h([1, 2], [0, 1]) | controlled_n_qubit_h([0, 2, 3], [0, 0, 1]) | AND_n_qubits([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 3], [1, 1, 1]) | AND_n_qubits([1, 2, 3], [1, 1, 1])
Q = controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 1, 1]) | controlled_n_qubit_h([0, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([1, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 0, 1, 0]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 1, 0, 0]) | AND_n_qubits([0, 1, 3], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 1, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 0, 1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 1, 0])
R = controlled_n_qubit_h([0, 1, 2], [1, 1, 0]) | controlled_n_qubit_h([0, 1, 2], [0, 0, 0]) | controlled_n_qubit_h([0, 1, 3], [0, 1, 0]) | controlled_n_qubit_h([0, 2, 3], [0, 1, 1]) | AND_n_qubits([0, 1], [1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 1])
S = controlled_n_qubit_h([1, 2, 3], [1, 0, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 2], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 0])
if(signature[index] != "x" and signature[index] == "1"):
P = P | 1
elif(signature[index] != "x"):
P = P & 0
index = index + 1
if(signature[index] != "x" and signature[index] == "1"):
Q = Q | 1
elif(signature[index] != "x"):
Q = Q & 0
index = index + 1
if(signature[index] != "x" and signature[index] == "1"):
R = R | 1
elif(signature[index] != "x"):
R = R & 0
index = index + 1
if(signature[index] != "x" and signature[index] == "1"):
S = S | 1
elif(signature[index] != "x"):
S = S & 0
index = index + 1
state = "" + str(P) + str(Q) + str(R) + str(S)
hidden_state = ""
for j in range(len(state)):
if(j in dont_care_list):
hidden_state = hidden_state + "x"
else:
hidden_state = hidden_state + state[j]
print(state + " " + hidden_state)
recreated_signature = recreated_signature + hidden_state
for _ in range(len(circuit.data)):
circuit.data.pop(0)
if(P == 1):
circuit.x(quantum_register[0])
if(Q == 1):
circuit.x(quantum_register[1])
if(R == 1):
circuit.x(quantum_register[2])
if(S == 1):
circuit.x(quantum_register[3])
print(recreated_signature)
print(signature)
if(recreated_signature == signature):
print("ACCEPT")
else:
print("REJECT")
start = time.time()
for _ in range(len(circuit.data)):
circuit.data.pop(0)
recipient("1011", "x10x10xxx10x1x1xxx101xx01xx1x01x1x0xx11x0x1xx1x0x0x0xx11", "1111")
for _ in range(len(circuit.data)):
circuit.data.pop(0)
recipient("1011", "x00x10xxx10x1x1xxx101xx01xx1x01x1x0xx11x0x1xx1x0x0x0xx11", "1111")
print(time.time() - start)
```
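The step counter in main_circuit above advances by the Hamming distance between consecutive states (the XOR-and-count trick inside the while loop). That computation can be sketched on its own; the function name `hamming_distance` is ours, not the notebook's:

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions where two equal-length bit strings differ."""
    y = int(a, 2) ^ int(b, 2)        # XOR marks differing positions with 1s
    y = bin(y)[2:].zfill(len(a))     # back to a fixed-width bit string
    return len(y.replace('0', ''))   # count the remaining 1s

print(hamming_distance('1011', '1111'))  # 1
print(hamming_distance('0000', '1111'))  # 4
```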
# Scheme 2
- Non-transfer of x
- More secure
- Requires a one-time additional sharing of a dictionary
- The total number of output states is inferred from the two dictionaries (in the cell below, 2 + 2 = 4)
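In this scheme the hiding step simply drops the don't-care positions instead of masking them with 'x' (compare the hidden_state loop in main_circuit below). A minimal classical sketch of that step, with a function name of our choosing:

```python
def hide_state(state: str, dont_care_list) -> str:
    # keep only the positions NOT listed as don't-care
    return ''.join(bit for j, bit in enumerate(state) if j not in dont_care_list)

print(hide_state('0101', [1, 2]))  # '01' -- the middle two bits are dropped
```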
```
constant_index_dictionary = {}
constant_index_dictionary['0000'] = [0, 2]
constant_index_dictionary['0001'] = [2, 3]
constant_index_dictionary['0010'] = [0, 1]
constant_index_dictionary['0011'] = [1, 3]
constant_index_dictionary['0100'] = [2, 3]
constant_index_dictionary['0101'] = [1, 2]
constant_index_dictionary['0110'] = [0, 2]
constant_index_dictionary['0111'] = [0, 2]
constant_index_dictionary['1000'] = [0, 3]
constant_index_dictionary['1001'] = [0, 1]
constant_index_dictionary['1010'] = [1, 2]
constant_index_dictionary['1011'] = [0, 3]
constant_index_dictionary['1100'] = [1, 3]
constant_index_dictionary['1101'] = [2, 3]
constant_index_dictionary['1110'] = [1, 3]
constant_index_dictionary['1111'] = [0, 1]
# additional dictionary to be shared
hidden_index_dictionary = {}
hidden_index_dictionary['0000'] = [1, 3]
hidden_index_dictionary['0001'] = [0, 1]
hidden_index_dictionary['0010'] = [2, 3]
hidden_index_dictionary['0011'] = [0, 2]
hidden_index_dictionary['0100'] = [0, 1]
hidden_index_dictionary['0101'] = [0, 3]
hidden_index_dictionary['0110'] = [1, 3]
hidden_index_dictionary['0111'] = [1, 3]
hidden_index_dictionary['1000'] = [1, 2]
hidden_index_dictionary['1001'] = [2, 3]
hidden_index_dictionary['1010'] = [0, 3]
hidden_index_dictionary['1011'] = [1, 2]
hidden_index_dictionary['1100'] = [0, 2]
hidden_index_dictionary['1101'] = [0, 1]
hidden_index_dictionary['1110'] = [0, 2]
hidden_index_dictionary['1111'] = [2, 3]
import qiskit
import numpy as np
import time
CLASSICAL_REGISTER_LENGTH = 5
QUANTUM_REGISTER_LENGTH = 5
circuit_building_start_time = time.time()
simulator = qiskit.Aer.get_backend('qasm_simulator')
classical_register = qiskit.ClassicalRegister(CLASSICAL_REGISTER_LENGTH)
quantum_register = qiskit.QuantumRegister(QUANTUM_REGISTER_LENGTH)
circuit = qiskit.QuantumCircuit(quantum_register, classical_register)
circuit_building_end_time = time.time()
AND_gate_auxillary_qubit = QUANTUM_REGISTER_LENGTH - 1 # last qubit as the auxillary qubit
'''
Applies quantum AND operation to specified pair of qubits, stores the operation in AND_gate_auxillary_qubit,
and stores the result in a classical register
@PARAMS:
qubit1: position of the first qubit
qubit2: position of the second qubit
qubit1_one: whether the first qubit is NOT
qubit2_one: whether the second qubit is NOT
classical_register_position: the classical register position to store the measurement of AND_gate_auxillary_qubit
'''
def AND_2_qubit(qubit1, qubit2, qubit1_one, qubit2_one, classical_register_position):
if(qubit1_one):
circuit.x(quantum_register[qubit1])
if(qubit2_one):
circuit.x(quantum_register[qubit2])
circuit.ccx(quantum_register[qubit1], quantum_register[qubit2], quantum_register[AND_gate_auxillary_qubit])
circuit.measure(quantum_register[AND_gate_auxillary_qubit], classical_register[classical_register_position])
if(qubit1_one):
circuit.x(quantum_register[qubit1])
if(qubit2_one):
circuit.x(quantum_register[qubit2])
circuit.reset(quantum_register[AND_gate_auxillary_qubit])
'''
Applies the AND gate operation on a list of n qubits
@PARAMS:
qubit_list: list of qubits to perform the operation on
qubit_one_list: whether each of those qubits is NOT
@RETURN:
result of the n-qubit AND operation
'''
def AND_n_qubits(qubit_list, qubit_one_list):
length = len(qubit_list)
if(length != len(qubit_one_list)):
print("Incorrect dimensions")
return
classical_register_index = 0 # where to store pairwise AND operation results
# handling odd number of qubits by preprocessing the last qubit
if(length % 2 != 0):
if(qubit_one_list[length - 1] == 1):
circuit.x(quantum_register[qubit_list[length-1]])
circuit.cx(quantum_register[qubit_list[length - 1]], quantum_register[AND_gate_auxillary_qubit])
circuit.measure(quantum_register[AND_gate_auxillary_qubit], classical_register[classical_register_index])
circuit.reset(quantum_register[AND_gate_auxillary_qubit])
classical_register_index = classical_register_index + 1
if(qubit_one_list[length - 1] == 1):
circuit.x(quantum_register[qubit_list[length-1]])
length = length - 1
for index in range(length - 1, 0, -2):
AND_2_qubit(qubit_list[index], qubit_list[index - 1], qubit_one_list[index], qubit_one_list[index - 1], classical_register_index)
classical_register_index = classical_register_index + 1
job = qiskit.execute(circuit, simulator, shots=1)
result = job.result()
counts = str(result.get_counts())
counts = counts[counts.find('\'') + 1:]
counts = counts[:counts.find('\'')]
output = 1
for index in range(0, classical_register_index, 1):
output = output & int(counts[CLASSICAL_REGISTER_LENGTH - 1 - index])
return output
def controlled_n_qubit_h(qubit_list, qubit_one_list):
output = AND_n_qubits(qubit_list, qubit_one_list)
if(output == 1):
circuit.h(quantum_register[AND_gate_auxillary_qubit])
circuit.measure(quantum_register[AND_gate_auxillary_qubit], classical_register[0])
circuit.reset(quantum_register[AND_gate_auxillary_qubit])
job = qiskit.execute(circuit, simulator, shots=1)
result = job.result()
counts = str(result.get_counts())
counts = counts[counts.find('\'') + 1:]
counts = counts[:counts.find('\'')]
return int(counts[len(counts) - 1])
return 0
'''
the main circuit for the following truth table:
A, B, C, D = binary representation input state for the robot
P, Q, R, S = binary representation of the output state from the robot
New circuit in register...
'''
def main_circuit(STEPS, initial_state):
signature = ""
state = initial_state
for _ in range(STEPS):
dont_care_list = constant_index_dictionary[state]
state = ""
P = controlled_n_qubit_h([0, 1, 3], [1, 1, 0]) | controlled_n_qubit_h([1, 2], [0, 1]) | controlled_n_qubit_h([0, 2, 3], [0, 0, 1]) | AND_n_qubits([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 3], [1, 1, 1]) | AND_n_qubits([1, 2, 3], [1, 1, 1])
Q = controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 1, 1]) | controlled_n_qubit_h([0, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([1, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 0, 1, 0]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 1, 0, 0]) | AND_n_qubits([0, 1, 3], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 1, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 0, 1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 1, 0])
R = controlled_n_qubit_h([0, 1, 2], [1, 1, 0]) | controlled_n_qubit_h([0, 1, 2], [0, 0, 0]) | controlled_n_qubit_h([0, 1, 3], [0, 1, 0]) | controlled_n_qubit_h([0, 2, 3], [0, 1, 1]) | AND_n_qubits([0, 1], [1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 1])
S = controlled_n_qubit_h([1, 2, 3], [1, 0, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 2], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 0])
state = state + str(P) + str(Q) + str(R) + str(S)
hidden_state = ""
for j in range(len(state)):
if(j in dont_care_list):
pass
else:
hidden_state = hidden_state + state[j]
print(state + " " + hidden_state)
signature = signature + hidden_state
for _ in range(len(circuit.data)):
circuit.data.pop(0)
if(P == 1):
circuit.x(quantum_register[0])
if(Q == 1):
circuit.x(quantum_register[1])
if(R == 1):
circuit.x(quantum_register[2])
if(S == 1):
circuit.x(quantum_register[3])
print("End state: " + str(P) + str(Q) + str(R) + str(S) )
print("Signature: " + signature)
def initialise_starting_state(P, Q, R, S):
if(P == 1):
circuit.x(quantum_register[0])
if(Q == 1):
circuit.x(quantum_register[1])
if(R == 1):
circuit.x(quantum_register[2])
if(S == 1):
circuit.x(quantum_register[3])
print("Message: " + str(P) + str(Q) + str(R) + str(S))
def measure_time():
total_time = 0
for i in range(100):
start_time = time.time()
# output = AND_n_qubits([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
output = controlled_n_qubit_h([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
print(str(i) + " " + str(output))
end_time = time.time()
total_time = total_time + (end_time - start_time)
print("Average time: " + str(total_time/100))
start_time = time.time()
initialise_starting_state(0, 1, 0, 1) # message to be signed
STEPS = 10 # security parameter: length of the walk
main_circuit(STEPS, '0101')
# measure_time()
end_time = time.time()
print("Run in time " + str(end_time - start_time))
print(circuit_building_end_time - circuit_building_start_time)
def recipient_initialise_starting_state(P, Q, R, S):
if(P == "1"):
circuit.x(quantum_register[0])
if(Q == "1"):
circuit.x(quantum_register[1])
if(R == "1"):
circuit.x(quantum_register[2])
if(S == "1"):
circuit.x(quantum_register[3])
print("Message: " + str(P) + str(Q) + str(R) + str(S))
def recipient(message, signature, end_state):
# for every 2 bits, there are 2 additional hidden bits, by definition of the shared data structures
STEPS = (2*len(signature))/len(end_state)
STEPS = int(STEPS)
index = 0
recipient_initialise_starting_state(message[0], message[1], message[2], message[3])
state = message
recreated_signature = ""
recreated_original_signature = ""
for _ in range(STEPS):
dont_care_list = constant_index_dictionary[state]
hidden_index_list = hidden_index_dictionary[state]
# print(state + " " + str(hidden_index_list))
state = ""
P = controlled_n_qubit_h([0, 1, 3], [1, 1, 0]) | controlled_n_qubit_h([1, 2], [0, 1]) | controlled_n_qubit_h([0, 2, 3], [0, 0, 1]) | AND_n_qubits([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 3], [1, 1, 1]) | AND_n_qubits([1, 2, 3], [1, 1, 1])
Q = controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 1, 1]) | controlled_n_qubit_h([0, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([1, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 0, 1, 0]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 1, 0, 0]) | AND_n_qubits([0, 1, 3], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 1, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 0, 1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 1, 0])
R = controlled_n_qubit_h([0, 1, 2], [1, 1, 0]) | controlled_n_qubit_h([0, 1, 2], [0, 0, 0]) | controlled_n_qubit_h([0, 1, 3], [0, 1, 0]) | controlled_n_qubit_h([0, 2, 3], [0, 1, 1]) | AND_n_qubits([0, 1], [1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 1])
S = controlled_n_qubit_h([1, 2, 3], [1, 0, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 2], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 0])
for i in range(len(hidden_index_list)):
temp_index = hidden_index_list[i]
if(temp_index == 0):
if(signature[index] == '1'):
P = P | 1
else:
P = P & 0
elif(temp_index == 1):
if(signature[index] == '1'):
Q = Q | 1
else:
Q = Q & 0
elif(temp_index == 2):
if(signature[index] == '1'):
R = R | 1
else:
R = R & 0
elif(temp_index == 3):
if(signature[index] == '1'):
S = S | 1
else:
S = S & 0
index = index + 1
state = "" + str(P) + str(Q) + str(R) + str(S)
hidden_state = ""
for j in range(len(state)):
if(j in dont_care_list):
# hidden_state = hidden_state + "x"
pass
else:
hidden_state = hidden_state + state[j]
print(state + " " + hidden_state)
recreated_signature = recreated_signature + hidden_state
for _ in range(len(circuit.data)):
circuit.data.pop(0)
if(P == 1):
circuit.x(quantum_register[0])
if(Q == 1):
circuit.x(quantum_register[1])
if(R == 1):
circuit.x(quantum_register[2])
if(S == 1):
circuit.x(quantum_register[3])
if(recreated_signature == signature and end_state == state):
print("ACCEPT")
else:
print("REJECT")
start = time.time()
# for _ in range(len(circuit.data)):
# circuit.data.pop(0)
# recipient("0101", "10011010111000010011", "1111")
for _ in range(len(circuit.data)):
circuit.data.pop(0)
recipient("0101", "1000110000100000000", "0110")
print(time.time() - start)
```
# k-Path dependent scheme
```
constant_index_dictionary = {}
constant_index_dictionary['0000'] = [0, 2]
constant_index_dictionary['0001'] = [2, 3]
constant_index_dictionary['0010'] = [0, 1]
constant_index_dictionary['0011'] = [1, 3]
constant_index_dictionary['0100'] = [2, 3]
constant_index_dictionary['0101'] = [1, 2]
constant_index_dictionary['0110'] = [0, 2]
constant_index_dictionary['0111'] = [0, 2]
constant_index_dictionary['1000'] = [0, 3]
constant_index_dictionary['1001'] = [0, 1]
constant_index_dictionary['1010'] = [1, 2]
constant_index_dictionary['1011'] = [0, 3]
constant_index_dictionary['1100'] = [1, 3]
constant_index_dictionary['1101'] = [2, 3]
constant_index_dictionary['1110'] = [1, 3]
constant_index_dictionary['1111'] = [0, 1]
import qiskit
import numpy as np
import time
CLASSICAL_REGISTER_LENGTH = 5
QUANTUM_REGISTER_LENGTH = 5
circuit_building_start_time = time.time()
simulator = qiskit.Aer.get_backend('qasm_simulator')
classical_register = qiskit.ClassicalRegister(CLASSICAL_REGISTER_LENGTH)
quantum_register = qiskit.QuantumRegister(QUANTUM_REGISTER_LENGTH)
circuit = qiskit.QuantumCircuit(quantum_register, classical_register)
circuit_building_end_time = time.time()
AND_gate_auxillary_qubit = QUANTUM_REGISTER_LENGTH - 1 # last qubit as the auxillary qubit
'''
Applies quantum AND operation to specified pair of qubits, stores the operation in AND_gate_auxillary_qubit,
and stores the result in a classical register
@PARAMS:
qubit1: position of the first qubit
qubit2: position of the second qubit
qubit1_one: whether the first qubit is NOT
qubit2_one: whether the second qubit is NOT
classical_register_position: the classical register position to store the measurement of AND_gate_auxillary_qubit
'''
def AND_2_qubit(qubit1, qubit2, qubit1_one, qubit2_one, classical_register_position):
if(qubit1_one):
circuit.x(quantum_register[qubit1])
if(qubit2_one):
circuit.x(quantum_register[qubit2])
circuit.ccx(quantum_register[qubit1], quantum_register[qubit2], quantum_register[AND_gate_auxillary_qubit])
circuit.measure(quantum_register[AND_gate_auxillary_qubit], classical_register[classical_register_position])
if(qubit1_one):
circuit.x(quantum_register[qubit1])
if(qubit2_one):
circuit.x(quantum_register[qubit2])
circuit.reset(quantum_register[AND_gate_auxillary_qubit])
'''
Applies the AND gate operation on a list of n qubits
@PARAMS:
qubit_list: list of qubits to perform the operation on
qubit_one_list: whether each of those qubits is NOT
@RETURN:
result of the n-qubit AND operation
'''
def AND_n_qubits(qubit_list, qubit_one_list):
length = len(qubit_list)
if(length != len(qubit_one_list)):
print("Incorrect dimensions")
return
classical_register_index = 0 # where to store pairwise AND operation results
# handling odd number of qubits by preprocessing the last qubit
if(length % 2 != 0):
if(qubit_one_list[length - 1] == 1):
circuit.x(quantum_register[qubit_list[length-1]])
circuit.cx(quantum_register[qubit_list[length - 1]], quantum_register[AND_gate_auxillary_qubit])
circuit.measure(quantum_register[AND_gate_auxillary_qubit], classical_register[classical_register_index])
circuit.reset(quantum_register[AND_gate_auxillary_qubit])
classical_register_index = classical_register_index + 1
if(qubit_one_list[length - 1] == 1):
circuit.x(quantum_register[qubit_list[length-1]])
length = length - 1
for index in range(length - 1, 0, -2):
AND_2_qubit(qubit_list[index], qubit_list[index - 1], qubit_one_list[index], qubit_one_list[index - 1], classical_register_index)
classical_register_index = classical_register_index + 1
job = qiskit.execute(circuit, simulator, shots=1)
result = job.result()
counts = str(result.get_counts())
counts = counts[counts.find('\'') + 1:]
counts = counts[:counts.find('\'')]
output = 1
for index in range(0, classical_register_index, 1):
output = output & int(counts[CLASSICAL_REGISTER_LENGTH - 1 - index])
return output
def controlled_n_qubit_h(qubit_list, qubit_one_list):
output = AND_n_qubits(qubit_list, qubit_one_list)
if(output == 1):
circuit.h(quantum_register[AND_gate_auxillary_qubit])
circuit.measure(quantum_register[AND_gate_auxillary_qubit], classical_register[0])
circuit.reset(quantum_register[AND_gate_auxillary_qubit])
job = qiskit.execute(circuit, simulator, shots=1)
result = job.result()
counts = str(result.get_counts())
counts = counts[counts.find('\'') + 1:]
counts = counts[:counts.find('\'')]
return int(counts[len(counts) - 1])
return 0
'''
the main circuit for the following truth table:
A, B, C, D = binary representation input state for the robot
P, Q, R, S = binary representation of the output state from the robot
New circuit in register...
'''
def main_circuit(STEPS, initial_state):
signature = ""
state = initial_state
used_states = []
step = 0
rollback_count = 0
while True:
if(step == STEPS):
break
dont_care_list = constant_index_dictionary[state]
rollback_state = state
if(state not in used_states):
used_states.append(state)
state = ""
P = controlled_n_qubit_h([0, 1, 3], [1, 1, 0]) | controlled_n_qubit_h([1, 2], [0, 1]) | controlled_n_qubit_h([0, 2, 3], [0, 0, 1]) | AND_n_qubits([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 3], [1, 1, 1]) | AND_n_qubits([1, 2, 3], [1, 1, 1])
Q = controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 1, 1]) | controlled_n_qubit_h([0, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([1, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 0, 1, 0]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 1, 0, 0]) | AND_n_qubits([0, 1, 3], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 1, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 0, 1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 1, 0])
R = controlled_n_qubit_h([0, 1, 2], [1, 1, 0]) | controlled_n_qubit_h([0, 1, 2], [0, 0, 0]) | controlled_n_qubit_h([0, 1, 3], [0, 1, 0]) | controlled_n_qubit_h([0, 2, 3], [0, 1, 1]) | AND_n_qubits([0, 1], [1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 1])
S = controlled_n_qubit_h([1, 2, 3], [1, 0, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 2], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 0])
state = state + str(P) + str(Q) + str(R) + str(S)
if(state in used_states):
rollback_count = rollback_count + 1
if(rollback_count == (len(initial_state) + 10)):
print("Aborting.")
return "ABORT"
P = rollback_state[0]
Q = rollback_state[1]
R = rollback_state[2]
S = rollback_state[3]
state = rollback_state
for _ in range(len(circuit.data)):
circuit.data.pop(0)
if(P == '1'):
print("Rollback reset")
circuit.x(quantum_register[0])
if(Q == '1'):
print("Rollback reset")
circuit.x(quantum_register[1])
if(R == '1'):
print("Rollback reset")
circuit.x(quantum_register[2])
if(S == '1'):
print("Rollback reset")
circuit.x(quantum_register[3])
print("Rolling back")
continue
step = step + 1
rollback = 0
hidden_state = ""
for j in range(len(state)):
if(j in dont_care_list):
hidden_state = hidden_state + "x"
else:
hidden_state = hidden_state + state[j]
signature = signature + hidden_state
# print(state + " " + hidden_state)
for _ in range(len(circuit.data)):
circuit.data.pop(0)
if(P == 1):
circuit.x(quantum_register[0])
if(Q == 1):
circuit.x(quantum_register[1])
if(R == 1):
circuit.x(quantum_register[2])
if(S == 1):
circuit.x(quantum_register[3])
return signature
def initialise_starting_state(P, Q, R, S):
for _ in range(len(circuit.data)):
circuit.data.pop(0)
if(P == 1):
circuit.x(quantum_register[0])
if(Q == 1):
circuit.x(quantum_register[1])
if(R == 1):
circuit.x(quantum_register[2])
if(S == 1):
circuit.x(quantum_register[3])
print("Message: " + str(P) + str(Q) + str(R) + str(S))
def measure_time():
total_time = 0
for i in range(100):
start_time = time.time()
# output = AND_n_qubits([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
output = controlled_n_qubit_h([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
print(str(i) + " " + str(output))
end_time = time.time()
total_time = total_time + (end_time - start_time)
print("Average time: " + str(total_time/100))
```
Creating a long message
100 *bits*
```
def create_random_message(NUMBER_OF_BITS):
message = ""
c = qiskit.ClassicalRegister(1)
q = qiskit.QuantumRegister(1)
s = qiskit.Aer.get_backend('qasm_simulator')
for i in range(NUMBER_OF_BITS):
print(i)
random_circuit = qiskit.QuantumCircuit(q, c)
random_circuit.h(q[0])
random_circuit.measure(q[0], c[0])
job = qiskit.execute(random_circuit, s, shots=1)
result = job.result()
counts = str(result.get_counts())
counts = counts[counts.find('\'') + 1:]
counts = counts[:counts.find('\'')]
message = message + counts
print(message)
create_random_message(100)
```
Signing a long message
```
def sign_message(message):
signature = ""
ITER = int(len(message)/4)
start_time = time.time()
STEPS = 5 # security parameter: length of the walk
iter = 0
while True:
if(iter == ITER):
break
state = message[0:4]
initialise_starting_state(int(state[0]), int(state[1]), int(state[2]), int(state[3]))
return_signature = main_circuit(STEPS, state)
if(return_signature == "ABORT"):
print("Rerun")
continue
iter = iter + 1
signature = signature + return_signature
message = message[4:]
end_time = time.time()
print("Run in time " + str(end_time - start_time))
print(signature)
sign_message('1011000001011010110011111011011100111001000010001111011101101100010100100011010010111000110101100011')
print(len('x00x10xxx10x1x1xxx01'))
def recipient_initialise_starting_state(P, Q, R, S):
for _ in range(len(circuit.data)):
circuit.data.pop(0)
if(P == "1"):
circuit.x(quantum_register[0])
if(Q == "1"):
circuit.x(quantum_register[1])
if(R == "1"):
circuit.x(quantum_register[2])
if(S == "1"):
circuit.x(quantum_register[3])
print("Message: " + str(P) + str(Q) + str(R) + str(S))
def recipient(message, signature, end_state):
STEPS = len(signature)/len(end_state)
STEPS = int(STEPS)
index = 0
recipient_initialise_starting_state(message[0], message[1], message[2], message[3])
state = message
recreated_signature = ""
for _ in range(STEPS):
dont_care_list = constant_index_dictionary[state]
state = ""
P = controlled_n_qubit_h([0, 1, 3], [1, 1, 0]) | controlled_n_qubit_h([1, 2], [0, 1]) | controlled_n_qubit_h([0, 2, 3], [0, 0, 1]) | AND_n_qubits([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 3], [1, 1, 1]) | AND_n_qubits([1, 2, 3], [1, 1, 1])
Q = controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 1, 1]) | controlled_n_qubit_h([0, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([1, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 0, 1, 0]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 1, 0, 0]) | AND_n_qubits([0, 1, 3], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 1, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 0, 1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 1, 0])
R = controlled_n_qubit_h([0, 1, 2], [1, 1, 0]) | controlled_n_qubit_h([0, 1, 2], [0, 0, 0]) | controlled_n_qubit_h([0, 1, 3], [0, 1, 0]) | controlled_n_qubit_h([0, 2, 3], [0, 1, 1]) | AND_n_qubits([0, 1], [1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 1])
S = controlled_n_qubit_h([1, 2, 3], [1, 0, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 2], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 0])
if(signature[index] != "x" and signature[index] == "1"):
P = P | 1
elif(signature[index] != "x"):
P = P & 0
index = index + 1
if(signature[index] != "x" and signature[index] == "1"):
Q = Q | 1
elif(signature[index] != "x"):
Q = Q & 0
index = index + 1
if(signature[index] != "x" and signature[index] == "1"):
R = R | 1
elif(signature[index] != "x"):
R = R & 0
index = index + 1
if(signature[index] != "x" and signature[index] == "1"):
S = S | 1
elif(signature[index] != "x"):
S = S & 0
index = index + 1
state = "" + str(P) + str(Q) + str(R) + str(S)
hidden_state = ""
for j in range(len(state)):
if(j in dont_care_list):
hidden_state = hidden_state + "x"
else:
hidden_state = hidden_state + state[j]
recreated_signature = recreated_signature + hidden_state
print(state + " " + hidden_state)
for _ in range(len(circuit.data)):
circuit.data.pop(0)
if(P == 1):
circuit.x(quantum_register[0])
if(Q == 1):
circuit.x(quantum_register[1])
if(R == 1):
circuit.x(quantum_register[2])
if(S == 1):
circuit.x(quantum_register[3])
print(recreated_signature)
print(signature)
if(recreated_signature == signature):
print("ACCEPT")
else:
print("REJECT")
return recreated_signature
import time
start = time.time()
for _ in range(len(circuit.data)):
circuit.data.pop(0)
STEPS = int(len('1011000001011010110011111011011100111001000010001111011101101100010100100011010010111000110101100011') / 4)
message = '1011000001011010110011111011011100111001000010001111011101101100010100100011010010111000110101100011'
signature = 'x11xx1x1xx01xx10x0x1x1x01x0x10xxxx10x1x10xx1x1x1xx11x00x00xx0xx0xx11xx00x10x1x0x0x0x1xx1xx00x11x0x0xxx11x11xx1x00x1xx0x1x01x1x0xx01x0xx0xx01x1x1xx00x10x0x0x1xx00x0xx1x101xx0xx0x1x1xx10x1x1x0x1x01x1x0xx1x101xx1xx1xx00x10xx01x1xx1x10x0xx0x0x0xx01xx10x1x1x0x1x00xx0x01xx1x00x11xx1x1xx0x10x0xx1x01x0x10xx1x1xxx00x01x0xx10x1x1xx00x1xx0x10x1xxx11xx0100xx10xxx11x0x0x0x1xxx101x0x1x1xxx0010xx0xx10x1xxx11xx01x10x0xx0x0x0xx0110xxx01x0xx10x0xx0x1xx0000xx10xxx11x0x1xx1x1x0x0xx100x0x10xx0xx10x1xxx100x1xx1x1x1x1'
temp_signature = signature
k = int(len(signature)/len(message))
end_index = k*4
recipient_signature = ""
for _ in range(STEPS):
start_state = message[0:4]
message = message[4:]
mess_signature = signature[0:end_index]
signature = signature[end_index:]
recipient_signature = recipient_signature + recipient(start_state, mess_signature, '0000')
if(recipient_signature == temp_signature):
print("ACCEPT")
else:
print("REJECT")
print(time.time() - start)
# Sample output (state, masked state / recreated state, masked recreated state):
# 1111 1xx1 1111 1xx1
# 1011 xx11 1011 xx11
# Rolling back
# 1101 x10x 0101 x10x
# 0001 00xx 0010 0xx0
# 1000 10xx 1010 xx10
print(recipient('1011', 'x11xxx01xx10x0x0xx0100xx11xx1x1xxx11x11x', '0000'))
print(recipient('1001', 'xx001x1xxx01xx0011xx1x0x0x1xx1x1xx100xx0', '0000'))
```
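The masking step that both `main_circuit` and `recipient` repeat inline — replacing the "don't care" positions of a 4-bit state with `x` before appending it to the signature — can be isolated as a small helper. This is a sketch with hypothetical names, not part of the original circuit code:

```python
def mask_state(state, dont_care_positions):
    """Replace the qubit positions the verifier should not see with 'x'."""
    return ''.join('x' if j in dont_care_positions else bit
                   for j, bit in enumerate(state))

# A signature chunk is just the concatenation of masked intermediate states.
print(mask_state('1011', {1, 2}))  # -> 1xx1
```

The recipient accepts only if recreating the walk and masking each step with the same positions reproduces the signature exactly.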
|
github_jupyter
|
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Goal" data-toc-modified-id="Goal-1"><span class="toc-item-num">1 </span>Goal</a></span></li><li><span><a href="#Var" data-toc-modified-id="Var-2"><span class="toc-item-num">2 </span>Var</a></span></li><li><span><a href="#Init" data-toc-modified-id="Init-3"><span class="toc-item-num">3 </span>Init</a></span></li><li><span><a href="#Merging" data-toc-modified-id="Merging-4"><span class="toc-item-num">4 </span>Merging</a></span><ul class="toc-item"><li><span><a href="#SV-artifact" data-toc-modified-id="SV-artifact-4.1"><span class="toc-item-num">4.1 </span>SV artifact</a></span></li><li><span><a href="#rep-seqs" data-toc-modified-id="rep-seqs-4.2"><span class="toc-item-num">4.2 </span>rep-seqs</a></span></li><li><span><a href="#Taxonomy" data-toc-modified-id="Taxonomy-4.3"><span class="toc-item-num">4.3 </span>Taxonomy</a></span></li></ul></li><li><span><a href="#Alignment" data-toc-modified-id="Alignment-5"><span class="toc-item-num">5 </span>Alignment</a></span><ul class="toc-item"><li><span><a href="#Creating-alignment" data-toc-modified-id="Creating-alignment-5.1"><span class="toc-item-num">5.1 </span>Creating alignment</a></span></li><li><span><a href="#Masking-alignment" data-toc-modified-id="Masking-alignment-5.2"><span class="toc-item-num">5.2 </span>Masking alignment</a></span></li></ul></li><li><span><a href="#Phylogeny" data-toc-modified-id="Phylogeny-6"><span class="toc-item-num">6 </span>Phylogeny</a></span><ul class="toc-item"><li><span><a href="#Unrooted-tree" data-toc-modified-id="Unrooted-tree-6.1"><span class="toc-item-num">6.1 </span>Unrooted tree</a></span></li><li><span><a href="#Rooted-tree" data-toc-modified-id="Rooted-tree-6.2"><span class="toc-item-num">6.2 </span>Rooted tree</a></span></li></ul></li><li><span><a href="#sessionInfo" data-toc-modified-id="sessionInfo-7"><span class="toc-item-num">7 </span>sessionInfo</a></span></li></ul></div>
# Goal
* Merge results from all per-MiSeq-run `LLA` jobs
* Merging feature tables for multiple sequencing runs:
* MiSeq-Run0116
* MiSeq-Run0122
* MiSeq-Run0126
* **NOT** MiSeq-Run0187 (failed run)
* MiSeq-Run0189
* MiSeq-Run0190
* Then running standard processing:
* dataset summary
* taxonomy
* phylogeny
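Conceptually, merging feature tables with QIIME 2's `--p-overlap-method sum` (used in the merge commands below) adds up the counts of any sample/feature pair that appears in more than one run. An illustrative Python sketch of that semantics — QIIME 2 does this internally, and the names here are hypothetical:

```python
from collections import Counter

def merge_feature_tables(tables):
    """Sum per-(sample, feature) counts across runs, as 'overlap-method sum' does."""
    merged = Counter()
    for table in tables:
        merged.update(table)
    return dict(merged)

run1 = {('sampleA', 'ASV1'): 10, ('sampleA', 'ASV2'): 3}
run2 = {('sampleA', 'ASV1'): 5, ('sampleB', 'ASV1'): 7}
print(merge_feature_tables([run1, run2]))
```

Overlapping entries (here `('sampleA', 'ASV1')`) are summed rather than raising an error, which is what makes merging the same samples sequenced on multiple runs safe.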
# Var
```
work_dir = '/ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA/merged/'
run_dir = '/ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA/'
miseq_runs = c('Run0116', 'Run0122', 'Run0126', 'Run0189', 'Run0190')
# params
conda_env = 'qiime2-2019.10'
threads = 24
```
# Init
```
library(dplyr)
library(tidyr)
library(ggplot2)
library(LeyLabRMisc)
df.dims()
make_dir(work_dir)
```
# Merging
## SV artifact
```
# artifacts for individual runs
P = file.path(run_dir, '{run}', 'table_merged_filt.qza')
runs = miseq_runs %>% as.list %>%
lapply(function(x) glue::glue(P, run=x))
runs
# function to merge tables
merge_tables = function(in_tbls, out_tbl, conda_env){
cmd = 'qiime feature-table merge --i-tables {in_tbls} --o-merged-table {out_tbl} --p-overlap-method sum'
cmd = glue::glue(cmd, in_tbls=in_tbls, out_tbl=out_tbl)
cat('CMD:', cmd, '\n')
ret = bash_job(cmd, conda_env=conda_env, stderr=TRUE)
cat(ret, '\n')
return(out_tbl)
}
# merging
table_merged_file = file.path(work_dir, 'table_merged_filt.qza')
table_merged_file = merge_tables(paste(runs, collapse=' '), table_merged_file, conda_env)
cat('Output file:', table_merged_file, '\n')
```
## rep-seqs
```
# artifacts for individual runs
P = file.path(run_dir, '{run}', 'rep-seqs_merged_filt.qza')
runs = miseq_runs %>% as.list %>%
lapply(function(x) glue::glue(P, run=x))
runs
# function to merge seqs
merge_seqs = function(in_seqs, out_seq, conda_env){
cmd = 'qiime feature-table merge-seqs --i-data {in_seqs} --o-merged-data {out_seq}'
cmd = glue::glue(cmd, in_seqs=in_seqs, out_seq=out_seq)
cat('CMD:', cmd, '\n')
ret = bash_job(cmd, conda_env=conda_env, stderr=TRUE)
cat(ret, '\n')
return(out_seq)
}
# merging
seqs_merged_file = file.path(work_dir, 'rep-seqs_merged_filt.qza')
seqs_merged_file = merge_seqs(paste(runs, collapse=' '), seqs_merged_file, conda_env)
cat('Output file:', seqs_merged_file, '\n')
```
## Taxonomy
```
# artifacts for individual runs
P = file.path(run_dir, '{run}', 'taxonomy.qza')
runs = miseq_runs %>% as.list %>%
lapply(function(x) glue::glue(P, run=x))
runs
# function to merge tax
merge_tax = function(in_taxs, out_tax, conda_env){
cmd = 'qiime feature-table merge-taxa --i-data {in_taxs} --o-merged-data {out_tax}'
cmd = glue::glue(cmd, in_taxs=in_taxs, out_tax=out_tax)
cat('CMD:', cmd, '\n')
ret = bash_job(cmd, conda_env=conda_env, stderr=TRUE)
cat(ret, '\n')
return(out_tax)
}
# merging
tax_merged_file = file.path(work_dir, 'taxonomy.qza')
tax_merged_file = merge_tax(paste(runs, collapse=' '), tax_merged_file, conda_env)
cat('Output file:', tax_merged_file, '\n')
```
# Alignment
## Creating alignment
```
aln_file = file.path(work_dir, 'aligned-rep-seqs_filt.qza')
cmd = 'qiime alignment mafft --p-n-threads {threads} --i-sequences {in_seq} --o-alignment {out_aln}'
cmd = glue::glue(cmd, threads=threads, in_seq=seqs_merged_file, out_aln=aln_file)
bash_job(cmd, conda_env=conda_env, stderr=TRUE)
```
## Masking alignment
```
aln_mask_file = file.path(work_dir, 'aligned-rep-seqs_filt_masked.qza')
cmd = 'qiime alignment mask --i-alignment {in_aln} --o-masked-alignment {out_aln}'
cmd = glue::glue(cmd, in_aln=aln_file, out_aln=aln_mask_file)
bash_job(cmd, conda_env=conda_env, stderr=TRUE)
```
# Phylogeny
## Unrooted tree
```
phy_unroot_file = file.path(work_dir, 'aligned-rep-seqs_filt_masked_unroot-tree.qza')
cmd = 'qiime phylogeny fasttree --p-n-threads {threads} --i-alignment {in_aln} --o-tree {out_phy}'
cmd = glue::glue(cmd, threads=threads, in_aln=aln_mask_file, out_phy=phy_unroot_file)
bash_job(cmd, conda_env=conda_env, stderr=TRUE)
```
## Rooted tree
```
phy_root_file = file.path(work_dir, 'aligned-rep-seqs_filt_masked_midroot-tree.qza')
cmd = 'qiime phylogeny midpoint-root --i-tree {in_phy} --o-rooted-tree {out_phy}'
cmd = glue::glue(cmd, in_phy=phy_unroot_file, out_phy=phy_root_file)
bash_job(cmd, conda_env=conda_env, stderr=TRUE)
```
# sessionInfo
```
sessionInfo()
```
|
github_jupyter
|
```
# Import the required libraries
from bs4 import BeautifulSoup
import requests
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statistics as st
# Set the target URL
url = 'https://tarifaluzhora.es/'
# Request the page
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
# Get the hour labels
horas = soup.find_all('span', itemprop="description")
# Get the prices
precios = soup.find_all('span', itemprop="price")
# Get the date
date = soup.find('input', {'name': 'date'}).get('value')
# Build the list of column names from the hour labels
columnas = ['fecha']
for h in horas:
    columnas.append(h.text)
# Build a list with the price values
contenido = [date]
for p in precios:
    contenido.append(p.text)
# Build a one-row DataFrame for the current day (columns: date plus the hours)
df = pd.DataFrame(data=[np.array(contenido)], columns=columnas)
df
# Empty list of URLs
urls = []
# Walk the date range backwards
for i in range(2022, 2020, -1):
    # For 2022 we only want the first three months
    if i == 2022:
        for j in range(3, 0, -1):
            # February only has 28 days
            if j == 2:
                for k in range(28, 0, -1):
                    url = 'https://tarifaluzhora.es/?tarifa=pcb&fecha='+str(k).zfill(2)+'%2F'+str(j).zfill(2)+'%2F'+str(i)
                    urls.append(url)
            else:
                # The remaining months have 31 days
                for k in range(31, 0, -1):
                    url = 'https://tarifaluzhora.es/?tarifa=pcb&fecha='+str(k).zfill(2)+'%2F'+str(j).zfill(2)+'%2F'+str(i)
                    urls.append(url)
    # For 2021 we only want June onwards
    else:
        for j in range(12, 5, -1):
            # June, September and November have 30 days
            if (j == 6) | (j == 9) | (j == 11):
                for k in range(30, 0, -1):
                    url = 'https://tarifaluzhora.es/?tarifa=pcb&fecha='+str(k).zfill(2)+'%2F'+str(j).zfill(2)+'%2F'+str(i)
                    urls.append(url)
            else:
                # The remaining months have 31 days
                for k in range(31, 0, -1):
                    url = 'https://tarifaluzhora.es/?tarifa=pcb&fecha='+str(k).zfill(2)+'%2F'+str(j).zfill(2)+'%2F'+str(i)
                    urls.append(url)
# Iterate over the list of URLs
for i in urls:
    # Set the target URL
    url = i
    # Request the page
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')
    # Get the prices
    precios = soup.find_all('span', itemprop="price")
    # Get the date
    fecha = soup.find('input', {'name': 'date'}).get('value')
    # Build the list of column names from the hour labels
    columnas = ['fecha']
    for h in horas:
        columnas.append(h.text)
    # Build a list with the price values
    contenido = [fecha]
    for p in precios:
        contenido.append(p.text)
    # Build the one-day DataFrame
    df1 = pd.DataFrame(data=[np.array(contenido)], columns=columnas)
    # Append it to the original DataFrame to build the dataset
    df = pd.concat([df, df1])
print(float(precios[1].text.split(' ')[0]))
print(type(float(precios[1].text.split(' ')[0])))
# Head of the dataset
df.head()
# Tail of the dataset
df.tail()
# Convert every price to float
for i in range(1, len(df.columns)):
    for j in range(0, len(df)):
        df.iloc[j, i] = float(df.iloc[j, i].split(' ')[0])
# Add a column indicating the unit in which the electricity price is measured
df = df.assign(unidad=['€/kWh' for i in range(0, len(df))])
df
# Export the DataFrame to a .csv file
df.to_csv(r'export_dataframe.csv', index=False, header=True)
# Plot the average daily price (305 days)
price_day_all = []
for j in range(0, len(df)):
    # Average the 24 hourly prices of each day
    day_prices = [df.iloc[j, i] for i in range(1, len(df.columns) - 1)]
    price_day_all.append(st.mean(day_prices))
price_day_all = list(reversed(price_day_all))
dias = list(range(0, len(price_day_all)))
plt.xlabel("Days")
plt.ylabel("Price €/kWh")
plt.title('Average daily electricity price')
plt.plot(dias, price_day_all)
plt.show()
# Plot hourly and daily prices (months 6-12 of 2021 and months 1-3 of 2022)
def graficas_meses(precio_hora, precio_dia, mes, mes_num):
    if (mes_num == "/01/" or mes_num == "/02/" or mes_num == "/03/"):
        year = 2022
    else:
        year = 2021
    precio_hora = list(reversed(precio_hora))
    horas = list(range(0, len(precio_hora)))
    print(f'The behaviour of the electricity price in {mes} {year} was:\n')
    # Plot the price of every hour of every day of the month
    plt.plot(horas, precio_hora)
    plt.xlabel("Hours")
    plt.ylabel("Price €/kWh")
    plt.title(f'Hourly electricity price in {mes} {year}')
    plt.show()
    precio_dia = list(reversed(precio_dia))
    dia = list(range(0, len(precio_dia)))
    # Plot the average price of each day
    plt.plot(dia, precio_dia)
    plt.xlabel("Days")
    plt.ylabel("Price €/kWh")
    plt.title(f'Average daily electricity price in {mes} {year}')
    plt.show()
    print(f'The minimum price in {mes} {year} was {round(min(precio_hora), 3)} €/kWh and the minimum average daily price was {round(min(precio_dia), 3)} €/kWh\n')
    print(f'The maximum price in {mes} {year} was {round(max(precio_hora), 3)} €/kWh and the maximum average daily price was {round(max(precio_dia), 3)} €/kWh\n\n\n')
mes = {'June': '/06/', 'July': '/07/', 'August': '/08/', 'September': '/09/', 'October': '/10/', 'November': '/11/', 'December': '/12/',
       'January': '/01/', 'February': '/02/', 'March': '/03/'}
# Take a sub-DataFrame for each month and plot it separately
for i in range(0, len(mes)):
    price_hour = []
    price_day = []
    df_month = df[df["fecha"].str.contains(list(mes.values())[i])]
    for j in range(0, len(df_month)):
        # Collect the 24 hourly prices of the day
        day_prices = [df_month.iloc[j, k] for k in range(1, len(df_month.columns) - 1)]
        price_hour.extend(day_prices)
        # Average the prices of the day
        price_day.append(st.mean(day_prices))
    # Call the plotting function
    graficas_meses(price_hour, price_day, list(mes.keys())[i], list(mes.values())[i])
```
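The nested day/month loops above can also be generated with the standard library's `datetime`, which avoids hard-coding month lengths (and handles February in leap years for free). A sketch assuming the same query format as the site's URLs:

```python
from datetime import date, timedelta

def date_urls(start, end,
              base='https://tarifaluzhora.es/?tarifa=pcb&fecha={d}%2F{m}%2F{y}'):
    """Yield one URL per day, walking backwards from `end` to `start` (inclusive)."""
    day = end
    while day >= start:
        yield base.format(d=str(day.day).zfill(2),
                          m=str(day.month).zfill(2),
                          y=day.year)
        day -= timedelta(days=1)

urls = list(date_urls(date(2021, 6, 1), date(2022, 3, 31)))
print(len(urls))  # 304 days
```

Iterating `date` objects keeps the calendar logic out of the scraping code entirely.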
|
github_jupyter
|
# Test web application locally
This notebook pulls some images and tests them against the local web app running inside the Docker container we made previously.
```
import matplotlib.pyplot as plt
import numpy as np
from testing_utilities import *
import requests
%matplotlib inline
%load_ext autoreload
%autoreload 2
docker_login = 'fboylu'
image_name = docker_login + '/kerastf-gpu'
```
Run the Docker container in the background and open port 80. Notice that we are using the `nvidia-docker` command rather than `docker`.
```
%%bash --bg -s "$image_name"
nvidia-docker run -p 80:80 $1
```
Wait a few seconds for the application to spin up and then check that everything works.
```
!curl 'http://0.0.0.0:80/'
!curl 'http://0.0.0.0:80/version'
```
Pull an image of a Lynx to test our local web app with.
```
IMAGEURL = "https://upload.wikimedia.org/wikipedia/commons/thumb/6/68/Lynx_lynx_poing.jpg/220px-Lynx_lynx_poing.jpg"
plt.imshow(to_img(IMAGEURL))
jsonimg = img_url_to_json(IMAGEURL)
jsonimg[:100]
headers = {'content-type': 'application/json'}
%time r = requests.post('http://0.0.0.0:80/score', data=jsonimg, headers=headers)
print(r)
r.json()
```
Let's try a few more images.
```
images = ('https://upload.wikimedia.org/wikipedia/commons/thumb/6/68/Lynx_lynx_poing.jpg/220px-Lynx_lynx_poing.jpg',
'https://upload.wikimedia.org/wikipedia/commons/3/3a/Roadster_2.5_windmills_trimmed.jpg',
'http://www.worldshipsociety.org/wp-content/themes/construct/lib/scripts/timthumb/thumb.php?src=http://www.worldshipsociety.org/wp-content/uploads/2013/04/stock-photo-5495905-cruise-ship.jpg&w=570&h=370&zc=1&q=100',
'http://yourshot.nationalgeographic.com/u/ss/fQYSUbVfts-T7pS2VP2wnKyN8wxywmXtY0-FwsgxpiZv_E9ZfPsNV5B0ER8-bOdruvNfMD5EbP4SznWz4PYn/',
'https://cdn.arstechnica.net/wp-content/uploads/2012/04/bohol_tarsier_wiki-4f88309-intro.jpg',
'http://i.telegraph.co.uk/multimedia/archive/03233/BIRDS-ROBIN_3233998b.jpg')
url = 'http://0.0.0.0:80/score'
results = [requests.post(url, data=img_url_to_json(img), headers=headers) for img in images]
plot_predictions_dict(images, results)
```
Next let's quickly check what the request response performance is for the locally running Docker container.
```
image_data = list(map(img_url_to_json, images)) # Retrieve the images and data
timer_results = list()
for img in image_data:
res=%timeit -r 1 -o -q requests.post(url, data=img, headers=headers)
timer_results.append(res.best)
timer_results
print('Average time taken: {0:4.2f} ms'.format(10**3 * np.mean(timer_results)))
%%bash
docker stop $(docker ps -q)
```
We can now [deploy our web application on AKS](04_DeployOnAKS.ipynb).
|
github_jupyter
|
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
### Homework part I: Prohibited Comment Classification (3 points)

__In this notebook__ you will build an algorithm that classifies social media comments into normal or toxic.
Like in many real-world cases, you only have a small (10^3) dataset of hand-labeled examples to work with. We'll tackle this problem using both classical NLP methods and an embedding-based approach.
```
import pandas as pd
data = pd.read_csv("comments.tsv", sep='\t')
texts = data['comment_text'].values
target = data['should_ban'].values
data[50::200]
from sklearn.model_selection import train_test_split
texts_train, texts_test, y_train, y_test = train_test_split(texts, target, test_size=0.5, random_state=42)
```
__Note:__ it is generally a good idea to split data into train/test before anything is done to them.
It guards you against possible data leakage in the preprocessing stage. For example, should you decide to select words present in obscene tweets as features, you should only count those words over the training set. Otherwise your algorithm can cheat during evaluation.
### Preprocessing and tokenization
Comments contain raw text with punctuation, upper/lowercase letters and even newline symbols.
To simplify all further steps, we'll split text into space-separated tokens using one of nltk tokenizers.
```
from nltk.tokenize import TweetTokenizer
tokenizer = TweetTokenizer()
preprocess = lambda text: ' '.join(tokenizer.tokenize(text.lower()))
text = 'How to be a grown-up at work: replace "fuck you" with "Ok, great!".'
print("before:", text,)
print("after:", preprocess(text),)
# task: preprocess each comment in train and test
texts_train = np.array(list(map(preprocess, texts_train)))
texts_test = np.array(list(map(preprocess, texts_test)))
assert texts_train[5] == 'who cares anymore . they attack with impunity .'
assert texts_test[89] == 'hey todds ! quick q ? why are you so gay'
assert len(texts_test) == len(y_test)
```
### Solving it: bag of words

One traditional approach to such problem is to use bag of words features:
1. build a vocabulary of frequent words (use train data only)
2. for each training sample, count the number of times a word occurs in it (for each word in vocabulary).
3. consider this count a feature for some classifier
__Note:__ in practice, you can compute such features using sklearn. Please don't do that in the current assignment, though.
* `from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer`
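The three steps above, sketched on a toy corpus (illustrative only — the assignment's real implementation, over `texts_train`, follows below):

```python
from collections import Counter

train_texts = ["the cat sat", "the dog sat down"]
# 1. vocabulary of frequent words, built from train data only
vocab = [w for w, _ in Counter(" ".join(train_texts).split()).most_common(4)]

# 2-3. per-sample word counts over that vocabulary become the feature vector
def to_bow(text, vocab=vocab):
    counts = Counter(text.split())
    return [counts[w] for w in vocab]

print(vocab)
print(to_bow("the cat saw the dog"))
```

Note that "saw" is simply dropped: words outside the train-only vocabulary contribute nothing, which is exactly the leakage-safe behaviour the note above asks for.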
```
# task: find up to k most frequent tokens in texts_train,
# sort them by number of occurences (highest first)
k = 10000
from collections import Counter
bow_vocabulary = list(zip(*Counter((' '.join(texts_train)).split()).most_common(k)))[0]
print('example features:', sorted(bow_vocabulary)[::100])
def text_to_bow(text):
""" convert text string to an array of token counts. Use bow_vocabulary. """
counter = Counter(text.split())
return np.array(list(counter[word] for word in bow_vocabulary), 'float32')
X_train_bow = np.stack(list(map(text_to_bow, texts_train)))
X_test_bow = np.stack(list(map(text_to_bow, texts_test)))
k_max = len(set(' '.join(texts_train).split()))
assert X_train_bow.shape == (len(texts_train), min(k, k_max))
assert X_test_bow.shape == (len(texts_test), min(k, k_max))
assert np.all(X_train_bow[5:10].sum(-1) == np.array([len(s.split()) for s in texts_train[5:10]]))
assert len(bow_vocabulary) <= min(k, k_max)
assert X_train_bow[6, bow_vocabulary.index('.')] == texts_train[6].split().count('.')
```
Machine learning stuff: fit, predict, evaluate. You know the drill.
```
from sklearn.linear_model import LogisticRegression
bow_model = LogisticRegression().fit(X_train_bow, y_train)
from sklearn.metrics import roc_auc_score, roc_curve
for name, X, y, model in [
('train', X_train_bow, y_train, bow_model),
('test ', X_test_bow, y_test, bow_model)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
```
### Task: implement TF-IDF features
Not all words are equally useful. One can prioritize rare words and downscale words like "and"/"or" by using __tf-idf features__. This abbreviation stands for __term frequency / inverse document frequency__ and means exactly that:
$$ feature_i = Count(word_i \in x) \cdot \log \frac{N}{Count(word_i \in D) + \alpha} $$
where $x$ is a single text, $D$ is your dataset (a collection of texts), $N$ is the total number of texts, and $\alpha$ is a smoothing hyperparameter (typically 1).
It may also be a good idea to normalize each data sample after computing tf-idf features.
__Your task:__ implement tf-idf features, train a model and evaluate ROC curve. Compare it with basic BagOfWords model from above.
Please don't use sklearn/nltk builtin tf-idf vectorizers in your solution :) You can still use 'em for debugging though.
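As a tiny worked example of the formula above (with alpha = 1 and natural log), hand-computed before implementing it on the real data:

```python
import numpy as np

docs = ["cat sat", "cat ran", "dog ran"]
N = len(docs)  # 3 texts in the collection D

# raw count of "cat" over the whole collection
count_in_D = sum(d.split().count("cat") for d in docs)  # 2
# term frequency of "cat" in the single text x = "cat sat"
tf = "cat sat".split().count("cat")                     # 1

feature = tf * np.log(N / (count_in_D + 1))  # 1 * log(3/3) = 0.0
print(round(feature, 4))
```

A word appearing in (almost) every text gets a weight near zero — exactly the down-scaling of "and"/"or" described above — while rare words keep a large positive weight.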
```
def tf_idf(texts_collection, a=1):
    """ compute tf-idf features for a collection of texts, using bow_vocabulary """
    global_counts = text_to_bow(' '.join(texts_collection))
    idf = np.log(len(texts_collection) / (global_counts + a))
    return np.stack(list(map(text_to_bow, texts_collection))) * idf
global_counts = text_to_bow(' '.join(texts_train))
idf = np.log(len(texts_train) / (global_counts + 1))
X_train_tfidf = X_train_bow * idf
X_test_tfidf = X_test_bow * idf
tfidf_model = LogisticRegression().fit(X_train_tfidf, y_train)
for name, X, y, model in [
('train', X_train_tfidf, y_train, tfidf_model),
('test ', X_test_tfidf, y_test, tfidf_model)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
```
### Solving it better: word vectors
Let's try another approach: instead of counting per-word frequencies, we shall map all words to pre-trained word vectors and average over them to get text features.
This should give us two key advantages: (1) we now have 10^2 features instead of 10^4 and (2) our model can generalize to words that are not in the training dataset.
We begin with a standard approach with pre-trained word vectors. However, you may also try
* training embeddings from scratch on relevant (unlabeled) data
* multiplying word vectors by inverse word frequency in dataset (like tf-idf).
* concatenating several embeddings
* call `gensim.downloader.info()['models'].keys()` to get a list of available models
* clusterizing words by their word-vectors and try bag of cluster_ids
__Note:__ loading the pre-trained model may take a while. It's a perfect opportunity to refill your cup of tea/coffee and grab some extra cookies. Or binge-watch some TV series if your internet connection is slow.
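The idea of mapping words to vectors and combining them can be sketched with toy 3-dimensional embeddings (the real notebook uses 300-dimensional fastText vectors; these numbers are made up for illustration):

```python
import numpy as np

toy_embeddings = {
    "good": np.array([0.9, 0.1, 0.0]),
    "movie": np.array([0.0, 0.5, 0.5]),
}

def average_vector(comment, emb=toy_embeddings, dim=3):
    """Average the vectors of all known tokens; zero vector if none are known."""
    vecs = [emb[w] for w in comment.split() if w in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

print(average_vector("good movie"))          # element-wise mean of the two vectors
print(average_vector("unseen tokens only"))  # zero-vector fallback
```

The text feature has the embedding dimensionality regardless of comment length, which is what lets a linear model consume it directly.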
```
import gensim.downloader
embeddings = gensim.downloader.load("fasttext-wiki-news-subwords-300")
# If you're low on RAM or download speed, use "glove-wiki-gigaword-100" instead. Ignore all further asserts.
type(embeddings)
embeddings.get_vector('qweqw', )
def vectorize_sum(comment):
    """
    convert a preprocessed comment to a sum of token vectors
    """
    embedding_dim = embeddings.vectors.shape[1]
    features = np.zeros([embedding_dim], dtype='float32')
    for word in comment.split():
        if word in embeddings.vocab:
            features += embeddings[word]
    return features
assert np.allclose(
vectorize_sum("who cares anymore . they attack with impunity .")[::70],
np.array([ 0.0108616 , 0.0261663 , 0.13855131, -0.18510573, -0.46380025])
)
X_train_wv = np.stack([vectorize_sum(text) for text in texts_train])
X_test_wv = np.stack([vectorize_sum(text) for text in texts_test])
wv_model = LogisticRegression().fit(X_train_wv, y_train)
for name, X, y, model in [
('bow train', X_train_bow, y_train, bow_model),
('bow test ', X_test_bow, y_test, bow_model),
('vec train', X_train_wv, y_train, wv_model),
('vec test ', X_test_wv, y_test, wv_model)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
assert roc_auc_score(y_test, wv_model.predict_proba(X_test_wv)[:, 1]) > 0.92, "something's wrong with your features"
```
If everything went right, you've just managed to reduce misclassification rate by a factor of two.
This trick is very useful when you're dealing with small datasets. However, if you have hundreds of thousands of samples, there's a whole different range of methods for that. We'll get there in the second part.
|
github_jupyter
|
# Data analysis with Python, Apache Spark, and PixieDust
***
In this notebook you will:
* analyze customer demographics, such as, age, gender, income, and location
* combine that data with sales data to examine trends for product categories, transaction types, and product popularity
* load data from GitHub as well as from a public open data set
* cleanse, shape, and enrich the data, and then visualize the data with the PixieDust library
Don't worry! PixieDust graphs don't require coding.
By the end of the notebook, you will understand how to combine data to gain insights about which customers you might target to increase sales.
This notebook runs on Python 2 with Spark 2.1, and PixieDust 1.1.10.
<a id="toc"></a>
## Table of contents
#### [Setup](#Setup)
[Load data into the notebook](#Load-data-into-the-notebook)
#### [Explore customer demographics](#part1)
[Prepare the customer data set](#Prepare-the-customer-data-set)<br>
[Visualize customer demographics and locations](#Visualize-customer-demographics-and-locations)<br>
[Enrich demographic information with open data](#Enrich-demographic-information-with-open-data)<br>
#### [Summary and next steps](#summary)
## Setup
You need to import libraries and load the customer data into this notebook.
Import the necessary libraries:
```
import pixiedust
import pyspark.sql.functions as func
import pyspark.sql.types as types
import re
import json
import os
import requests
```
**If you get any errors or if a package is out of date:**
* uncomment the lines in the next cell (remove the `#`)
* restart the kernel (from the Kernel menu at the top of the notebook)
* reload the browser page
* run the cell above, and continue with the notebook
```
#!pip install jinja2 --user --upgrade
#!pip install pixiedust --user --upgrade
#!pip install -U --no-deps bokeh
```
### Load data into the notebook
The data file contains both the customer demographic data that you'll analyze in Part 1, and the sales transaction data for Part 2.
With `pixiedust.sampleData()` you can load CSV data from any URL. The cell below loads the data into a Spark DataFrame.
> In case you wondered, this works with pandas as well: just add `forcePandas = True` to load the data into a pandas DataFrame. *But do not add this to the cell below, as this notebook uses Spark.*
```
raw_df = pixiedust.sampleData('https://raw.githubusercontent.com/IBM/analyze-customer-data-spark-pixiedust/master/data/customers_orders1_opt.csv')
raw_df
```
[Back to Table of Contents](#toc)
<a id="part1"></a>
# Explore customer demographics
In this part of the notebook, you will prepare the customer data and then start learning about your customers by creating multiple charts and maps.
## Prepare the customer data set
Create a new Spark DataFrame with only the data you need and then cleanse and enrich the data.
Extract the columns that you are interested in, remove duplicate customers, and add a column for aggregations:
```
# Extract the customer information from the data set
customer_df = raw_df.select("CUST_ID",
"CUSTNAME",
"ADDRESS1",
"ADDRESS2",
"CITY",
"POSTAL_CODE",
"POSTAL_CODE_PLUS4",
"STATE",
"COUNTRY_CODE",
"EMAIL_ADDRESS",
"PHONE_NUMBER",
"AGE",
"GenderCode",
"GENERATION",
"NATIONALITY",
"NATIONAL_ID",
"DRIVER_LICENSE").dropDuplicates()
customer_df.printSchema()
```
Notice that the data type of the AGE column is currently a string. Convert the AGE column to a numeric data type so you can run calculations on customer age.
```
# ---------------------------------------
# Cleanse age (enforce numeric data type)
# ---------------------------------------
def getNumericVal(col):
"""
input: pyspark.sql.types.Column
output: the numeric value represented by col or None
"""
try:
return int(col)
except ValueError:
# age-33
        match = re.match(r'^age\-(\d+)$', col)
if match:
try:
return int(match.group(1))
except ValueError:
return None
return None
toNumericValUDF = func.udf(lambda c: getNumericVal(c), types.IntegerType())
customer_df = customer_df.withColumn("AGE", toNumericValUDF(customer_df["AGE"]))
customer_df
customer_df.show(5)
```
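As a quick sanity check, the helper can be exercised outside Spark on a few representative inputs (a plain numeric string, the `age-33` pattern mentioned in the comment, and an unparseable value). This is a slightly simplified restatement of the function above so the snippet is self-contained:

```python
import re

def getNumericVal(col):
    """Return the numeric value represented by col, or None."""
    try:
        return int(col)
    except ValueError:
        # handle strings of the form 'age-33'
        match = re.match(r'^age\-(\d+)$', col)
        if match:
            return int(match.group(1))
        return None

print(getNumericVal("42"))      # -> 42
print(getNumericVal("age-33"))  # -> 33
print(getNumericVal("n/a"))     # -> None
```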
The GenderCode column contains salutations instead of gender values. Derive the gender information for each customer based on the salutation and rename the GenderCode column to GENDER.
```
# ------------------------------
# Derive gender from salutation
# ------------------------------
def deriveGender(col):
""" input: pyspark.sql.types.Column
output: "male", "female" or "unknown"
"""
if col in ['Mr.', 'Master.']:
return 'male'
elif col in ['Mrs.', 'Miss.']:
return 'female'
else:
        return 'unknown'
deriveGenderUDF = func.udf(lambda c: deriveGender(c), types.StringType())
customer_df = customer_df.withColumn("GENDER", deriveGenderUDF(customer_df["GenderCode"]))
customer_df.cache()
```
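Because the mapping only recognizes four salutations, anything else falls into `unknown` — worth confirming with a standalone check (the salutation lists mirror the UDF above):

```python
def deriveGender(col):
    """Map a salutation to 'male', 'female' or 'unknown'."""
    if col in ['Mr.', 'Master.']:
        return 'male'
    elif col in ['Mrs.', 'Miss.']:
        return 'female'
    return 'unknown'

print(deriveGender("Mrs."))  # -> female
print(deriveGender("Dr."))   # -> unknown
```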
## Explore the customer data set
Instead of exploring the data with `.printSchema()` and `.show()`, you can quickly explore data sets using PixieDust. Invoke the `display()` command and click the table icon to review the schema and preview the data. Customize the options to display only a subset of the fields or rows, or apply a filter (by clicking the funnel icon).
```
display(customer_df)
```
[Back to Table of Contents](#toc)
## Visualize customer demographics and locations
Now you are ready to explore the customer base. Using simple charts, you can quickly see these characteristics:
* Customer demographics (gender and age)
* Customer locations (city, state, and country)
You will create charts with the PixieDust library:
- [View customers by gender in a pie chart](#View-customers-by-gender-in-a-pie-chart)
- [View customers by generation in a bar chart](#View-customers-by-generation-in-a-bar-chart)
- [View customers by age in a histogram chart](#View-customers-by-age-in-a-histogram-chart)
- [View specific information with a filter function](#View-specific-information-with-a-filter-function)
- [View customer density by location with a map](#View-customer-density-by-location-with-a-map)
### View customers by gender in a pie chart
Run the `display()` command and then configure the graph to show the percentages of male and female customers:
1. Run the next cell. The PixieDust interactive widget appears.
1. Click the chart button and choose **Pie Chart**. The chart options tool appears.
1. In the chart options, drag `GENDER` into the **Keys** box.
1. In the **Aggregation** field, choose **COUNT**.
1. Increase the **# of Rows to Display** to a very large number to display all data.
1. Click **OK**. The pie chart appears.
If you want to make further changes, click **Options** to return to the chart options tool.
```
display(customer_df)
```
[Back to Table of Contents](#toc)
### View customers by generation in a bar chart
Look at how many customers you have per "generation."
Run the next cell and configure the graph:
1. Choose **Bar Chart** as the chart type and configure the chart options as instructed below.
2. Put `GENERATION` into the **Keys** box.
3. Set **aggregation** to `COUNT`.
1. Increase the **# of Rows to Display** to a very large number to display all data.
4. Click **OK**
4. Change the **Renderer** at the top right of the chart to explore different visualisations.
4. You can use clustering to group customers, for example by geographic location. To group generations by country, select `COUNTRY_CODE` from the **Cluster by** list from the menu on the left of the chart.
```
display(customer_df)
```
[Back to Table of Contents](#toc)
### View customers by age in a histogram chart
A generation is a broad age range. You can look at a smaller age range with a histogram chart. A histogram is like a bar chart except each bar represents a range of numbers, called a bin. You can customize the size of the age range by adjusting the bin size. The more bins you specify, the smaller the age range.
Run the next cell and configure the graph:
1. Choose **Histogram** as the chart type.
2. Put `AGE` into the **Values** box.
1. Increase the **# of Rows to Display** to a very large number to display all data.
1. Click **OK**.
3. Use the **Bin count** slider to specify the number of the bins. Try starting with 40.
```
display(customer_df)
```
[Back to Table of Contents](#toc)
### View specific information with a filter function
You can filter records to restrict analysis by using the [PySpark DataFrame](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame) `filter()` function.
If you want to view the age distribution for a specific generation, uncomment the desired filter condition and run the next cell:
```
# Data subsetting: display age distribution for a specific generation
# (Chart type: histogram, Chart Options > Values: AGE)
# to change the filter condition remove the # sign
condition = "GENERATION = 'Baby_Boomers'"
#condition = "GENERATION = 'Gen_X'"
#condition = "GENERATION = 'Gen_Y'"
#condition = "GENERATION = 'Gen_Z'"
boomers_df = customer_df.filter(condition)
display(boomers_df)
```
PixieDust supports basic filtering to make it easy to analyse data subsets. For example, to view the age distribution for a specific gender configure the chart as follows:
1. Choose `Histogram` as the chart type.
2. Put `AGE` into the **Values** box and click OK.
3. Click the filter button (looking like a funnel), and choose **GENDER** as field and `female` as value.
The filter is only applied to the working data set and does not modify the input `customer_df`.
```
display(customer_df)
```
You can also filter by location. For example, the following command creates a new DataFrame that filters for customers from the USA:
```
condition = "COUNTRY_CODE = 'US'"
us_customer_df = customer_df.filter(condition)
```
You can pivot your analysis perspective based on aspects that are of interest to you by choosing different keys and clusters.
Create a bar chart and cluster the data.
Run the next cell and configure the graph:
1. Choose **Bar chart** as the chart type.
2. Put `COUNTRY_CODE` into the **Keys** box.
4. Set Aggregation to **COUNT**.
5. Click **OK**. The chart displays the number of US customers.
6. From the **Cluster By** list, choose **GENDER**. The chart shows the number of customers by gender.
```
display(us_customer_df)
```
Now try to cluster the customers by state.
A bar chart isn't the best way to show geographic location!
[Back to Table of Contents](#toc)
### View customer density by location with a map
Maps are a much better way to view location data than other chart types.
Visualize customer density by US state with a map.
Run the next cell and configure the graph:
1. Choose **Map** as the chart type.
2. Put `STATE` into the **Keys** box.
4. Set Aggregation to **COUNT**.
5. Click **OK**. The map displays the number of US customers.
6. From the **Renderer** list, choose **brunel**.
> PixieDust supports three map renderers: brunel, [mapbox](https://www.mapbox.com/) and Google. Note that the Mapbox renderer and the Google renderer require an API key or access token and supported features vary by renderer.
7. You can explore more about customers in each state by changing the aggregation method, for example look at customer age ranges (avg, minimum, and maximum) by state. Simply change the aggregation function to `AVG`, `MIN`, or `MAX` and choose `AGE` as the value.
```
display(us_customer_df)
```
[Back to Table of Contents](#toc)
## Enrich demographic information with open data
You can easily combine other sources of data with your existing data. There are many publicly available open data sets that can be very helpful. For example, knowing the approximate income level of your customers might help you target your marketing campaigns.
Run the next cell to load [this data set](https://apsportal.ibm.com/exchange/public/entry/view/beb8c30a3f559e58716d983671b70337) from the United States Census Bureau into your notebook. The data set contains US household income statistics compiled at the zip code geography level.
```
# Load median income information for all US ZIP codes from a public source
income_df = pixiedust.sampleData('https://raw.githubusercontent.com/IBM/analyze-customer-data-spark-pixiedust/master/data/x19_income_select.csv')
income_df.printSchema()
```
Now cleanse the income data set to remove the data that you don't need. Create a new DataFrame for this data:
- The zip code, extracted from the GEOID column.
- The column B19049e1, which contains the median household income for 2013.
```
# ------------------------------
# Helper: Extract ZIP code
# ------------------------------
def extractZIPCode(col):
""" input: pyspark.sql.types.Column containing a geo code, like '86000US01001'
output: ZIP code
"""
    m = re.match(r'^\d+US(\d\d\d\d\d)$', col)
if m:
return m.group(1)
else:
return None
getZIPCodeUDF = func.udf(lambda c: extractZIPCode(c), types.StringType())
income_df = income_df.select('GEOID', 'B19049e1').withColumnRenamed('B19049e1', 'MEDIAN_INCOME_IN_ZIP').withColumn("ZIP", getZIPCodeUDF(income_df['GEOID']))
income_df
```
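The helper's docstring gives `'86000US01001'` as a sample geo code; a self-contained restatement lets you verify the extraction before applying the UDF to the full data set:

```python
import re

def extractZIPCode(col):
    """Extract the 5-digit ZIP code from a geo code like '86000US01001'."""
    m = re.match(r'^\d+US(\d{5})$', col)
    return m.group(1) if m else None

print(extractZIPCode("86000US01001"))  # -> 01001
print(extractZIPCode("bad-geoid"))     # -> None
```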
Perform a left outer join on the customer data set with the income data set, using the zip code as the join condition. For the complete syntax of joins, go to the <a href="https://spark.apache.org/docs/1.5.2/api/python/pyspark.sql.html#pyspark.sql.DataFrame" target="_blank" rel="noopener noreferrer">pyspark DataFrame documentation</a> and scroll down to the `join` syntax.
```
us_customer_df = us_customer_df.join(income_df, us_customer_df.POSTAL_CODE == income_df.ZIP, 'left_outer').drop('GEOID').drop('ZIP')
display(us_customer_df)
```
Now you can visualize the income distribution of your customers by zip code.
Visualize income distribution for our customers.
Run the next cell and configure the graph:
1. Choose **Histogram** as the chart type.
2. Put `MEDIAN_INCOME_IN_ZIP` into the **Values** box and click **OK**.
The majority of your customers live in zip codes where the median income is around 40,000 USD.
[Back to Table of Contents](#toc)
Copyright © 2017, 2018 IBM. This notebook and its source code are released under the terms of the MIT License.
|
github_jupyter
|
# REINFORCE in lasagne
Just like we did before for q-learning, this time we'll design a lasagne network to learn `CartPole-v0` via policy gradient (REINFORCE).
Most of the code in this notebook is taken from approximate qlearning, so you'll find it more or less familiar and even simpler.
__Frameworks__ - we'll accept this homework in any deep learning framework. For example, it translates to TensorFlow almost line-to-line. However, we recommend sticking to theano/lasagne unless you're confident in the framework of your choice.
```
%env THEANO_FLAGS=floatX=float32
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
import gym
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make("CartPole-v0").env
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape
plt.imshow(env.render("rgb_array"))
```
# Building the network for REINFORCE
For REINFORCE algorithm, we'll need a model that predicts action probabilities given states.
```
import theano
import theano.tensor as T
# create input variables. We'll support multiple states at once
states = T.matrix("states[batch,units]")
actions = T.ivector("action_ids[batch]")
cumulative_rewards = T.vector("G[batch] = r + gamma*r' + gamma^2*r'' + ...")
import lasagne
from lasagne.layers import *
# input layer
l_states = InputLayer((None,)+state_dim, input_var=states)
<Your architecture. Please start with 1-2 layers of 50-200 neurons >
# output layer
# this time we need to predict action probabilities,
# so make sure your nonlinearity forces p>0 and sum_p = 1
l_action_probas = DenseLayer( < ... > ,
num_units= < ... > ,
nonlinearity= < ... > )
```
#### Predict function
```
# get probabilities of actions
predicted_probas = get_output(l_action_probas)
# predict action probability given state
# if you use float32, set allow_input_downcast=True
predict_proba = <compile a function that takes states and returns predicted_probas >
```
#### Loss function and updates
We now need to define objective and update over policy gradient.
Our objective function is
$$ J \approx { 1 \over N } \sum _{s_i,a_i} G(s_i,a_i) $$
Following the REINFORCE algorithm, we can define our objective as follows:
$$ \hat J \approx { 1 \over N } \sum _{s_i,a_i} \log \pi_\theta (a_i | s_i) \cdot G(s_i,a_i) $$
When you compute gradient of that function over network weights $ \theta $, it will become exactly the policy gradient.
```
# select probabilities for chosen actions, pi(a_i|s_i)
predicted_probas_for_actions = predicted_probas[T.arange(
actions.shape[0]), actions]
# REINFORCE objective function
J = # <policy objective as in the last formula. Please use mean, not sum.>
# all network weights
all_weights = <get all "thetas" aka network weights using lasagne >
# weight updates. maximize J = minimize -J
updates = lasagne.updates.sgd(-J, all_weights, learning_rate=0.01)
train_step = theano.function([states, actions, cumulative_rewards], updates=updates,
allow_input_downcast=True)
```
### Computing cumulative rewards
```
def get_cumulative_rewards(rewards, # rewards at each step
gamma=0.99 # discount for reward
):
"""
take a list of immediate rewards r(s,a) for the whole session
compute cumulative returns (a.k.a. G(s,a) in Sutton '16)
G_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ...
The simple way to compute cumulative rewards is to iterate from last to first time tick
and compute G_t = r_t + gamma*G_{t+1} recurrently
You must return an array/list of cumulative rewards with as many elements as in the initial rewards.
"""
<your code here >
return < array of cumulative rewards >
assert len(get_cumulative_rewards(range(100))) == 100
assert np.allclose(get_cumulative_rewards([0, 0, 1, 0, 0, 1, 0], gamma=0.9), [
1.40049, 1.5561, 1.729, 0.81, 0.9, 1.0, 0.0])
assert np.allclose(get_cumulative_rewards(
[0, 0, 1, -2, 3, -4, 0], gamma=0.5), [0.0625, 0.125, 0.25, -1.5, 1.0, -4.0, 0.0])
assert np.allclose(get_cumulative_rewards(
[0, 0, 1, 2, 3, 4, 0], gamma=0), [0, 0, 1, 2, 3, 4, 0])
print("looks good!")
```
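If you want to check your own solution against something concrete, one possible reference implementation of the backward recursion (plain Python, independent of the exercise scaffolding, and written here only as an illustrative sketch) is:

```python
def cumulative_rewards_reference(rewards, gamma=0.99):
    """G_t = r_t + gamma * G_{t+1}, computed from the last step backwards."""
    G = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        G[t] = running
    return G

print(cumulative_rewards_reference([0, 0, 1, 0, 0, 1, 0], gamma=0.9))
```

The printed values should agree (up to floating-point noise) with the expected values in the asserts above.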
### Playing the game
```
def generate_session(t_max=1000):
"""play env with REINFORCE agent and train at the session end"""
# arrays to record session
states, actions, rewards = [], [], []
s = env.reset()
for t in range(t_max):
# action probabilities array aka pi(a|s)
action_probas = predict_proba([s])[0]
a = <sample action with given probabilities >
new_s, r, done, info = env.step(a)
# record session history to train later
states.append(s)
actions.append(a)
rewards.append(r)
s = new_s
if done:
break
cumulative_rewards = get_cumulative_rewards(rewards)
train_step(states, actions, cumulative_rewards)
return sum(rewards)
for i in range(100):
rewards = [generate_session() for _ in range(100)] # generate new sessions
print("mean reward:%.3f" % (np.mean(rewards)))
if np.mean(rewards) > 300:
print("You Win!")
break
```
### Video
```
# record sessions
import gym.wrappers
env = gym.wrappers.Monitor(gym.make("CartPole-v0"),
directory="videos", force=True)
sessions = [generate_session() for _ in range(100)]
env.close()
# show video
from IPython.display import HTML
import os
video_names = list(
filter(lambda s: s.endswith(".mp4"), os.listdir("./videos/")))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./videos/"+video_names[-1])) # this may or may not be _last_ video. Try other indices
```
|
github_jupyter
|
```
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
```
## Reflect Tables into SQLALchemy ORM
```
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, inspect
# create engine to hawaii.sqlite
engine = create_engine("sqlite:///./Resources/hawaii.sqlite")
# create an inspector to explore the existing database
inspector = inspect(engine)
# list the tables
inspector.get_table_names()
# View the columns (name and type) of each table
columns = inspector.get_columns('measurement')
print('\nmeasurement:')
for c in columns:
print(c['name'], c["type"])
columns = inspector.get_columns('station')
print('\nstation:')
for c in columns:
print(c['name'], c["type"])
# Save references to each table
Base = automap_base()
Base.prepare(engine, reflect=True)
Base.classes.keys()
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(bind=engine)
```
## Bonus Challenge Assignment: Temperature Analysis II
```
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, maximum, and average temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
        TMIN, TAVG, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# For example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use the function `calc_temps` to calculate the tmin, tavg, and tmax
# for a year in the data set
temp_data = calc_temps('2017-08-01','2017-08-07')
print(temp_data)
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for bar height (y value)
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
avg_temp = temp_data[0][1]
err = [temp_data[0][2] - temp_data[0][0]]
fig, ax = plt.subplots(figsize=(3,6))
ax.bar(1,height=[avg_temp], yerr=err, align='center', alpha=0.5, ecolor='black')
ax.set_ylabel('Temp(F)',fontsize=12)
ax.set_title('Trip Avg Temp', fontsize=15)
ax.set_xticklabels([])
ax.yaxis.grid(True)
ax.xaxis.grid(False)
plt.tight_layout()
plt.show()
```
### Daily Rainfall Average
```
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's
# matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
# Use this function to calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
# For example
daily_normals("01-01")
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
start_date = '2017-08-01'
end_date = '2017-08-07'
# Use the start and end date to create a range of dates
# Strip off the year and save a list of strings in the format %m-%d
# Use the `daily_normals` function to calculate the normals for each date string
# and append the results to a list called `normals`.
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
# Plot the daily normals as an area plot with `stacked=False`
```
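The date-range steps sketched in the comments above can be done with `pandas.date_range` — a minimal sketch, assuming the same trip window set earlier in the cell:

```python
import pandas as pd

start_date = '2017-08-01'
end_date = '2017-08-07'

# Build the inclusive date range, then strip the year to match
# the '%m-%d' format expected by daily_normals
trip_dates = pd.date_range(start_date, end_date)
month_days = trip_dates.strftime('%m-%d').tolist()
print(month_days)
# -> ['08-01', '08-02', '08-03', '08-04', '08-05', '08-06', '08-07']
```

Each string can then be passed to `daily_normals` and the results appended to the `normals` list.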
## Close Session
|
github_jupyter
|
# 📝 Exercise M3.02
The goal is to find the best set of hyperparameters which maximize the
generalization performance on a training set.
Here again we limit the size of the training set to make the computation
run faster. Feel free to increase the `train_size` value if your computer
is powerful enough.
```
import numpy as np
import pandas as pd
adult_census = pd.read_csv("../datasets/adult-census.csv")
target_name = "class"
target = adult_census[target_name]
data = adult_census.drop(columns=[target_name, "education-num"])
from sklearn.model_selection import train_test_split
data_train, data_test, target_train, target_test = train_test_split(
data, target, train_size=0.2, random_state=42)
```
In this exercise, we will progressively define the classification pipeline
and later tune its hyperparameters.
Our pipeline should:
* preprocess the categorical columns using a `OneHotEncoder` and use a
`StandardScaler` to normalize the numerical data.
* use a `LogisticRegression` as a predictive model.
Start by defining the columns and the preprocessing pipelines to be applied
on each group of columns.
```
from sklearn.compose import make_column_selector as selector
# Write your code here.
categorical_selector = selector(dtype_include=object)
numerical_selector = selector(dtype_exclude=object)
categorical_columns = categorical_selector(data)
numerical_columns = numerical_selector(data)
from sklearn.preprocessing import OneHotEncoder, StandardScaler
# Write your code here.
cat_processor = OneHotEncoder(handle_unknown='ignore')
num_processor = StandardScaler()
```
Subsequently, create a `ColumnTransformer` to dispatch each group of columns to its preprocessing pipeline.
```
from sklearn.compose import ColumnTransformer
# Write your code here.
preprocessor = ColumnTransformer(
[
('cat_process', cat_processor, categorical_columns),
('num_process', num_processor, numerical_columns)
])
```
Assemble the final pipeline by combining the above preprocessor
with a logistic regression classifier. Force the maximum number of
iterations to `10_000` to ensure that the model will converge.
```
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
# Write your code here.
model = make_pipeline(preprocessor, LogisticRegression(max_iter=10_000))
```
Use `RandomizedSearchCV` with `n_iter=20` to find the best set of
hyperparameters by tuning the following parameters of the `model`:
- the parameter `C` of the `LogisticRegression` with values ranging from
0.001 to 10. You can use a log-uniform distribution
(i.e. `scipy.stats.loguniform`);
- the parameter `with_mean` of the `StandardScaler` with possible values
`True` or `False`;
- the parameter `with_std` of the `StandardScaler` with possible values
`True` or `False`.
Once the computation has completed, print the best combination of parameters
stored in the `best_params_` attribute.
```
model.get_params()
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import loguniform
# Write your code here.
params_dict = {
'columntransformer__num_process__with_mean': [True, False],
'columntransformer__num_process__with_std': [True, False],
'logisticregression__C': loguniform(1e-3, 10)
}
model_random_search = RandomizedSearchCV(model,
param_distributions= params_dict,
n_iter=20, error_score='raise',
n_jobs=-1, verbose=1)
model_random_search.fit(data_train, target_train)
model_random_search.best_params_
```
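To see the mechanics in isolation — in particular that `loguniform(1e-3, 10)` samples `C` on a log scale — here is a small self-contained sketch on synthetic data (the dataset and sizes are made up purely for illustration and are not the census data used above):

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

# Synthetic stand-in for the real data, just to exercise the search
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

search = RandomizedSearchCV(
    LogisticRegression(max_iter=10_000),
    param_distributions={"C": loguniform(1e-3, 10)},
    n_iter=5,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)  # a sampled C between 1e-3 and 10
```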
|
github_jupyter
|
```
# Description: Plot Figure 3 (Overview of wind, wave and density stratification during the field experiment).
# Author: André Palóczy
# E-mail: paloczy@gmail.com
# Date: December/2020
import numpy as np
from matplotlib import pyplot as plt
import matplotlib.dates as mdates
from pandas import Timestamp
from xarray import open_dataset, DataArray
from dewaveADCP.utils import fourfilt
def fitN2(T, z, g=9.8, alpha=2e-4):
fg = ~np.isnan(T)
p = np.polyfit(z[fg], T[fg], 1)
Tfit = np.polyval(p, z)
dTdz = (Tfit[0] - Tfit[-1])/z.ptp()
N = np.sqrt(g*alpha*np.abs(dTdz)) # [1/s].
return N, Tfit
plt.close('all')
head = "../../data_reproduce_figs/"
ds = open_dataset(head+"windstress-wave-tide.nc")
twind = ds['twind'].values
twave = ds['twave'].values
ttide = ds['ttide'].values
taux, tauy = ds['taux'].values, ds['tauy'].values
Hs = ds['Hs'].values
Tp = ds['Tp'].values
meandir = ds['wavedir'].values
meandirspread = ds['wavespread'].values
# Low-pass filter wind stress.
dts_wind = 2*60 # 2 min sampling frequency.
Tmax = dts_wind*taux.size*2
Tmin = 60*60*30 # 30 h low-pass filter.
taux = fourfilt(taux, dts_wind, Tmax, Tmin)
tauy = fourfilt(tauy, dts_wind, Tmax, Tmin)
dsSIO = open_dataset(head+"windstress-SIOMiniMETBuoy.nc")
twindSIO = dsSIO['taux']['t']
tauxSIO, tauySIO = dsSIO['taux'].values, dsSIO['tauy'].values
dts_windSIO = 60*60 # 1 h averages.
tauxSIO = fourfilt(tauxSIO, dts_windSIO, Tmax, Tmin)
tauySIO = fourfilt(tauySIO, dts_windSIO, Tmax, Tmin)
tl, tr = Timestamp('2017-09-08'), Timestamp('2017-11-01')
# Plot wind and wave variables.
shp = (4, 2)
fig = plt.figure(figsize=(7, 7))
ax1 = plt.subplot2grid(shp, (0, 0), colspan=2, rowspan=1)
ax2 = plt.subplot2grid(shp, (1, 0), colspan=2, rowspan=1)
ax3 = plt.subplot2grid(shp, (2, 0), colspan=2, rowspan=1)
ax4 = plt.subplot2grid(shp, (3, 0), colspan=1, rowspan=1) # T profiles from different moorings.
ax5 = plt.subplot2grid(shp, (3, 1), colspan=1, rowspan=1) # Top-bottom temperature difference.
ax1.plot(twindSIO, tauxSIO, color='gray', linestyle='--')
ax1.plot(twindSIO, tauySIO, color='k', linestyle='--')
ax1.plot(twind, taux, color='gray', label=r"$\tau_x$")
ax1.plot(twind, tauy, color='k', label=r"$\tau_y$")
ax1.axhline(color='k', linewidth=1)
ax1.set_ylabel('Wind stress [Pa]', fontsize=13)
ax1.legend(frameon=False, loc=(0.9, -0.01), handlelength=0.8)
ax1.set_ylim(-0.1, 0.1)
ax2.plot(twave, Hs, 'r', label=r'$H_s$')
ax2r = ax2.twinx()
ax2r.plot(twave, Tp, 'b', label=r'$T_p$')
ax2.set_ylabel(r'$H_s$ [m]', fontsize=15, color='r')
ax2r.set_ylabel(r'Peak period [s]', fontsize=13, color='b')
ax2r.spines['right'].set_color('b')
ax2r.spines['left'].set_color('r')
ax2.tick_params(axis='y', colors='r')
ax2r.tick_params(axis='y', colors='b')
ax3.fill_between(twave, meandir-meandirspread, meandir+meandirspread, color='k', alpha=0.2)
ax3.plot(twave, meandir, 'k')
ax3.set_ylim(240, 360)
ax3.set_ylabel(r'Wave direction [$^\circ$]', fontsize=12)
ax1.xaxis.set_ticklabels([])
ax2.xaxis.set_ticklabels([])
ax1.set_xlim(tl, tr)
ax2.set_xlim(tl, tr)
ax3.set_xlim(tl, tr)
fig.subplots_adjust(hspace=0.3)
ax2.axes.xaxis.set_tick_params(rotation=10)
ax1.text(0.01, 0.85, r'(a)', fontsize=13, transform=ax1.transAxes)
ax2.text(0.01, 0.85, r'(b)', fontsize=13, transform=ax2.transAxes)
ax3.text(0.01, 0.85, r'(c)', fontsize=13, transform=ax3.transAxes)
bbox = ax2.get_position()
offset = 0.04
ax2.set_position([bbox.x0, bbox.y0 + offset, bbox.x1-bbox.x0, bbox.y1 - bbox.y0])
bbox = ax3.get_position()
offset = 0.08
ax3.set_position([bbox.x0, bbox.y0 + offset, bbox.x1-bbox.x0, bbox.y1 - bbox.y0])
locator = mdates.AutoDateLocator(minticks=12, maxticks=24)
fmts = ['', '%Y', '%Y', '%Y', '%Y', '%Y %H:%M']
formatter = mdates.ConciseDateFormatter(locator, offset_formats=fmts)
ax3.xaxis.set_major_locator(locator)
ax3.xaxis.set_major_formatter(formatter)
# Panel with all T profiles.
wanted_ids = ['OC25M', 'OC25SA', 'OC25SB', 'OC40S', 'OC40N']
col = dict(OC25M='k', OC25SA='r', OC25SB='m', OC40S='b', OC40N='c')
for id in wanted_ids:
ds = open_dataset(head+"Tmean-"+id+".nc")
T, zab = ds["Tmean"].values, ds["z"].values
ax4.plot(T, zab, linestyle='none', marker='o', ms=5, mfc=col[id], mec=col[id], label=id)
ax4.legend(loc='upper left', bbox_to_anchor=(-0.05, 1.02), frameon=False, fontsize=10, labelspacing=0.01, handletextpad=0, borderpad=0, bbox_transform=ax4.transAxes)
ax4.set_xlabel(r'$T$ [$^o$C]', fontsize=13)
ax4.set_ylabel(r'zab [m]', fontsize=13)
# Fit a line to each mooring to estimate N2.
Navg = 0
for id in wanted_ids:
ds = open_dataset(head+"Tmean-"+id+".nc")
T, zab = ds["Tmean"].values, ds["z"].values
N, Tfit = fitN2(T, zab)
txt = r"$%s --> %.2f \times 10^{-2}$ s$^{-1}$"%(id, N*100)
print(txt)
Navg += N
Navg /= len(wanted_ids)
# Time series of top-to-bottom T difference.
Tstrat = open_dataset(head+"Tstrat-OC25M.nc")
Tstrat, tt = Tstrat["Tstrat"].values, Tstrat["t"].values
ax5.plot(tt, Tstrat, 'k')
ax1.yaxis.set_ticks([-0.1, -0.075, -0.05, -0.025, 0, 0.025, 0.05, 0.075, 0.1])
ax2r.yaxis.set_ticks([5, 10, 15, 20, 25])
ax4.yaxis.set_ticks([0, 10, 20, 30, 40])
ax5.yaxis.set_ticks(np.arange(7))
ax5.set_xlim(tl, tr)
ax5.yaxis.tick_right()
ax5.yaxis.set_label_position("right")
ax5.set_ylabel(r'$T$ difference [$^o$C]', fontsize=13)
locator = mdates.AutoDateLocator()
fmts = ['', '%Y', '%Y', '%Y', '%Y', '%Y %H:%M']
formatter = mdates.ConciseDateFormatter(locator, offset_formats=fmts)
ax5.xaxis.set_major_locator(locator)
ax5.xaxis.set_major_formatter(formatter)
ax4.text(0.90, 0.1, '(d)', fontsize=13, transform=ax4.transAxes)
ax5.text(0.02, 0.1, '(e)', fontsize=13, transform=ax5.transAxes)
offsetx = 0.03
offsety = 0.065
bbox = ax4.get_position()
ax4.set_position([bbox.x0 + offsetx, bbox.y0, bbox.x1-bbox.x0, bbox.y1 - bbox.y0 + offsety])
bbox = ax5.get_position()
ax5.set_position([bbox.x0 - offsetx, bbox.y0, bbox.x1-bbox.x0, bbox.y1 - bbox.y0 + offsety])
plt.show()
fig.savefig("fig03.png", dpi=300, bbox_inches='tight')
```
|
github_jupyter
|
## Stage 3: What do I need to install?
Maybe your experience looks like the typical python dependency management (https://xkcd.com/1987/):
<img src=https://imgs.xkcd.com/comics/python_environment.png>
Furthermore, data science packages can have all sorts of additional non-Python dependencies which makes things even more confusing, and we end up spending more time sorting out our dependencies than doing data science. If you take home nothing else out of this tutorial, learn this stage. I promise. It will save you, and everyone who works with you, many days of your life back.
### Reproducibility Issues:
* (NO-ENVIRONMENT-INSTRUCTIONS) Chicken and egg issue with environments. No environment.yml file or the like. (Even if there are some instructions in a notebook).
* (NO-VERSION-PIN) Versions not pinned. E.g. uses a dev branch without a clear indication of when it became released.
* (IMPOSSIBLE-ENVIRONMENT) dependencies are not resolvable due to version clashes. (e.g. need <=0.48 and >=0.49)
* (ARCH-DIFFERENCE) The same code runs differently on different architectures
* (MONOLITHIC-ENVIRONMENT) One environment to rule (or fail) them all.
### Default Better Principles
* **Use (at least) one virtual environment per repo**: And use the same name for the environment as the repo.
* **Generate lock files**: Lock files include every single dependency in your dependency chain. Lock files are necessarily platform specific, so you need one per platform that you support. This way you have a perfect version pin on the environment that you used for that moment in time.
* **Check in your environment creation instructions**: That means an `environment.yml` file for conda, and its matching lock file(s).
## The Easydata way: `make create_environment`
We like `conda` for environment management since it's the least bad option for most data science workflows. There are no perfect ways of doing this. Here are some basics.
### Setting up your environment
### clone the repo
```
git clone https://github.com/acwooding/easydata-tutorial
cd easydata-tutorial
```
### Initial setup
* **YOUR FIRST TASK OF THIS STAGE**: Check if there is a CONDA_EXE environment variable set with the full path to your conda binary; e.g. by doing the following:
```
export | grep CONDA_EXE
```
* **NOTE:** if there is no CONDA_EXE, you will need to find your conda binary and record its location in the CONDA_EXE line of `Makefile.include`
Recent versions of conda have made finding the actual binary harder than it should be. This might work:
```
$ which conda
~/miniconda3/bin/conda
```
* Create and switch to the virtual environment:
```
make create_environment
conda activate easydata-tutorial
make update_environment
```
Now you're ready to run `jupyter notebook` (or jupyter lab) and explore the notebooks in the `notebooks` directory.
From within jupyter, re-open this notebook and run the cells below.
**Your next Task**: Run the next cell to ensure that the packages got added to the python environment correctly.
```
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
```
### Updating your conda and pip environments
The `make` commands `make create_environment` and `make update_environment` are wrappers that let you easily manage your conda and pip environments using a file called `environment.yml`, which lists the packages you want in your python environment.
(If you ever forget which `make` subcommand to run, you can run `make` by itself and it will provide a list of subcommands that are available.)
When adding packages to your python environment, **never do a `pip install` or `conda install` directly**. Always edit `environment.yml` and `make update_environment` instead.
Your `environment.yml` file will look something like this:
```
name: easydata-tutorial
dependencies:
  - pip
  - pip:
    - -e .  # conda >= 4.4 only
    - python-dotenv>=0.5.1
    - nbval
    - nbdime
    - umap-learn
    - gdown
    # Add more pip dependencies here
  - setuptools
  - wheel
  - git>=2.5 # for git worktree template updating
  - sphinx
  - bokeh
  - click
  - colorcet
  - coverage
  - coveralls
  - datashader
  - holoviews
  - matplotlib
  - jupyter
  # Add more conda dependencies here
...
```
Notice you can add conda and pip dependencies separately. For good reproducibility, we recommend you always try and use the conda version of a package if it is available.
Once you're done with your edits, run `make update_environment` and voilà, your python environment is up to date.
**Git Bonus Task:** To save or share your updated environment, check in your `environment.yml` file using git.
**YOUR NEXT TASK** in the Quest: Update your python environment to include the `seaborn` package. But first, a quick tip on using `conda` environments in notebooks:
#### Using your conda environment in a jupyter notebook
If you make a new notebook, and your packages don't seem to be available, make sure to select the `easydata-tutorial` kernel from within the notebook. If you are somehow in another kernel, select **Kernel -> Change kernel -> Python[conda env:easydata-tutorial]**. If you don't seem to have that option, make sure that you ran `jupyter notebook` with the `easydata-tutorial` conda environment activated, and that `which jupyter` points to the correct (`easydata-tutorial`) version of jupyter.
You can see what's in your notebook's conda environment by putting the following in a cell and running it:
```
%conda info
```
Another useful cell: if you want your environment changes to be immediately available in your running notebooks, run a notebook cell containing:
```
%load_ext autoreload
%autoreload 2
```
If you did your task correctly, the following import will succeed.
```
import seaborn as sns
```
Remember, you should **never** do a `pip install` or `conda install` manually. You want to make sure your environment changes are saved to your data science repo. Instead, edit `environment.yml` and do a `make update_environment`.
Your **NEXT TASK of this stage**: Run `make env_challenge` and, if it succeeds, follow the instructions it prints.
### BONUS Task: Lockfiles
*Do this if there's time.*
Lockfiles separate the list of "packages I want" from the list of "packages that must be installed to make everything work". For reproducibility reasons, we want to keep track of both lists, but not in the same place: the fully resolved list lives in the lockfile.
Unlike several other virtual environment managers, conda doesn't have lockfiles. To work around this limitation, Easydata generates a basic lockfile from `environment.yml` whenever you run `make update_environment`.
This lockfile is a file called `environment.{$ARCH}.lock.yml` (e.g. `environment.i386.lock.yml`). This file keeps a record of the exact environment that is currently installed in your conda environment `easydata-tutorial`. If you ever need to reproduce an environment exactly, you can install from the `.lock.yml` file. (Note: These are architecture dependent, so don't expect a mac lockfile to work on linux, and vice versa).
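The package names and version pins below are made up for illustration, but the idea holds: the lock file records every transitive dependency conda resolved, while `environment.yml` lists only what you asked for. A quick stdlib-only sketch of diffing the two:

```python
# Illustrative only: hand-written spec vs. generated lock file (fake pins).
spec_yml = """
name: easydata-tutorial
dependencies:
  - matplotlib
  - jupyter
"""

lock_yml = """
name: easydata-tutorial
dependencies:
  - matplotlib=3.5.1
  - jupyter=1.0.0
  - numpy=1.22.3
  - kiwisolver=1.4.2
"""

def dep_names(yml_text):
    """Naive line-based parser: collect dependency names, ignoring pins."""
    names = set()
    for line in yml_text.splitlines():
        line = line.strip()
        if line.startswith("- "):
            names.add(line[2:].split("=")[0].split(">")[0].strip())
    return names

extra = dep_names(lock_yml) - dep_names(spec_yml)
print(sorted(extra))  # ['kiwisolver', 'numpy'] -- pinned only in the lock file
```

The difference is exactly the list you're glad you don't maintain by hand.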
For more instructions on setting up and maintaining your python environment (including how to point your environment at your custom forks and work in progress) see [Setting up and Maintaining your Conda Environment Reproducibly](../reference/easydata/conda-environments.md).
**Your BONUS Task** in the Quest: Take a look at the lockfile, and compare its contents with `environment.yml`. Then ask yourself, "aren't I glad I don't have to maintain this list manually?"
```
import data
import torch
from utils.distmat import *
from utils.evaluation import *
from hitl import *
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm  # used by the feedback loops below
```
## Load Data
```
key = data.get_output_keys()[2]
key
output = data.load_output(key)
qf = torch.Tensor(output["qf"])
gf = torch.Tensor(output["gf"])
q_pids = np.array(output["q_pids"])
g_pids = np.array(output["g_pids"])
q_camids = np.array(output["q_camids"])
g_camids = np.array(output["g_camids"])
distmat = compute_distmat(qf, gf)
```
### Baseline Results
```
result = evaluate(distmat, q_pids, g_pids, q_camids, g_camids)
result
```
### Re-ranked Results
```
all_distmat = compute_inner_distmat(torch.cat((qf, gf)))
re_distmat = rerank_distmat(all_distmat, qf.shape[0])
re_result = evaluate(re_distmat, q_pids, g_pids, q_camids, g_camids)
re_result
```
## One-Shot Evaluation Using Modules
### Rocchio
```
ntot = torch.as_tensor
rocchio.run(qf, gf, ntot(q_pids), ntot(g_pids), ntot(q_camids), ntot(g_camids), t=3)
```
### Neighborhood Expansion (Min)
```
ntot = torch.from_numpy
ne.run(qf, gf, ntot(q_pids), ntot(g_pids), ntot(q_camids), ntot(g_camids), t=3, method="min")
```
### Neighborhood Expansion (Mean)
```
ntot = torch.from_numpy
ne.run(qf, gf, ntot(q_pids), ntot(g_pids), ntot(q_camids), ntot(g_camids), t=3, method="mean")
```
## Development
## Feedback Models
```
q_pids = torch.tensor(output["q_pids"])
g_pids = torch.tensor(output["g_pids"])
q = len(q_pids)
g = len(g_pids)
m = qf.shape[1]
q_camids = np.array(output["q_camids"])
g_camids = np.array(output["g_camids"])
```
### Naive Feedback
```
if input("reset? ") == "y":
    positive_indices = torch.zeros((q, g), dtype=bool)
    negative_indices = torch.zeros((q, g), dtype=bool)

for i in tqdm(range(5)):
    qf_adjusted = qf  # no adjustment, naive re-rank
    distmat = compute_distmat(qf_adjusted, gf)
    distmat[positive_indices] = float("inf")
    distmat[negative_indices] = float("inf")

    # Select feedback (top-1 from remaining gallery instances)
    distances, indices = distmat.min(dim=1)
    assert tuple(distances.shape) == (q,)
    assert tuple(indices.shape) == (q,)

    pmap = g_pids[indices] == q_pids
    positive_q = torch.arange(0, q)[pmap]
    negative_q = torch.arange(0, q)[~pmap]
    positive_g = indices[pmap]
    negative_g = indices[~pmap]

    existing = positive_indices[positive_q, positive_g]
    assert not existing.any()
    positive_indices[positive_q, positive_g] = True

    existing = negative_indices[negative_q, negative_g]
    assert not existing.any()
    negative_indices[negative_q, negative_g] = True

distmat = compute_distmat(qf_adjusted, gf)
distmat[positive_indices] = 0
distmat[negative_indices] = float("inf")
naive_new_result = evaluate(distmat.numpy(), q_pids, g_pids, q_camids, g_camids)
naive_new_result
```
### Rocchio
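The cells below apply the classic Rocchio update `q' = alpha*q + beta*mean(positives) - gamma*mean(negatives)` to each query vector. As a minimal NumPy sketch of that single update step (illustrative, not the notebook's torch implementation):

```python
import numpy as np

def rocchio_update(q, positives, negatives, alpha=1.0, beta=0.65, gamma=0.35):
    """Move the query toward the mean of positive feedback vectors
    and away from the mean of negative ones."""
    q_new = alpha * np.asarray(q, dtype=float)
    if len(positives):
        q_new = q_new + beta * np.mean(positives, axis=0)
    if len(negatives):
        q_new = q_new - gamma * np.mean(negatives, axis=0)
    return q_new

print(rocchio_update([1.0, 0.0], positives=[[0.0, 1.0]], negatives=[]))  # [1.   0.65]
```

With the default weights, positive feedback pulls harder than negative feedback pushes, which matches the `beta=0.65`, `gamma=0.35` choice used below.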
```
if input("reset? ") == "y":
    positive_indices = torch.zeros((q, g), dtype=bool)
    negative_indices = torch.zeros((q, g), dtype=bool)

alpha = 1
beta = 0.65
gamma = 0.35
qf_adjusted = qf

for i in tqdm(range(5)):
    distmat = compute_distmat(qf_adjusted, gf)
    distmat[positive_indices] = float("inf")
    distmat[negative_indices] = float("inf")

    # Select feedback (top-1 from remaining gallery instances)
    distances, indices = distmat.min(dim=1)
    assert tuple(distances.shape) == (q,)
    assert tuple(indices.shape) == (q,)

    # Apply feedback
    pmap = g_pids[indices] == q_pids
    positive_q = torch.arange(0, q)[pmap]
    negative_q = torch.arange(0, q)[~pmap]
    positive_g = indices[pmap]
    negative_g = indices[~pmap]

    existing = positive_indices[positive_q, positive_g]
    assert not existing.any()
    positive_indices[positive_q, positive_g] = True

    existing = negative_indices[negative_q, negative_g]
    assert not existing.any()
    negative_indices[negative_q, negative_g] = True

    # Compute new query
    mean_positive_gf = positive_indices.float().mm(gf) / positive_indices.float().sum(dim=1, keepdim=True)
    mean_negative_gf = negative_indices.float().mm(gf) / negative_indices.float().sum(dim=1, keepdim=True)
    mean_positive_gf[mean_positive_gf.isnan()] = 0
    mean_negative_gf[mean_negative_gf.isnan()] = 0
    qf_adjusted = qf * alpha + mean_positive_gf * beta - mean_negative_gf * gamma

distmat = compute_distmat(qf_adjusted, gf)
distmat[positive_indices] = 0
distmat[negative_indices] = float("inf")
new_result = evaluate(distmat.numpy(), q_pids, g_pids, q_camids, g_camids)
new_result
```
### Function Tests (Rocchio)
```
def adjust_qf(qf, gf, positive_indices, negative_indices, alpha=1, beta=0.65, gamma=0.35):
    assert qf.shape[1] == gf.shape[1]
    mean_positive_gf = positive_indices.float().mm(gf) / positive_indices.float().sum(dim=1, keepdim=True)
    mean_negative_gf = negative_indices.float().mm(gf) / negative_indices.float().sum(dim=1, keepdim=True)
    mean_positive_gf[mean_positive_gf.isnan()] = 0
    mean_negative_gf[mean_negative_gf.isnan()] = 0
    qf_adjusted = qf * alpha + mean_positive_gf * beta - mean_negative_gf * gamma
    return qf_adjusted

def update_feedback_indices(distmat, q_pids, g_pids, positive_indices, negative_indices, inplace=True):
    """
    Note that distmat is corrupted if inplace=True.

    distmat: q x g Tensor (adjusted query to gallery)
    q_pids: q
    g_pids: g
    positive_indices: q x g
    negative_indices: q x g

    :Returns:
        positive_indices, negative_indices
    """
    q, g = tuple(distmat.shape)
    if not inplace:
        distmat = distmat.clone().detach()
        positive_indices = positive_indices.clone()  # Tensors have no .copy(); use .clone()
        negative_indices = negative_indices.clone()
    distmat[positive_indices] = float("inf")
    distmat[negative_indices] = float("inf")

    indices = distmat.argmin(dim=1)
    pmap = g_pids[indices] == q_pids
    positive_q = torch.arange(0, q)[pmap]
    negative_q = torch.arange(0, q)[~pmap]
    positive_g = indices[pmap]
    negative_g = indices[~pmap]

    existing = positive_indices[positive_q, positive_g]
    assert not existing.any()
    positive_indices[positive_q, positive_g] = True

    existing = negative_indices[negative_q, negative_g]
    assert not existing.any()
    negative_indices[negative_q, negative_g] = True

    return positive_indices, negative_indices

def init_feedback_indices(q, g):
    return torch.zeros((q, g), dtype=bool)

def update_distmat(qf, gf, q_pids, g_pids, positive_indices=None, negative_indices=None,
                   inplace=True, previous_distmat=None, alpha=1, beta=0.65, gamma=0.35):
    """
    previous_distmat: adjusted distmat (!= compute_distmat(qf, gf))
    """
    q, g = qf.shape[0], gf.shape[0]
    assert qf.shape[1] == gf.shape[1]
    if positive_indices is None:
        positive_indices = init_feedback_indices(q, g)
    if negative_indices is None:
        negative_indices = init_feedback_indices(q, g)

    distmat = previous_distmat
    if distmat is None:
        qf_adjusted = adjust_qf(qf, gf, positive_indices, negative_indices)
        distmat = compute_distmat(qf_adjusted, gf)

    positive_indices, negative_indices = update_feedback_indices(
        distmat, q_pids, g_pids, positive_indices, negative_indices, inplace=inplace)
    qf_adjusted = adjust_qf(qf, gf, positive_indices, negative_indices, alpha=alpha, beta=beta, gamma=gamma)
    distmat = compute_distmat(qf_adjusted, gf)
    return distmat, positive_indices, negative_indices
positive_indices = None
negative_indices = None
distmat = None
for i in tqdm(range(5)):
    distmat, positive_indices, negative_indices = update_distmat(
        qf, gf, q_pids, g_pids, positive_indices, negative_indices, previous_distmat=distmat)

distmat[positive_indices] = 0
distmat[negative_indices] = float("inf")
new_result = evaluate(distmat, q_pids, g_pids, q_camids, g_camids)
new_result
```
### Module Test (Naive)
```
positive_indices = None
negative_indices = None
distmat = None
for i in tqdm(range(3)):
    distmat, positive_indices, negative_indices = feedback.naive_round(
        qf, gf, q_pids, g_pids, positive_indices, negative_indices, previous_distmat=distmat)

naive_result = evaluate(distmat, q_pids, g_pids, q_camids, g_camids)
naive_result
```
### Module Test (Rocchio)
```
positive_indices = None
negative_indices = None
distmat = None
for i in tqdm(range(3)):
    distmat, positive_indices, negative_indices = rocchio.rocchio_round(
        qf, gf, q_pids, g_pids, positive_indices, negative_indices, previous_distmat=distmat)

rocchio_result = evaluate(distmat, q_pids, g_pids, q_camids, g_camids)
rocchio_result
```
### Single Feedback Rocchio (Old)
Initial implementation test
```
# NOTE: beta, gamma, and min_indices come from earlier cells in this notebook.
g_pids = torch.Tensor(output["g_pids"])
q_pids = torch.Tensor(output["q_pids"])
match = g_pids[min_indices] == q_pids
selected_gf = gf[min_indices]
selected_gf.shape
weights = match.float() * (beta + gamma) - gamma
weighted_feedback = selected_gf * weights.reshape(-1, 1)
weighted_feedback
inverse_weights = 1 - weights
new_qf = qf * inverse_weights.reshape(-1, 1) + weighted_feedback
new_distmat = compute_distmat(new_qf, gf)
new_result = evaluate(new_distmat.numpy(), q_pids, g_pids, np.array(output["q_camids"]), np.array(output["g_camids"]), test_ratio=0.1)
new_result
```
WKN strings can be converted to the following formats via the `output_format` parameter:
* `compact`: only number strings without any separators or whitespace, like "A0MNRK"
* `standard`: WKN strings with proper whitespace in the proper places. Note that in the case of WKN, the compact format is the same as the standard one.
* `isin`: convert the number to an ISIN, like "DE000A0MNRK9".
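For intuition, the `isin` conversion prefixes the country code plus "000" padding and appends a check digit computed with the Luhn algorithm over the digitized string (letters become two digits, A=10 … Z=35). A rough sketch of the scheme, not dataprep's implementation:

```python
def wkn_to_isin(wkn, country="DE"):
    """Illustrative WKN -> ISIN conversion: country code + '000' padding + WKN,
    then a Luhn check digit computed from the right over the digitized body."""
    body = country + "000" + wkn
    digits = "".join(str(int(c, 36)) for c in body)  # letters expand to two digits
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 0:       # double every other digit, starting at the rightmost
            d *= 2
        total += d // 10 + d % 10  # digit sum handles doubled values >= 10
    check = (10 - total % 10) % 10
    return body + str(check)

print(wkn_to_isin("A0MNRK"))  # DE000A0MNRK9
```

The result matches the `isin` output format shown above for "A0MNRK".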
Invalid parsing is handled with the `errors` parameter:
* `coerce` (default): invalid parsing will be set to NaN
* `ignore`: invalid parsing will return the input
* `raise`: invalid parsing will raise an exception
The following sections demonstrate the functionality of `clean_de_wkn()` and `validate_de_wkn()`.
### An example dataset containing WKN strings
```
import pandas as pd
import numpy as np
df = pd.DataFrame(
{
"wkn": [
'A0MNRK',
'AOMNRK',
'7542011030',
'7552A10004',
'8019010008',
"hello",
np.nan,
"NULL",
],
"address": [
"123 Pine Ave.",
"main st",
"1234 west main heights 57033",
"apt 1 789 s maple rd manhattan",
"robie house, 789 north main street",
"1111 S Figueroa St, Los Angeles, CA 90015",
"(staples center) 1111 S Figueroa St, Los Angeles",
"hello",
]
}
)
df
```
## 1. Default `clean_de_wkn`
By default, `clean_de_wkn` will clean WKN strings and output them in the standard format with proper separators.
```
from dataprep.clean import clean_de_wkn
clean_de_wkn(df, column = "wkn")
```
## 2. Output formats
This section demonstrates the output parameter.
### `standard` (default)
```
clean_de_wkn(df, column = "wkn", output_format="standard")
```
### `compact`
```
clean_de_wkn(df, column = "wkn", output_format="compact")
```
### `isin`
```
clean_de_wkn(df, column = "wkn", output_format="isin")
```
## 3. `inplace` parameter
This deletes the given column from the returned DataFrame.
A new column containing cleaned WKN strings is added with a title in the format `"{original title}_clean"`.
```
clean_de_wkn(df, column="wkn", inplace=True)
```
## 4. `errors` parameter
### `coerce` (default)
```
clean_de_wkn(df, "wkn", errors="coerce")
```
### `ignore`
```
clean_de_wkn(df, "wkn", errors="ignore")
```
## 5. `validate_de_wkn()`
`validate_de_wkn()` returns `True` when the input is a valid WKN. Otherwise it returns `False`.
The input of `validate_de_wkn()` can be a string, a pandas Series, a Dask Series, a pandas DataFrame or a Dask DataFrame.
When the input is a string or a Series, there is no need to specify a column name to be validated.
When the input is a DataFrame, the user may optionally specify a column name. If a column name is specified, `validate_de_wkn()` returns the validation result for that column only; otherwise it returns the validation result for the whole DataFrame.
```
from dataprep.clean import validate_de_wkn
print(validate_de_wkn('A0MNRK'))
print(validate_de_wkn('AOMNRK'))
print(validate_de_wkn('7542011030'))
print(validate_de_wkn('7552A10004'))
print(validate_de_wkn('8019010008'))
print(validate_de_wkn("hello"))
print(validate_de_wkn(np.nan))
print(validate_de_wkn("NULL"))
```
### Series
```
validate_de_wkn(df["wkn"])
```
### DataFrame + Specify Column
```
validate_de_wkn(df, column="wkn")
```
### Only DataFrame
```
validate_de_wkn(df)
```
```
from google.colab import drive
drive.mount('/content/gdrive')
!git clone https://github.com/NVIDIA/pix2pixHD.git
import os
os.chdir('pix2pixHD/')
# !chmod 755 /content/gdrive/My\ Drive/Images_for_GAN/datasets/download_convert_apples_dataset.sh
# !/content/gdrive/My\ Drive/Images_for_GAN/datasets/download_convert_apples_dataset.sh
!ls
!pip install dominate
import numpy as np
import scipy
import matplotlib
import pandas as pd
import cv2
import matplotlib.pyplot as plt
# import pydmd
#from pydmd import DMD
%matplotlib inline
import scipy.integrate
from matplotlib import animation
from IPython.display import HTML
from pylab import rcParams
rcParams['figure.figsize'] = 8, 5
from PIL import Image
from skimage import io
# Example of RGB image from A
apples_example1 = cv2.imread('/content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/A/20_12_26_22_15_00_Canon_top_all_on.jpg')
apples_example1 = cv2.cvtColor(apples_example1, cv2.COLOR_BGR2RGB)
plt.imshow(apples_example1)
plt.show()
print(type(apples_example1))
print("- Number of Pixels: " + str(apples_example1.size))
print("- Shape/Dimensions: " + str(apples_example1.shape))
# Example of RGB image from B
apples_example2 = cv2.imread('/content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/B/set10_20201226_221732_686_00000_channel7.png')
apples_example2 = cv2.cvtColor(apples_example2, cv2.COLOR_BGR2RGB)
plt.imshow(apples_example2)
plt.show()
print(type(apples_example2))
print("- Number of Pixels: " + str(apples_example2.size))
print("- Shape/Dimensions: " + str(apples_example2.shape))
# Example of RGB image from ./train_A/
apples_example3 = cv2.imread('/content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/train/train_A/20210111_171500.png')
apples_example3 = cv2.cvtColor(apples_example3, cv2.COLOR_BGR2RGB)
plt.imshow(apples_example3)
plt.show()
print(type(apples_example3))
print("- Number of Pixels: " + str(apples_example3.size))
print("- Shape/Dimensions: " + str(apples_example3.shape))
# Example of RGB image from ./train_B/
apples_example4 = cv2.imread('/content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/train/train_B/20210111_134500.png')
apples_example4 = cv2.cvtColor(apples_example4, cv2.COLOR_BGR2RGB)
plt.imshow(apples_example4)
plt.show()
print(type(apples_example4))
print("- Number of Pixels: " + str(apples_example4.size))
print("- Shape/Dimensions: " + str(apples_example4.shape))
#!python train.py --loadSize 512 --fineSize 512 --label_nc 0 --no_instance --name apples_RGB_NIR --dataroot /content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/train --checkpoints_dir /content/gdrive/MyDrive/Images_for_GAN/checkpoints --model Pix2PixHD --save_epoch_freq 5
path_train_A = '/content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/train/train_A/'
print('path_train_A: ', path_train_A)
print('Number of images in path_train_A:', len(os.listdir(path_train_A)))  # len(path) would count characters, not files
path_train_B = '/content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/train/train_B/'
print('path_train_B: ', path_train_B)
print('Number of images in path_train_B:', len(os.listdir(path_train_B)))
# !python train.py --name apples_trash --dataroot /content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/Trash --checkpoints_dir /content/gdrive/MyDrive/Images_for_GAN/checkpoints --norm batch --loadSize 512 --fineSize 512 --label_nc 0 --no_instance
!python train.py --name apples_trash_1 --dataroot /content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/train --label_nc 0 --no_instance --loadSize 320 --fineSize 160 --resize_or_crop resize_and_crop
# !python train.py --name apples_trash --dataroot /content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/Trash --checkpoints_dir /content/gdrive/MyDrive/Images_for_GAN/checkpoints --norm batch --loadSize 512 --fineSize 512 --label_nc 0 --no_instance
!python train.py --name apples_train_1 --dataroot /content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/Trash --label_nc 0 --no_instance --loadSize 320 --fineSize 160 --resize_or_crop resize_and_crop
```
```
from bayes_opt import BayesianOptimization
from bayes_opt.util import load_logs
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, RBF
import json
import numpy as np
from itertools import product
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.animation import FuncAnimation
%matplotlib inline
from IPython.display import HTML
gp = GaussianProcessRegressor(
    kernel=Matern(length_scale=10),  # alternative: RBF(length_scale=[0.05, 1])
    alpha=1e-6,
    normalize_y=True,
    n_restarts_optimizer=5,
    random_state=5
)
# Load data
log = "bad_logs.json"
x = []
y = []
with open(log, "r") as f:
    for line in f:
        line = json.loads(line)
        y.append(line["target"])
        x.append([line["params"]["a1"], line["params"]["t1"]])
x = np.array(x)
model = gp.fit(x, y)
label_x = "A1"
label_y = "T1"
bounds = [[0, 1.5],[300,1000]]
X1 = np.linspace(0, 1.5, 200)
X2 = np.linspace(300, 1000, 200)
x1, x2 = np.meshgrid(X1, X2)
X = np.hstack((x1.reshape(200*200,1),x2.reshape(200*200,1)))
fig = plt.figure(figsize=(13,5))
def update(i):
    fig.clear()
    ax1 = fig.add_subplot(121)
    ax2 = fig.add_subplot(122)
    gp.fit(x[:i], y[:i])
    m, v = gp.predict(X, return_std=True)
    cf1 = ax1.contourf(X1, X2, m.reshape(200, 200), 100)
    ax1.plot(x[:i-1, 0], x[:i-1, 1], 'r.', markersize=10, label=u'Observations')
    ax1.plot(x[i, 0], x[i, 1], 'r.', markersize=25, label=u'New Point')
    cb1 = fig.colorbar(cf1, ax=ax1)
    ax1.set_xlabel(label_x)
    ax1.set_ylabel(label_y)
    ax1.set_title('Posterior mean')
    ##
    ax2.plot(x[i, 0], x[i, 1], 'r.', markersize=25, label=u'New Point')
    ax2.plot(x[:i-1, 0], x[:i-1, 1], 'r.', markersize=10, label=u'Observations')
    cf2 = ax2.contourf(X1, X2, np.sqrt(v.reshape(200, 200)), 100)
    cb2 = fig.colorbar(cf2, ax=ax2)
    ax2.set_xlabel(label_x)
    ax2.set_ylabel(label_y)
    ax2.set_title('Posterior sd.')
    return ax1, ax2
from mpl_toolkits import mplot3d
from matplotlib import cm
def update3d(i):
    fig.clear()
    ax1 = fig.add_subplot(121, projection='3d')
    ax2 = fig.add_subplot(122, projection='3d')
    gp.fit(x[:i], y[:i])
    m, v = gp.predict(X, return_std=True)
    ax1.plot_surface(x1, x2, m.reshape(200, 200), cmap=cm.coolwarm)  # needs the 2-D meshgrid, not the 1-D axes
    ax1.set_xlabel(label_x)
    ax1.set_ylabel(label_y)
    ax1.set_title('Posterior mean')
    ##
    ax2.plot_surface(x1, x2, v.reshape(200, 200), cmap=cm.coolwarm)
    ax2.set_xlabel(label_x)
    ax2.set_ylabel(label_y)
    ax2.set_title('Posterior sd.')
    return ax1, ax2
import matplotlib.animation as animation
anim = FuncAnimation(fig, update, frames=np.arange(3, x.shape[0]), interval=500)
anim.save('line.gif', dpi=80, writer='imagemagick')
#plt.show()
HTML(anim.to_html5_video())
ax1 = fig.add_subplot(121, projection='3d')
ax2 = fig.add_subplot(122, projection='3d')
gp.fit(x[:10], y[:10])
m, v = gp.predict(X, return_std=True)
print(m.reshape(200, 200).shape)
ax1.plot_surface(x1, x2, m.reshape(200, 200))
ax1.set_xlabel(label_x)
ax1.set_ylabel(label_y)
ax1.set_title('Posterior mean')
##
ax2.plot_surface(x1, x2, v.reshape(200, 200))
ax2.set_xlabel(label_x)
ax2.set_ylabel(label_y)
ax2.set_title('Posterior sd.')
plt.show()
def prediction(a1, t1):
    m, std = model.predict([[a1, t1]], return_std=True)
    return m[0]

prediction(0.8, 730)
from bayes_opt import BayesianOptimization
from bayes_opt.observer import JSONLogger, ScreenLogger
from bayes_opt.event import Events
logger = JSONLogger(path="bad_logs.json")
optimizer = BayesianOptimization(
prediction,
{'a1': (0, 1.5),
't1': (300, 1000)})
optimizer.subscribe(Events.OPTIMIZATION_STEP, logger)
optimizer.subscribe(Events.OPTIMIZATION_STEP, ScreenLogger(verbose=2))
optimizer.set_gp_params(kernel=Matern(length_scale= 10))
optimizer.maximize( init_points=5,
n_iter=30, acq="ucb", kappa=5.0)
```
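`acq="ucb"` above means candidate points are scored by the upper confidence bound `mu(x) + kappa*sigma(x)`, so `kappa=5.0` weights predictive uncertainty heavily (exploration over exploitation). A minimal sketch of the rule:

```python
import numpy as np

def ucb(mean, std, kappa=5.0):
    """Upper confidence bound acquisition: a high predicted mean and/or
    high predictive uncertainty both make a point attractive."""
    return mean + kappa * std

mean = np.array([0.2, 0.5, 0.4])
std = np.array([0.30, 0.01, 0.20])
print(int(np.argmax(ucb(mean, std))))  # 0: 0.2 + 5*0.30 = 1.7 wins on uncertainty
```

With a smaller `kappa`, the second point (highest mean) would win instead, which is the usual exploration/exploitation trade-off.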
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Image/texture.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Image/texture.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Image/texture.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Image/texture.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The magic command `%%capture` can be used to hide output from a specific cell.
```
# %%capture
# !pip install earthengine-api
# !pip install geehydro
```
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()`
if you are running this notebook for the first time or if you are getting an authentication error.
```
# ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
import math
# Load a high-resolution NAIP image.
image = ee.Image('USDA/NAIP/DOQQ/m_3712213_sw_10_1_20140613')
# Zoom to San Francisco, display.
Map.setCenter(-122.466123, 37.769833, 17)
Map.addLayer(image, {'max': 255}, 'image')
# Get the NIR band.
nir = image.select('N')
# Define a neighborhood with a kernel.
square = ee.Kernel.square(**{'radius': 4})
# Compute entropy and display.
entropy = nir.entropy(square)
Map.addLayer(entropy,
{'min': 1, 'max': 5, 'palette': ['0000CC', 'CC0000']},
'entropy')
# Compute the gray-level co-occurrence matrix (GLCM), get contrast.
glcm = nir.glcmTexture(**{'size': 4})
contrast = glcm.select('N_contrast')
Map.addLayer(contrast,
{'min': 0, 'max': 1500, 'palette': ['0000CC', 'CC0000']},
'contrast')
# Create a list of weights for a 9x9 kernel.
row = [1, 1, 1, 1, 1, 1, 1, 1, 1]
# The center of the kernel is zero.
centerRow = [1, 1, 1, 1, 0, 1, 1, 1, 1]
# Assemble a list of lists: the 9x9 kernel weights as a 2-D matrix.
# (Renamed from `list`/`lists` to avoid shadowing the Python builtin.)
rows = [row, row, row, row, centerRow, row, row, row, row]
# Create the kernel from the weights.
# Non-zero weights represent the spatial neighborhood.
kernel = ee.Kernel.fixed(9, 9, rows, -4, -4, False)
# Convert the neighborhood into multiple bands.
neighs = nir.neighborhoodToBands(kernel)
# Compute local Geary's C, a measure of spatial association.
gearys = nir.subtract(neighs).pow(2).reduce(ee.Reducer.sum()) \
.divide(math.pow(9, 2))
Map.addLayer(gearys,
{'min': 20, 'max': 2500, 'palette': ['0000CC', 'CC0000']},
"Geary's C")
```
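The Geary's C step above sums squared differences between each pixel and every pixel in its 9x9 neighborhood (center weight zero) and divides by the kernel size. A plain-NumPy analogue for a small array — illustrative only, not the Earth Engine implementation:

```python
import numpy as np

def local_gearys_c(img, radius=4):
    """Sum of squared differences between each pixel and its
    (2*radius+1)^2 square neighborhood (center excluded), normalized
    by the kernel size -- mirroring the snippet above."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros((h, w), dtype=float)
    n = (2 * radius + 1) ** 2  # 81 for the 9x9 kernel
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue  # the center of the kernel is zero
            shifted = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            out += (img - shifted) ** 2
    return out / n

demo = local_gearys_c(np.random.rand(20, 20))
print(demo.shape)  # (20, 20)
```

A perfectly flat image scores zero everywhere; high values flag pixels that differ strongly from their neighborhood.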
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
## TFMA Notebook example
This notebook describes how to export your model for TFMA and demonstrates the analysis tooling it offers.
Note: Please make sure to follow the instructions in [README.md](https://github.com/tensorflow/tfx/blob/master/tfx/examples/chicago_taxi/README.md) when running this notebook
## Setup
Import necessary packages.
```
import apache_beam as beam
import os
import preprocess
import shutil
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from google.protobuf import text_format
from tensorflow.python.lib.io import file_io
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.saved import saved_transform_io
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.tf_metadata import schema_utils
from trainer import task
from trainer import taxi
```
Helper functions and some constants for running the notebook locally.
```
BASE_DIR = os.getcwd()
DATA_DIR = os.path.join(BASE_DIR, 'data')
OUTPUT_DIR = os.path.join(BASE_DIR, 'chicago_taxi_output')
# Base dir containing train and eval data
TRAIN_DATA_DIR = os.path.join(DATA_DIR, 'train')
EVAL_DATA_DIR = os.path.join(DATA_DIR, 'eval')
# Base dir where TFT writes training data
TFT_TRAIN_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_train')
TFT_TRAIN_FILE_PREFIX = 'train_transformed'
# Base dir where TFT writes eval data
TFT_EVAL_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_eval')
TFT_EVAL_FILE_PREFIX = 'eval_transformed'
TF_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tf')
# Base dir where TFMA writes eval data
TFMA_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tfma')
SERVING_MODEL_DIR = 'serving_model_dir'
EVAL_MODEL_DIR = 'eval_model_dir'
def get_tft_train_output_dir(run_id):
return _get_output_dir(TFT_TRAIN_OUTPUT_BASE_DIR, run_id)
def get_tft_eval_output_dir(run_id):
return _get_output_dir(TFT_EVAL_OUTPUT_BASE_DIR, run_id)
def get_tf_output_dir(run_id):
return _get_output_dir(TF_OUTPUT_BASE_DIR, run_id)
def get_tfma_output_dir(run_id):
return _get_output_dir(TFMA_OUTPUT_BASE_DIR, run_id)
def _get_output_dir(base_dir, run_id):
return os.path.join(base_dir, 'run_' + str(run_id))
def get_schema_file():
return os.path.join(OUTPUT_DIR, 'schema.pbtxt')
```
Clean up output directories.
```
shutil.rmtree(TFT_TRAIN_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(TFT_EVAL_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(TF_OUTPUT_BASE_DIR, ignore_errors=True)
shutil.rmtree(get_schema_file(), ignore_errors=True)
```
## Compute and visualize descriptive data statistics
```
# Compute stats over training data.
train_stats = tfdv.generate_statistics_from_csv(data_location=os.path.join(TRAIN_DATA_DIR, 'data.csv'))
# Visualize training data stats.
tfdv.visualize_statistics(train_stats)
```
## Infer a schema
```
# Infer a schema from the training data stats.
schema = tfdv.infer_schema(statistics=train_stats, infer_feature_shape=False)
tfdv.display_schema(schema=schema)
```
## Check evaluation data for errors
```
# Compute stats over eval data.
eval_stats = tfdv.generate_statistics_from_csv(data_location=os.path.join(EVAL_DATA_DIR, 'data.csv'))
# Compare stats of eval data with training data.
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
# Check eval data for errors by validating the eval data stats using the previously inferred schema.
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)
# Update the schema based on the observed anomalies.
# Relax the minimum fraction of values that must come from the domain for feature company.
company = tfdv.get_feature(schema, 'company')
company.distribution_constraints.min_domain_mass = 0.9
# Add new value to the domain of feature payment_type.
payment_type_domain = tfdv.get_domain(schema, 'payment_type')
payment_type_domain.value.append('Prcard')
# Validate eval stats after updating the schema
updated_anomalies = tfdv.validate_statistics(eval_stats, schema)
tfdv.display_anomalies(updated_anomalies)
```
## Freeze the schema
Now that the schema has been reviewed and curated, we will store it in a file to reflect its "frozen" state.
```
file_io.recursive_create_dir(OUTPUT_DIR)
file_io.write_string_to_file(get_schema_file(), text_format.MessageToString(schema))
```
## Preprocess Inputs
transform_data is defined in preprocess.py and uses the tensorflow_transform library to perform preprocessing. The same code is used for both local preprocessing in this notebook and preprocessing in the Cloud (via Dataflow).
```
# Transform eval data
preprocess.transform_data(input_handle=os.path.join(EVAL_DATA_DIR, 'data.csv'),
outfile_prefix=TFT_EVAL_FILE_PREFIX,
working_dir=get_tft_eval_output_dir(0),
schema_file=get_schema_file(),
pipeline_args=['--runner=DirectRunner'])
print('Done')
# Transform training data
preprocess.transform_data(input_handle=os.path.join(TRAIN_DATA_DIR, 'data.csv'),
outfile_prefix=TFT_TRAIN_FILE_PREFIX,
working_dir=get_tft_train_output_dir(0),
schema_file=get_schema_file(),
pipeline_args=['--runner=DirectRunner'])
print('Done')
```
## Compute statistics over transformed data
```
# Compute stats over transformed training data.
TRANSFORMED_TRAIN_DATA = os.path.join(get_tft_train_output_dir(0), TFT_TRAIN_FILE_PREFIX + "*")
transformed_train_stats = tfdv.generate_statistics_from_tfrecord(data_location=TRANSFORMED_TRAIN_DATA)
# Visualize transformed training data stats and compare to raw training data.
# Use 'Feature search' to focus on a feature and see statistics pre- and post-transformation.
tfdv.visualize_statistics(transformed_train_stats, train_stats, lhs_name='TRANSFORMED', rhs_name='RAW')
```
## Prepare the Model
To use TFMA, export the model into an **EvalSavedModel** by calling ``tfma.export.export_eval_savedmodel``.
``tfma.export.export_eval_savedmodel`` is analogous to ``estimator.export_savedmodel`` but exports the evaluation graph as opposed to the training or inference graph. Notice that one of the inputs is ``eval_input_receiver_fn`` which is analogous to ``serving_input_receiver_fn`` for ``estimator.export_savedmodel``. For more details, refer to the documentation for TFMA on Github.
Construct the **EvalSavedModel** after training is completed.
```
def run_experiment(hparams):
"""Run the training and evaluate using the high level API"""
# Train and evaluate the model as usual.
estimator = task.train_and_maybe_evaluate(hparams)
# Export TFMA's special EvalSavedModel
eval_model_dir = os.path.join(hparams.output_dir, EVAL_MODEL_DIR)
receiver_fn = lambda: eval_input_receiver_fn(hparams.tf_transform_dir)
tfma.export.export_eval_savedmodel(
estimator=estimator,
export_dir_base=eval_model_dir,
eval_input_receiver_fn=receiver_fn)
def eval_input_receiver_fn(working_dir):
# Extract feature spec from the schema.
raw_feature_spec = schema_utils.schema_as_feature_spec(schema).feature_spec
serialized_tf_example = tf.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# First we deserialize our examples using the raw schema.
features = tf.parse_example(serialized_tf_example, raw_feature_spec)
# Now that we have our raw examples, we must process them through tft
_, transformed_features = (
saved_transform_io.partially_apply_saved_transform(
os.path.join(working_dir, transform_fn_io.TRANSFORM_FN_DIR),
features))
# The key MUST be 'examples'.
receiver_tensors = {'examples': serialized_tf_example}
# NOTE: Model is driven by transformed features (since training works on the
# materialized output of TFT), but slicing will happen on raw features.
features.update(transformed_features)
return tfma.export.EvalInputReceiver(
features=features,
receiver_tensors=receiver_tensors,
labels=transformed_features[taxi.transformed_name(taxi.LABEL_KEY)])
print('Done')
```
## Train and export the model for TFMA
```
def run_local_experiment(tft_run_id, tf_run_id, num_layers, first_layer_size, scale_factor):
"""Helper method to train and export the model for TFMA
The caller specifies the input and output directories by providing run ids. The optional parameters
allow the user to vary the model for the time series view.
Args:
tft_run_id: The run id for the preprocessing. Identifies the folder containing training data.
tf_run_id: The run id for this training run. Identifies where the exported model will be written.
num_layers: The number of hidden layers.
first_layer_size: The size of the first hidden layer.
scale_factor: The scale factor between successive hidden layer sizes.
"""
hparams = tf.contrib.training.HParams(
# Inputs: are tf-transformed materialized features
train_files=os.path.join(get_tft_train_output_dir(tft_run_id), TFT_TRAIN_FILE_PREFIX + '-00000-of-*'),
eval_files=os.path.join(get_tft_eval_output_dir(tft_run_id), TFT_EVAL_FILE_PREFIX + '-00000-of-*'),
schema_file=get_schema_file(),
# Output: dir for trained model
job_dir=get_tf_output_dir(tf_run_id),
tf_transform_dir=get_tft_train_output_dir(tft_run_id),
# Output: dir for both the serving model and eval_model which will go into tfma
# evaluation
output_dir=get_tf_output_dir(tf_run_id),
train_steps=10000,
eval_steps=5000,
num_layers=num_layers,
first_layer_size=first_layer_size,
scale_factor=scale_factor,
num_epochs=None,
train_batch_size=40,
eval_batch_size=40)
run_experiment(hparams)
print('Done')
run_local_experiment(tft_run_id=0,
tf_run_id=0,
num_layers=4,
first_layer_size=100,
scale_factor=0.7)
print('Done')
```
## Run TFMA to compute metrics
For local analysis, TFMA offers a helper method ``tfma.run_model_analysis``
```
help(tfma.run_model_analysis)
```
#### You can also write your own custom pipeline if you want to perform extra transformations on the data before evaluation.
```
def run_tfma(slice_spec, tf_run_id, tfma_run_id, input_csv, schema_file, add_metrics_callbacks=None):
"""A simple wrapper function that runs tfma locally.
A function that does extra transformations on the data and then runs model analysis.
Args:
slice_spec: The slicing spec for how to slice the data.
tf_run_id: An id to construct the model directories with.
tfma_run_id: An id to construct output directories with.
input_csv: The evaluation data in csv format.
schema_file: The file holding a text-serialized schema for the input data.
add_metrics_callbacks: Optional list of callbacks for computing extra metrics.
Returns:
An EvalResult that can be used with TFMA visualization functions.
"""
eval_model_base_dir = os.path.join(get_tf_output_dir(tf_run_id), EVAL_MODEL_DIR)
eval_model_dir = os.path.join(eval_model_base_dir, next(os.walk(eval_model_base_dir))[1][0])
eval_shared_model = tfma.default_eval_shared_model(
eval_saved_model_path=eval_model_dir,
add_metrics_callbacks=add_metrics_callbacks)
schema = taxi.read_schema(schema_file)
print(eval_model_dir)
display_only_data_location = input_csv
with beam.Pipeline() as pipeline:
csv_coder = taxi.make_csv_coder(schema)
raw_data = (
pipeline
| 'ReadFromText' >> beam.io.ReadFromText(
input_csv,
coder=beam.coders.BytesCoder(),
skip_header_lines=True)
| 'ParseCSV' >> beam.Map(csv_coder.decode))
# Examples must be in clean tf-example format.
coder = taxi.make_proto_coder(schema)
raw_data = (
raw_data
| 'ToSerializedTFExample' >> beam.Map(coder.encode))
_ = (raw_data
| 'ExtractEvaluateAndWriteResults' >>
tfma.ExtractEvaluateAndWriteResults(
eval_shared_model=eval_shared_model,
slice_spec=slice_spec,
output_path=get_tfma_output_dir(tfma_run_id),
display_only_data_location=input_csv))
return tfma.load_eval_result(output_path=get_tfma_output_dir(tfma_run_id))
print('Done')
```
#### You can also compute metrics on slices of your data in TFMA. Slices can be specified using ``tfma.slicer.SingleSliceSpec``.
Below are examples of how slices can be specified.
```
# An empty slice spec means the overall slice, that is, the whole dataset.
OVERALL_SLICE_SPEC = tfma.slicer.SingleSliceSpec()
# Data can be sliced along a feature column
# In this case, data is sliced along feature column trip_start_hour.
FEATURE_COLUMN_SLICE_SPEC = tfma.slicer.SingleSliceSpec(columns=['trip_start_hour'])
# Data can be sliced by crossing feature columns
# In this case, slices are computed for trip_start_day x trip_start_month.
FEATURE_COLUMN_CROSS_SPEC = tfma.slicer.SingleSliceSpec(columns=['trip_start_day', 'trip_start_month'])
# Metrics can be computed for a particular feature value.
# In this case, metrics is computed for all data where trip_start_hour is 12.
FEATURE_VALUE_SPEC = tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 12)])
# It is also possible to mix column cross and feature value cross.
# In this case, data where trip_start_hour is 12 will be sliced by trip_start_day.
COLUMN_CROSS_VALUE_SPEC = tfma.slicer.SingleSliceSpec(columns=['trip_start_day'], features=[('trip_start_hour', 12)])
ALL_SPECS = [
OVERALL_SLICE_SPEC,
FEATURE_COLUMN_SLICE_SPEC,
FEATURE_COLUMN_CROSS_SPEC,
FEATURE_VALUE_SPEC,
COLUMN_CROSS_VALUE_SPEC
]
```
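Conceptually, a slice spec fixes some (feature, value) pairs and fans out over the listed columns. A pure-Python toy of that grouping (not TFMA's implementation; the data and helper are illustrative):

```python
from collections import defaultdict

def slice_rows(rows, columns=(), features=()):
    """Group rows the way a single-slice spec would: keep only rows
    matching every fixed (feature, value) pair, then bucket them by
    the values of the listed columns."""
    buckets = defaultdict(list)
    for row in rows:
        if any(row.get(k) != v for k, v in features):
            continue
        key = tuple((c, row[c]) for c in columns)
        buckets[key].append(row)
    return dict(buckets)

rides = [
    {'trip_start_hour': 12, 'trip_start_day': 1, 'fare': 8.0},
    {'trip_start_hour': 12, 'trip_start_day': 2, 'fare': 11.5},
    {'trip_start_hour': 9,  'trip_start_day': 1, 'fare': 6.0},
]

# Analogue of COLUMN_CROSS_VALUE_SPEC: hour fixed at 12, sliced by day.
slices = slice_rows(rides, columns=('trip_start_day',),
                    features=(('trip_start_hour', 12),))
print(sorted(slices))  # [(('trip_start_day', 1),), (('trip_start_day', 2),)]
```

With no arguments the function returns a single empty-key bucket holding every row, the analogue of the overall slice.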
#### Let's run TFMA!
```
tf.logging.set_verbosity(tf.logging.INFO)
tfma_result_1 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id=1,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
print('Done')
```
## Visualization: Slicing Metrics
To see the slices, either use the name of the column (by setting slicing_column) or provide a tfma.slicer.SingleSliceSpec (by setting slicing_spec). If neither is provided, the overall slice will be displayed.
The default visualization is **slice overview** when the number of slices is small. It shows the value of a metric for each slice, sorted by another metric. It is also possible to set a threshold to filter out slices with small weights.
This view also supports a **metrics histogram** as an alternative visualization, which is also the default view when the number of slices is large. The results are divided into buckets, and the number of slices, the total weights, or both can be visualized. Slices with small weights can be filtered out by setting the threshold. Further filtering can be applied by dragging the grey band; to reset the range, double-click the band. Filtering can also be used to remove outliers in the visualization and the metrics table below.
```
# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
tfma_result_1, slicing_column='trip_start_hour')
# Show metrics sliced by COLUMN_CROSS_VALUE_SPEC above.
tfma.view.render_slicing_metrics(tfma_result_1, slicing_spec=COLUMN_CROSS_VALUE_SPEC)
# Show overall metrics.
tfma.view.render_slicing_metrics(tfma_result_1)
```
## Visualization: Plots
TFMA offers a number of built-in plots. To see them, add them to ``add_metrics_callbacks``
```
tf.logging.set_verbosity(tf.logging.INFO)
tfma_vis = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id='vis',
slice_spec=ALL_SPECS,
schema_file=get_schema_file(),
add_metrics_callbacks=[
# calibration_plot_and_prediction_histogram computes calibration plot and prediction
# distribution at different thresholds.
tfma.post_export_metrics.calibration_plot_and_prediction_histogram(),
# auc_plots enables precision-recall curve and ROC visualization at different thresholds.
tfma.post_export_metrics.auc_plots()
])
print('Done')
```
Plots must be visualized for an individual slice. To specify a slice, use ``tfma.slicer.SingleSliceSpec``.
In the example below, we are using ``tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 1)])`` to specify the slice where trip_start_hour is 1.
Plots are interactive:
- Drag to pan
- Scroll to zoom
- Right click to reset the view
Simply hover over the desired data point to see more details.
```
tfma.view.render_plot(tfma_vis, tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 1)]))
```
#### Custom metrics
In addition to plots, it is also possible to compute additional metrics that were not present at export time, or custom metrics, using ``add_metrics_callbacks``.
All metrics in ``tf.metrics`` are supported in the callback and can be used to compose other metrics:
https://www.tensorflow.org/api_docs/python/tf/metrics
In the cells below, false negative rate is computed as an example.
```
# Defines a callback that adds FNR to the result.
def add_fnr_for_threshold(threshold):
def _add_fnr_callback(features_dict, predictions_dict, labels_dict):
metric_ops = {}
prediction_tensor = tf.cast(
predictions_dict.get(tf.contrib.learn.PredictionKey.LOGISTIC), tf.float64)
fn_value_op, fn_update_op = tf.metrics.false_negatives_at_thresholds(tf.squeeze(labels_dict),
tf.squeeze(prediction_tensor),
[threshold])
tp_value_op, tp_update_op = tf.metrics.true_positives_at_thresholds(tf.squeeze(labels_dict),
tf.squeeze(prediction_tensor),
[threshold])
fnr = fn_value_op[0] / (fn_value_op[0] + tp_value_op[0])
metric_ops['FNR@' + str(threshold)] = (fnr, tf.group(fn_update_op, tp_update_op))
return metric_ops
return _add_fnr_callback
tf.logging.set_verbosity(tf.logging.INFO)
tfma_fnr = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=0,
tfma_run_id='fnr',
slice_spec=ALL_SPECS,
schema_file=get_schema_file(),
add_metrics_callbacks=[
# Simply add the call here.
add_fnr_for_threshold(0.75)
])
tfma.view.render_slicing_metrics(tfma_fnr, slicing_spec=FEATURE_COLUMN_SLICE_SPEC)
```
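As a sanity check on the formula, the same false negative rate can be computed in plain NumPy (a stand-in for the TensorFlow metric ops in the callback above; the label and score arrays are made-up examples):

```python
import numpy as np

def fnr_at_threshold(labels, scores, threshold):
    """False negative rate FN / (FN + TP) at a decision threshold,
    mirroring the callback above (positives are labels == 1)."""
    preds = scores > threshold
    fn = np.sum((labels == 1) & ~preds)  # positives predicted negative
    tp = np.sum((labels == 1) & preds)   # positives predicted positive
    return fn / (fn + tp)

labels = np.array([1, 1, 1, 0, 1])
scores = np.array([0.9, 0.8, 0.4, 0.95, 0.7])
print(fnr_at_threshold(labels, scores, 0.75))  # 2 of 4 positives missed -> 0.5
```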
## Visualization: Time Series
It is important to track how your model is doing over time. TFMA offers two modes for showing how your model performs over time.
**Multiple model analysis** shows how a model performs from one version to another. This is useful early on to see how the addition of new features, a change in modeling technique, etc., affects performance. TFMA offers a convenient method.
```
help(tfma.multiple_model_analysis)
```
**Multiple data analysis** shows how a model performs on different evaluation datasets. This is useful to ensure that model performance does not degrade over time. TFMA offers a convenient method.
```
help(tfma.multiple_data_analysis)
```
It is also possible to compose a time series manually.
```
# Create different models.
# Run some experiments with different hidden layer configurations.
run_local_experiment(tft_run_id=0,
tf_run_id=1,
num_layers=3,
first_layer_size=200,
scale_factor=0.7)
run_local_experiment(tft_run_id=0,
tf_run_id=2,
num_layers=4,
first_layer_size=240,
scale_factor=0.5)
print('Done')
tfma_result_2 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=1,
tfma_run_id=2,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
tfma_result_3 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'),
tf_run_id=2,
tfma_run_id=3,
slice_spec=ALL_SPECS,
schema_file=get_schema_file())
print('Done')
```
Like plots, the time series view must be visualized for a slice too.
In the example below, we show the overall slice.
Select a metric to see its time series graph. Hover over each data point to get more details.
```
eval_results = tfma.make_eval_results([tfma_result_1, tfma_result_2, tfma_result_3],
tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results, OVERALL_SLICE_SPEC)
```
Serialized results can also be used to construct a time series, so in a long-running pipeline there is no need to re-run TFMA for models that have already been evaluated.
```
# Visualize the results in a Time Series. In this case, we are showing the slice specified.
eval_results_from_disk = tfma.load_eval_results([get_tfma_output_dir(1),
get_tfma_output_dir(2),
get_tfma_output_dir(3)],
tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results_from_disk, FEATURE_VALUE_SPEC)
```
# Conservative remapping
```
import xgcm
import xarray as xr
import numpy as np
import xbasin
```
We open the example data and create two grids: one for the dataset we have and one for the remapped data.
Here the suffix '_fr' means *from* (the source grid) and '_to' means *to* (the target grid, i.e. the remapped data).
```
ds = xr.open_dataset('data/nemo_output_ex.nc')
from xnemogcm import open_nemo_and_domain_cfg
ds = open_nemo_and_domain_cfg(datadir='/home/romain/Documents/Education/PhD/Courses/2019-OC6310/Project/Experiments/EXP_eos00/Rawdata')
metrics_fr = {
('X',): ['e1t', 'e1u', 'e1v', 'e1f'],
('Y',): ['e2t', 'e2u', 'e2v', 'e2f'],
('Z',): ['e3t', 'e3u', 'e3v', 'e3w']
}
metrics_to = {
('X',): ['e1t', 'e1u', 'e1v', 'e1f'],
('Y',): ['e2t', 'e2u', 'e2v', 'e2f'],
('Z',): ['e3t_1d', 'e3w_1d']
}
grid_fr = xgcm.Grid(ds, periodic=False, metrics=metrics_fr)
grid_to = xgcm.Grid(ds, periodic=False, metrics=metrics_to)
# Convert the thetao float32 to float64 for more precision
ds.thetao.values = ds.thetao.values.astype(np.float64)
print(ds)
```
## Remap a T point
```
%timeit xbasin.remap_vertical(ds.thetao, grid_fr, grid_to, axis='Z')
theta_to = xbasin.remap_vertical(ds.thetao, grid_fr, grid_to, axis='Z')
print(theta_to.coords)
```
The total heat content is conserved:
```
hc_fr = grid_fr.integrate(ds.thetao, axis='Z')
hc_to = grid_to.integrate(theta_to, axis='Z')
(hc_fr == hc_to).all()
```
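The conservation property is easiest to see in one dimension. Below is a toy NumPy sketch of conservative remapping by overlap weighting (an illustrative helper, not xbasin's implementation): each target cell averages the source values weighted by how much of each source cell it overlaps, so the column integral is unchanged.

```python
import numpy as np

def remap_conservative(src_edges, src_vals, dst_edges):
    """Remap cell-averaged values onto a new 1-D grid by overlap
    weighting, so the integral over the column is conserved."""
    dst_vals = np.zeros(len(dst_edges) - 1)
    for j in range(len(dst_vals)):
        lo, hi = dst_edges[j], dst_edges[j + 1]
        acc = 0.0
        for i in range(len(src_vals)):
            # Length of the intersection of source cell i with target cell j.
            overlap = max(0.0, min(hi, src_edges[i + 1]) - max(lo, src_edges[i]))
            acc += src_vals[i] * overlap
        dst_vals[j] = acc / (hi - lo)
    return dst_vals

src_edges = np.array([0.0, 10.0, 30.0, 60.0])  # uneven source cells
src_vals = np.array([20.0, 15.0, 5.0])         # e.g. a temperature profile
dst_edges = np.linspace(0.0, 60.0, 7)          # regular target cells
dst_vals = remap_conservative(src_edges, src_vals, dst_edges)

# Column integrals (value * thickness) match on both grids.
print(np.sum(src_vals * np.diff(src_edges)),
      np.sum(dst_vals * np.diff(dst_edges)))
```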
## Remap a W point
```
w_to = xbasin.remap_vertical(ds.woce*0+1, grid_fr, grid_to, axis='Z')
grid_to.integrate(w_to, axis='Z')[-1].plot()
grid_fr.integrate((ds.woce*0+1), axis='Z')[-1].plot()
```
## Time comparison
The core function of the remapping is compiled from Python to C++ with Pythran, which improves the speed. However, if Pythran is not installed, the original Python function is called instead.
As a user you should not call the two following functions directly; they are only shown here for the timing comparison.
```
fake_dataset = [
np.ascontiguousarray(ds.gdept_0.values.reshape(ds.gdept_0.values.shape+(1,))),
np.ascontiguousarray(ds.gdepw_0.values.reshape(ds.gdepw_0.values.shape+(1,))),
np.ascontiguousarray(ds.thetao.transpose('z_c', 'y_c', 'x_c', 't').values.flatten().reshape(ds.thetao.transpose('z_c', 'y_c', 'x_c', 't').shape)[...,0:1])
]
from xbasin._interpolation import interp_new_vertical as _interpolation_pure_python
from xbasin.interpolation_compiled import interp_new_vertical as _interpolation_pythran
```
### Pure Python
```
%timeit _interpolation_pure_python(*fake_dataset)
```
### Pythran
```
%timeit _interpolation_pythran(*fake_dataset)
```
We see that the compiled version runs roughly 10-100 times faster (this number is only a rough approximation). The pure-Python version does not use vectorized array operations and is therefore much slower.
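The size of that gap can be sketched without Pythran by comparing a per-point Python loop against NumPy's vectorized `np.interp` on a toy vertical profile (illustrative only, not xbasin's actual kernel):

```python
import numpy as np

def interp_pure_python(z_new, z_old, v_old):
    """Piecewise-linear interpolation with one Python-level loop per
    output point (assumes z_old is strictly increasing)."""
    out = []
    for z in z_new:
        # Locate the bracketing interval, clamped to the grid.
        i = min(max(np.searchsorted(z_old, z) - 1, 0), len(z_old) - 2)
        t = (z - z_old[i]) / (z_old[i + 1] - z_old[i])
        out.append((1 - t) * v_old[i] + t * v_old[i + 1])
    return np.array(out)

z_old = np.linspace(0.0, 1000.0, 50)   # source depths
v_old = np.cos(z_old / 300.0)          # source profile
z_new = np.linspace(0.0, 1000.0, 200)  # target depths

slow = interp_pure_python(z_new, z_old, v_old)
fast = np.interp(z_new, z_old, v_old)  # vectorized, runs in C
print(np.allclose(slow, fast))  # True
```

Timing the two with `%timeit` shows the same qualitative picture as the Pythran comparison above: the interpreter loop dominates the cost.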
# DeepDreaming with TensorFlow
>[Loading and displaying the model graph](#loading)
>[Naive feature visualization](#naive)
>[Multiscale image generation](#multiscale)
>[Laplacian Pyramid Gradient Normalization](#laplacian)
>[Playing with feature visualizations](#playing)
>[DeepDream](#deepdream)
This notebook demonstrates a number of Convolutional Neural Network image generation techniques implemented with TensorFlow for fun and science:
- visualize individual feature channels and their combinations to explore the space of patterns learned by the neural network (see [GoogLeNet](http://storage.googleapis.com/deepdream/visualz/tensorflow_inception/index.html) and [VGG16](http://storage.googleapis.com/deepdream/visualz/vgg16/index.html) galleries)
- embed TensorBoard graph visualizations into Jupyter notebooks
- produce high-resolution images with tiled computation ([example](http://storage.googleapis.com/deepdream/pilatus_flowers.jpg))
- use Laplacian Pyramid Gradient Normalization to produce smooth and colorful visuals at low cost
- generate DeepDream-like images with TensorFlow (DogSlugs included)
The network under examination is the [GoogLeNet architecture](http://arxiv.org/abs/1409.4842), trained to classify images into one of 1000 categories of the [ImageNet](http://image-net.org/) dataset. It consists of a set of layers that apply a sequence of transformations to the input image. The parameters of these transformations were determined during the training process by a variant of the gradient descent algorithm. The internal image representations may seem obscure, but it is possible to visualize and interpret them. In this notebook we are going to present a few tricks that allow us to make these visualizations both efficient to generate and even beautiful. Impatient readers can start with exploring the full galleries of images generated by the method described here for [GoogLeNet](http://storage.googleapis.com/deepdream/visualz/tensorflow_inception/index.html) and [VGG16](http://storage.googleapis.com/deepdream/visualz/vgg16/index.html) architectures.
```
# boilerplate code
from __future__ import print_function
import os
from io import BytesIO
import numpy as np
from functools import partial
import PIL.Image
from IPython.display import clear_output, Image, display, HTML
import tensorflow as tf
```
<a id='loading'></a>
## Loading and displaying the model graph
The pretrained network can be downloaded [here](https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip). Unpack the `tensorflow_inception_graph.pb` file from the archive and set its path to `model_fn` variable. Alternatively you can uncomment and run the following cell to download the network:
```
#!wget https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip && unzip inception5h.zip
model_fn = 'tensorflow_inception_graph.pb'
# creating TensorFlow session and loading the model
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
with tf.gfile.FastGFile(model_fn, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
t_input = tf.placeholder(np.float32, name='input') # define the input tensor
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(t_input-imagenet_mean, 0)
tf.import_graph_def(graph_def, {'input':t_preprocessed})
```
To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of a particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.
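Stripped of TensorFlow, this maximization is plain gradient ascent with a std-normalized step, the same loop shape `render_naive` uses later in the notebook. A toy NumPy sketch on a stand-in quadratic objective (the objective and its gradient are illustrative assumptions, not network activations):

```python
import numpy as np

def objective(x):
    # Toy stand-in for the mean channel activation: maximized at x == 3.
    return np.mean(-(x - 3.0) ** 2)

def grad(x):
    # Analytic gradient of the objective above.
    return -2.0 * (x - 3.0) / x.size

def ascend(x0, iter_n=20, step=1.0):
    """Core loop of naive activation maximization: repeatedly step
    along the std-normalized gradient of the objective."""
    x = x0.copy()
    for _ in range(iter_n):
        g = grad(x)
        g /= g.std() + 1e-8  # normalization trick from render_naive
        x += g * step
    return x

np.random.seed(0)
x0 = np.random.uniform(0.0, 1.0, size=(8, 8))
x = ascend(x0, step=0.1)
print(objective(x0), '->', objective(x))  # the objective increases
```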
```
layers = [op.name for op in graph.get_operations() if op.type=='Conv2D' and 'import/' in op.name]
feature_nums = [int(graph.get_tensor_by_name(name+':0').get_shape()[-1]) for name in layers]
print('Number of layers', len(layers))
print('Total number of feature channels:', sum(feature_nums))
# Helper functions for TF Graph visualization
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = "<stripped %d bytes>"%size
return strip_def
def rename_nodes(graph_def, rename_func):
res_def = tf.GraphDef()
for n0 in graph_def.node:
n = res_def.node.add()
n.MergeFrom(n0)
n.name = rename_func(n.name)
for i, s in enumerate(n.input):
n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
return res_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
# Visualizing the network graph. Be sure to expand the "mixed" nodes to see their
# internal structure. We are going to visualize "Conv2D" nodes.
tmp_def = rename_nodes(graph_def, lambda s:"/".join(s.split('_',1)))
show_graph(tmp_def)
```
<a id='naive'></a>
## Naive feature visualization
Let's start with a naive way of visualizing these. Image-space gradient ascent!
```
# Picking some internal layer. Note that we use outputs before applying the ReLU nonlinearity
# to have non-zero gradients for features with negative initial activations.
layer = 'mixed4d_3x3_bottleneck_pre_relu'
channel = 139 # picking some feature channel to visualize
# start with a gray image with a little noise
img_noise = np.random.uniform(size=(224,224,3)) + 100.0
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 1)*255)
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
def visstd(a, s=0.1):
'''Normalize the image range for visualization'''
return (a-a.mean())/max(a.std(), 1e-4)*s + 0.5
def T(layer):
'''Helper for getting layer output tensor'''
return graph.get_tensor_by_name("import/%s:0"%layer)
def render_naive(t_obj, img0=img_noise, iter_n=20, step=1.0):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for i in range(iter_n):
g, score = sess.run([t_grad, t_score], {t_input:img})
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print(score, end = ' ')
clear_output()
showarray(visstd(img))
render_naive(T(layer)[:,:,:,channel])
```
<a id="multiscale"></a>
## Multiscale image generation
Looks like the network wants to show us something interesting! Let's help it. We are going to apply gradient ascent on multiple scales. Details formed at a smaller scale will be upscaled and augmented with additional details at the next scale.
With multiscale image generation it may be tempting to set the number of octaves to some high value to produce wallpaper-sized images. Storing network activations and backprop values will quickly run out of GPU memory in this case. There is a simple trick to avoid this: split the image into smaller tiles and compute each tile gradient independently. Applying random shifts to the image before every iteration helps avoid tile seams and improves the overall image quality.
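The bookkeeping of the tiling trick can be checked in NumPy alone. In this sketch `grad_fn` is a hypothetical stand-in for the per-tile `sess.run(t_grad, ...)` call; for an elementwise gradient, the shift/tile/unshift round trip reproduces the whole-image result exactly:

```python
import numpy as np

def grad_fn(sub):
    # Stand-in for the real gradient evaluation: any elementwise
    # function works for checking the roll/unroll bookkeeping.
    return 2.0 * sub

def calc_grad_tiled_np(img, tile_size=64):
    """Randomly shift the image, evaluate the gradient tile by tile,
    then undo the shift -- the memory-saving trick described above."""
    h, w = img.shape[:2]
    sx, sy = np.random.randint(tile_size, size=2)
    img_shift = np.roll(np.roll(img, sx, 1), sy, 0)
    grad = np.zeros_like(img)
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            sub = img_shift[y:y + tile_size, x:x + tile_size]
            grad[y:y + tile_size, x:x + tile_size] = grad_fn(sub)
    return np.roll(np.roll(grad, -sx, 1), -sy, 0)

img = np.random.rand(128, 192)
# Tiling + unshifting matches the whole-image result, whatever the shift.
print(np.allclose(calc_grad_tiled_np(img), grad_fn(img)))  # True
```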
```
def tffunc(*argtypes):
'''Helper that transforms TF-graph generating function into a regular one.
See "resize" function below.
'''
placeholders = list(map(tf.placeholder, argtypes))
def wrap(f):
out = f(*placeholders)
def wrapper(*args, **kw):
return out.eval(dict(zip(placeholders, args)), session=kw.get('session'))
return wrapper
return wrap
# Helper function that uses TF to resize an image
def resize(img, size):
img = tf.expand_dims(img, 0)
return tf.image.resize_bilinear(img, size)[0,:,:,:]
resize = tffunc(np.float32, np.int32)(resize)
def calc_grad_tiled(img, t_grad, tile_size=512):
'''Compute the value of tensor t_grad over the image in a tiled way.
Random shifts are applied to the image to blur tile boundaries over
multiple iterations.'''
sz = tile_size
h, w = img.shape[:2]
sx, sy = np.random.randint(sz, size=2)
img_shift = np.roll(np.roll(img, sx, 1), sy, 0)
grad = np.zeros_like(img)
for y in range(0, max(h-sz//2, sz),sz):
for x in range(0, max(w-sz//2, sz),sz):
sub = img_shift[y:y+sz,x:x+sz]
g = sess.run(t_grad, {t_input:sub})
grad[y:y+sz,x:x+sz] = g
return np.roll(np.roll(grad, -sx, 1), -sy, 0)
def render_multiscale(t_obj, img0=img_noise, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print('.', end = ' ')
clear_output()
showarray(visstd(img))
render_multiscale(T(layer)[:,:,:,channel])
```
<a id="laplacian"></a>
## Laplacian Pyramid Gradient Normalization
This looks better, but the resulting images mostly contain high frequencies. Can we improve it? One way is to add a smoothness prior into the optimization objective. This will effectively blur the image a little every iteration, suppressing the higher frequencies, so that the lower frequencies can catch up. This will require more iterations to produce a nice image. Why don't we just boost lower frequencies of the gradient instead? One way to achieve this is through the [Laplacian pyramid](https://en.wikipedia.org/wiki/Pyramid_%28image_processing%29#Laplacian_pyramid) decomposition. We call the resulting technique _Laplacian Pyramid Gradient Normalization_.
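A toy NumPy version of the pyramid round trip, using 2x2 block averaging as a crude stand-in for the 5x5 blur used in the TensorFlow code below; the point is that a split followed by a merge reconstructs the image exactly, so normalizing the levels in between only rebalances frequency bands:

```python
import numpy as np

def downsample(x):
    # Average 2x2 blocks (crude stand-in for blur + stride-2 conv).
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(x):
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def lap_split_np(img):
    lo = downsample(img)
    hi = img - upsample(lo)       # hi holds what the blur removed
    return lo, hi

def lap_merge_np(levels):
    img = levels[0]
    for hi in levels[1:]:
        img = upsample(img) + hi  # add the detail back at each scale
    return img

img = np.random.rand(64, 64)
lo1, hi1 = lap_split_np(img)
lo2, hi2 = lap_split_np(lo1)
# Merging the two-level pyramid reconstructs the image exactly.
rec = lap_merge_np([lo2, hi2, hi1])
print(np.allclose(rec, img))  # True
```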
```
k = np.float32([1,4,6,4,1])
k = np.outer(k, k)
k5x5 = k[:,:,None,None]/k.sum()*np.eye(3, dtype=np.float32)
def lap_split(img):
'''Split the image into lo and hi frequency components'''
with tf.name_scope('split'):
lo = tf.nn.conv2d(img, k5x5, [1,2,2,1], 'SAME')
lo2 = tf.nn.conv2d_transpose(lo, k5x5*4, tf.shape(img), [1,2,2,1])
hi = img-lo2
return lo, hi
def lap_split_n(img, n):
'''Build Laplacian pyramid with n splits'''
levels = []
for i in range(n):
img, hi = lap_split(img)
levels.append(hi)
levels.append(img)
return levels[::-1]
def lap_merge(levels):
'''Merge Laplacian pyramid'''
img = levels[0]
for hi in levels[1:]:
with tf.name_scope('merge'):
img = tf.nn.conv2d_transpose(img, k5x5*4, tf.shape(hi), [1,2,2,1]) + hi
return img
def normalize_std(img, eps=1e-10):
'''Normalize image by making its standard deviation = 1.0'''
with tf.name_scope('normalize'):
std = tf.sqrt(tf.reduce_mean(tf.square(img)))
return img/tf.maximum(std, eps)
def lap_normalize(img, scale_n=4):
'''Perform the Laplacian pyramid normalization.'''
img = tf.expand_dims(img,0)
tlevels = lap_split_n(img, scale_n)
tlevels = list(map(normalize_std, tlevels))
out = lap_merge(tlevels)
return out[0,:,:,:]
# Showing the lap_normalize graph with TensorBoard
lap_graph = tf.Graph()
with lap_graph.as_default():
lap_in = tf.placeholder(np.float32, name='lap_in')
lap_out = lap_normalize(lap_in)
show_graph(lap_graph)
def render_lapnorm(t_obj, img0=img_noise, visfunc=visstd,
iter_n=10, step=1.0, octave_n=3, octave_scale=1.4, lap_n=4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# build the laplacian normalization graph
lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
g = lap_norm_func(g)
img += g*step
print('.', end = ' ')
clear_output()
showarray(visfunc(img))
render_lapnorm(T(layer)[:,:,:,channel])
```
<a id="playing"></a>
## Playing with feature visualizations
We got a nice smooth image using only 10 iterations per octave. In case of running on GPU this takes just a few seconds. Let's try to visualize another channel from the same layer. The network can generate wide diversity of patterns.
```
render_lapnorm(T(layer)[:,:,:,65])
```
Lower layers produce features of lower complexity.
```
render_lapnorm(T('mixed3b_1x1_pre_relu')[:,:,:,101])
```
There are many interesting things one may try. For example, optimizing a linear combination of features often gives a "mixture" pattern.
```
render_lapnorm(T(layer)[:,:,:,65]+T(layer)[:,:,:,139], octave_n=4)
```
<a id="deepdream"></a>
## DeepDream
Now let's reproduce the [DeepDream algorithm](https://github.com/google/deepdream/blob/master/dream.ipynb) with TensorFlow.
```
def render_deepdream(t_obj, img0=img_noise,
iter_n=10, step=1.5, octave_n=4, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# split the image into a number of octaves
img = img0
octaves = []
for i in range(octave_n-1):
hw = img.shape[:2]
lo = resize(img, np.int32(np.float32(hw)/octave_scale))
hi = img-resize(lo, hw)
img = lo
octaves.append(hi)
# generate details octave by octave
for octave in range(octave_n):
if octave>0:
hi = octaves[-octave]
img = resize(img, hi.shape[:2])+hi
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
img += g*(step / (np.abs(g).mean()+1e-7))
print('.',end = ' ')
clear_output()
showarray(img/255.0)
```
Let's load some image and populate it with DogSlugs (in case you've missed them).
```
img0 = PIL.Image.open('pilatus800.jpg')
img0 = np.float32(img0)
showarray(img0/255.0)
render_deepdream(tf.square(T('mixed4c')), img0)
```
Note that results can differ from the [Caffe](https://github.com/BVLC/caffe)'s implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset.
Using an arbitrary optimization objective still works:
```
render_deepdream(T(layer)[:,:,:,139], img0)
```
Don't hesitate to use higher resolution inputs (also increase the number of octaves)! Here is an [example](http://storage.googleapis.com/deepdream/pilatus_flowers.jpg) of running the flower dream over the bigger image.
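To get a feel for why bigger inputs want more octaves, here is a quick check of how octave sizes shrink with the `octave_scale=1.4` used above (the 1600×1200 input size is just an assumed example):

```python
import numpy as np

# Each octave is octave_scale (=1.4) times smaller than the previous one.
hw = np.float32([1200, 1600])
for i in range(6):
    print(i, np.int32(hw / 1.4 ** i))
```

Even the smallest of six octaves is still a few hundred pixels on a side, so every scale contributes useful detail to the final dream.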
We hope that the visualization tricks described here may be helpful for analyzing representations learned by neural networks or find their use in various artistic applications.
```
import numpy as np
import pandas as pd
import pickle
import matplotlib.pyplot as plt
import torch
from torch import nn, optim
from torchvision import transforms, utils
from torch.utils.data import TensorDataset, DataLoader
import time
from sklearn.model_selection import train_test_split
%matplotlib inline
with open("../input/monkeyspikes/training_data.pickle", "rb") as f:
training_data = pickle.load(f)
with open("../input/monkeyspikes/training_arm.pickle", "rb") as f:
training_arm = pickle.load(f)
with open("../input/monkeyspikes/mean_trajectory.pickle", "rb") as f:
mean_trajectory = pickle.load(f)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
data = np.concatenate((training_data, training_arm), axis=1)
data.shape
BATCH_SIZE = 24
X_train, X_test, y_train_arm, y_test_arm = train_test_split(
data[:, :297], data[:, 297:],
test_size=0.3, random_state=2022
)
y_train = y_train_arm[:, 0]
y_test = y_test_arm[:, 0]
arm_train = y_train_arm[:, 1:]
arm_test = y_test_arm[:, 1:]
train_dataset = TensorDataset(torch.Tensor(X_train), torch.Tensor(arm_train))
valid_dataset = TensorDataset(torch.Tensor(X_test), torch.Tensor(arm_test))
train_dataloader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
valid_dataloader = DataLoader(valid_dataset, batch_size=BATCH_SIZE, shuffle=True)
X, y = next(iter(train_dataloader))
print(X.shape)
print(y.shape)
class NeuralDecoder(nn.Module):
def __init__(self, input_size, output_size):
super().__init__()
self.main = nn.Sequential(
nn.Linear(input_size, 500),
nn.ReLU(),
nn.Linear(500, 1_000),
nn.ReLU(),
nn.Linear(1_000, 5_000),
nn.ReLU(),
nn.Linear(5_000, 10_000),
nn.ReLU(),
nn.Linear(10_000, 15_000),
nn.ReLU(),
nn.Linear(15_000, 10_000),
nn.ReLU(),
nn.Linear(10_000, 5_000),
nn.ReLU(),
nn.Linear(5_000, output_size)
)
def forward(self, x):
out = self.main(x)
return out
def trainer(model, criterion, optimizer, trainloader, validloader, epochs=50, verbose=True):
"""Simple training wrapper for PyTorch network."""
train_loss = []
valid_loss = []
for epoch in range(epochs):
losses = 0
for X, y in trainloader:
X, y = X.to(device), y.to(device)
optimizer.zero_grad() # Clear gradients w.r.t. parameters
y_hat = model(X.reshape(X.shape[0], -1))
loss = criterion(y_hat, y) # Calculate loss
loss.backward() # Getting gradients w.r.t. parameters
optimizer.step() # Update parameters
losses += loss.item() # Add loss for this batch to running total
train_loss.append(losses / len(trainloader))
# Validation
model.eval()
valid_losses = 0
with torch.no_grad():
for X, y in validloader:
X, y = X.to(device), y.to(device)
y_hat = model(X)
loss = criterion(y_hat, y)
valid_losses += loss.item()
valid_loss.append(valid_losses / len(validloader))
model.train()
if verbose:
print(f"Epoch: {epoch + 1}, "
f"Train loss: {losses / len(trainloader):.2f}, "
f"Valid loss: {valid_losses / len(validloader):.2f}")
results = {"train_loss": train_loss,
"valid_loss": valid_loss}
return results
torch.manual_seed(2022)
model = NeuralDecoder(input_size=297, output_size=3_000)
model.to(device);
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
print(time.strftime("%H:%M:%S", time.localtime()))
trainer(model, criterion, optimizer, train_dataloader, valid_dataloader, verbose=True)
print(time.strftime("%H:%M:%S", time.localtime()))
torch.save(model.state_dict(), "trained_nn.pt")
y_hat = model(torch.Tensor(X_test).to(device))
y_hat = y_hat.cpu().detach().numpy()
rmse = np.sqrt(np.mean((arm_test - y_hat)**2))
print(rmse)
good_examples = 0
bad_examples = 0
ax_good = plt.subplot(121)
ax_bad = plt.subplot(122)
for X, y in valid_dataloader:
X, y = X.to(device), y.to(device)
prediction = model(X)
y = y.cpu().detach().numpy()
prediction = prediction.cpu().detach().numpy()
    # Collect up to 30 good and 30 bad examples across batches.
    # (A `while` loop here would replot the same batch over and over.)
    for i in range(X.shape[0]):
        rmse = np.sqrt(np.mean((prediction[i, :] - y[i, :])**2))
        if rmse < 5 and good_examples < 30:
            good_examples += 1
            ax_good.plot(y[i, :1000], y[i, 1000:2000], color="r")
            ax_good.plot(prediction[i, :1000], prediction[i, 1000:2000], color="b")
        elif rmse > 30 and bad_examples < 30:
            bad_examples += 1
            ax_bad.plot(y[i, :1000], y[i, 1000:2000], color="r")
            ax_bad.plot(prediction[i, :1000], prediction[i, 1000:2000], color="b")
    if good_examples >= 30 and bad_examples >= 30:
        break
ax_good.title.set_text("Good predictions")
ax_bad.title.set_text("Bad predictions")
ax_good.set_xlim([-150, 150])
ax_good.set_ylim([-100, 100])
ax_bad.set_xlim([-150, 150])
ax_bad.set_ylim([-100, 100])
plt.show()
```
<a href="https://colab.research.google.com/github/AutoViML/Auto_ViML/blob/master/Auto_ViML_Demo.ipynb" target="_parent">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
```
import pandas as pd
datapath = 'https://web.stanford.edu/class/archive/cs/cs109/cs109.1166/stuff/'
#### THIS SHOULD print Version Number. If it doesn't, it means you don't have latest version ##
### If you want to see the sitepackages version use this
from autoviml.Auto_ViML import Auto_ViML
df = pd.read_csv(datapath+'titanic.csv')
target = 'Survived'
num = int(0.9*df.shape[0])
train = df[:num]
test = df[num:]
#test = pd.read_csv(datapath+'test.csv')
print(train.shape)
print(test.shape)
print(train.head())
sample_submission=''
scoring_parameter = 'balanced-accuracy'
#### If Boosting_Flag = True => XGBoost, False=>ExtraTrees, None=>Linear Model
m, feats, trainm, testm = Auto_ViML(train, target, test, sample_submission,
scoring_parameter=scoring_parameter,
hyper_param='GS',feature_reduction=True,
Boosting_Flag=True,Binning_Flag=False,
Add_Poly=0, Stacking_Flag=False,
Imbalanced_Flag=False,
verbose=1)
def reverse_dict(map_dict):
return dict([(v,k) for (k,v) in map_dict.items()])
# Use this to Test Classification Problems Only ####
ret_dict = {0: 0, 1: 1}
map_dict = reverse_dict(ret_dict)
m_thresh = 0.21
modelname='XGBoost'
#####################################################################
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.metrics import balanced_accuracy_score
try:
print('Normal Balanced Accuracy = %0.2f%%' %(
100*balanced_accuracy_score(test[target].map(map_dict).values, (
testm[target+'_proba_'+'1']>0.5).astype(int).values)))
print('Test results since target variable is present in test data:')
print(confusion_matrix(test[target].map(map_dict).values, (
testm[target+'_proba_'+'1']>0.5).astype(int).values))
print(classification_report(test[target].map(map_dict).values, (
testm[target+'_proba_'+'1']>0.5).astype(int).values))
print('Modified Threshold Balanced Accuracy = %0.2f%%' %(
100*balanced_accuracy_score(test[target].map(map_dict).values, (
testm[target+'_proba_'+'1']>m_thresh).astype(int).values)))
print(confusion_matrix(test[target].map(map_dict).values, (
testm[target+'_proba_'+'1']>m_thresh).astype(int).values))
print(classification_report(test[target].map(map_dict).values, (
testm[target+'_proba_'+'1']>m_thresh).astype(int).values))
except:
print('No target variable present in test data. No results')
```
**[Introduction to Machine Learning Home Page](https://www.kaggle.com/learn/intro-to-machine-learning)**
---
## Recap
Here's the code you've written so far.
```
# code you have previously used
# load data
import pandas as pd
iowa_file_path = '../input/home-data-for-ml-course/train.csv'
home_data = pd.read_csv(iowa_file_path)
# create target object and call it y
y = home_data['SalePrice']
# create X
features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']
X = home_data[features]
# split into validation and training data
from sklearn.model_selection import train_test_split
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
# specify Model
from sklearn.tree import DecisionTreeRegressor
iowa_model = DecisionTreeRegressor(random_state=1)
# fit Model
iowa_model.fit(train_X, train_y)
# make validation predictions
val_predictions = iowa_model.predict(val_X)
# calculate mean absolute error
from sklearn.metrics import mean_absolute_error
val_mae = mean_absolute_error(val_y, val_predictions)
print(f"Validation MAE when not specifying max_leaf_nodes: {val_mae:,.0f}")
# print("Validation MAE when not specifying max_leaf_nodes: {:,.0f}".format(val_mae))
# using best value for max_leaf_nodes
iowa_model = DecisionTreeRegressor(max_leaf_nodes=100, random_state=1)
iowa_model.fit(train_X, train_y)
val_predictions = iowa_model.predict(val_X)
val_mae = mean_absolute_error(val_y, val_predictions)
print(f"Validation MAE for best value of max_leaf_nodes: {val_mae:,.0f}")
# set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.machine_learning.ex6 import *
print("\nSetup complete")
```
# Exercises
Data science isn't always this easy. But replacing the decision tree with a Random Forest is going to be an easy win.
## Step 1: Use a Random Forest
```
from sklearn.ensemble import RandomForestRegressor
# specify model. set random_state to 1
rf_model = RandomForestRegressor(random_state=1)
# fit model
rf_model.fit(train_X, train_y)
# calculate the mean absolute error of your Random Forest model on the validation data
val_ft_predictions = rf_model.predict(val_X)
rf_val_mae = mean_absolute_error(val_y, val_ft_predictions)
print(f"Validation MAE for Random Forest Model: {rf_val_mae}")
# Check your answer
step_1.check()
# The lines below will show you a hint or the solution.
# step_1.hint()
# step_1.solution()
```
So far, you have followed specific instructions at each step of your project. This helped you learn key ideas and build your first model, but now you know enough to try things on your own.
Machine Learning competitions are a great way to try your own ideas and learn more as you independently navigate a machine learning project.
# Keep Going
You are ready for **[Machine Learning Competitions](https://www.kaggle.com/kernels/fork/1259198).**
---
**[Introduction to Machine Learning Home Page](https://www.kaggle.com/learn/intro-to-machine-learning)**
*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum) to chat with other Learners.*
We will use this notebook to calculate and visualize statistics of our chess move dataset. This will allow us to better understand our limitations and help diagnose problems we may encounter down the road when training/defining our model.
```
import pdb
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def get_move_freqs(moves, sort=True):
freq_dict = {}
for move in moves:
if move not in freq_dict:
freq_dict[move] = 0
freq_dict[move] = freq_dict[move] + 1
tuples = [(w, c) for w, c in freq_dict.items()]
if sort:
tuples = sorted(tuples, key=lambda x: -x[1])
return (tuples, moves)
def plot_frequency(counts, move_limit=1000):
    # limit to the move_limit most frequent moves
    counts = counts[0:move_limit]
# from: http://stackoverflow.com/questions/30690619/python-histogram-using-matplotlib-on-top-words
moves = [x[0] for x in counts]
values = [int(x[1]) for x in counts]
bar = plt.bar(range(len(moves)), values, color='green', alpha=0.4)
plt.xlabel('Move Index')
plt.ylabel('Frequency')
plt.title('Move Frequency Chart')
plt.show()
def plot_uniq_over_count(moves, interval=0.01):
xs, ys = [], []
for i in range(0, len(moves), int(len(moves) * interval)):
chunk = moves[0:i]
uniq = list(set(chunk))
xs.append(len(chunk))
ys.append(len(uniq))
plt.plot(xs, ys)
plt.ticklabel_format(style='sci', axis='x', scilimits=(0, 0))
plt.xlabel('Moves')
plt.ylabel('Unique Moves')
plt.show()
def plot_game_lengths(game_lengths):
xs = [g[0] for g in game_lengths]
ys = [g[1] for g in game_lengths]
bar = plt.bar(xs, ys, color='blue', alpha=0.4)
plt.xlabel('Half-moves per game')
plt.ylabel('Frequency')
plt.title('Game Length')
plt.show()
def plot_repeat_states(moves):
uniq_states = {}
moves_in_game = ''
for move in moves:
moves_in_game = moves_in_game + ' ' + move
if moves_in_game not in uniq_states:
uniq_states[moves_in_game] = 0
uniq_states[moves_in_game] = uniq_states[moves_in_game] + 1
if is_game_over_move(move):
moves_in_game = ''
vals = []
d = {}
for state, count in sorted(uniq_states.items(), key=lambda x: (-x[1], x[0])):
vals.append((count, state))
# move_count = len(state.split())
# if move_count not in d:
# d[move_count] = 0
# d[move_count] = d[move_count] + 1
    counts_only = [c for c, s in vals]
    plt.plot(counts_only)
plt.xlim([0, 100])
plt.xlabel('Board State')
plt.ylabel('Frequency')
plt.title('Frequency of Board State')
plt.show()
# vals = [(length, count) for length, count in sorted(d.items(), key=lambda x: -x[0])]
# pdb.set_trace()
# plt.bar(vals)
# plt.xlim([0, 1000])
# plt.xlabel('Moves in State')
# plt.ylabel('Frequency')
# print('{} uniq board states'.format(len(list(uniq_states.keys()))))
def get_game_lengths(moves):
game_lengths = {}
total_games = 0
current_move = 1
for move in moves:
if is_game_over_move(move):
if current_move not in game_lengths:
game_lengths[current_move] = 0
game_lengths[current_move] = game_lengths[current_move] + 1
current_move = 1
total_games = total_games + 1
else:
current_move = current_move + 1
print(total_games)
return [(k, v) for k, v in game_lengths.items()], total_games
def is_game_over_move(move):
return move in ('0-1', '1-0', '1/2-1/2')
```
Load our concatenated moves data.
```
with open('../data/train_moves.txt', 'r') as f:
moves = f.read().split(' ')
print('{} moves loaded'.format(len(moves)))
counts, moves = get_move_freqs(moves)
game_lengths, total_games = get_game_lengths(moves)
# plot_repeat_states(moves)
```
## Plot Move Frequency
Here we can see which moves appear most frequently in the dataset. These moves are the most popular moves played by chess champions.
```
plot_frequency(counts)
```
We will list the most common few moves along with what percentage of the entire moves dataset this move represents.
```
top_n = 10
for w in counts[0:top_n]:
print((w[0]).ljust(8), '{:.2f}%'.format((w[1]/len(moves)) * 100.00))
```
## Plot Unique Moves
Here we compare the number of unique moves over the total move count. Take notice that the number of unique moves converges towards a constant as the number of total moves increase. This would suggest that there is a subset of all possible moves that actually make sense for a chess champion to play.
```
plot_uniq_over_count(moves)
```
## Plot Game Lengths
```
plot_game_lengths(game_lengths)
top_n = 10
sorted_lengths = sorted(game_lengths, key=lambda x: -x[1])
for l in sorted_lengths[0:top_n]:
print((str(l[0])).ljust(8), '{:.3f}%'.format((l[1]/total_games) * 100.00))
```
<a href="https://colab.research.google.com/github/misabhishek/gcp-iam-recommender/blob/main/iam_recommender_basics.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Before you begin
1. Have a GCP project ready.
2. [Enable IAM Recommender](https://console.cloud.google.com/flows/enableapi?apiid=recommender.googleapis.com) APIs for the project.
### Provide your credentials to the runtime
```
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
```
## Understand GCP IAM Recommender
**Declare the Cloud project ID which will be used throughout this notebook**
```
project_id = "Enter-your-project"
```
**A helper function to execute `gcloud` commands**
```
import json
import subprocess
def execute_command(command):
return json.loads(subprocess.check_output(filter(lambda x: x, command.split(" "))).decode("utf-8"))
recommender_command = f"""gcloud recommender recommendations list \
--location=global \
--recommender=google.iam.policy.Recommender \
--project={project_id} \
--format=json
"""
recommendations = execute_command(recommender_command)
recommendations[7]
```
### Getting insight for the recommendations
```
insight_command = f"""gcloud recommender insights list \
--project={project_id} \
--location=global \
--insight-type=google.iam.policy.Insight \
--format=json
"""
insights = execute_command(insight_command)
insights[0]
```
# Generate diff view
```
recommendation_name = "Enter-the-recommendation-name"
#@title A helper to generate diff view. It uses IAM roles api also.
import pandas as pd
def generate_diff_view(recommendation_name):
role_to_permission_command = "gcloud iam roles describe {} --format=json"
recommendation = [r for r in recommendations if r["name"] == recommendation_name][0]
insight_name = recommendation["associatedInsights"][0]["insight"]
added_roles = []
removed_role = []
for op in recommendation["content"]["operationGroups"][0]["operations"]:
if op["action"] == "add":
added_roles.append(op["pathFilters"]["/iamPolicy/bindings/*/role"])
if op["action"] == "remove":
removed_role.append(op["pathFilters"]["/iamPolicy/bindings/*/role"])
cur_permissions = set(execute_command(
role_to_permission_command.format(removed_role[0]))["includedPermissions"])
recommended_permisisons = set()
for r in added_roles:
recommended_permisisons.update(execute_command(
role_to_permission_command.format(r))["includedPermissions"])
removed_permisisons = cur_permissions - recommended_permisisons
insight = [insight for insight in insights
if insight["name"] == insight_name][0]
used_permissions = set(k["permission"] for k in
insight["content"]["exercisedPermissions"])
inferred_permissions = set(k["permission"] for k in
insight["content"]["inferredPermissions"])
unused_but_still_common_permissions = (recommended_permisisons - used_permissions
- inferred_permissions)
types = (["used"] * len(used_permissions)
+ ["ml-inferred"] * len(inferred_permissions)
+ ["common"] * len(unused_but_still_common_permissions)
+ ["removed"] * len(removed_permisisons))
permissions = [*used_permissions, *inferred_permissions,
*unused_but_still_common_permissions, *removed_permisisons]
return pd.DataFrame({"type": types, "permission": permissions})
diff_view = generate_diff_view(recommendation_name)
diff_view
diff_view["type"].value_counts()
```
```
from presidio_analyzer import AnalyzerEngine, PatternRecognizer
from presidio_anonymizer import AnonymizerEngine
from presidio_anonymizer.entities import AnonymizerConfig
```
# Analyze Text for PII Entities
<br>Using Presidio Analyzer, analyze a text to identify PII entities.
<br>The Presidio analyzer is using pre-defined entity recognizers, and offers the option to create custom recognizers.
<br>The following code sample will:
<ol>
<li>Set up the Analyzer engine - load the NLP module (spaCy model by default) and other PII recognizers</li>
<li> Call analyzer to get analyzed results for "PHONE_NUMBER" entity type</li>
</ol>
```
text_to_anonymize = "His name is Mr. Jones and his phone number is 212-555-5555"
analyzer = AnalyzerEngine()
analyzer_results = analyzer.analyze(text=text_to_anonymize, entities=["PHONE_NUMBER"], language='en')
print(analyzer_results)
```
# Create Custom PII Entity Recognizers
<br>Presidio Analyzer comes with a pre-defined set of entity recognizers. It also allows adding new recognizers without changing the analyzer base code,
<b>by creating custom recognizers.</b>
<br>In the following example, we will create two new recognizers of type `PatternRecognizer` to identify titles and pronouns in the analyzed text.
<br>A `PatternRecognizer` is a PII entity recognizer which uses regular expressions or deny-lists.
<br>The following code sample will:
<ol>
<li>Create custom recognizers</li>
<li>Add the new custom recognizers to the analyzer</li>
<li>Call analyzer to get results from the new recognizers</li>
</ol>
```
titles_recognizer = PatternRecognizer(supported_entity="TITLE",
deny_list=["Mr.","Mrs.","Miss"])
pronoun_recognizer = PatternRecognizer(supported_entity="PRONOUN",
                                       deny_list=["he", "He", "his", "His", "she", "She", "hers", "Hers"])
analyzer.registry.add_recognizer(titles_recognizer)
analyzer.registry.add_recognizer(pronoun_recognizer)
analyzer_results = analyzer.analyze(text=text_to_anonymize,
entities=["TITLE", "PRONOUN"],
language="en")
print(analyzer_results)
```
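The example above uses the deny-list path of `PatternRecognizer`; its other path is regular expressions. The regex itself can be sketched with plain `re` (the phone pattern below is a hypothetical illustration, not Presidio's built-in one):

```python
import re

# A hypothetical US-phone regex of the kind a pattern-based recognizer would wrap
us_phone = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")
text = "His name is Mr. Jones and his phone number is 212-555-5555"
print([m.group() for m in us_phone.finditer(text)])  # ['212-555-5555']
```

In Presidio, a pattern like this would be supplied to a `PatternRecognizer` together with a confidence score instead of a deny-list.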
Call Presidio Analyzer and get analyzed results with all the configured recognizers - default and new custom recognizers
```
analyzer_results = analyzer.analyze(text=text_to_anonymize, language='en')
analyzer_results
```
# Anonymize Text with Identified PII Entities
<br>Presidio Anonymizer iterates over the Presidio Analyzer result, and provides anonymization capabilities for the identified text.
<br>The anonymizer provides 5 types of anonymizers - replace, redact, mask, hash and encrypt. The default is **replace**
<br>The following code sample will:
<ol>
<li>Setup the anonymizer engine </li>
<li>Create an anonymizer request - text to anonymize, list of anonymizers to apply and the results from the analyzer request</li>
<li>Anonymize the text</li>
</ol>
```
anonymizer = AnonymizerEngine()
anonymized_results = anonymizer.anonymize(
text=text_to_anonymize,
analyzer_results=analyzer_results,
anonymizers_config={"DEFAULT": AnonymizerConfig("replace", {"new_value": "<ANONYMIZED>"}),
"PHONE_NUMBER": AnonymizerConfig("mask", {"type": "mask", "masking_char" : "*", "chars_to_mask" : 12, "from_end" : True}),
"TITLE": AnonymizerConfig("redact", {})}
)
print(anonymized_results)
```
# ML/DL techniques for Tabular Modeling PART I
> In this part, I have explained Decision Trees.
- toc: true
- badges: true
- comments: true
```
#hide
# !pip install -Uqq fastbook
import fastbook
fastbook.setup_book()
#hide
from fastbook import *
from kaggle import api
from pandas.api.types import is_string_dtype, is_numeric_dtype, is_categorical_dtype
from fastai.tabular.all import *
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from dtreeviz.trees import *
from IPython.display import Image, display_svg, SVG
import numpy as np
import matplotlib.pyplot as plt
pd.options.display.max_rows = 20
pd.options.display.max_columns = 8
#hide
# api.competition_download_cli('bluebook-for-bulldozers')
# file_extract('bluebook-for-bulldozers.zip')
df = pd.read_csv('/home/nitish/Downloads/bluebook-bulldozers/TrainAndValid.csv', low_memory=False)
```
## Introduction
Tabular modelling takes data in the form of a table, where we generally want to predict one column's value from the values of all the other columns. The column we want to predict is known as the dependent variable and the others are known as independent variables. The task can be either classification or regression. We will look into various machine learning models such as decision trees and random forests, and also see what deep learning has to offer for tabular modeling.
## Dataset
I will be using [Kaggle competition](https://www.kaggle.com/c/bluebook-for-bulldozers) dataset on all the models so that it will be easier to understand and compare different models. I have loaded it into a dataframe df.
```
df.head()
```
The key fields in train.csv are:
- SalesID: the unique identifier of the sale
- MachineID: the unique identifier of a machine. A machine can be sold multiple times
- saleprice: what the machine sold for at auction (only provided in train.csv)
- saledate: the date of the sale
For this competition, we need to predict the log of the sale price of bulldozers sold at auctions. We will try to build different ML and DL models to predict $\log(\text{sale price})$.
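As a sketch of the target transform (the prices below are made-up examples, not rows from the dataset):

```python
import numpy as np

# Made-up sale prices; the competition is scored on RMSLE, hence log targets.
prices = np.array([10_000.0, 50_000.0, 142_000.0])
log_price = np.log(prices)
print(np.round(log_price, 2))
```

Working in log space means the model is penalized for relative errors rather than absolute dollar errors, which is what the competition metric measures.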
## Decision Trees
A decision tree makes a split in data based on the values of a column. For example, suppose we have data for different persons for their age, whether they eat healthy, whether they exercise, etc, and want to predict whether they are fit or unfit based on the data then we can use the following decision tree.

At each level, the data is divided into 2 groups for the next level, e.g. at the first level, whether age<30 or not divides the whole dataset into 2 smaller datasets, and similarly the data is split again until we reach a leaf node of one of the 2 classes: FIT or UNFIT.
In the real world, data is way more complex and contains a lot of columns; e.g., our dataframe df has 53 columns. So the question arises: which column to choose for each split, and at what value should it be split? The answer is to try every column and every value present in a column for the split. So if there are n columns and each column has x different values, then we need to try n\*x splits and choose the best one by some criterion. When trying a split, the whole data is divided into 2 groups for that level, so we can take the average sale price of a group as the predicted sale price for all the rows in that group, and calculate the rmse between predictions and actual sale prices. This gives us a number, our loss value, which, if large, tells us our predictions are far from the actual sale prices, and vice-versa. So the algorithm for building a decision tree can be written as:
1. Loop through all the columns in the training dataset.
1. Loop through all the possible values for a column. If the column contains categorical data then choose the condition as "equal to" a category and "not equal to" a category. If the column contains continuous data then for all the distinct values split on "less than or equal to" and "greater than" the value.
1. Find the average sale price for each of the groups, this is our prediction. Calculate rmse from the actual values of the saleprice.
1. The rmse of a split could be set as the sum of rmse for all groups after the split.
1. After looping through all the columns and all possible splits for each column, choose the split with the least rmse.
1. Continue the same process recursively on the child groups until some stopping criteria are reached like maximum number of the leaf nodes, minimum number of data items per group, etc.
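The steps above can be sketched as a minimal, from-scratch best-split search (the toy feature and targets below are made up for illustration, not taken from the bulldozers data):

```python
import numpy as np

def best_split(X, y):
    """Try every column and every threshold; return the split with lowest summed RMSE."""
    best_col, best_thresh, best_score = None, None, float("inf")
    for col in range(X.shape[1]):
        for thresh in np.unique(X[:, col]):
            mask = X[:, col] <= thresh
            left, right = y[mask], y[~mask]
            if len(left) == 0 or len(right) == 0:
                continue
            # Predict each group's mean; score the split by the sum of per-group RMSEs.
            score = sum(np.sqrt(np.mean((g - g.mean()) ** 2)) for g in (left, right))
            if score < best_score:
                best_col, best_thresh, best_score = col, thresh, score
    return best_col, best_thresh, best_score

# Toy data: one feature with an obvious break between low and high values.
X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y = np.array([1.0, 1.2, 0.9, 5.0, 5.1, 4.9])
col, thresh, score = best_split(X, y)
print(col, thresh)  # splits the low-valued rows from the high-valued ones
```

A real library applies this search recursively to each child group until the stopping criteria are met, which is exactly step 6 above.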
Below is given an example of a decision tree. In the root node, the value is simply the average sale price over the whole training dataset; the simplest possible model would be to predict 10.1 for every data point. Mean Square Error (mse) is 0.48, and there is a total of 404710 samples, which is the total number of samples in the training dataset.
Now for the split, it has tried all the columns and all the possible values for each column, and came up with the $Coupler\_System \leq 0.5$ split. This splits the whole dataset into two smaller datasets. When the condition is True it results in 360847 samples, with mse of 0.42 and an average sale price of 10.21. When the condition is False it results in 43863 samples, with mse of 0.12 and an average sale price of 9.21. It can be seen that this split has improved our prediction, and our model has learnt some pattern, because the weighted average mse is now (360847 * 0.42 + 43863 * 0.12)/404710 ≈ 0.39 < 0.48.
Similarly, splitting the "True condition child" on $YearMade \leq 0.42$ further decreases the mse, which means our predictions move even closer to the actual values.

## Overfitting and Underfitting in decision trees
Underfitting in decision trees will be when we make very few splits, or no splits at all; e.g., in the root node the average value is 10.1, and if we use this value as the prediction, then it's clearly a naive solution to a complex problem, and therefore underfitting. This is the case of high bias and low variance.
Overfitting will be when there are way too many splits, such that in the extreme case there is one training sample per leaf node, which means the model has memorized the training dataset. It is overfitting because although the mse will be 0 for the training dataset, it will be very high for the validation dataset, as the model will fail to generalize to unseen data points. This is the case of low bias and high variance.
Data Generation step:
```
import random
import numpy as np

# 110 evenly spaced points with y = x + Gaussian noise
x = np.linspace(0, 10, 110)
y = x + np.random.randn(110)

# randomly mark 80 points for training and 30 for validation
my_list = [0]*30 + [1]*80
random.shuffle(my_list)
my_list = [i == 1 for i in my_list]
tr_x, tr_y = x[np.where(my_list)[0]], y[np.where(my_list)[0]]
my_list = [not elem for elem in my_list]
val_x, val_y = x[np.where(my_list)[0]], y[np.where(my_list)[0]]

# sklearn expects 2-d feature arrays
tr_x = tr_x.reshape(tr_x.shape[0], 1)
val_x = val_x.reshape(val_x.shape[0], 1)
```
### Underfitting Case
In the underfitting case, I have set max_leaf_nodes=2, so that bias will be high.
```
m = DecisionTreeRegressor(max_leaf_nodes=2)
m.fit(tr_x, tr_y);
fig, ax = plt.subplots(figsize=(16,8))
ax.scatter(x,y, marker='+', label='actual data')
ax.scatter(tr_x, m.predict(tr_x), label='predicted data on training dataset')
ax.scatter(val_x, m.predict(val_x), label='predicted data on validation dataset')
ax.xaxis.set_major_locator(mpl.ticker.MultipleLocator(1))
ax.grid(which='major', axis='both', linestyle=':', linewidth = 1, color='b')
ax.set_xlabel("x", labelpad=5, fontsize=26, fontname='serif', color="blue")
ax.set_ylabel("y", labelpad=5, fontsize=26, fontname='serif', color="blue")
ax.legend(prop={"size":15})
```
In the above example, I have generated a dataset with $y = x + \epsilon$, where $\epsilon \sim N(0,1)$ is random noise. I generated this data because it is 2-d, much simpler, and easier to visualize than the complex Kaggle dataset.
A decision tree is fitted to learn the relationship between x and y and predict y from x. The training data is 80 samples randomly chosen from the 110, and the remaining 30 form the validation data. The stopping criterion here is max_leaf_nodes = 2. In the above figure, the orange points are the predictions on training samples and the green ones on validation samples.
```
print(f'Training rmse is {m_rmse(m, tr_x, tr_y)}, and validation rmse is {m_rmse(m, val_x, val_y)}')
draw_tree(m, pd.DataFrame(tr_x, columns=['x']))  # feature DataFrame so draw_tree can label the split variable
```
### Overfitting Case
In the overfitting case, I have set max_leaf_nodes=100. Since there are only 80 training samples, this leads to a huge decision tree in which each leaf node contains a single training example. Therefore, the training bias will be zero, the variance will be high, and the model overfits.
```
m = DecisionTreeRegressor(max_leaf_nodes=100)
m.fit(tr_x, tr_y);
fig, ax = plt.subplots(figsize=(16,8))
ax.scatter(x,y, marker='+', label='actual data')
ax.scatter(tr_x, m.predict(tr_x), label='predicted data on training dataset')
ax.scatter(val_x, m.predict(val_x), label='predicted data on validation dataset')
ax.xaxis.set_major_locator(mpl.ticker.MultipleLocator(1))
ax.grid(which='major', axis='both', linestyle=':', linewidth = 1, color='b')
ax.set_xlabel("x", labelpad=5, fontsize=26, fontname='serif', color="blue")
ax.set_ylabel("y", labelpad=5, fontsize=26, fontname='serif', color="blue")
ax.legend(prop={"size":15})
print(f'Training rmse is {m_rmse(m, tr_x, tr_y)}, and validation rmse is {m_rmse(m, val_x, val_y)}')
draw_tree(m, pd.DataFrame(tr_x, columns=['x']))
```
### Balanced Case
In the balanced case, I have set max_leaf_nodes=10. This leads to a decision tree with better generalization power than both cases above, which can be confirmed by comparing the training and validation losses.
```
m = DecisionTreeRegressor(max_leaf_nodes=10)
m.fit(tr_x, tr_y);
fig, ax = plt.subplots(figsize=(16,8))
ax.scatter(x,y, marker='+', label='actual data')
ax.scatter(tr_x, m.predict(tr_x), label='predicted data on training dataset')
ax.scatter(val_x, m.predict(val_x), label='predicted data on validation dataset')
ax.xaxis.set_major_locator(mpl.ticker.MultipleLocator(1))
ax.grid(which='major', axis='both', linestyle=':', linewidth = 1, color='b')
ax.set_xlabel("x", labelpad=5, fontsize=26, fontname='serif', color="blue")
ax.set_ylabel("y", labelpad=5, fontsize=26, fontname='serif', color="blue")
ax.legend(prop={"size":15})
#hide
# helper metrics used throughout: root mean squared error on raw predictions
def rmse(preds, target):
    return np.sqrt(np.mean((preds-target)**2))
def m_rmse(m, x, y):
    return rmse(m.predict(x), y)
```
It seems like a good fit because it's neither overfitting nor underfitting.
Below is the training and validation losses and complete decision tree as generated by the algorithm.
```
print(f'Training rmse is {m_rmse(m, tr_x, tr_y)}, and validation rmse is {m_rmse(m, val_x, val_y)}')
draw_tree(m, pd.DataFrame(tr_x, columns=['x']))
```
## Extrapolation problem
The decision tree suffers from a serious drawback when predicting on data outside the domain of the training dataset. Suppose we split the dataset into training and validation sets by putting the first 80 datapoints into training and the remaining 30 into validation, like:
```python
tr_x, tr_y = x[:80], y[:80]
val_x, val_y = x[80:], y[80:]
```
```
m = DecisionTreeRegressor(max_leaf_nodes=10)
tr_x, tr_y = x[:80], y[:80]
val_x, val_y = x[80:], y[80:]
tr_x = tr_x.reshape(80,1)
val_x = val_x.reshape(30,1)
m.fit(tr_x, tr_y);
fig, ax = plt.subplots(figsize=(16,8))
ax.scatter(x,y, marker='+', label='actual data')
ax.scatter(tr_x, m.predict(tr_x), label='predicted data on training dataset')
ax.scatter(val_x, m.predict(val_x), label='predicted data on validation dataset')
ax.xaxis.set_major_locator(mpl.ticker.MultipleLocator(1))
ax.grid(which='major', axis='both', linestyle=':', linewidth = 1, color='b')
ax.set_xlabel("x", labelpad=5, fontsize=26, fontname='serif', color="blue")
ax.set_ylabel("y", labelpad=5, fontsize=26, fontname='serif', color="blue")
ax.legend(prop={"size":15})
```
In the above figure, the validation data lies in the range $x > 7.2$ or so, while the training data only contains datapoints in the range $0 \leq x \leq 7.2$. The validation data is therefore outside the training domain, and the decision tree extrapolates poorly: it can only predict the constant value of its right-most leaf. Because of this problem, and also because of high variance in its predictions, a single decision tree is rarely used in practice. High variance means that a small perturbation of the training data can change the decision tree completely, so the predictions vary a lot. A linear regression model has much lower variance than a decision tree, but its bias is also higher.
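This extrapolation failure is easy to reproduce in isolation. The sketch below is illustrative (it regenerates similar data with a fixed seed rather than reusing the notebook's variables) and contrasts a decision tree with a linear model:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 110)
y = x + rng.normal(size=110)

tr_x, tr_y = x[:80].reshape(-1, 1), y[:80]  # training: roughly 0 <= x <= 7.2
val_x = x[80:].reshape(-1, 1)               # validation: x > 7.2

tree = DecisionTreeRegressor(max_leaf_nodes=10).fit(tr_x, tr_y)
lin = LinearRegression().fit(tr_x, tr_y)

# Every out-of-range input falls into the tree's right-most leaf, so the tree
# predicts a single constant; the linear model keeps following the learned trend.
print(np.unique(tree.predict(val_x)).size)  # 1
print(lin.predict(np.array([[10.0]])))      # close to 10
```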
## Conclusion
We have covered the most basic ML method for tabular data modeling. In the next parts, I will cover Random Forests and some DL methods. Also, there is little point in training a single decision tree on the Kaggle dataset: the data is complex, a lone tree would give poor results, and it needs more sophisticated algorithms.
|
github_jupyter
|
# Visualization principles
1. Log scale
2. Jitter
3. Set the scale
4. Text on plot
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib as mpl
```
## Plotting binary variables
Not directly connected to today's lesson, but many of you asked.
Let's look at a case where we have two binary variables: 'sex' and 'survived'.
```
titanic = sns.load_dataset("titanic")
titanic.head()
titanic.info()
```
Use a barplot (two variables) or a countplot (one variable)
```
sns.barplot(x="sex", y="survived", hue="class", data=titanic)
plt.show()
sns.countplot(x="sex", hue="class", data=titanic)
plt.show()
```
Or use a catplot for categorical data:
```
sns.catplot(x="sex", y="survived", hue="class", kind="bar", data=titanic)
plt.show()
sns.catplot(x="sex", hue="class", kind="count", data=titanic)
plt.show()
```
## Log scale
```
diamonds = sns.load_dataset("diamonds")
diamonds.head()
sns.histplot(diamonds.price[diamonds.cut == 'Ideal'])
```
##### One option:
```
sns.histplot(diamonds.price[diamonds.cut == 'Ideal'], log_scale = True)
```
##### Another option:
```
sns.histplot(np.log2(diamonds.price[diamonds.cut == 'Ideal']))
```
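Both options give the same histogram shape; the difference is the axis: `log_scale=True` keeps the tick labels in original dollar units on a log-spaced axis, while `np.log2` transforms the data so the axis shows log values. The defining property of a log scale, equal steps for equal ratios, can be checked directly:

```python
import numpy as np

prices = np.array([500, 1000, 2000, 4000])  # each price doubles
print(np.log2(prices))
print(np.diff(np.log2(prices)))  # each doubling adds exactly 1 on the log2 scale
```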
### Stack histogram:
```
sns.histplot(
diamonds,
x="price", hue="cut",
multiple="stack",
)
sns.set_theme(style="ticks")
f, ax = plt.subplots(figsize=(7, 5))
sns.despine(f)
sns.histplot(
diamonds,
x="price", hue="cut",
multiple="stack",
palette="colorblind",
edgecolor=".3",
linewidth=.5,
log_scale = True
)
ax.xaxis.set_major_formatter(mpl.ticker.ScalarFormatter())
ax.set_xticks([500, 1000, 2000, 5000, 10000])
plt.show()
```
## Jitter in python
Google it: [Jitter in python](https://www.google.com/search?q=jitter+in+python&sxsrf=ALeKk01NFy18kBeX8CmyToZAT-l4YIlJeQ%3A1621252840686&ei=6FqiYPSmKYzdkwXckaGgCw&oq=jitter&gs_lcp=Cgdnd3Mtd2l6EAMYADIECCMQJzIFCAAQkQIyBQgAEMsBMgUIABDLATICCAAyAggAMgUIABDLATICCAAyAggAMgIIADoECAAQQzoFCAAQsQM6CAgAELEDEJECOggILhCxAxCDAToFCC4QsQM6BwgAEIcCEBQ6AgguUJ8gWIcuYJg1aAFwAngAgAGdAYgB1giSAQMwLjiYAQCgAQGqAQdnd3Mtd2l6wAEB&sclient=gws-wiz)
Documentation contains such a good example we'll just [follow it](https://seaborn.pydata.org/generated/seaborn.stripplot.html)
```
tips = sns.load_dataset("tips")
tips.head()
sns.stripplot(x="day", y="total_bill", data=tips)
```
Use a smaller amount of jitter:
```
sns.stripplot(x="day", y="total_bill", data=tips, jitter=0.05)
```
Jitter plus a boxplot:
```
ax = sns.boxplot(x="tip", y="day", data=tips)
ax = sns.stripplot(x="tip", y="day", data=tips, color=".3")
```
## Set the scale
Google it: [set scale seaborn](https://www.google.com/search?q=set+scale+seaborn&sxsrf=ALeKk02NiH79RWrRRXIqusuG-vHfuyIm2A%3A1621254123926&ei=61-iYOiGOMyxkwWAiZjICQ&oq=set+scale+sea&gs_lcp=Cgdnd3Mtd2l6EAMYADIECCMQJzIGCAAQFhAeMgYIABAWEB4yBggAEBYQHjoHCCMQsAMQJzoHCAAQRxCwAzoCCAA6BQghEKABOggIABAIEA0QHlC8EVjyJmCULGgEcAJ4AIABogGIAbgHkgEDMC43mAEAoAEBqgEHZ3dzLXdpesgBCcABAQ&sclient=gws-wiz)
##### One option:
```
ax = sns.stripplot(x="day", y="total_bill", data=tips, jitter=0.05)
ax.set(ylim=(0, 100))
```
##### Another option:
```
plt.ylim(0, 400)
ax = sns.stripplot(x="day", y="total_bill", data=tips, jitter=0.05)
```
## Add labels onto the plot
Google it: [add text to plot seaborn](https://www.google.com/search?q=add+text+to+plot+seaborn&sxsrf=ALeKk01vym2w-SfYoAOBXBgUbDCr0I04Uw%3A1621255993821&ei=OWeiYObWMdCTkwXRoIngCw&oq=add+text+to+plot+seaborn&gs_lcp=Cgdnd3Mtd2l6EAMyAggAMgYIABAWEB4yBggAEBYQHjoHCCMQsAMQJzoHCAAQRxCwAzoECAAQQzoGCAAQBxAeUJAcWKgzYJs1aAFwAngAgAGeAYgBmgqSAQQwLjEwmAEAoAEBqgEHZ3dzLXdpesgBCcABAQ&sclient=gws-wiz&ved=0ahUKEwim1-ec4dDwAhXQyaQKHVFQArwQ4dUDCA4&uact=5_)
```
penguins = sns.load_dataset("penguins")
penguins.head()
```
With a legend:
```
ax = sns.scatterplot(data=penguins, x="bill_length_mm", y="bill_depth_mm", hue = 'species', palette = 'colorblind')
```
Without a legend but with text:
```
ax = sns.scatterplot(data=penguins, x="bill_length_mm", y="bill_depth_mm", hue = 'species', palette = 'colorblind', legend=False)
style = dict(size=12, color='black')
ax.text(35, 15, "Adelie", **style)
ax.text(55, 20, "Chinstrap", **style)
ax.text(52, 14, "Gentoo", **style)
plt.show()
```
|
github_jupyter
|
This notebook will set up Colab so that you can run the SYCL blur lab for the module "Introduction to SYCL programming" created by the TOUCH project (https://github.com/TeachingUndergradsCHC/modules/tree/master/Programming/sycl). The initial setup instructions follow slides by Aksel Alpay
(https://www.iwocl.org/wp-content/uploads/iwocl-syclcon-2020-alpay-32-slides.pdf)
and the hipSYCL documentation (https://github.com/illuhad/hipSYCL/blob/develop/doc/installing.md).
Begin by setting your runtime to use a CPU (Select "Change runtime type" in the Runtime menu and choose "CPU".) Then run the first couple of instructions below. Run them one at a time, waiting for each to finish before beginning the next. This will take several minutes.
Update the repositories and then get and build LLVM so we can build hipSYCL.
```
!apt update -qq;
!apt-get update -qq;
!add-apt-repository -y ppa:ubuntu-toolchain-r/test
!apt update
!apt install gcc-11 g++-11
!bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)"
!apt-get install libboost-all-dev libclang-13-dev cmake python -qq;
!git clone --recurse-submodules https://github.com/illuhad/hipSYCL
!apt-get upgrade
```
Now build hipSYCL
```
!mkdir hipSYCL_build
%cd hipSYCL_build
# note: each `!` command runs in its own shell in Colab, so these exports do not
# persist; the compilers are therefore also passed to cmake explicitly below
!export CC=/usr/bin/gcc-11
!export CXX=/usr/bin/g++-11
!cmake -DCMAKE_INSTALL_PREFIX=/content/hipSYCL_install -DCMAKE_C_COMPILER=/usr/bin/gcc-11 -DCMAKE_CXX_COMPILER=/usr/bin/g++-11 /content/hipSYCL
!make install
%cd ..
```
Get the examples
```
!git clone https://github.com/TeachingUndergradsCHC/modules
%cd modules/Programming/sycl
```
Examine hello.cpp
```
!cat hello.cpp
```
Now compile hello.cpp
```
!/content/hipSYCL_install/bin/syclcc --hipsycl-platform=cpu -o hello hello.cpp
```
Then run it
```
!./hello
```
Now try the addVectors program. First, view it:
```
!cat addVectors.cpp
```
Then compile it
```
!/content/hipSYCL_install/bin/syclcc --hipsycl-platform=cpu -o addVectors addVectors.cpp
```
Finally run it
```
!./addVectors
```
Next, examine the files that you'll need for the blur project. These are the library code for managing bmp files (stb_image.h and stb_image_write.h), the image that you'll be using (I provide 640x426.bmp, but you could use another file instead) and the program itself noRed.cpp. Then compile it
```
!/content/hipSYCL_install/bin/syclcc --hipsycl-platform=cpu -o noRed noRed.cpp
```
Now run the code
```
!./noRed
```
Original Image
```
from IPython.display import display
from PIL import Image
path="/content/modules/Programming/sycl/640x426.bmp"
display(Image.open(path))
```
Final Image
```
from IPython.display import display
from PIL import Image
path="/content/modules/Programming/sycl/out.bmp"
display(Image.open(path))
```
|
github_jupyter
|
# Classification of Chest and Abdominal X-rays
Code Source: Lakhani, P., Gray, D.L., Pett, C.R. et al. J Digit Imaging (2018) 31: 283. https://doi.org/10.1007/s10278-018-0079-6
The code to download and prepare dataset had been modified form the original source code.
```
# load requirements for the Keras library
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense, GlobalAveragePooling2D
from keras.models import Model
from keras.optimizers import Adam
!rm -rf /content/*
# Download dataset
!wget https://github.com/paras42/Hello_World_Deep_Learning/raw/9921a12c905c00a88898121d5dc538e3b524e520/Open_I_abd_vs_CXRs.zip
!ls /content
# unzip
!unzip /content/Open_I_abd_vs_CXRs.zip
# dimensions of our images
img_width, img_height = 299, 299
# directory and image information
train_data_dir = 'Open_I_abd_vs_CXRs/TRAIN/'
validation_data_dir = 'Open_I_abd_vs_CXRs/VAL/'
# epochs = number of passes of through training data
# batch_size = number of images processes at the same time
train_samples = 65
validation_samples = 10
epochs = 20
batch_size = 5
# build the Inception V3 network, using pretrained weights from ImageNet
# remove the top fully connected layers with include_top=False
base_model = applications.InceptionV3(weights='imagenet', include_top=False,
input_shape=(img_width, img_height,3))
# build a classifier model to put on top of the convolutional model
# This consists of a global average pooling layer and a fully connected layer with 256 nodes
# Then apply dropout and sigmoid activation
model_top = Sequential()
model_top.add(GlobalAveragePooling2D(input_shape=base_model.output_shape[1:],
data_format=None)),
model_top.add(Dense(256, activation='relu'))
model_top.add(Dropout(0.5))
model_top.add(Dense(1, activation='sigmoid'))
model = Model(inputs=base_model.input, outputs=model_top(base_model.output))
# Compile model using Adam optimizer with common values and binary cross entropy loss
# Use a low learning rate (lr) for transfer learning
model.compile(optimizer=Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0),
loss='binary_crossentropy',
metrics=['accuracy'])
# Some on-the-fly augmentation options
train_datagen = ImageDataGenerator(
rescale = 1./255, # Rescale pixel values to 0-1 to aid CNN processing
shear_range = 0.2, # 0-1 range for shearing
zoom_range = 0.2, # 0-1 range for zoom
rotation_range = 20, # 0-180 range, degrees of rotation
width_shift_range = 0.2, # 0-1 range horizontal translation
height_shift_range = 0.2, # 0-1 range vertical translation
horizontal_flip = True # set True or false
)
val_datagen = ImageDataGenerator(
rescale=1./255 # Rescale pixel values to 0-1 to aid CNN processing
)
# Directory, image size, and batch size already specified above
# Class mode is set to 'binary' for a 2-class problem
# Generator randomly shuffles and presents images in batches to the network
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='binary'
)
validation_generator = val_datagen.flow_from_directory(
validation_data_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='binary'
)
# Fine-tune the pretrained Inception V3 model using the data generator
# Specify steps per epoch (number of samples/batch_size)
history = model.fit_generator(
train_generator,
steps_per_epoch=train_samples//batch_size,
epochs=epochs,
validation_data=validation_generator,
validation_steps=validation_samples//batch_size
)
# import matplotlib library, and plot training curve
import matplotlib.pyplot as plt
print(history.history.keys())
plt.figure()
plt.plot(history.history['acc'],'orange', label='Training accuracy')
plt.plot(history.history['val_acc'],'blue', label='Validation accuracy')
plt.plot(history.history['loss'],'red', label='Training loss')
plt.plot(history.history['val_loss'],'green', label='Validation loss')
plt.legend()
plt.show()
# import numpy and keras preprocessing libraries
import numpy as np
from keras.preprocessing import image
# load, resize, and display test images
img_path = 'Open_I_abd_vs_CXRs/TEST/abd2.png'
img_path2 = 'Open_I_abd_vs_CXRs/TEST/chest2.png'
img = image.load_img(img_path, target_size=(img_width, img_height))
img2 = image.load_img(img_path2, target_size=(img_width, img_height))
plt.imshow(img)
plt.show()
# convert image to numpy array, so Keras can render a prediction
img = image.img_to_array(img)
# expand array from 3 dimensions (height, width, channels) to 4 dimensions (batch size, height, width, channels)
# rescale pixel values to 0-1
x = np.expand_dims(img, axis=0) * 1./255
# get prediction on test image
score = model.predict(x)
print('Predicted:', score, 'Chest X-ray' if score < 0.5 else 'Abd X-ray')
# display and render a prediction for the 2nd image
plt.imshow(img2)
plt.show()
img2 = image.img_to_array(img2)
x = np.expand_dims(img2, axis=0) * 1./255
score = model.predict(x)
print('Predicted:', score, 'Chest X-ray' if score < 0.5 else 'Abd X-ray')
```
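As a side note, the batch-dimension handling above (`np.expand_dims` before `model.predict`) can be checked in isolation; the zero array below is just a stand-in for the output of `image.img_to_array`:

```python
import numpy as np

img = np.zeros((299, 299, 3), dtype=np.float32)  # stand-in for a decoded 299x299 RGB image
x = np.expand_dims(img, axis=0) * 1. / 255       # add a batch dimension and rescale to 0-1
print(x.shape)  # (1, 299, 299, 3)
```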
|
github_jupyter
|
<a href="https://colab.research.google.com/github/hadisotudeh/zestyAI_challenge/blob/main/Zesty_AI_Data_Scientist_Assignment_%7C_Hadi_Sotudeh.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<center> <h1><b>Zesty AI Data Science Interview Task - Hadi Sotudeh</b></h1> </center>
To perform this task, I had access to the [`2009 RESIDENTIAL ENERGY CONSUMPTION SURVEY`](https://www.eia.gov/consumption/residential/data/2009/index.php?view=microdata) to predict `electricity consumption`.
</br>
</br>
Libraries available in Python such as `scikit-learn` and `fastai` were employed to perform this machine learning regression task.
</br>
</br>
First, I need to install the notebook dependencies, import the relevant libraries, download the dataset, and have them available in Google Colab (next cell).
## Install Dependencies, Import Libraries, and Download the dataset
```
%%capture
# install dependencies
!pip install fastai --upgrade
# Import Libraries
# general libraries
import warnings
import os
from datetime import datetime
from tqdm import tqdm_notebook as tqdm
# machine learning libraries
import pandas as pd
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
from fastai.tabular.all import *
from sklearn.ensemble import RandomForestRegressor
from pandas_profiling import ProfileReport
import joblib
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
# model interpretation library
from sklearn.inspection import plot_partial_dependence
%%capture
#download the dataset
! wget https://www.eia.gov/consumption/residential/data/2009/csv/recs2009_public.csv
```
## Set Global parameters
The electric consumption is located in the `KWH` field of the dataset.
```
#show plots inside the jupyter notebook
%matplotlib inline
# pandas settings to show more columns are rows in the jupyter notebook
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 50000)
# don't show warnings
warnings.filterwarnings('ignore')
# dataset file path
dataset = "recs2009_public.csv"
# target variable to predict
dep_var = "KWH"
```
## Read the dataset from CSV files, Perform Data Cleaning, and Feature Engineering
Following a typical machine learning project, I first clean up the dataset to prevent data-leakage related features or non-relevant features.</br></br>It is important to mention that I did not first look at each column to figure out which feature to keep or not. What I did first was to train a model and iteratively look at the feature importances and check their meanings in the dataset documentation to figure out what features to remove to prevent data leakage.</br></br>In addition, a group of features with high correlations were identified and only one of them in each group was kept.
```
# read the train file
df = pd.read_csv(dataset)
# remove data-leakage and non-relevant features
non_essential_features = ["KWHSPH","KWHCOL","KWHWTH","KWHRFG","KWHOTH","BTUEL","BTUELSPH","BTUELCOL",
"BTUELWTH","BTUELRFG","BTUELOTH","DOLLAREL","DOLELSPH","DOLELCOL","DOLELWTH",
"DOLELRFG","DOLELOTH","TOTALBTUOTH","TOTALBTURFG","TOTALDOL","ELWATER",
"TOTALBTUWTH","TOTALBTU","ELWARM","TOTALBTUCOL","TOTALDOLCOL",
"REPORTABLE_DOMAIN","TOTALDOLWTH","TOTALBTUSPH","TOTCSQFT","TOTALDOLSPH",
"BTUNG", "BTUNGSPH", "BTUNGWTH","BTUNGOTH","DOLLARNG","DOLNGSPH","DOLNGWTH","DOLNGOTH",
"DIVISION"
]
df.drop(columns = non_essential_features, inplace=True)
# take the log of the dependent variable ('KWH'). More details are in the training step.
df[dep_var] = np.log(df[dep_var])
```
In the next step, I created training and validation sets by randomly splitting the dataset 80%/20%.
```
splits = RandomSplitter(valid_pct=0.2)(range_of(df))
procs = [Categorify, FillMissing]
cont, cat = cont_cat_split(df, 1, dep_var=dep_var)
to = TabularPandas(df, procs, cat, cont, y_names=dep_var, splits = splits)
```
The following cell shows 5 random instances of the dataset (after cleaning and feature engineering).
```
to.show(5)
```
## Train the ML Model
Since model interpretation is also important for me, I chose RandomForest for both prediction and interpretation and knowledge discovery.
```
def rf(xs, y, n_estimators=40, max_features=0.5, min_samples_leaf=5, **kwargs):
    "random forest regressor"
    return RandomForestRegressor(n_jobs=-1, n_estimators=n_estimators, max_features=max_features, min_samples_leaf=min_samples_leaf, oob_score=True).fit(xs, y)
xs,y = to.train.xs,to.train.y
valid_xs,valid_y = to.valid.xs,to.valid.y
m = rf(xs, y)
```
The predictions are evaluated with the [Root Mean Squared Error (RMSE) between the logarithm of the predicted value and the logarithm of the observed value](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/overview/evaluation). Taking logs means that errors in predicting high electricity consumption and low consumption affect the result equally.
</br>
</br>
```
def r_mse(pred,y):
    return round(math.sqrt(((pred-y)**2).mean()), 6)
def m_rmse(m, xs, y):
    return r_mse(m.predict(xs), y)
```
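To see why evaluating on logs treats high and low consumers equally, here is a small illustrative check (the `log_rmse` helper below is hypothetical, not part of the notebook): the same relative error contributes the same amount regardless of the absolute consumption level.

```python
import numpy as np

def log_rmse(pred, actual):
    # RMSE between the logs of predictions and observations
    return np.sqrt(np.mean((np.log(pred) - np.log(actual)) ** 2))

# a 10% over-prediction costs the same whether consumption is 1,000 or 100,000 kWh
small = log_rmse(np.array([1100.0]), np.array([1000.0]))
large = log_rmse(np.array([110000.0]), np.array([100000.0]))
print(np.isclose(small, large))  # True
```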
Print the Root Mean Squared Error of the logarithmic `KWH` on the training set:
```
m_rmse(m, xs, y)
```
Print the Root Mean Squared Error of the logarithmic `KWH` on the validation set:
```
m_rmse(m, valid_xs, valid_y)
```
Calculate Feature Importance and remove non-important features and re-train the model.
```
def rf_feat_importance(m, df):
    return pd.DataFrame({'cols':df.columns, 'imp':m.feature_importances_}).sort_values('imp', ascending=False)
# show the top 10 features
fi = rf_feat_importance(m, xs)
fi[:10]
```
Only keep features with importance of more than 0.005 for re-training.
```
to_keep = fi[fi.imp>0.005].cols
print(f"features to keep are : {list(to_keep)}")
```
Some of the features to keep for re-training are:
1. `TOTALDOLOTH`: Total cost for appliances, electronics, lighting, and miscellaneous
2. `PELHOTWA`: Who pays for electricity used for water heating
3. `ACROOMS`: Number of rooms cooled
4. `TOTALDOLRFG`: Total cost for refrigerators, in whole dollars
5. `REGIONC`: Census Region
6. `TEMPNITEAC`: Temperature at night (summer)
```
xs_imp = xs[to_keep]
valid_xs_imp = valid_xs[to_keep]
m = rf(xs_imp, y)
```
Print the loss function of the re-trained model on train and validation sets.
```
m_rmse(m, xs_imp, y), m_rmse(m, valid_xs_imp, valid_y)
```
Check the correlation among the final features and adjust the set of features to remove at the beginning of the code.
```
import scipy.stats
from scipy.cluster import hierarchy as hc
def cluster_columns(df, figsize=(10,6), font_size=12):
    corr = np.round(scipy.stats.spearmanr(df).correlation, 4)
    corr_condensed = hc.distance.squareform(1-corr)
    z = hc.linkage(corr_condensed, method='average')
    fig = plt.figure(figsize=figsize)
    hc.dendrogram(z, labels=df.columns, orientation='left', leaf_font_size=font_size)
    plt.show()
cluster_columns(xs_imp)
```
Store the re-trained model.
```
joblib.dump(m, 'model.joblib')
```
## Interpret the Model and Do Knowledge Discovery
When I plot the feature importances of the trained model, I can clearly see that `TOTALDOLOTH` (Total cost for appliances, electronics, lighting, and miscellaneous uses in whole dollars) is the most important factor for the model to make its decisions.
```
def plot_fi(fi):
    return fi.plot('cols', 'imp', 'barh', figsize=(12,7), legend=False)
plot_fi(rf_feat_importance(m, xs_imp));
```
In this section, I make use of [Partial Dependence Plots](https://christophm.github.io/interpretable-ml-book/pdp.html) to interpret the learned function (the ML model) and understand how it makes decisions and predicts electricity consumption.</br></br>The 1D-feature plots show, for a change of one unit (increase or decrease) in the feature shown on the x-axis, how much the predicted dependent variable (`log KWH`) changes on average.
```
explore_cols = ['TOTALDOLOTH','TOTALDOLRFG','ACROOMS','TEMPHOMEAC','TEMPNITEAC','CDD30YR','CUFEETNGOTH','WASHLOAD','CUFEETNG']
explore_cols_vals = ["Total cost for appliances, electronics, lighting, and miscellaneous uses, in whole dollars",
"Total cost for refrigerators, in whole dollars",
"Number of rooms cooled",
"Temperature when someone is home during the day (summer)",
"Temperature at night (summer)",
"Cooling degree days, 30-year average 1981-2010, base 65F",
"Natural Gas usage for other purposes (all end-uses except SPH and WTH), in hundred cubic feet",
"Frequency clothes washer used",
"Total Natural Gas usage, in hundred cubic feet"]
for index, col in enumerate(explore_cols):
    fig,ax = plt.subplots(figsize=(12, 4))
    plot_partial_dependence(m, valid_xs_imp, [col], grid_resolution=20, ax=ax);
    x_label = explore_cols_vals[index]
    plt.xlabel(x_label)
```
The 2D-feature plots show how much the dependent variable changes, on average, as the two features shown on the x and y axes change.
</br>
</br>
Here, the plot shows how much the model (learned function) changes its `log KWH` prediction on average when the two dimensions on the x and y axes change.
```
paired_features = [("TEMPNITEAC","TEMPHOMEAC"),("CUFEETNG","CUFEETNGOTH")]
paired_features_vals = [("Temperature at night (summer)","Temperature when someone is home during the day (summer)"),
("Total Natural Gas usage, in hundred cubic feet","Natural Gas usage for other purposes (all end-uses except SPH and WTH), in hundred cubic feet")]
for index, pair in enumerate(paired_features):
    fig,ax = plt.subplots(figsize=(8, 8))
    plot_partial_dependence(m, valid_xs_imp, [pair], grid_resolution=20, ax=ax);
    x_label = paired_features_vals[index][0]
    y_label = paired_features_vals[index][1]
    plt.xlabel(x_label)
    plt.ylabel(y_label)
```
## THE END!
|
github_jupyter
|
# Day and Night Image Classifier
---
The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images.
We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images!
*Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).*
### Import resources
Before you get started on the project code, import the libraries and resources that you'll need.
```
import cv2 # computer vision library
import helpers
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
```
## Training and Testing Data
The 200 day/night images are separated into training and testing datasets.
* 60% of these images are training images, for you to use as you create a classifier.
* 40% are test images, which will be used to test the accuracy of your classifier.
First, we set some variables to keep track of where our images are stored:
* `image_dir_training`: the directory where our training image data is stored
* `image_dir_test`: the directory where our test image data is stored
```
# Image data directories
image_dir_training = "day_night_images/training/"
image_dir_test = "day_night_images/test/"
```
## Load the datasets
These first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night").
For example, the first image-label pair in `IMAGE_LIST` can be accessed by index:
``` IMAGE_LIST[0][:]```.
```
# Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
```
## Construct a `STANDARDIZED_LIST` of input images and output labels.
This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels.
```
# Standardize all training images
STANDARDIZED_LIST = helpers.standardize(IMAGE_LIST)
```
## Visualize the standardized data
Display a standardized image from STANDARDIZED_LIST.
```
# Display a standardized image and its label
# Select an image by index
image_num = 0
selected_image = STANDARDIZED_LIST[image_num][0]
selected_label = STANDARDIZED_LIST[image_num][1]
# Display image and data about it
plt.imshow(selected_image)
print("Shape: "+str(selected_image.shape))
print("Label [1 = day, 0 = night]: " + str(selected_label))
```
# Feature Extraction
Create a feature that represents the brightness in an image. We'll be extracting the **average brightness** using HSV colorspace. Specifically, we'll use the V channel (a measure of brightness), add up the pixel values in the V channel, then divide that sum by the area of the image to get the average Value of the image.
---
### Find the average brightness using the V channel
This function takes in a **standardized** RGB image and returns a feature (a single value) that represents the average level of brightness in the image. We'll use this value to classify the image as day or night.
```
# Find the average Value or brightness of an image
def avg_brightness(rgb_image):
    # Convert image to HSV
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
    # Add up all the pixel values in the V channel
    sum_brightness = np.sum(hsv[:,:,2])
    area = 600*1100.0 # pixels
    # find the avg
    avg = sum_brightness/area
    return avg
# Testing average brightness levels
# Look at a number of different day and night images and think about
# what average brightness value separates the two types of images
# As an example, a "night" image is loaded in and its avg brightness is displayed
image_num = 190
test_im = STANDARDIZED_LIST[image_num][0]
avg = avg_brightness(test_im)
print('Avg brightness: ' + str(avg))
plt.imshow(test_im)
```
# Classification and Visualizing Error
In this section, we'll turn our average brightness feature into a classifier that takes in a standardized image and returns a `predicted_label` for that image. This `estimate_label` function should return a value: 0 or 1 (night or day, respectively).
---
### TODO: Build a complete classifier
Complete this code so that it returns an estimated class label given an input RGB image.
```
# This function should take in RGB image input
def estimate_label(rgb_image):
# Extract average brightness feature from an RGB image
avg = avg_brightness(rgb_image)
# Use the avg brightness feature to predict a label (0, 1)
predicted_label = 0
threshold = 100
if(avg > threshold):
# if the average brightness is above the threshold value, we classify it as "day"
predicted_label = 1
    # else, the predicted_label can stay 0 (it is predicted to be "night")
return predicted_label
```
## Testing the classifier
Here is where we test your classification algorithm using our test set of data that we set aside at the beginning of the notebook!
Since we are using a pretty simple brightness feature, we may not expect this classifier to be 100% accurate. We'll aim for around 75-85% accuracy using this one feature.
### Test dataset
Below, we load in the test dataset, standardize it using the `standardize` function you defined above, and then **shuffle** it; this ensures that order will not play a role in testing accuracy.
```
import random
# Using the load_dataset function in helpers.py
# Load test data
TEST_IMAGE_LIST = helpers.load_dataset(image_dir_test)
# Standardize the test data
STANDARDIZED_TEST_LIST = helpers.standardize(TEST_IMAGE_LIST)
# Shuffle the standardized test data
random.shuffle(STANDARDIZED_TEST_LIST)
```
## Determine the Accuracy
Compare the output of your classification algorithm (a.k.a. your "model") with the true labels and determine the accuracy.
This code stores all the misclassified images, their predicted labels, and their true labels, in a list called `misclassified`.
```
# Constructs a list of misclassified images given a list of test images and their labels
def get_misclassified_images(test_images):
# Track misclassified images by placing them into a list
misclassified_images_labels = []
# Iterate through all the test images
# Classify each image and compare to the true label
for image in test_images:
# Get true data
im = image[0]
true_label = image[1]
# Get predicted label from your classifier
predicted_label = estimate_label(im)
# Compare true and predicted labels
if(predicted_label != true_label):
# If these labels are not equal, the image has been misclassified
misclassified_images_labels.append((im, predicted_label, true_label))
# Return the list of misclassified [image, predicted_label, true_label] values
return misclassified_images_labels
# Find all misclassified images in a given test set
MISCLASSIFIED = get_misclassified_images(STANDARDIZED_TEST_LIST)
# Accuracy calculations
total = len(STANDARDIZED_TEST_LIST)
num_correct = total - len(MISCLASSIFIED)
accuracy = num_correct/total
print('Accuracy: ' + str(accuracy))
print("Number of misclassified images = " + str(len(MISCLASSIFIED)) +' out of '+ str(total))
```
---
<a id='task9'></a>
### Visualize the misclassified images
Visualize some of the images you classified wrong (in the `MISCLASSIFIED` list) and note any qualities that make them difficult to classify. This will help you identify any weaknesses in your classification algorithm.
```
# Visualize misclassified example(s)
num = 0
test_mis_im = MISCLASSIFIED[num][0]
## TODO: Display an image in the `MISCLASSIFIED` list
plt.imshow(test_mis_im)
## TODO: Print out its predicted label -
## to see what the image *was* incorrectly classified as
print("Mislabeled as: ", MISCLASSIFIED[num][1])
```
---
<a id='question2'></a>
## (Question): After visualizing these misclassifications, what weaknesses do you think your classification algorithm has?
**Answer:** Write your answer here.
# 5. Improve your algorithm!
* (Optional) Tweak your threshold so that accuracy is better.
* (Optional) Add another feature that tackles a weakness you identified!
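Following the first suggestion, the threshold can be tuned with a simple sweep over candidate values. This is only a sketch: `best_threshold` and the `(brightness, label)` pairs below are made up for illustration; in the notebook you would build the pairs from `STANDARDIZED_LIST` using `avg_brightness`.

```python
# Sketch: pick the brightness threshold that maximizes accuracy on labeled samples.
# Each sample is a (avg_brightness, label) pair; label 1 = day, 0 = night.
def best_threshold(samples, candidates=range(60, 161, 5)):
    def accuracy(t):
        correct = sum((b > t) == bool(label) for b, label in samples)
        return correct / len(samples)
    return max(candidates, key=accuracy)

# Hypothetical measurements, for illustration only.
samples = [(30, 0), (45, 0), (80, 0), (110, 1), (140, 1), (165, 1)]
print(best_threshold(samples))  # → 80
```

The same idea extends naturally to any second feature you add: sweep its threshold jointly with brightness and keep the pair that scores best on the training set.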
---
## Bengaluru House Price
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
pd.set_option("display.max_rows", None, "display.max_columns", None)
df1=pd.read_csv("Dataset/Bengaluru_House_Data.csv")
df1.head()
```
### Data Cleaning
```
df1.info()
df1.isnull().sum()
df1.groupby('area_type')['area_type'].agg('count')
df2=df1.drop(['area_type','availability','society','balcony'], axis='columns')
df2.head()
df2.isnull().sum()
df2.shape
df2['location'].fillna(df2['location'].mode().values[0],inplace=True)
df2['size'].fillna(df2['size'].mode().values[0],inplace=True)
df2['bath'].fillna(df2['bath'].mode().values[0],inplace=True)
df2.isnull().sum()
df2['size'].unique()
df2['bhk']=df2['size'].apply(lambda x: int(x.split(' ')[0]))
df2=df2.drop(['size'],axis='columns')
df2.head()
df2['bhk'].unique()
df2['total_sqft'].unique()
```
###### Dimension Reduction
```
def infloat(x):
try:
float(x)
except:
return False
return True
df2[~df2['total_sqft'].apply(infloat)].head(10)
def convert(x):
    # Ranges like '2100 - 2850' are replaced by their midpoint;
    # anything else that isn't a plain number falls back to a default.
    token = x.split('-')
    if len(token) == 2:
        return (float(token[0]) + float(token[1])) / 2
    try:
        return float(x)
    except:
        return 1600  # fallback for unparseable entries
df2['total_sqft']=df2['total_sqft'].apply(convert)
df2.head()
df2.loc[410]
df2.isnull().sum()
df2['total_sqft'].agg('mean')
df2['bath'].unique()
df3=df2.copy()
df3['price_per_sqft']=(df3['price']*100000/df3['total_sqft']).round(2)
df3.head()
df3.location.unique()
stats=df3.groupby('location')['location'].agg('count').sort_values(ascending=False)
stats
location_stat_less_than_10=stats[stats<=10]
location_stat_less_than_10
df3['location']=df3['location'].apply(lambda x:'others' if x in location_stat_less_than_10 else x)
len(df3.location.unique())
df3.head(10)
df3[df3['total_sqft']/df3['bhk']<300].head()
df3.shape
df4=df3[~(df3['total_sqft']/df3['bhk']<300)]
df4.shape
df4.price_per_sqft.describe()
def remove(df):
df_out = pd.DataFrame()
for key, subdf in df.groupby('location'):
m=np.mean(subdf.price_per_sqft)
st=np.std(subdf.price_per_sqft)
reduced_df=subdf[(subdf.price_per_sqft >(m-st)) & (subdf.price_per_sqft<=(m+st))]
df_out = pd.concat([df_out, reduced_df],ignore_index=True)
return df_out
df5=remove(df4)
df5.shape
def draw(df,location):
bhk2=df[ (df.location==location) & (df.bhk==2)]
bhk3=df[ (df.location==location) & (df.bhk==3)]
plt.rcParams['figure.figsize']=(15,10)
plt.scatter(bhk2.total_sqft,bhk2.price,color='blue')
plt.scatter(bhk3.total_sqft,bhk3.price,color='green',marker='+')
draw(df5,'Rajaji Nagar')
import matplotlib
matplotlib.rcParams['figure.figsize']=(15,10)
plt.hist(df5.price_per_sqft,rwidth=.8)
df5.bath.unique()
df5[df5.bath>df5.bhk+2]
df6=df5[df5.bath<df5.bhk+2]
df6.shape
df6.head()
df6=df6.drop(['price_per_sqft'],axis='columns')
df6.head()
dummies=pd.get_dummies(df6.location)
dummies.head(3)
dummies.shape
df7=pd.concat([df6,dummies.drop('others',axis='columns')],axis='columns')
df7.shape
df7.head(3)
df8=df7.drop('location',axis='columns')
df8.head(3)
df8.shape
x=df8.drop('price',axis='columns')
x.head(2)
y=df8['price']
y.head()
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y, test_size=0.2,random_state=10)
from sklearn.linear_model import LinearRegression
lr=LinearRegression()
lr.fit(x_train,y_train)
y_pred=lr.predict(x_test)
from sklearn.metrics import r2_score
r2_score(y_test, y_pred)  # r2_score expects (y_true, y_pred)
lr.score(x_test,y_test)
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import cross_val_score
cv=ShuffleSplit(n_splits=5, test_size=.2,random_state=0)
cross_val_score(LinearRegression(),x,y,cv=cv)
from sklearn.ensemble import RandomForestRegressor
rfg=RandomForestRegressor(n_estimators=50)
rfg.fit(x_train,y_train)
r2_score(y_test,rfg.predict(x_test))
rfg.score(x_test,y_test)
cross_val_score(RandomForestRegressor(),x,y,cv=cv)
x.columns
X=x
def predict_price(location,sqft,bath,bhk):
loc_index = np.where(X.columns==location)[0][0]
x=np.zeros(len(X.columns))
x[0]=sqft
x[1]=bath
x[2]=bhk
if loc_index>=0:
x[loc_index]=1
return lr.predict([x])[0]
predict_price('1st Phase JP Nagar',1000,4,5)
predict_price('Indira Nagar',1000,2,2)
import pickle
with open('banglore_home_price_model.pickle','wb') as f:
pickle.dump(lr,f)
import json
columns={
'data_columns' : [col.lower() for col in X.columns]
}
with open("columns.json","w") as f:
f.write(json.dumps(columns))
```
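The per-location outlier filter used above (`remove`) keeps only rows whose `price_per_sqft` falls within one standard deviation of that location's mean. A minimal, self-contained restatement of the same idea on synthetic data (the helper is redefined here so this cell runs on its own):

```python
import numpy as np
import pandas as pd

# Same filter as remove() in the notebook: within each location, keep rows
# whose price_per_sqft lies in (mean - std, mean + std].
def remove_pps_outliers(df):
    out = pd.DataFrame()
    for _, sub in df.groupby('location'):
        m = np.mean(sub.price_per_sqft)
        st = np.std(sub.price_per_sqft)
        kept = sub[(sub.price_per_sqft > (m - st)) & (sub.price_per_sqft <= (m + st))]
        out = pd.concat([out, kept], ignore_index=True)
    return out

# Synthetic example: one location with four typical rows and one extreme outlier.
toy = pd.DataFrame({
    'location': ['A'] * 5,
    'price_per_sqft': [5000.0, 5200.0, 5100.0, 4900.0, 50000.0],
})
print(remove_pps_outliers(toy).shape)  # → (4, 2): the 50000 row is dropped
```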
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Multi-worker training with Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/distribute/multi_worker_with_keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/distribute/multi_worker_with_keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/distribute/multi_worker_with_keras.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Overview
This tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API, specifically `tf.distribute.MultiWorkerMirroredStrategy`. With the help of this strategy, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code changes.
The [Distributed Training in TensorFlow](../../guide/distributed_training.ipynb) guide provides an overview of the distribution strategies TensorFlow supports, for those interested in a deeper understanding of the `tf.distribute.Strategy` APIs.
## Setup
First, some necessary imports.
```
import json
import os
import sys
```
Before importing TensorFlow, make a few changes to the environment.
Disable all GPUs. This prevents errors caused by the workers all trying to use the same GPU. For a real application each worker would be on a different machine.
```
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
```
Reset the `TF_CONFIG` environment variable; you'll learn more about it later.
```
os.environ.pop('TF_CONFIG', None)
```
Be sure that the current directory is on Python's path. This allows the notebook to import the files written by `%%writefile` later.
```
if '.' not in sys.path:
sys.path.insert(0, '.')
```
Now import TensorFlow.
```
import tensorflow as tf
```
### Dataset and model definition
Next create an `mnist.py` file with a simple model and dataset setup. This python file will be used by the worker-processes in this tutorial:
```
%%writefile mnist.py
import os
import tensorflow as tf
import numpy as np
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the range [0, 255].
# You need to convert them to float32 with values in the range [0, 1]
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
```
Try training the model for a small number of epochs and observe the results of a single worker to make sure everything works correctly. As training progresses, the loss should drop and the accuracy should increase.
```
import mnist
batch_size = 64
single_worker_dataset = mnist.mnist_dataset(batch_size)
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
```
## Multi-worker Configuration
Now let's enter the world of multi-worker training. In TensorFlow, the `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.
Here is an example configuration:
```
tf_config = {
'cluster': {
'worker': ['localhost:12345', 'localhost:23456']
},
'task': {'type': 'worker', 'index': 0}
}
```
Here is the same `TF_CONFIG` serialized as a JSON string:
```
json.dumps(tf_config)
```
There are two components of `TF_CONFIG`: `cluster` and `task`.
* `cluster` is the same for all workers and provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training with `MultiWorkerMirroredStrategy`, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such a worker is referred to as the `chief` worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented).
* `task` provides information of the current task and is different on each worker. It specifies the `type` and `index` of that worker.
In this example, you set the task `type` to `"worker"` and the task `index` to `0`. This machine is the first worker and will be appointed as the chief worker and do more work than the others. Note that other machines will need to have the `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.
For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with 2 workers on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.
In this example you will use 2 workers, the first worker's `TF_CONFIG` is shown above. For the second worker you would set `tf_config['task']['index']=1`
Above, `tf_config` is just a local variable in python. To actually use it to configure training, this dictionary needs to be serialized as JSON, and placed in the `TF_CONFIG` environment variable.
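To make the per-worker difference concrete, here is a small sketch (plain Python, no TensorFlow needed) that builds the JSON each worker in the example cluster above would export — the same `cluster` dict for all, a different `task` index for each:

```python
import json

# The example 2-worker cluster from above.
cluster = {'worker': ['localhost:12345', 'localhost:23456']}

# Each worker shares the same `cluster` but gets its own `task` index.
for index in range(len(cluster['worker'])):
    tf_config = {'cluster': cluster, 'task': {'type': 'worker', 'index': index}}
    print(index, json.dumps(tf_config))
```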
### Environment variables and subprocesses in notebooks
Subprocesses inherit environment variables from their parent. So if you set an environment variable in this `jupyter notebook` process:
```
os.environ['GREETINGS'] = 'Hello TensorFlow!'
```
You can access the environment variable from a subprocess:
```
%%bash
echo ${GREETINGS}
```
In the next section, you'll use this to pass the `TF_CONFIG` to the worker subprocesses. You would never really launch your jobs this way, but it's sufficient for the purposes of this tutorial: To demonstrate a minimal multi-worker example.
## Choose the right strategy
In TensorFlow there are two main forms of distributed training:
* Synchronous training, where the steps of training are synced across the workers and replicas, and
* Asynchronous training, where the training steps are not strictly synced.
`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.
To train the model, use an instance of `tf.distribute.MultiWorkerMirroredStrategy`.
`MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distributed_training.ipynb) has more details about this strategy.
```
strategy = tf.distribute.MultiWorkerMirroredStrategy()
```
Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy()` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. Since `TF_CONFIG` is not set yet the above strategy is effectively single-worker training.
`MultiWorkerMirroredStrategy` provides multiple implementations via the [`CommunicationOptions`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationOptions) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster.
## Train the model
With the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multiple-workers is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.
```
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
```
Note: Currently there is a limitation in `MultiWorkerMirroredStrategy` where TensorFlow ops need to be created after the instance of strategy is created. If you see `RuntimeError: Collective ops must be configured at program startup`, try creating the instance of `MultiWorkerMirroredStrategy` at the beginning of the program and put the code that may create ops after the strategy is instantiated.
To actually run with `MultiWorkerMirroredStrategy` you'll need to run worker processes and pass a `TF_CONFIG` to them.
Like the `mnist.py` file written earlier, here is the `main.py` that each of the workers will run:
```
%%writefile main.py
import os
import json
import tensorflow as tf
import mnist
per_worker_batch_size = 64
tf_config = json.loads(os.environ['TF_CONFIG'])
num_workers = len(tf_config['cluster']['worker'])
strategy = tf.distribute.MultiWorkerMirroredStrategy()
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist.mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
```
In the code snippet above note that the `global_batch_size`, which gets passed to `Dataset.batch`, is set to `per_worker_batch_size * num_workers`. This ensures that each worker processes batches of `per_worker_batch_size` examples regardless of the number of workers.
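The batch-size arithmetic can be checked in isolation (variable names mirror the snippet above): with the example 2-worker cluster, a per-worker batch of 64 becomes a global batch of 128, and each worker still processes 64 examples per step.

```python
per_worker_batch_size = 64
tf_config = {'cluster': {'worker': ['localhost:12345', 'localhost:23456']},
             'task': {'type': 'worker', 'index': 0}}

num_workers = len(tf_config['cluster']['worker'])
global_batch_size = per_worker_batch_size * num_workers
print(global_batch_size)                 # → 128, passed to Dataset.batch
print(global_batch_size // num_workers)  # → 64, each worker's share per step
```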
The current directory now contains both Python files:
```
%%bash
ls *.py
```
So json-serialize the `TF_CONFIG` and add it to the environment variables:
```
os.environ['TF_CONFIG'] = json.dumps(tf_config)
```
Now, you can launch a worker process that will run the `main.py` and use the `TF_CONFIG`:
```
# first kill any previous runs
%killbgscripts
%%bash --bg
python main.py &> job_0.log
```
There are a few things to note about the above command:
1. It uses the `%%bash` which is a [notebook "magic"](https://ipython.readthedocs.io/en/stable/interactive/magics.html) to run some bash commands.
2. It uses the `--bg` flag to run the `bash` process in the background, because this worker will not terminate. It waits for all the workers before it starts.
The backgrounded worker process won't print output to this notebook, so the `&>` redirects its output to a file, letting you see what happened.
So, wait a few seconds for the process to start up:
```
import time
time.sleep(10)
```
Now look what's been output to the worker's logfile so far:
```
%%bash
cat job_0.log
```
The last line of the log file should say: `Started server with target: grpc://localhost:12345`. The first worker is now ready, and is waiting for all the other worker(s) to be ready to proceed.
So update the `tf_config` for the second worker's process to pick up:
```
tf_config['task']['index'] = 1
os.environ['TF_CONFIG'] = json.dumps(tf_config)
```
Now launch the second worker. This will start the training since all the workers are active (so there's no need to background this process):
```
%%bash
python main.py
```
Now if you recheck the logs written by the first worker you'll see that it participated in training that model:
```
%%bash
cat job_0.log
```
Unsurprisingly this ran _slower_ than the test run at the beginning of this tutorial. Running multiple workers on a single machine only adds overhead. The goal here was not to improve the training time, but only to give an example of multi-worker training.
```
# Delete the `TF_CONFIG`, and kill any background tasks so they don't affect the next section.
os.environ.pop('TF_CONFIG', None)
%killbgscripts
```
## Multi worker training in depth
So far this tutorial has demonstrated a basic multi-worker setup. The rest of this document looks in detail at other factors which may be useful or important for real use cases.
### Dataset sharding
In multi-worker training, dataset sharding is needed to ensure convergence and performance.
The example in the previous section relies on the default autosharding provided by the `tf.distribute.Strategy` API. You can control the sharding by setting the `tf.data.experimental.AutoShardPolicy` of the `tf.data.experimental.DistributeOptions`. To learn more about auto-sharding see the [Distributed input guide](https://www.tensorflow.org/tutorials/distribute/input#sharding).
Here is a quick example of how to turn OFF the auto sharding, so each replica processes every example (not recommended):
```
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
global_batch_size = 64
multi_worker_dataset = mnist.mnist_dataset(batch_size=64)
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
```
### Evaluation
If you pass `validation_data` into `model.fit`, it will alternate between training and evaluation for each epoch. The evaluation taking `validation_data` is distributed across the same set of workers and the evaluation results are aggregated and available for all workers. Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set `validation_steps`. A repeated dataset is also recommended for evaluation.
Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted.
### Prediction
Currently `model.predict` doesn't work with `MultiWorkerMirroredStrategy`.
### Performance
You now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training with `MultiWorkerMirroredStrategy`.
* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationImplementation). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify `communication_options` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication_options=tf.distribute.experimental.CommunicationOptions(implementation=tf.distribute.experimental.CollectiveCommunication.NCCL)`.
* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.py#L466) of how this can be done.
### Fault tolerance
In synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with `tf.distribute.Strategy` comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.
When a worker becomes unavailable, other workers will fail (possibly after a timeout). In such cases, the unavailable worker needs to be restarted, as well as other workers that have failed.
Note:
Previously, the `ModelCheckpoint` callback provided a mechanism to restore training state upon restart from job failure for multi-worker training. The TensorFlow team is introducing a new [`BackupAndRestore`](#scrollTo=kmH8uCUhfn4w) callback, which also adds this support to single-worker training for a consistent experience, and has removed the fault tolerance functionality from the existing `ModelCheckpoint` callback. From now on, applications that rely on this behavior should migrate to the new callback.
#### ModelCheckpoint callback
The `ModelCheckpoint` callback no longer provides fault tolerance functionality; please use the [`BackupAndRestore`](#scrollTo=kmH8uCUhfn4w) callback instead.
The `ModelCheckpoint` callback can still be used to save checkpoints, but if training was interrupted or successfully finished, the user is responsible for loading the model manually in order to continue training from the checkpoint.
Optionally the user can choose to save and restore model/weights outside `ModelCheckpoint` callback.
### Model saving and loading
To save your model using `model.save` or `tf.saved_model.save`, the destination for saving needs to be different for each worker. On the non-chief workers, you will need to save the model to a temporary directory, and on the chief, you will need to save to the provided model directory. The temporary directories on the workers need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The models saved in all the directories are identical, and typically only the model saved by the chief should be referenced for restoring or serving. You should have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.
The reason you need to save on the chief and workers at the same time is because you might be aggregating variables during checkpointing which requires both the chief and workers to participate in the allreduce communication protocol. On the other hand, letting chief and workers save to the same model directory will result in errors due to contention.
With `MultiWorkerMirroredStrategy`, the program is run on every worker, and in order to know whether the current worker is chief, it takes advantage of the cluster resolver object that has attributes `task_type` and `task_id`. `task_type` tells you what the current job is (e.g. 'worker'), and `task_id` tells you the identifier of the worker. The worker with id 0 is designated as the chief worker.
In the code snippet below, `write_filepath` provides the file path to write, which depends on the worker id. In the case of chief (worker with id 0), it writes to the original file path; for others, it creates a temporary directory (with id in the directory path) to write in:
```
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
# Note: there are two possible `TF_CONFIG` configuration.
    # 1) In addition to `worker` tasks, a `chief` task type is used;
# in this case, this function should be modified to
# `return task_type == 'chief'`.
# 2) Only `worker` task type is used; in this case, worker 0 is
# regarded as the chief. The implementation demonstrated here
# is for this case.
# For the purpose of this colab section, we also add `task_type is None`
# case because it is effectively run with only single worker.
return (task_type == 'worker' and task_id == 0) or task_type is None
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
```
With that, you're now ready to save:
```
multi_worker_model.save(write_model_path)
```
As described above, later on the model should only be loaded from the path chief saved to, so let's remove the temporary ones the non-chief workers saved:
```
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
```
Now, when it's time to load, let's use convenient `tf.keras.models.load_model` API, and continue with further work. Here, assume only using single worker to load and continue training, in which case you do not call `tf.keras.models.load_model` within another `strategy.scope()`.
```
loaded_model = tf.keras.models.load_model(model_path)
# Now that the model is restored, you can continue with training.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
```
### Checkpoint saving and restoring
On the other hand, checkpointing allows you to save model's weights and restore them without having to save the whole model. Here, you'll create one `tf.train.Checkpoint` that tracks the model, which is managed by a `tf.train.CheckpointManager` so that only the latest checkpoint is preserved.
```
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
```
Once the `CheckpointManager` is set up, you're now ready to save, and remove the checkpoints non-chief workers saved.
```
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
```
Now, when you need to restore, you can find the latest checkpoint saved using the convenient `tf.train.latest_checkpoint` function. After restoring the checkpoint, you can continue with training.
```
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
```
#### BackupAndRestore callback
The BackupAndRestore callback provides fault tolerance functionality by backing up the model and the current epoch number in a temporary checkpoint file under the `backup_dir` argument of `BackupAndRestore`. This is done at the end of each epoch.
Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.
To use it, provide an instance of `tf.keras.callbacks.experimental.BackupAndRestore` at the `tf.keras.Model.fit()` call.
With MultiWorkerMirroredStrategy, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.
The `BackupAndRestore` callback uses `CheckpointManager` to save and restore the training state, which generates a file called `checkpoint` that tracks existing checkpoints together with the latest one. For this reason, `backup_dir` should not be re-used to store other checkpoints, in order to avoid name collisions.
Currently, `BackupAndRestore` callback supports single worker with no strategy, MirroredStrategy, and multi-worker with MultiWorkerMirroredStrategy.
Below are two examples for both multi-worker training and single worker training.
```
# Multi-worker training with MultiWorkerMirroredStrategy.
callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
    multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
```
If you inspect the directory of `backup_dir` you specified in `BackupAndRestore`, you may notice some temporarily generated checkpoint files. Those files are needed for recovering the previously lost instances, and they will be removed by the library at the end of `tf.keras.Model.fit()` upon successful exiting of your training.
Note: Currently BackupAndRestore only supports eager mode. In graph mode, consider using [Save/Restore Model](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras#model_saving_and_loading) mentioned above, and by providing `initial_epoch` in `model.fit()`.
## See also
1. [Distributed Training in TensorFlow](https://www.tensorflow.org/guide/distributed_training) guide provides an overview of the available distribution strategies.
2. [Official models](https://github.com/tensorflow/models/tree/master/official), many of which can be configured to run multiple distribution strategies.
3. The [Performance section](../../guide/function.ipynb) in the guide provides information about other strategies and [tools](../../guide/profiler.md) you can use to optimize the performance of your TensorFlow models.

[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/RE_POSOLOGY.ipynb)
# **Detect posology relations**
To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens.
## 1. Colab Setup
Import license keys
```
import os
import json
with open('/content/spark_nlp_for_healthcare.json', 'r') as f:
    license_keys = json.load(f)
license_keys.keys()
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
```
Install dependencies
```
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp==$sparknlp_version
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
```
Import dependencies into Python
```
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
```
Start the Spark session
```
spark = sparknlp_jsl.start(secret)
```
## 2. Select the Relation Extraction model and construct the pipeline
Select the models:
* Posology Relation Extraction models: **posology_re**
For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
```
# Change this to the model you want to use and re-run the cells below.
RE_MODEL_NAME = "posology_re"
NER_MODEL_NAME = "ner_posology_large"
```
Create the pipeline
```
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentences')
tokenizer = Tokenizer()\
.setInputCols(['sentences']) \
.setOutputCol('tokens')
pos_tagger = PerceptronModel()\
.pretrained("pos_clinical", "en", "clinical/models") \
.setInputCols(["sentences", "tokens"])\
.setOutputCol("pos_tags")
dependency_parser = DependencyParserModel()\
.pretrained("dependency_conllu", "en")\
.setInputCols(["sentences", "pos_tags", "tokens"])\
.setOutputCol("dependencies")
embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models')\
.setInputCols(["sentences", "tokens"])\
.setOutputCol("embeddings")
clinical_ner_model = NerDLModel().pretrained(NER_MODEL_NAME, 'en', 'clinical/models').setInputCols("sentences", "tokens", "embeddings")\
.setOutputCol("clinical_ner_tags")
clinical_ner_chunker = NerConverter()\
.setInputCols(["sentences", "tokens", "clinical_ner_tags"])\
.setOutputCol("clinical_ner_chunks")
clinical_re_Model = RelationExtractionModel()\
.pretrained(RE_MODEL_NAME, 'en', 'clinical/models')\
.setInputCols(["embeddings", "pos_tags", "clinical_ner_chunks", "dependencies"])\
.setOutputCol("clinical_relations")\
.setMaxSyntacticDistance(4)
#.setRelationPairs()#["problem-test", "problem-treatment"]) # we can set the possible relation pairs (if not set, all the relations will be calculated)
pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
pos_tagger,
dependency_parser,
embeddings,
clinical_ner_model,
clinical_ner_chunker,
clinical_re_Model])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
```
## 3. Create example inputs
```
# Enter examples as strings in this array
input_list = [
"""The patient is a 40-year-old white male who presents with a chief complaint of "chest pain". The patient is diabetic and has a prior history of coronary artery disease. The patient presents today stating that his chest pain started yesterday evening and has been somewhat intermittent. He has been advised Aspirin 81 milligrams QDay. Humulin N. insulin 50 units in a.m. HCTZ 50 mg QDay. Nitroglycerin 1/150 sublingually PRN chest pain.""",
]
```
## 4. Run the pipeline
```
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
light_result = light_pipeline.fullAnnotate(input_list[0])
```
## 5. Visualize
Helper function for visualization:
```
def get_relations_df(results, rel_col='clinical_relations'):
    # Collect one row per predicted relation; note the column name is passed
    # via rel_col so the loop variable does not shadow it.
    rel_pairs = []
    for rel in results[rel_col]:
        rel_pairs.append((
            rel.result,
            rel.metadata['entity1'],
            rel.metadata['entity1_begin'],
            rel.metadata['entity1_end'],
            rel.metadata['chunk1'],
            rel.metadata['entity2'],
            rel.metadata['entity2_begin'],
            rel.metadata['entity2_end'],
            rel.metadata['chunk2'],
            rel.metadata['confidence']
        ))
    rel_df = pd.DataFrame(rel_pairs, columns=['relation', 'entity1', 'entity1_begin', 'entity1_end', 'chunk1', 'entity2', 'entity2_begin', 'entity2_end', 'chunk2', 'confidence'])
    return rel_df[rel_df.relation != 'O']
get_relations_df(light_result[0])
```
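The last line of the helper, `rel_df[rel_df.relation != 'O']`, keeps only entity pairs for which the model predicted an actual relation. A minimal pandas-only sketch of that filtering step, using made-up rows rather than real model output:

```python
import pandas as pd

# Hypothetical rows mimicking a few of the helper's output columns
rel_df = pd.DataFrame(
    {"relation": ["DRUG-DOSAGE", "O", "DRUG-FREQUENCY"],
     "chunk1": ["Aspirin", "Aspirin", "insulin"],
     "chunk2": ["81 milligrams", "chest pain", "QDay"]})

# 'O' marks pairs with no relation, so they are dropped
filtered = rel_df[rel_df.relation != "O"]
print(filtered.chunk1.tolist())  # ['Aspirin', 'insulin']
```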
```
from python_dict_wrapper import wrap
import sys
sys.path.append('../')
import torch
sys.path.append("../../CPC/dpc")
sys.path.append("../../CPC/backbone")
import matplotlib.pyplot as plt
import numpy as np
import scipy
import scipy.interpolate
def find_dominant_orientation(W):
    Wf = abs(np.fft.fft2(W))
    orient_sel = 1 - Wf[0, 0] / Wf.sum()
    Wf[0, 0] = 0
    Wf = np.fft.fftshift(Wf)
    dt = W.shape[0] // 2
    xi, yi = np.meshgrid(np.arange(-dt, dt+1), np.arange(-dt, dt+1))
    # Check whether we should split this horizontally or vertically
    if Wf[xi == 0].sum() > Wf[yi == 0].sum():
        # Use a top-down split
        xi_ = xi * (xi >= 0)
        yi_ = yi * (xi >= 0)
        x0 = (xi_ * Wf).sum() / ((xi >= 0) * Wf).sum()
        y0 = (yi_ * Wf).sum() / ((xi >= 0) * Wf).sum()
    else:
        xi_ = xi * (yi >= 0)
        yi_ = yi * (yi >= 0)
        x0 = (xi_ * Wf).sum() / ((yi >= 0) * Wf).sum()
        y0 = (yi_ * Wf).sum() / ((yi >= 0) * Wf).sum()
    return np.arctan2(y0, x0), orient_sel
def get_spatial_slice(W, theta):
    dx = W.shape[0] // 2
    dt = W.shape[2] // 2
    xi, yi, zi = np.meshgrid(np.arange(W.shape[0]),
                             np.arange(W.shape[1]),
                             np.arange(W.shape[2]))
    xi_, zi_ = np.meshgrid(np.arange(W.shape[0]),
                           np.arange(W.shape[2]))
    Ws = []
    for i in range(W.shape[3]):
        interp = scipy.interpolate.LinearNDInterpolator(np.array([xi.ravel(),
                                                                  yi.ravel(),
                                                                  zi.ravel()]).T, W[:, :, :, i].ravel())
        probe = np.array([dx + (xi_ - dx) * np.cos(theta),
                          dx + (xi_ - dx) * np.sin(theta),
                          zi_]).T
        Ws.append(interp(probe))
    return np.stack(Ws, axis=2)
def plot_static_shot(W):
    #assert W.shape[0] == 64
    W = W / abs(W).max(axis=4).max(axis=3).max(axis=2).max(axis=1).reshape(-1, 1, 1, 1, 1) / 2 + .5
    t = W.shape[2] // 2
    best_thetas = []
    orient_sels = []
    for i in range(W.shape[0]):
        theta, orient_sel = find_dominant_orientation(W[i, :, t, :, :].transpose(1, 2, 0).sum(2))
        best_thetas.append(theta)
        orient_sels.append(orient_sel)
    best_thetas = np.array(best_thetas)
    orient_sels = np.array(orient_sels)
    sort_idx = np.argsort(orient_sels)[::-1]
    best_thetas = best_thetas[sort_idx]
    orient_sels = orient_sels[sort_idx]
    W = W[sort_idx, :, :, :, :]
    plt.figure(figsize=(8, 8))
    for i in range(W.shape[0]):
        plt.subplot(8, 8, i + 1)
        plt.imshow(W[i, :, t, :, :].transpose(1, 2, 0))
        theta = best_thetas[i]
        #plt.plot([3 + 3 * np.sin(theta), 3 - 3 * np.sin(theta)], [3 + 3 * np.cos(theta), 3 - 3 * np.cos(theta)], 'r-')
        plt.axis(False)
    #plt.suptitle(f'xy filters, sliced at t = {t}')
    plt.show()
    dt = W.shape[-1] // 2
    xi, yi = np.meshgrid(np.arange(-dt, dt+1), np.arange(-dt, dt+1))
    plt.figure(figsize=(8, 8))
    for i in range(W.shape[0]):
        W_ = W[i, :, :, :, :].transpose((3, 2, 1, 0))
        plt.subplot(8, 8, i + 1)
        theta = best_thetas[i]
        W_ = get_spatial_slice(W_, theta)
        plt.imshow(W_)
        plt.axis(False)
    plt.show()
from models import get_feature_model
args = wrap({'features': 'cpc_02',
'ckpt_root': '../pretrained',
'slowfast_root': '../../slowfast',
'ntau': 1,
'subsample_layers': False})
model, _, _ = get_feature_model(args)
plot_static_shot(model.s1.conv1.weight.detach().cpu().numpy())
args = wrap({'features': 'cpc_01',
'ckpt_root': '../pretrained',
'slowfast_root': '../../slowfast',
'ntau': 1,
'subsample_layers': False})
model, _, _ = get_feature_model(args)
plot_static_shot(model.s1.conv1.weight.detach().cpu().numpy())
args = wrap({'features': 'airsim_04',
'ckpt_root': '../pretrained',
'slowfast_root': '../../slowfast',
'ntau': 1,
'subsample_layers': False})
model, _, _ = get_feature_model(args)
plot_static_shot(model.s1.conv1.weight.detach().cpu().numpy())
data = model.s1.conv1.weight.detach().cpu().numpy()
F = data.mean(axis=1).reshape((64, 5, 7*7))
sepindexes = []
for i in range(F.shape[0]):
    U, S, V = np.linalg.svd(F[i, :, :])
    sepindex = S[0] ** 2 / (S ** 2).sum()
    sepindexes.append(sepindex)
plt.figure(figsize=(2,2))
plt.hist(sepindexes, np.arange(11)/10)
plt.xlabel('Separability index')
plt.ylabel('Count')
plt.plot([.71, .71], [0, 17], 'k--')
plt.plot([.71], [17], 'kv')
import seaborn as sns
sns.despine()
```
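As a sanity check on the separability index used above (the fraction of squared singular-value energy captured by the leading component), a space-time separable, i.e. rank-1, filter should score exactly 1, while an unstructured one scores much lower. A small sketch on synthetic data:

```python
import numpy as np

def separability_index(F):
    # fraction of squared singular-value energy in the top component
    S = np.linalg.svd(F, compute_uv=False)
    return S[0] ** 2 / (S ** 2).sum()

rng = np.random.default_rng(0)
t = rng.standard_normal(5)     # temporal profile
s = rng.standard_normal(49)    # spatial profile (flattened 7x7)
separable = np.outer(t, s)     # rank-1 => perfectly separable
noisy = rng.standard_normal((5, 49))

print(separability_index(separable))  # 1.0 up to rounding
print(separability_index(noisy))      # noticeably below 1
```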
# Lecture 7. Sparse matrices and direct methods for solving large sparse systems
## Plan for today's lecture
- Dense unstructured matrices and distributed storage
- Sparse matrices and their storage formats
- Fast implementation of sparse matrix-vector multiplication
- Gaussian elimination for sparse matrices: orderings
- Fill-in and graphs: separators
- Graph Laplacian
## Large dense matrices
- If the matrix is very large, it does not fit in memory
- Possible ways to work with such matrices:
- If the matrix is **structured**, e.g. block Toeplitz with Toeplitz blocks (covered in later lectures), compressed storage is possible
- For unstructured matrices, **distributed memory** helps
- MPI for processing matrices stored in a distributed fashion
### Distributed memory and MPI
- Split the matrix into blocks and store them on different machines
- Each machine has its own address space and cannot corrupt data on other machines
- The machines then exchange data with each other to aggregate the result of the computation
- [MPI (Message Passing Interface)](https://en.wikipedia.org/wiki/Message_Passing_Interface) is the standard for parallel computing with distributed memory
### Example: matrix-vector multiplication
- Suppose you want to compute the product $Ax$ and the matrix $A$ does not fit in memory
- In this case you can split the matrix into blocks and place them on different machines
- Possible strategies:
- 1D block partitioning uses only rows
- 2D block partitioning uses both rows and columns
#### Example of 1D block partitioning
<img src="./1d_block.jpg">
#### Total time of matrix-vector multiplication with 1D block partitioning
- Each machine stores $n / p$ full rows and $n / p$ entries of the vector $x$
- The total number of operations is $n^2 / p$
- The total time for sending and writing data is $t_s \log p + t_w n$, where $t_s$ is the time per send and $t_w$ is the time per written word
#### Example of 2D block partitioning
<img src="./2d_block.png" width=400>
#### Total time of matrix-vector multiplication with 2D block partitioning
- Each machine stores a block of size $n / \sqrt{p}$ and $n / \sqrt{p}$ entries of the vector
- The total number of operations is $n^2 / p$
- The total time for sending and writing data is approximately $t_s \log p + t_w (n/\sqrt{p}) \log p$, where $t_s$ is the time per send and $t_w$ is the time per written word
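To compare the two estimates concretely, here is a small sketch of both communication-time formulas; the unit costs `ts` and `tw` below are arbitrary illustrative values, not taken from the lecture:

```python
import math

# Hypothetical unit costs: ts per message sent, tw per word written.
def t_1d(n, p, ts=1.0, tw=0.01):
    return ts * math.log2(p) + tw * n

def t_2d(n, p, ts=1.0, tw=0.01):
    return ts * math.log2(p) + tw * (n / math.sqrt(p)) * math.log2(p)

n, p = 10**6, 64
print(t_1d(n, p))  # 10006.0
print(t_2d(n, p))  # 7506.0
```

For this choice of costs the 2D partitioning wins once $n$ is large, which matches the asymptotics above.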
### Packages supporting distributed data storage
- [ScaLAPACK](http://www.netlib.org/scalapack/)
- [Trilinos](https://trilinos.org/)
In Python you can use [mpi4py](https://mpi4py.readthedocs.io/en/stable/) to implement your algorithms in parallel.
- PyTorch supports distributed training and data storage, see details [here](https://pytorch.org/tutorials/intermediate/dist_tuto.html)
### Summary on large dense unstructured matrices
- Distributed matrix storage
- MPI
- Packages that rely on blockwise computations
- Different approaches to blockwise computations
## Sparse matrices
- The limiting factor in dense linear algebra is the memory required to store a dense matrix: $N^2$ elements.
- Sparse matrices, in which most elements are zero, at least allow us to keep them in memory.
- The key question: can we solve the following problems for sparse matrices?
- solving linear systems
- computing eigenvalues and eigenvectors
- computing matrix functions
## Applications of sparse matrices
Sparse matrices arise in:
- mathematical modeling and the numerical solution of PDEs
- graph processing, e.g. social network analysis
- recommender systems
- generally, wherever the relations between objects are "sparse".
### Sparse matrices help in computational graph theory
- Graphs are represented by adjacency matrices, which are most often sparse
- Numerical solution of graph-theoretic problems reduces to operations with these sparse matrices:
- graph clustering and community detection
- ranking
- random walks
- and others...
- Example: probably the largest publicly available hyperlink graph contains 3.5 billion web pages and 128 billion hyperlinks, see details [here](http://webdatacommons.org/hyperlinkgraph/)
- Various medium-size graphs for testing your algorithms are available in the [Stanford Large Network Dataset Collection](https://snap.stanford.edu/data/)
### Florida sparse matrix collection
- A large number of sparse matrices from various applications can be found in the [Florida sparse matrix collection](http://www.cise.ufl.edu/research/sparse/matrices/).
```
from IPython.display import IFrame
IFrame('http://yifanhu.net/GALLERY/GRAPHS/search.html', 500, 500)
```
### Sparse matrices and deep learning
- DNNs have a very large number of parameters
- Some of them may be redundant
- How can we reduce the number of parameters without a serious loss of accuracy?
- The [sparse variational dropout method](https://github.com/ars-ashuha/variational-dropout-sparsifies-dnn) yields substantially sparsified filters in DNNs almost without loss of accuracy!
## Constructing sparse matrices
- We can generate sparse matrices with the **scipy.sparse** package
- Matrices of very large size can be defined
Useful functions for creating sparse matrices:
- ```spdiags``` creates a matrix with given diagonals
- ```kron``` computes the Kronecker product (defined below) of sparse matrices
- arithmetic operations are also overloaded for sparse matrices
### Kronecker product
For matrices $A\in\mathbb{R}^{n\times m}$ and $B\in\mathbb{R}^{l\times k}$ the Kronecker product is defined as the block matrix
$$
A\otimes B = \begin{bmatrix}a_{11}B & \dots & a_{1m}B \\ \vdots & \ddots & \vdots \\ a_{n1}B & \dots & a_{nm}B\end{bmatrix}\in\mathbb{R}^{nl\times mk}.
$$
Main properties:
- bilinearity
- $(A\otimes B) (C\otimes D) = AC \otimes BD$
- Let $\mathrm{vec}(X)$ denote the column-wise vectorization of a matrix. Then
$\mathrm{vec}(AXB) = (B^T \otimes A) \mathrm{vec}(X).$
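This vectorization identity is easy to verify numerically with `numpy.kron`; a small sketch with arbitrary shapes:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

def vec(M):
    # column-wise (Fortran-order) vectorization
    return M.reshape(-1, order="F")

lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
print(np.allclose(lhs, rhs))  # True
```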
```
import numpy as np
import scipy as sp
import scipy.sparse
from scipy.sparse import csc_matrix, csr_matrix
import matplotlib.pyplot as plt
import scipy.linalg
import scipy.sparse.linalg
%matplotlib inline
n = 5
ex = np.ones(n);
lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr');
e = sp.sparse.eye(n)
A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)
A = csc_matrix(A)
plt.spy(A, aspect='equal', marker='.', markersize=5)
```
### Sparsity pattern
- The ```spy``` command plots the sparsity pattern of a given matrix: pixel $(i, j)$ is shown in the plot if the corresponding matrix element is nonzero.
- The sparsity pattern is really important for understanding the complexity of sparse linear algebra algorithms.
- Often the sparsity pattern alone is enough to analyze how "hard" it is to work with a given matrix.
### Definition of sparse matrices
- Sparse matrices are matrices in which the number of nonzero elements is much smaller than the total number of elements.
- Thanks to this, you can perform basic linear algebra operations (above all, solve linear systems) much faster than with dense matrices.
## What we need in order to see how this works
- **Question 1:** How do we store sparse matrices in memory?
- **Question 2:** How do we multiply a sparse matrix by a vector quickly?
- **Question 3:** How do we solve linear systems with sparse matrices quickly?
### Storing sparse matrices
There are many storage formats for sparse matrices; the most important ones are:
- COO (coordinate format)
- LIL (list of lists)
- CSR (compressed sparse row)
- CSC (compressed sparse column)
- block variants
In ```scipy``` there is a constructor for each of these formats, e.g.
```scipy.sparse.lil_matrix(A)```.
#### Coordinate format (COO)
- The simplest storage format for a sparse matrix is the coordinate format.
- In this format a sparse matrix is a set of indices and the values at those indices:
```python
i, j, val
```
where ```i, j``` are index arrays and ```val``` is the array of matrix elements. <br>
- So we need to store $3\cdot$**nnz** elements, where **nnz** denotes the number of nonzeros in the matrix.
**Q:** What is good and what is bad about this format?
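As a quick illustration of the triplet storage, here is a small sketch that extracts the COO arrays of a dense matrix with plain NumPy:

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 0.0, 3.0],
              [4.0, 5.0, 0.0]])

# COO: three parallel arrays of equal length nnz
i, j = np.nonzero(A)
val = A[i, j]
print(i.tolist())    # [0, 0, 1, 2, 2] -- row indices
print(j.tolist())    # [0, 2, 2, 0, 1] -- column indices
print(val.tolist())  # [1.0, 2.0, 3.0, 4.0, 5.0]; storage cost is 3 * nnz numbers
```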
#### Main drawbacks
- It is suboptimal in memory (why?)
- It is suboptimal for matrix-vector multiplication (why?)
- It is suboptimal for removing an element (why?)
The first two drawbacks are fixed by the CSR format.
**Q**: which format fixes the third drawback?
#### Compressed sparse row (CSR)
In the CSR format the matrix is also stored using three arrays, but different ones:
```python
ia, ja, sa
```
where:
- **ia** (row starts) is an integer array of length $n+1$
- **ja** (column indices) is an integer array of length **nnz**
- **sa** (matrix elements) is a real array of length **nnz**
<img src="https://www.karlrupp.net/wp-content/uploads/2016/02/csr_storage_sparse_marix.png" width=60% />
So, in total we need to store $2\cdot{\bf nnz} + n+1$ elements.
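In `scipy` the three CSR arrays are called `indptr`, `indices` and `data`, corresponding to **ia**, **ja** and **sa** above. A small sketch:

```python
import numpy as np
from scipy.sparse import csr_matrix

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 0.0, 3.0],
              [4.0, 5.0, 0.0]])
S = csr_matrix(A)

# row i occupies positions indptr[i]:indptr[i+1] in indices/data
print(S.indptr.tolist())   # [0, 2, 3, 5]
print(S.indices.tolist())  # [0, 2, 2, 0, 1]
print(S.data.tolist())     # [1.0, 2.0, 3.0, 4.0, 5.0]
```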
### Sparse matrices in PyTorch and TensorFlow
- PyTorch supports sparse matrices in the COO format
- Gradient computation for operations with such matrices is only partially supported; see the list and the discussion [here](https://github.com/pytorch/pytorch/issues/9674)
- TensorFlow also supports sparse matrices in the COO format
- The list of supported operations is given [here](https://www.tensorflow.org/api_docs/python/tf/sparse), and gradient support is also limited
### The CSR format enables fast sparse matrix-vector multiplication (SpMV)
```python
for i in range(n):
    for k in range(ia[i], ia[i+1]):
y[i] += sa[k] * x[ja[k]]
```
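The pseudocode above can be turned into a runnable (if slow, since it is a pure Python loop) SpMV routine and checked against a dense product:

```python
import numpy as np
from scipy.sparse import csr_matrix

def csr_matvec(ia, ja, sa, x):
    # y[i] accumulates the products over the nonzeros of row i
    n = len(ia) - 1
    y = np.zeros(n)
    for i in range(n):
        for k in range(ia[i], ia[i + 1]):
            y[i] += sa[k] * x[ja[k]]
    return y

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
S = csr_matrix(A)
x = np.array([1.0, 2.0, 3.0])
print(csr_matvec(S.indptr, S.indices, S.data, x))  # same as A @ x
```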
```
import numpy as np
import scipy as sp
import scipy.sparse
import scipy.sparse.linalg
from scipy.sparse import csc_matrix, csr_matrix, coo_matrix
import matplotlib.pyplot as plt
%matplotlib inline
n = 1000
ex = np.ones(n);
lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr');
e = sp.sparse.eye(n)
A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)
A = csr_matrix(A)
rhs = np.ones(n * n)
B = coo_matrix(A)
%timeit A.dot(rhs)
%timeit B.dot(rhs)
```
The **CSR** format is faster, and the less structured the sparsity pattern, the larger the speedup.
### Sparse matrices and efficiency
- Using sparse matrices reduces the complexity
- But they are not well suited for parallel/GPU implementations
- They do not reach peak performance because of random memory access.
- Typically, about $10\%-15\%$ of peak performance is considered good.
### Recall how the efficiency of operations is measured
- The standard way to measure the efficiency of linear algebra operations is **flops** (floating point operations per second)
- Let us measure the efficiency of matrix-vector multiplication for a dense and a sparse matrix
```
import numpy as np
import scipy as sp
import scipy.sparse
import time
n = 4000
a = np.random.randn(n, n)
v = np.random.randn(n)
t = time.time()
np.dot(a, v)
t = time.time() - t
print('Time: {0: 3.1e}, Efficiency: {1: 3.1e} Gflops'.\
format(t, ((2 * n ** 2)/t) / 10 ** 9))
n = 4000
ex = np.ones(n);
a = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr');
rhs = np.random.randn(n)
t = time.time()
a.dot(rhs)
t = time.time() - t
print('Time: {0: 3.1e}, Efficiency: {1: 3.1e} Gflops'.\
format(t, (3 * n) / t / 10 ** 9))
```
### Random data access and cache misses
- Initially, all matrix and vector elements are stored in main memory (RAM, Random Access Memory)
- To compute the matrix-vector product, some of the matrix and vector elements are moved to the cache (a small amount of fast memory), see the [lecture on the Strassen algorithm and matrix multiplication](https://github.com/amkatrutsa/nla2020_ozon/blob/master/lectures/lecture4/lecture4.ipynb)
- The CPU then takes data from the cache, processes it, and returns the result back to the cache
- If the CPU needs data that is not yet in the cache, this is called a cache miss
- When a cache miss occurs, the required data is moved from main memory to the cache
**Q**: what if there is no free space in the cache?
- The more cache misses occur, the slower the computation runs
### Cache layout and LRU
<img src="./cache_scheme.png" width="500">
#### Multiplying a CSR matrix by a vector
```python
for i in range(n):
    for k in range(ia[i], ia[i+1]):
y[i] += sa[k] * x[ja[k]]
```
- Which part of the operations causes cache misses?
- How can this problem be addressed?
### Reordering reduces the number of cache misses
- If ```ja``` stores elements contiguously, they can be moved to the cache together and the number of cache misses decreases
- This is the case when the sparse matrix is **banded**, or at least block diagonal
- We can turn a given sparse matrix into a banded or block diagonal one with the help of *permutations*
- Let $P$ be a row permutation matrix and $Q$ a column permutation matrix
- $A_1 = PAQ$ is a matrix with a smaller bandwidth than $A$
- $y = Ax \to \tilde{y} = A_1 \tilde{x}$, where $\tilde{x} = Q^{\top}x$ and $\tilde{y} = Py$
- The [separated block diagonal form](http://albert-jan.yzelman.net/PDFs/yzelman09-rev.pdf) is designed to minimize the number of cache misses
- It can also be extended to the two-dimensional case, where not only rows but also columns are separated
#### Example
- SBD in the one-dimensional case
<img src="./sbd.png" width="400">
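A classic way to obtain such a bandwidth-reducing permutation is the reverse Cuthill-McKee algorithm, available in `scipy.sparse.csgraph`. A small sketch on a random symmetric pattern (this particular example is an illustration, not from the lecture):

```python
import numpy as np
import scipy.sparse as spsp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(A):
    # largest |i - j| over the nonzeros of A
    coo = A.tocoo()
    return int(np.abs(coo.row - coo.col).max())

rng = np.random.default_rng(0)
n = 100
A = spsp.random(n, n, density=0.02, random_state=rng)
A = (A + A.T + spsp.eye(n)).tocsr()  # make the pattern symmetric

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_rcm = A[perm][:, perm]
print(bandwidth(A), "->", bandwidth(A_rcm))  # the bandwidth shrinks
```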
## Methods for solving linear systems with sparse matrices
- Direct methods
- LU decomposition
- Different reordering methods that minimize the fill-in of the factors
- Krylov methods
```
n = 10
ex = np.ones(n);
lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr');
e = sp.sparse.eye(n)
A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)
A = csr_matrix(A)
rhs = np.ones(n * n)
sol = sp.sparse.linalg.spsolve(A, rhs)
_, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot(sol)
ax1.set_title('Not reshaped solution')
ax2.contourf(sol.reshape((n, n), order='f'))
ax2.set_title('Reshaped solution')
```
## LU decomposition of a sparse matrix
- Why can a sparse linear system be solved faster than a dense one? With which method?
- In the LU decomposition of a matrix $A$, the factors $L$ and $U$ can also be sparse:
$$A = L U$$
- And a linear system with a sparse triangular matrix can be solved very quickly.
<font color='red'> Note that the inverse of a sparse matrix is NOT sparse! </font>
```
n = 7
ex = np.ones(n);
a = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr');
b = np.array(np.linalg.inv(a.toarray()))
print(a.toarray())
print(b)
```
## And the factors...
- $L$ and $U$ are usually sparse
- For a tridiagonal matrix they are even bidiagonal!
```
from scipy.sparse.linalg import splu
T = splu(a.tocsc(), permc_spec="NATURAL")
plt.spy(T.L)
```
Note that ```splu``` with the default value of the ```permc_spec``` parameter produces a permutation that does not yield bidiagonal factors:
```
from scipy.sparse.linalg import splu
T = splu(a.tocsc())
plt.spy(T.L)
print(T.perm_c)
```
## Two-dimensional case
In the two-dimensional case everything is much worse:
```
n = 20
ex = np.ones(n);
lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr');
e = sp.sparse.eye(n)
A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)
A = csc_matrix(A)
T = scipy.sparse.linalg.spilu(A)
plt.spy(T.L, marker='.', color='k', markersize=8)
```
For the right permutation in the two-dimensional case, the number of nonzeros in $L$ grows as $\mathcal{O}(N \log N)$. The complexity, however, is $\mathcal{O}(N^{3/2})$.
## Sparse matrices and graph theory
- The number of nonzeros in the LU factors is closely related to graph theory.
- The ``networkx`` package can be used to visualize a graph given only its adjacency matrix.
```
import networkx as nx
n = 10
ex = np.ones(n);
lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr');
e = sp.sparse.eye(n)
A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)
A = csc_matrix(A)
G = nx.Graph(A)
nx.draw(G, pos=nx.spectral_layout(G), node_size=10)
```
## Fill-in
- The fill-in of a matrix consists of the entries that were **zero** but become **nonzero** while the algorithm runs.
- Fill-in can differ for different permutations. So, before factorizing a matrix we need to reorder its elements so that the fill-in of the factors is minimal.
**Example**
$$A = \begin{bmatrix} * & * & * & * & *\\ * & * & 0 & 0 & 0 \\ * & 0 & * & 0 & 0 \\ * & 0 & 0& * & 0 \\ * & 0 & 0& 0 & * \end{bmatrix} $$
- If we eliminate the elements from top to bottom, we obtain a dense matrix.
- However, we can preserve sparsity if the elimination proceeds from bottom to top.
- Details on the following slides
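The effect described above can be reproduced with `splu` in the natural ordering: an "arrow" matrix with a dense first row and column fills in completely, while the same matrix with the order of the unknowns reversed produces no fill-in at all. A small sketch (the diagonal shift is only there to keep the matrix nonsingular and well conditioned):

```python
import numpy as np
import scipy.sparse as spsp
from scipy.sparse.linalg import splu

n = 50
# "arrow" matrix: dense first row and column plus a dominant diagonal
A = spsp.lil_matrix((n, n))
A[0, :] = 1.0
A[:, 0] = 1.0
A.setdiag(2 * n)
A = A.tocsc()

# same matrix with the unknowns reversed: the arrow now points bottom-right
p = np.arange(n)[::-1]
B = A[p][:, p].tocsc()

# NATURAL keeps the given ordering, so the fill-in difference is visible
lu_A = splu(A, permc_spec="NATURAL")
lu_B = splu(B, permc_spec="NATURAL")
print(lu_A.L.nnz, ">>", lu_B.L.nnz)  # dense L vs. almost no fill-in
```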
## Gaussian elimination for sparse matrices
- Let a matrix $A$ be such that $A=A^*>0$.
- Compute its Cholesky decomposition $A = LL^*$.
The factor $L$ can be dense even if $A$ is sparse:
$$
\begin{bmatrix} * & * & * & * \\ * & * & & \\ * & & * & \\ * & & & * \end{bmatrix} =
\begin{bmatrix} * & & & \\ * & * & & \\ * & * & * & \\ * & * & * & * \end{bmatrix}
\begin{bmatrix} * & * & * & * \\ & * & * & * \\ & & * & * \\ & & & * \end{bmatrix}
$$
**Q**: how do we make the factors sparse, i.e. minimize the fill-in?
## Gaussian elimination and permutation
- We need to find a permutation of indices such that the factors are sparse, i.e. we compute the Cholesky decomposition of the matrix $PAP^\top$, where $P$ is a permutation matrix.
- For the example from the previous slide:
$$
P \begin{bmatrix} * & * & * & * \\ * & * & & \\ * & & * & \\ * & & & * \end{bmatrix} P^\top =
\begin{bmatrix} * & & & * \\ & * & & * \\ & & * & * \\ * & * & * & * \end{bmatrix} =
\begin{bmatrix} * & & & \\ & * & & \\ & & * & \\ * & * & * & * \end{bmatrix}
\begin{bmatrix} * & & & * \\ & * & & * \\ & & * & * \\ & & & * \end{bmatrix}
$$
where
$$
P = \begin{bmatrix} & & & 1 \\ & & 1 & \\ & 1 & & \\ 1 & & & \end{bmatrix}
$$
- This form of the matrix yields sparse factors in its LU decomposition
```
import numpy as np
import scipy.sparse as spsp
import scipy.sparse.linalg as spsplin
import scipy.linalg as splin
import matplotlib.pyplot as plt
%matplotlib inline
A = spsp.coo_matrix((np.random.randn(10), ([0, 0, 0, 0, 1, 1, 2, 2, 3, 3],
[0, 1, 2, 3, 0, 1, 0, 2, 0, 3])))
print("Original matrix")
plt.spy(A)
plt.show()
lu = spsplin.splu(A.tocsc(), permc_spec="NATURAL")
print("L factor")
plt.spy(lu.L)
plt.show()
print("U factor")
plt.spy(lu.U)
plt.show()
print("Column permutation:", lu.perm_c)
print("Row permutation:", lu.perm_r)
```
### Block case
$$
PAP^\top = \begin{bmatrix} A_{11} & & A_{13} \\ & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33}\end{bmatrix}
$$
then
$$
PAP^\top = \begin{bmatrix} A_{11} & 0 & 0 \\ 0 & A_{22} & 0 \\ A_{31} & A_{32} & A_{33} - A_{31}A_{11}^{-1} A_{13} - A_{32}A_{22}^{-1}A_{23} \end{bmatrix} \begin{bmatrix} I & 0 & A_{11}^{-1}A_{13} \\ 0 & I & A_{22}^{-1}A_{23} \\ 0 & 0 & I\end{bmatrix}
$$
- The block $ A_{33} - A_{31}A_{11}^{-1} A_{13} - A_{32}A_{22}^{-1}A_{23}$ is the Schur complement of the block-diagonal matrix $\begin{bmatrix} A_{11} & 0 \\ 0 & A_{22} \end{bmatrix}$
- We have reduced the problem to solving smaller linear systems with the matrices $A_{11}$ and $A_{22}$
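A minimal NumPy sketch of this block elimination (the sizes and matrices below are made up; the point is that only the small systems with $A_{11}$, $A_{22}$ and the Schur complement are ever solved):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Hypothetical well-conditioned blocks with the arrow structure above
A11 = 4 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
A22 = 4 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
A13, A23 = 0.1 * rng.standard_normal((2, n, n))
A31, A32 = 0.1 * rng.standard_normal((2, n, n))
A33 = 4 * np.eye(n)
Z = np.zeros((n, n))
A = np.block([[A11, Z, A13], [Z, A22, A23], [A31, A32, A33]])

# Schur complement of the block-diagonal part diag(A11, A22)
S = A33 - A31 @ np.linalg.solve(A11, A13) - A32 @ np.linalg.solve(A22, A23)

b = rng.standard_normal(3 * n)
b1, b2, b3 = b[:n], b[n:2 * n], b[2 * n:]
# Eliminate x1 and x2, solve the small system for x3, then back-substitute
x3 = np.linalg.solve(S, b3 - A31 @ np.linalg.solve(A11, b1)
                        - A32 @ np.linalg.solve(A22, b2))
x1 = np.linalg.solve(A11, b1 - A13 @ x3)
x2 = np.linalg.solve(A22, b2 - A23 @ x3)
x = np.concatenate([x1, x2, x3])
print(np.allclose(A @ x, b))  # True
```

The largest dense system solved here is $n \times n$, never $3n \times 3n$.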
### How do we find the permutation?
- The main idea comes from graph theory
- A sparse matrix can be viewed as the **adjacency matrix** of a graph: vertices $i$ and $j$ are connected by an edge if the corresponding matrix entry is nonzero.
### Example
The graphs of the matrix $\begin{bmatrix} * & * & * & * \\ * & * & & \\ * & & * & \\ * & & & * \end{bmatrix}$ and of the matrix $\begin{bmatrix} * & & & * \\ & * & & * \\ & & * & * \\ * & * & * & * \end{bmatrix}$ look as follows:
<img src="./graph_dense.png" width=300 align="center"> and <img src="./graph_sparse.png" width=300 align="center">
* Why is the second ordering better than the first?
### Graph separator
**Definition.** A separator of a graph $G$ is a set of vertices $S$ whose removal leaves at least two connected components.
A separator $S$ gives the following way of numbering the vertices of $G$:
- Find a separator $S$ whose removal leaves connected components $T_1$, $T_2$, $\ldots$, $T_k$
- Number the vertices in $S$ from $N − |S| + 1$ to $N$
- Recursively, number the vertices in each component:
    - in $T_1$ from $1$ to $|T_1|$
    - in $T_2$ from $|T_1| + 1$ to $|T_1| + |T_2|$
    - and so on
- If a component is small enough, the numbering inside it is arbitrary
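As a toy illustration of this recursive numbering, consider a 1D chain (path graph), where the middle vertex is a separator of the two halves. A hypothetical sketch (the helper name is ours):

```python
def nested_dissection_order(indices):
    """Recursively number the vertices of a path graph: the middle
    vertex separates the chain, so both halves are numbered first
    and the separator vertex receives the last number."""
    if len(indices) <= 2:
        return list(indices)
    mid = len(indices) // 2
    left = nested_dissection_order(indices[:mid])
    right = nested_dissection_order(indices[mid + 1:])
    return left + right + [indices[mid]]

print(nested_dissection_order(list(range(7))))  # [0, 2, 1, 4, 6, 5, 3]
```

Vertex 3 (the top-level separator) is numbered last, exactly as in the scheme above.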
### Separator and matrix structure: example
The separator for the two-dimensional Laplacian matrix
$$
A_{2D} = I \otimes A_{1D} + A_{1D} \otimes I, \quad A_{1D} = \mathrm{tridiag}(-1, 2, -1),
$$
looks as follows
<img src='./separator.png' width=300> </img>
If we first number the indices in $\alpha$, then those in $\beta$, and finally the indices in the separator $\sigma$, we obtain the following matrix
$$
PAP^\top = \begin{bmatrix} A_{\alpha\alpha} & & A_{\alpha\sigma} \\ & A_{\beta\beta} & A_{\beta\sigma} \\ A_{\sigma\alpha} & A_{\sigma\beta} & A_{\sigma\sigma}\end{bmatrix},
$$
which has the desired structure.
- Thus, the problem of finding a permutation has been reduced to the problem of finding a graph separator!
### Nested dissection
- For the blocks $A_{\alpha\alpha}$ and $A_{\beta\beta}$ we can continue the partitioning recursively
- Once the recursion finishes, we have to eliminate the blocks $A_{\sigma\alpha}$ and $A_{\sigma\beta}$.
- This makes the block in the position of $A_{\sigma\sigma}\in\mathbb{R}^{n\times n}$ **dense**.
- Computing the Cholesky factorization of this block costs $\mathcal{O}(n^3) = \mathcal{O}(N^{3/2})$, where $N = n^2$ is the total number of vertices.
- The overall complexity is therefore $\mathcal{O}(N^{3/2})$
## Packages for nested dissection
- MUltifrontal Massively Parallel sparse direct Solver ([MUMPS](http://mumps.enseeiht.fr/))
- [Pardiso](https://www.pardiso-project.org/)
- [Umfpack, part of the SuiteSparse package](http://faculty.cse.tamu.edu/davis/suitesparse.html)
They provide interfaces for C/C++, Fortran and Matlab
### Summary on nested dissection
- The ordering problem is reduced to finding a separator
- A divide-and-conquer approach
- It continues recursively on the two (or more) vertex subsets obtained after the split
- In theory, nested dissection gives optimal complexity (why?)
- In practice, this method beats the others only on very large problems
## Separators in practice
- Computing a separator is a **nontrivial problem!**
- Designing graph partitioning methods has been an active research area for many years
Existing approaches:
- Spectral partitioning (uses eigenvectors of the **graph Laplacian**) – details below
- Geometric partitioning (for meshes with given vertex coordinates) [survey and analysis](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.31.4886&rep=rep1&type=pdf)
- Iterative swapping ([(Kernighan-Lin, 1970)](http://xilinx.asia/_hdl/4/eda.ee.ucla.edu/EE201A-04Spring/kl.pdf), [(Fiduccia-Mattheyses, 1982)](https://dl.acm.org/citation.cfm?id=809204))
- Breadth-first search [(Lipton, Tarjan 1979)](http://www.cs.princeton.edu/courses/archive/fall06/cos528/handouts/sepplanar.pdf)
- Multilevel recursive bisection (the most practical heuristic) ([survey](https://people.csail.mit.edu/jshun/6886-s18/lectures/lecture13-1.pdf) and [paper](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.499.4130&rep=rep1&type=pdf)). A package implementing this kind of partitioning is METIS; it is written in C and available [here](http://glaros.dtc.umn.edu/gkhome/views/metis)
## Spectral graph partitioning
- The idea of spectral partitioning goes back to the work of Miroslav Fiedler, who studied graph connectivity ([paper](https://dml.cz/bitstream/handle/10338.dmlcz/101168/CzechMathJ_23-1973-2_11.pdf)).
- We need to split the vertices of the graph into 2 sets
- Consider +1/-1 vertex labels and the **cost function**
$$E_c(x) = \sum_{j} \sum_{i \in N(j)} (x_i - x_j)^2, \quad N(j) \text{ denotes the set of neighbors of vertex } j. $$
We want a balanced partition, hence
$$\sum_i x_i = 0 \quad \Longleftrightarrow \quad x^\top e = 0, \quad e = \begin{bmatrix}1 & \dots & 1\end{bmatrix}^\top,$$
and since we introduced +1/-1 labels, we also have
$$\sum_i x^2_i = n \quad \Longleftrightarrow \quad \|x\|_2^2 = n.$$
## Graph Laplacian
The cost function $E_c$ can be written as (check why)
$$E_c = (Lx, x)$$
where $L$ is the **graph Laplacian**, defined as the symmetric matrix with entries
$$L_{ii} = \mbox{degree of vertex $i$},$$
$$L_{ij} = -1, \quad \mbox{if $i \ne j$ and there is an edge},$$
and $0$ otherwise.
- The row sums of $L$ are zero, so there is a zero eigenvalue whose eigenvector is the vector of all ones.
- The eigenvalues are nonnegative (why?).
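A quick numerical check of the quadratic-form identity $(Lx, x) = \sum_{(i,j)\in E} (x_i - x_j)^2$ (each unordered edge counted once) and of the zero row sums, on a small made-up graph:

```python
import numpy as np

# Small hypothetical graph given by its edge list
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
n = 5

# Graph Laplacian: vertex degrees on the diagonal, -1 for every edge
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1
    L[j, j] += 1
    L[i, j] -= 1
    L[j, i] -= 1

rng = np.random.default_rng(1)
x = rng.standard_normal(n)
print(np.allclose(x @ L @ x, sum((x[i] - x[j]) ** 2 for i, j in edges)))  # True
print(np.allclose(L @ np.ones(n), 0))  # True: the all-ones vector is in the kernel
```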
## Partitioning as an optimization problem
- Minimizing $E_c$ under the constraints above leads to a partition that minimizes the number of vertices in the separator while keeping the partition balanced
- We now write a relaxation of this integer quadratic program as a continuous quadratic program
$$E_c(x) = (Lx, x)\to \min_{\substack{x^\top e =0, \\ \|x\|_2^2 = n}}$$
## Fiedler vector
- The solution of this minimization problem is the eigenvector of $L$ corresponding to the **second** smallest eigenvalue (it is called the Fiedler vector)
- Indeed,
$$
\min_{\substack{x^\top e =0, \\ \|x\|_2^2 = n}} (Lx, x) = n \cdot \min_{{x^\top e =0}} \frac{(Lx, x)}{(x, x)} = n \cdot \min_{{x^\top e =0}} R(x), \quad R(x) \text{ is the Rayleigh quotient}
$$
- Since $e$ is the eigenvector corresponding to the smallest eigenvalue, on the subspace $x^\top e =0$ we obtain the second smallest eigenvalue.
- The sign of $x_i$ determines which part of the graph vertex $i$ belongs to.
- It remains to understand how to compute this vector. We know the power method, but it finds the eigenvector corresponding to the eigenvalue of largest magnitude.
- Iterative methods for eigenvalue problems will be discussed later in the course...
```
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import networkx as nx
import scipy.sparse.linalg as spsplin  # used below for eigsh
kn = nx.read_gml('karate.gml')
print("Number of vertices = {}".format(kn.number_of_nodes()))
print("Number of edges = {}".format(kn.number_of_edges()))
nx.draw_networkx(kn, node_color="red") #Draw the graph
Laplacian = nx.laplacian_matrix(kn).asfptype()
plt.spy(Laplacian, markersize=5)
plt.title("Graph laplacian")
plt.axis("off")
plt.show()
eigval, eigvec = spsplin.eigsh(Laplacian, k=2, which="SM")
print("The 2 smallest eigenvalues =", eigval)
plt.scatter(np.arange(len(eigvec[:, 1])), np.sign(eigvec[:, 1]))
plt.show()
print("Sum of elements in Fiedler vector = {}".format(np.sum(eigvec[:, 1].real)))
nx.draw_networkx(kn, node_color=np.sign(eigvec[:, 1]))
```
### Summary of the spectral graph partitioning example
- We called a SciPy function that finds a fixed number of smallest eigenvalues and their eigenvectors (other options are possible)
- The details of the methods implemented in these functions will be discussed soon
- The Fiedler vector gives a simple way to partition a graph
- To partition a graph into more parts, use the Laplacian eigenvectors as feature vectors and run some clustering algorithm, e.g. $k$-means
### Fiedler vector and the algebraic connectivity of a graph
**Definition.** The algebraic connectivity of a graph is the second smallest eigenvalue of its graph Laplacian.
**Statement.** The algebraic connectivity of a graph is greater than 0 if and only if the graph is connected.
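A small self-contained check of this statement (pure NumPy with a dense eigensolver; the two tiny graphs are made up):

```python
import numpy as np

def fiedler_value(edges, n):
    """Second smallest eigenvalue of the graph Laplacian
    (the algebraic connectivity)."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1
        L[j, j] += 1
        L[i, j] -= 1
        L[j, i] -= 1
    return np.sort(np.linalg.eigvalsh(L))[1]

# Path on 4 vertices: connected -> positive algebraic connectivity
print(fiedler_value([(0, 1), (1, 2), (2, 3)], 4) > 1e-10)  # True
# Two disjoint edges: disconnected -> algebraic connectivity is zero
print(fiedler_value([(0, 1), (2, 3)], 4) > 1e-10)          # False
```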
## Minimal degree orderings
- The idea is to eliminate the rows and/or columns with the fewest nonzeros first, update the fill-in, and repeat.
- An efficient implementation is a challenge in itself (adding/removing elements).
- In practice it is often the best option for medium-sized problems
- SciPy [uses](https://docs.scipy.org/doc/scipy-1.3.0/reference/generated/scipy.sparse.linalg.splu.html) this approach for various matrices ($A^{\top}A$, $A + A^{\top}$)
## Key takeaways from today's lecture
- Large dense matrices and distributed computing
- Sparse matrices: applications and storage formats
- Efficient sparse matrix-by-vector multiplication
- LU factorization of sparse matrices: fill-in and row permutations
- Fill-in minimization: separators and graph partitioning
- Nested dissection
- Spectral graph partitioning: the graph Laplacian and the Fiedler vector
```
from IPython.core.display import HTML
def css_styling():
styles = open("./styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
# The stereology module
The main purpose of stereology is to extract quantitative information from microscope images relating two-dimensional measures obtained on sections to three-dimensional parameters defining the structure. The aim of stereology is not to reconstruct the 3D geometry of the material (as in tomography) but to estimate a particular 3D feature. In this case, we aim to approximate the actual (3D) grain size distribution from the apparent (2D) grain size distribution obtained in sections.
GrainSizeTools script includes two stereological methods: 1) the Saltykov method, and 2) the two-step method. Before looking at their functionality, applications, and limitations, let's import the example dataset.
```
# Load the script first (change the path to GrainSizeTools_script.py accordingly!)
%run C:/Users/marco/Documents/GitHub/GrainSizeTools/grain_size_tools/GrainSizeTools_script.py
# Import the example dataset
filepath = 'C:/Users/marco/Documents/GitHub/GrainSizeTools/grain_size_tools/DATA/data_set.txt'
dataset = pd.read_csv(filepath, sep='\t')
dataset['diameters'] = 2 * np.sqrt(dataset['Area'] / np.pi) # estimate ECD
```
## The Saltykov method
> **What is it?**
>
> It is a stereological method that approximates the actual grain size distribution from the histogram of the apparent grain size distribution. The method is distribution-free, meaning that no assumption is made upon the type of statistical distribution, making the method very versatile.
>
> **What do I use it for?**
>
> Its main use (in geosciences) is to estimate the volume fraction of a specific range of grain sizes.
>
> **What are its limitations?**
>
> The method presents several limitations for its use in rocks
>
> - It assumes that grains are non-touching spheres uniformly distributed in a matrix (e.g. bubbles within a piece of glass). This never holds for polycrystalline rocks. To apply the method, the grains should be at least approximately equiaxed, which is normally fulfilled in recrystallized grains.
> - Due to the use of the histogram, the number of classes determines the accuracy and success of the method. There is a trade-off here because the smaller the number of classes, the better the numerical stability of the method, but the worse the approximation of the targeted distribution and vice versa. The issue is that no method exists to find an optimal number of classes and this has to be set by the user. The use of the histogram also implies that we cannot obtain a complete description of the grain size distribution.
> - The method lacks a formulation for estimating errors during the unfolding procedure.
> - You cannot obtain an estimate of the actual average grain size (3D) as individual data is lost when using the histogram (i.e. The Saltykov method reconstructs the 3D histogram, not every apparent diameter in the actual one as this is mathematically impossible).
>
TODO: explain the details of the method
```
fig1, (ax1, ax2) = stereology.Saltykov(dataset['diameters'], numbins=11, calc_vol=50)
fig1.savefig("saltykov_plot.png", dpi=150)
```
Now let's assume that we want to use the class densities estimated by Saltykov's method to calculate the volume fraction of each class, or of a single one. We have two options here.
```
stereology.Saltykov?
```
The input parameter ``text_file`` allows you to save a text file with the data in tabular format, you only have to declare the name of the file and the file type, either txt or csv (as in the function documentation example). Alternatively, you can use the Saltykov function to directly return the density and the midpoint values of the classes as follows:
```
mid_points, densities = stereology.Saltykov(dataset['diameters'], numbins=11, return_data=True)
print(densities)
```
As you may notice, these density values do not add up to 1 or 100.
```
np.sum(densities)
```
This is because the script normalizes the frequencies of the different classes so that the integral over the range (not the sum) is one (see the FAQs for an explanation of this). If you want to calculate the relative proportion of each class, you must multiply the densities by the bin size. After doing this, you can check that the relative densities sum to one (i.e. they are proportions relative to one).
```
corrected_densities = densities * 14.236  # 14.236 is the bin size for this dataset with numbins=11
np.sum(corrected_densities)
```
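To avoid hard-coding the 14.236 value, the bin size can be recovered from the class mid-points, since they are equally spaced. A self-contained sketch with hypothetical mid-points and class counts (not the real dataset values):

```python
import numpy as np

# Hypothetical, equally spaced class mid-points and raw class counts
mid_points = np.array([10.0, 24.236, 38.472, 52.708])
counts = np.array([120.0, 80.0, 40.0, 10.0])

bin_size = mid_points[1] - mid_points[0]        # recover the bin width
densities = counts / (counts.sum() * bin_size)  # integral over the classes is one
proportions = densities * bin_size              # per-class relative proportions
print(np.isclose(proportions.sum(), 1.0))       # True
```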
So, for example, if you have a volume of rock of, say, 100 cm³ and you want to estimate what proportion of that volume is occupied by each grain size class/range, you can estimate it as follows:
```
# I use np.around to round the values
np.around(corrected_densities * 100, 2)
```
## The two-step method
> **What is it?**
>
> It is a stereological method that approximates the actual grain size distribution. The method is distribution-dependent, meaning that the grain sizes are assumed to follow a lognormal distribution. The method fits a lognormal distribution on top of the Saltykov output, hence the name two-step method.
>
> **What do I use it for?**
>
> Its main use is to estimate the shape of the lognormal distribution, the average grain size (3D), and the volume fraction of a specific range of grain sizes (not yet implemented).
>
> **What are its limitations?**
>
> The method is partially based on the Saltykov method and therefore inherits some of its limitations. The method, however, does not require defining a specific number of classes.
```
fig2, ax = stereology.calc_shape(dataset['diameters'])
fig2.savefig("2step_plot.png", dpi=150)
```
# Convolutional Neural Network
### Author: Ivan Bongiorni, Data Scientist at GfK.
[LinkedIn profile](https://www.linkedin.com/in/ivan-bongiorni-b8a583164/)
In this Notebook I will implement a **basic CNN in TensorFlow 2.0**. I will use the famous **Fashion MNIST** dataset, [published by Zalando](https://github.com/zalandoresearch/fashion-mnist) and made [available on Kaggle](https://www.kaggle.com/zalando-research/fashionmnist). Images come already preprocessed in 28 x 28 black and white format.
It is a multiclass classification task on the following labels:
0. T-shirt/top
1. Trouser
2. Pullover
3. Dress
4. Coat
5. Sandal
6. Shirt
7. Sneaker
8. Bag
9. Ankle boot

Summary:
0. Import data + Dataprep
0. CNN architecture
0. Training with Mini Batch Gradient Descent
0. Test
```
# Import necessary modules
import numpy as np
import pandas as pd
import tensorflow as tf
print(tf.__version__)
from sklearn.utils import shuffle
from matplotlib import pyplot as plt
import seaborn
```
# 0. Import data + Dataprep
The dataset comes already divided into 60k Train and 10k Test images. I will now import the Training data and leave the Test set for later. To prepare the image data, I need to reshape the pixels into `(n, 28, 28, 1)` arrays; the 1 at the end represents the channel: 1 for black-and-white images, 3 (red, green, blue) for color images. Pixel data are also scaled to the `[0, 1]` interval.
```
df = pd.read_csv('fashion-mnist_train.csv')
# extract labels, one-hot encode them
label = df.label
label = pd.get_dummies(label)
label = label.values
label = label.astype(np.float32)
df.drop('label', axis = 1, inplace = True)
df = df.values
df = df.astype(np.float32)
# reshape and scale data
df = df.reshape((len(df), 28, 28, 1))
df = df / 255.
```
# 1. CNN architecture
I will feed images into a set of **convolutional** and **max-pooling layers**:
- Conv layers are meant to extract relevant information from pixel data. A number of *filters* slide over the image, learning the most relevant features to extract.
- Max-Pool layers instead are meant to drastically reduce the amount of pixel data. For each (2, 2) window, Max-Pool keeps only the pixel with the highest activation value. Max-Pool makes the model lighter by removing the least relevant values, at the cost, of course, of losing a lot of data!
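A toy NumPy illustration of (2, 2) max-pooling on a made-up 4 x 4 "image" (the values are arbitrary):

```python
import numpy as np

img = np.array([[1, 3, 2, 0],
                [4, 2, 1, 5],
                [0, 1, 3, 2],
                [2, 6, 1, 1]], dtype=float)

# Split the image into non-overlapping 2x2 windows and keep each window's max
pooled = img.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # [[4. 5.]
               #  [6. 3.]]
```

Each 2x2 block collapses to its largest value, shrinking the image from 4x4 to 2x2.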
Since I'm focused on the implementation, rather than on the theory behind it, please refer to [this good article](https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d) on how Conv and Max-Pool work in practice. If you are a die-hard, check [this awesome page from a CNN Stanford Course](http://cs231n.github.io/convolutional-networks/?source=post_page---------------------------#overview).
Conv and Max-Pool layers extract features from the input and reduce its size, so that the following feed-forward part can process it. The first convolutional layer requires a specification of the input shape, corresponding to the shape of each image.
Since this is a multiclass classification task, a softmax activation must be placed at the output layer to transform the network's output into a probability distribution over the ten target categories.
```
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense, BatchNormalization, Dropout
from tensorflow.keras.activations import relu, elu, softmax
CNN = Sequential([
Conv2D(32, kernel_size = (3, 3), activation = elu,
kernel_initializer = 'he_normal', input_shape = (28, 28, 1)),
MaxPool2D((2, 2)),
Conv2D(64, kernel_size = (3, 3), kernel_initializer = 'he_normal', activation = elu),
BatchNormalization(),
Conv2D(128, kernel_size = (3, 3), kernel_initializer = 'he_normal', activation = elu),
BatchNormalization(),
Dropout(0.2),
Flatten(),
Dense(400, activation = elu),
BatchNormalization(),
Dropout(0.2),
Dense(400, activation = elu),
BatchNormalization(),
Dropout(0.2),
Dense(10, activation = softmax)
])
CNN.summary()
```
# 2. Training with Mini Batch Gradient Descent
The training part is no different from mini batch gradient descent training of feed-forward classifiers. I wrote [a Notebook on this technique](https://github.com/IvanBongiorni/TensorFlow2.0_Tutorial/blob/master/TensorFlow2.0_02_MiniBatch_Gradient_Descent.ipynb) in which I explain it in more detail.
Assuming you already know how it works, I will define a function that fetches mini-batches to feed into the network, and then proceed with training in eager execution.
```
@tf.function
def fetch_batch(X, y, batch_size, epoch):
start = epoch*batch_size
X_batch = X[start:start+batch_size, :, :]
y_batch = y[start:start+batch_size, :]
return X_batch, y_batch
# To measure execution time
import time
start = time.time()
```
There is one big difference with respect to previous training exercises. Since this Network has a high number of parameters (approx 4.4 million) it will require comparatively longer training times. For this reason, I will measure training not just in *epochs*, but also in *cycles*.
At each training cycle, I will shuffle the dataset and feed it into the Network in mini batches until it's completed. At the following cycle, I will reshuffle the data using a different random seed and repeat the process. (What I called cycles are nothing but Keras' "epochs".)
Using 50 cycles and batches of size 120 on a 60,000-image dataset, I will train my CNN for an overall number of 25,000 mini-batch updates (50 cycles × 500 batches; the inner loop variable is called `epoch` in the code).
```
from tensorflow.keras.losses import CategoricalCrossentropy
from tensorflow.keras.metrics import CategoricalAccuracy
loss = tf.keras.losses.CategoricalCrossentropy()
accuracy = tf.keras.metrics.CategoricalAccuracy()
optimizer = tf.optimizers.Adam(learning_rate = 0.0001)
### TRAINING
cycles = 50
batch_size = 120
loss_history = []
accuracy_history = []
for cycle in range(cycles):
df, label = shuffle(df, label, random_state = cycle**2)
for epoch in range(len(df) // batch_size):
X_batch, y_batch = fetch_batch(df, label, batch_size, epoch)
with tf.GradientTape() as tape:
current_loss = loss(y_batch, CNN(X_batch))  # Keras losses expect (y_true, y_pred)
gradients = tape.gradient(current_loss, CNN.trainable_variables)
optimizer.apply_gradients(zip(gradients, CNN.trainable_variables))
loss_history.append(current_loss.numpy())
current_accuracy = accuracy(y_batch, CNN(X_batch)).numpy()  # metrics expect (y_true, y_pred)
accuracy_history.append(current_accuracy)
accuracy.reset_states()
print(str(cycle + 1) + '.\tTraining Loss: ' + str(current_loss.numpy())
+ ',\tAccuracy: ' + str(current_accuracy))
#
print('\nTraining complete.')
print('Final Loss: ' + str(current_loss.numpy()) + '. Final accuracy: ' + str(current_accuracy))
end = time.time()
print(end - start) # around 3.5 hours :(
plt.figure(figsize = (15, 4)) # adjust figures size
plt.subplots_adjust(wspace=0.2) # adjust distance
from scipy.signal import savgol_filter
# loss plot
plt.subplot(1, 2, 1)
plt.plot(loss_history)
plt.plot(savgol_filter(loss_history, len(loss_history)//3, 3))
plt.title('Loss')
plt.xlabel('epochs')
plt.ylabel('Categorical Cross-Entropy')
# accuracy plot
plt.subplot(1, 2, 2)
plt.plot(accuracy_history)
plt.plot(savgol_filter(accuracy_history, len(loss_history)//3, 3))
plt.title('Accuracy')
plt.xlabel('epochs')
plt.ylabel('Accuracy')
plt.show()
```
# 3. Test
Let's now test the model on the Test set. I will repeat the dataprep part on it:
```
# Dataprep Test data
test = pd.read_csv('fashion-mnist_test.csv')
test_label = pd.get_dummies(test.label)
test_label = test_label.values
test_label = test_label.astype(np.float32)
test.drop('label', axis = 1, inplace = True)
test = test.values
test = test.astype(np.float32)
test = test / 255.
test = test.reshape((len(test), 28, 28, 1))
prediction = CNN.predict(test)
prediction = np.argmax(prediction, axis=1)
test_label = np.argmax(test_label, axis=1) # reverse one-hot encoding
from sklearn.metrics import confusion_matrix
CM = confusion_matrix(prediction, test_label)
print(CM)
print('\nTest Accuracy: ' + str(np.sum(np.diag(CM)) / np.sum(CM)))
seaborn.heatmap(CM, annot=True)
plt.show()
```
My Convolutional Neural Network classified 91.7% of Test data correctly. The confusion matrix showed that category no. 6 (Shirt) has been misclassified the most. The next goal is to correct it; one possible solution would be to increase regularization, another to build an ensemble of models.
Thanks for coming thus far. In the next Notebooks I will implement more advanced Convolutional models, among other things.
# quant-econ Solutions: Modeling Career Choice
Solutions for http://quant-econ.net/py/career.html
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from quantecon import DiscreteRV, compute_fixed_point
from career import CareerWorkerProblem
```
## Exercise 1
Simulate job / career paths.
In reading the code, recall that `optimal_policy[i, j]` = policy at
$(\theta_i, \epsilon_j)$ = either 1, 2 or 3; meaning 'stay put', 'new job' and
'new life'.
```
wp = CareerWorkerProblem()
v_init = np.ones((wp.N, wp.N))*100
v = compute_fixed_point(wp.bellman_operator, v_init, verbose=False)
optimal_policy = wp.get_greedy(v)
F = DiscreteRV(wp.F_probs)
G = DiscreteRV(wp.G_probs)
def gen_path(T=20):
i = j = 0
theta_index = []
epsilon_index = []
for t in range(T):
if optimal_policy[i, j] == 1: # Stay put
pass
elif optimal_policy[i, j] == 2: # New job
j = int(G.draw())
else: # New life
i, j = int(F.draw()), int(G.draw())
theta_index.append(i)
epsilon_index.append(j)
return wp.theta[theta_index], wp.epsilon[epsilon_index]
theta_path, epsilon_path = gen_path()
fig, axes = plt.subplots(2, 1, figsize=(10, 8))
for ax in axes:
ax.plot(epsilon_path, label='epsilon')
ax.plot(theta_path, label='theta')
ax.legend(loc='lower right')
plt.show()
```
## Exercise 2
The median for the original parameterization can be computed as follows
```
wp = CareerWorkerProblem()
v_init = np.ones((wp.N, wp.N))*100
v = compute_fixed_point(wp.bellman_operator, v_init)
optimal_policy = wp.get_greedy(v)
F = DiscreteRV(wp.F_probs)
G = DiscreteRV(wp.G_probs)
def gen_first_passage_time():
t = 0
i = j = 0
while 1:
if optimal_policy[i, j] == 1: # Stay put
return t
elif optimal_policy[i, j] == 2: # New job
j = int(G.draw())
else: # New life
i, j = int(F.draw()), int(G.draw())
t += 1
M = 25000 # Number of samples
samples = np.empty(M)
for i in range(M):
samples[i] = gen_first_passage_time()
print(np.median(samples))
```
To compute the median with $\beta=0.99$ instead of the default value $\beta=0.95$,
replace `wp = CareerWorkerProblem()` with `wp = CareerWorkerProblem(beta=0.99)`.
The medians are subject to randomness, but should be about 7 and 11
respectively. Not surprisingly, more patient workers will wait longer to settle down to their final job.
## Exercise 3
Here’s the code to reproduce the original figure
```
from matplotlib import cm
wp = CareerWorkerProblem()
v_init = np.ones((wp.N, wp.N))*100
v = compute_fixed_point(wp.bellman_operator, v_init)
optimal_policy = wp.get_greedy(v)
fig, ax = plt.subplots(figsize=(6,6))
tg, eg = np.meshgrid(wp.theta, wp.epsilon)
lvls=(0.5, 1.5, 2.5, 3.5)
ax.contourf(tg, eg, optimal_policy.T, levels=lvls, cmap=cm.winter, alpha=0.5)
ax.contour(tg, eg, optimal_policy.T, colors='k', levels=lvls, linewidths=2)
ax.set_xlabel('theta', fontsize=14)
ax.set_ylabel('epsilon', fontsize=14)
ax.text(1.8, 2.5, 'new life', fontsize=14)
ax.text(4.5, 2.5, 'new job', fontsize=14, rotation='vertical')
ax.text(4.0, 4.5, 'stay put', fontsize=14)
```
Now we want to set `G_a = G_b = 100` and generate a new figure with these parameters.
To do this replace:
wp = CareerWorkerProblem()
with:
wp = CareerWorkerProblem(G_a=100, G_b=100)
In the new figure, you will see that the region for which the worker will stay put has grown because the distribution for $\epsilon$ has become more concentrated around the mean, making high-paying jobs less realistic
```
cd /tf/src/data/gpt-2/
!pip3 install -r requirements.txt
import fire
import json
import os
import numpy as np
import tensorflow as tf
import regex as re
from functools import lru_cache
import tqdm
from tensorflow.core.protobuf import rewriter_config_pb2
import glob
import pickle
tf.__version__
```
# Encoding
```
"""Byte pair encoding utilities"""
@lru_cache()
def bytes_to_unicode():
"""
Returns list of utf-8 byte and a corresponding list of unicode strings.
The reversible bpe codes work on unicode strings.
This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
This is a significant percentage of your normal, say, 32K bpe vocab.
To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
And avoids mapping to whitespace/control characters the bpe code barfs on.
"""
bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
cs = bs[:]
n = 0
for b in range(2**8):
if b not in bs:
bs.append(b)
cs.append(2**8+n)
n += 1
cs = [chr(n) for n in cs]
return dict(zip(bs, cs))
def get_pairs(word):
"""Return set of symbol pairs in a word.
Word is represented as tuple of symbols (symbols being variable-length strings).
"""
pairs = set()
prev_char = word[0]
for char in word[1:]:
pairs.add((prev_char, char))
prev_char = char
return pairs
class Encoder:
def __init__(self, encoder, bpe_merges, errors='replace'):
self.encoder = encoder
self.decoder = {v:k for k,v in self.encoder.items()}
self.errors = errors # how to handle errors in decoding
self.byte_encoder = bytes_to_unicode()
self.byte_decoder = {v:k for k, v in self.byte_encoder.items()}
self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges))))
self.cache = {}
# Should have added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions
self.pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""")
def bpe(self, token):
if token in self.cache:
return self.cache[token]
word = tuple(token)
pairs = get_pairs(word)
if not pairs:
return token
while True:
bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf')))
if bigram not in self.bpe_ranks:
break
first, second = bigram
new_word = []
i = 0
while i < len(word):
try:
j = word.index(first, i)
new_word.extend(word[i:j])
i = j
except ValueError:  # `first` does not occur in the rest of the word
new_word.extend(word[i:])
break
if word[i] == first and i < len(word)-1 and word[i+1] == second:
new_word.append(first+second)
i += 2
else:
new_word.append(word[i])
i += 1
new_word = tuple(new_word)
word = new_word
if len(word) == 1:
break
else:
pairs = get_pairs(word)
word = ' '.join(word)
self.cache[token] = word
return word
def encode(self, text):
bpe_tokens = []
for token in re.findall(self.pat, text):
token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
return bpe_tokens
def decode(self, tokens):
text = ''.join([self.decoder[token] for token in tokens])
text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors=self.errors)
return text
def get_encoder(model_name, models_dir):
with open(os.path.join(models_dir, model_name, 'encoder.json'), 'r') as f:
encoder = json.load(f)
with open(os.path.join(models_dir, model_name, 'vocab.bpe'), 'r', encoding="utf-8") as f:
bpe_data = f.read()
bpe_merges = [tuple(merge_str.split()) for merge_str in bpe_data.split('\n')[1:-1]]
return Encoder(
encoder=encoder,
bpe_merges=bpe_merges,
)
```
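As a small sanity check, here is the `get_pairs` helper from the cell above applied to a tiny word (restated so the snippet is self-contained):

```python
def get_pairs(word):
    """Return the set of adjacent symbol pairs in a word
    (symbols are variable-length strings)."""
    pairs = set()
    prev_char = word[0]
    for char in word[1:]:
        pairs.add((prev_char, char))
        prev_char = char
    return pairs

print(sorted(get_pairs(("l", "o", "w", "e", "r"))))
# [('e', 'r'), ('l', 'o'), ('o', 'w'), ('w', 'e')]
```

These adjacent pairs are exactly the merge candidates that `bpe` ranks against `bpe_ranks` on every iteration.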
# Model
```
class HParams():
n_vocab=50257
n_ctx=1024
n_embd=768
n_head=12
n_layer=12
def __init__(self, n_vocab, n_ctx, n_embd, n_head, n_layer):
self.n_vocab = n_vocab
self.n_ctx = n_ctx
self.n_embd = n_embd
self.n_head = n_head
self.n_layer = n_layer
def default_hparams():
return HParams(
n_vocab=50257,
n_ctx=1024,
n_embd=768,
n_head=12,
n_layer=12,
)
def shape_list(x):
"""Deal with dynamic shape in tensorflow cleanly."""
static = x.shape.as_list()
dynamic = tf.shape(input=x)
return [dynamic[i] if s is None else s for i, s in enumerate(static)]
def gelu(x):
return 0.5 * x * (1 + tf.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * tf.pow(x, 3))))
def norm(x, scope, *, axis=-1, epsilon=1e-5):
"""Normalize to mean = 0, std = 1, then do a diagonal affine transform."""
with tf.compat.v1.variable_scope(scope):
n_state = x.shape[-1]
g = tf.compat.v1.get_variable('g', [n_state], initializer=tf.compat.v1.constant_initializer(1), use_resource=False)
b = tf.compat.v1.get_variable('b', [n_state], initializer=tf.compat.v1.constant_initializer(0), use_resource=False)
u = tf.reduce_mean(input_tensor=x, axis=axis, keepdims=True)
s = tf.reduce_mean(input_tensor=tf.square(x-u), axis=axis, keepdims=True)
x = (x - u) * tf.math.rsqrt(s + epsilon)
x = x*g + b
return x
def split_states(x, n):
"""Reshape the last dimension of x into [n, x.shape[-1]/n]."""
*start, m = shape_list(x)
return tf.reshape(x, start + [n, m//n])
def merge_states(x):
"""Smash the last two dimensions of x into a single dimension."""
*start, a, b = shape_list(x)
return tf.reshape(x, start + [a*b])
def conv1d(x, scope, nf, *, w_init_stdev=0.02):
with tf.compat.v1.variable_scope(scope):
*start, nx = shape_list(x)
w = tf.compat.v1.get_variable('w', [1, nx, nf], initializer=tf.compat.v1.random_normal_initializer(stddev=w_init_stdev), use_resource=False)
b = tf.compat.v1.get_variable('b', [nf], initializer=tf.compat.v1.constant_initializer(0), use_resource=False)
c = tf.reshape(tf.matmul(tf.reshape(x, [-1, nx]), tf.reshape(w, [-1, nf]))+b, start+[nf])
return c
def attention_mask(nd, ns, *, dtype):
"""1's in the lower triangle, counting from the lower right corner.
Same as tf.matrix_band_part(tf.ones([nd, ns]), -1, ns-nd), but doesn't produce garbage on TPUs.
"""
i = tf.range(nd)[:,None]
j = tf.range(ns)
m = i >= j - ns + nd
return tf.cast(m, dtype)
def attn(x, scope, n_state, *, past, hparams):
assert x.shape.ndims == 3 # Should be [batch, sequence, features]
assert n_state % hparams.n_head == 0
if past is not None:
assert past.shape.ndims == 5 # Should be [batch, 2, heads, sequence, features], where 2 is [k, v]
def split_heads(x):
# From [batch, sequence, features] to [batch, heads, sequence, features]
return tf.transpose(a=split_states(x, hparams.n_head), perm=[0, 2, 1, 3])
def merge_heads(x):
# Reverse of split_heads
return merge_states(tf.transpose(a=x, perm=[0, 2, 1, 3]))
def mask_attn_weights(w):
# w has shape [batch, heads, dst_sequence, src_sequence], where information flows from src to dst.
_, _, nd, ns = shape_list(w)
b = attention_mask(nd, ns, dtype=w.dtype)
b = tf.reshape(b, [1, 1, nd, ns])
w = w*b - tf.cast(1e10, w.dtype)*(1-b)
return w
def multihead_attn(q, k, v):
# q, k, v have shape [batch, heads, sequence, features]
w = tf.matmul(q, k, transpose_b=True)
w = w * tf.math.rsqrt(tf.cast(v.shape[-1], w.dtype))
w = mask_attn_weights(w)
w = tf.nn.softmax(w, axis=-1)
a = tf.matmul(w, v)
return a
with tf.compat.v1.variable_scope(scope):
c = conv1d(x, 'c_attn', n_state*3)
q, k, v = map(split_heads, tf.split(c, 3, axis=2))
present = tf.stack([k, v], axis=1)
if past is not None:
pk, pv = tf.unstack(past, axis=1)
k = tf.concat([pk, k], axis=-2)
v = tf.concat([pv, v], axis=-2)
a = multihead_attn(q, k, v)
a = merge_heads(a)
a = conv1d(a, 'c_proj', n_state)
return a, present
def mlp(x, scope, n_state, *, hparams):
with tf.compat.v1.variable_scope(scope):
nx = x.shape[-1]
h = gelu(conv1d(x, 'c_fc', n_state))
h2 = conv1d(h, 'c_proj', nx)
return h2
def block(x, scope, *, past, hparams):
with tf.compat.v1.variable_scope(scope):
nx = x.shape[-1]
a, present = attn(norm(x, 'ln_1'), 'attn', nx, past=past, hparams=hparams)
x = x + a
m = mlp(norm(x, 'ln_2'), 'mlp', nx*4, hparams=hparams)
x = x + m
return x, present
def past_shape(*, hparams, batch_size=None, sequence=None):
return [batch_size, hparams.n_layer, 2, hparams.n_head, sequence, hparams.n_embd // hparams.n_head]
def expand_tile(value, size):
"""Add a new axis of given size."""
value = tf.convert_to_tensor(value=value, name='value')
ndims = value.shape.ndims
return tf.tile(tf.expand_dims(value, axis=0), [size] + [1]*ndims)
def positions_for(tokens, past_length):
batch_size = tf.shape(input=tokens)[0]
nsteps = tf.shape(input=tokens)[1]
return expand_tile(past_length + tf.range(nsteps), batch_size)
def model(hparams, X, past=None, scope='model', reuse=tf.compat.v1.AUTO_REUSE):
with tf.compat.v1.variable_scope(scope, reuse=reuse):
results = {}
batch, sequence = shape_list(X)
wpe = tf.compat.v1.get_variable('wpe', [hparams.n_ctx, hparams.n_embd],
initializer=tf.compat.v1.random_normal_initializer(stddev=0.01), use_resource=False)
wte = tf.compat.v1.get_variable('wte', [hparams.n_vocab, hparams.n_embd],
initializer=tf.compat.v1.random_normal_initializer(stddev=0.02), use_resource=False)
past_length = 0 if past is None else tf.shape(input=past)[-2]
h = tf.gather(wte, X) + tf.gather(wpe, positions_for(X, past_length))
# print(h.shape)
# Transformer
presents = []
pasts = tf.unstack(past, axis=1) if past is not None else [None] * hparams.n_layer
assert len(pasts) == hparams.n_layer
for layer, past in enumerate(pasts):
h, present = block(h, 'h%d' % layer, past=past, hparams=hparams)
presents.append(present)
results['present'] = tf.stack(presents, axis=1)
h = norm(h, 'ln_f')
results['hidden_state'] = h
# Language model loss. Do tokens <n predict token n?
h_flat = tf.reshape(h, [batch*sequence, hparams.n_embd])
logits = tf.matmul(h_flat, wte, transpose_b=True)
logits = tf.reshape(logits, [batch, sequence, hparams.n_vocab])
results['logits'] = logits
return results
```
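The docstring of `attention_mask` ("1's in the lower triangle, counting from the lower right corner") can be checked with a NumPy equivalent of the same comparison; a sketch for a decoding step where 2 new tokens attend over 4 total positions (nd=2, ns=4):

```python
import numpy as np

def np_attention_mask(nd, ns):
    # Mirror of the tf.range comparison in attention_mask above.
    i = np.arange(nd)[:, None]
    j = np.arange(ns)
    return (i >= j - ns + nd).astype(np.float32)

print(np_attention_mask(2, 4))
# Row 0 (first new token) may attend to the 3 oldest positions, row 1 to all 4:
# [[1. 1. 1. 0.]
#  [1. 1. 1. 1.]]
```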
# Sample from Model
```
def top_k_logits(logits, k):
if k == 0:
# no truncation
return logits
def _top_k():
values, _ = tf.nn.top_k(logits, k=k)
min_values = values[:, -1, tf.newaxis]
return tf.compat.v1.where(
logits < min_values,
tf.ones_like(logits, dtype=logits.dtype) * -1e10,
logits,
)
return tf.cond(
pred=tf.equal(k, 0),
true_fn=lambda: logits,
false_fn=lambda: _top_k(),
)
def sample_sequence(*, hparams, length, start_token=None, batch_size=None, context=None, past=None, temperature=1, top_k=0):
if start_token is None:
assert context is not None, 'Specify exactly one of start_token and context!'
else:
assert context is None, 'Specify exactly one of start_token and context!'
context = tf.fill([batch_size, 1], start_token)
def step(hparams, tokens, past=None):
lm_output = model(hparams=hparams, X=tokens, past=past, reuse=tf.compat.v1.AUTO_REUSE)
logits = lm_output['logits'][:, :, :hparams.n_vocab]
presents = lm_output['present']
presents.set_shape(past_shape(hparams=hparams, batch_size=batch_size))
return {
'logits': logits,
'presents': presents,
'hidden_state': lm_output['hidden_state']
}
def body(past, prev, output, embedding):
next_outputs = step(hparams, prev, past=past)
logits = next_outputs['logits'][:, -1, :] / tf.cast(temperature, dtype=tf.float32)
logits = top_k_logits(logits, k=top_k)
samples = tf.random.categorical(logits=logits, num_samples=1, dtype=tf.int32)
return [
next_outputs['presents'] if past is None else tf.concat([past, next_outputs['presents']], axis=-2),
samples,
tf.concat([output, samples], axis=1),
next_outputs['hidden_state']
]
past, prev, output, h = body(past, context, context, context)
def cond(*args):
return True
return output, past, h
```
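The top-k truncation performed by `top_k_logits` can be mirrored in NumPy for intuition: everything below the k-th largest logit in each row is pushed down to -1e10, so softmax effectively assigns it zero probability. The example values below are made up:

```python
import numpy as np

def np_top_k_logits(logits, k):
    # Keep the k largest logits per row; replace the rest with -1e10.
    if k == 0:
        return logits  # no truncation
    kth_largest = np.sort(logits, axis=-1)[:, -k][:, None]
    return np.where(logits < kth_largest, -1e10, logits)

logits = np.array([[1.0, 3.0, 2.0, 0.5]])
out = np_top_k_logits(logits, 2)
# 3.0 and 2.0 survive; 1.0 and 0.5 are replaced by -1e10
```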
# Embedding Methods
```
import math
class Embedder:
def __init__(self, chkpt_path, chunk_size):
tf.compat.v1.disable_eager_execution()
self.g = tf.Graph()
with self.g.as_default():
self.context = tf.compat.v1.placeholder(tf.int32, [1, None])
self.sess = tf.compat.v1.Session(graph=self.g)
self.MAX_CHUNK = chunk_size
self.enc = get_encoder("117M", "models")
hparams = default_hparams()
with self.g.as_default():
self.output, self.past, self.hidden_state = sample_sequence(
hparams=hparams, length=None,
context=self.context,
past=None,
batch_size=1,
temperature=1, top_k=1
)
if chkpt_path is not None:
self.restore(chkpt_path)
def restore(self, chkpt_path):
with self.g.as_default():
saver = tf.compat.v1.train.Saver()
chkpt = tf.train.latest_checkpoint(chkpt_path)
saver.restore(self.sess, chkpt)
def __call__(self, method):
with self.g.as_default():
p = None
for i in range(math.ceil(len(method) / self.MAX_CHUNK)):
chunk = method[i * self.MAX_CHUNK : (i + 1) * self.MAX_CHUNK]
context_tokens = self.enc.encode(chunk)
if p is None:
out, p, h = self.sess.run([self.output, self.past, self.hidden_state], feed_dict={
self.context: [context_tokens]
}, options = tf.compat.v1.RunOptions(report_tensor_allocations_upon_oom = True))
else:
out, p, h = self.sess.run([self.output, self.past, self.hidden_state], feed_dict={
self.context: [context_tokens],
self.past: p
}, options = tf.compat.v1.RunOptions(report_tensor_allocations_upon_oom = True))
return h
emb = Embedder("/tf/src/data/gpt-2/checkpoint/run3", 1024)
with open("/tf/src/data/methods/DATA00M_[god-r]/after.java.~186835~", 'r') as fp:
method = fp.read()
emb(method).shape
```
# Assignment 2.2 - Introduction to PyTorch
For this assignment you will need to install PyTorch 1.0:
https://pytorch.org/get-started/locally/
In this assignment we will get acquainted with the core components of PyTorch and train a few small models.<br>
We won't need a GPU yet.
Key links:
https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html
https://pytorch.org/docs/stable/nn.html
https://pytorch.org/docs/stable/torchvision/index.html
```
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler, Sampler
from torchvision import transforms
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
```
## As always, we start by loading the data
PyTorch supports loading SVHN out of the box.
```
# First, lets load the dataset
data_train = dset.SVHN('./data/', split='train',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN('./data/', split='test',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
```
Now we split the data into training and validation sets using the `SubsetRandomSampler` and `DataLoader` classes.
`DataLoader` fetches the data provided by a `Dataset` during training and groups it into batches.
It lets you specify a `Sampler` that chooses which samples from the dataset to use for training. We use this to split the data into training and validation parts.
More details: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
```
batch_size = 64
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
```
In our task the inputs are images, but we treat them as one-dimensional arrays. To turn a multi-dimensional array into a one-dimensional one, we use a very simple helper module, `Flattener`.
```
sample, label = data_train[0]
print("SVHN data sample shape: ", sample.shape)
# As you can see, the data is shaped like an image
# We'll use a special helper module to shape it into a tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
```
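In NumPy terms, `Flattener` is just a batch-preserving reshape; a sketch on a fake batch of two 3x32x32 images:

```python
import numpy as np

batch = np.zeros((2, 3, 32, 32))          # (batch, channels, height, width)
flat = batch.reshape(batch.shape[0], -1)  # keep the batch axis, merge the rest
print(flat.shape)  # (2, 3072)
```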
Finally, we create the core PyTorch objects:
- `nn_model` - the neural network model itself
- `loss` - the loss function, in our case `CrossEntropyLoss`
- `optimizer` - the optimization algorithm, in our case plain `SGD`
```
nn_model = nn.Sequential(
Flattener(),
nn.Linear(3*32*32, 100),
nn.ReLU(inplace=True),
nn.Linear(100, 10),
)
nn_model.type(torch.FloatTensor)
# We will minimize cross-entropy between the ground truth and
# network predictions using an SGD optimizer
loss = nn.CrossEntropyLoss().type(torch.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-2, weight_decay=1e-1)
```
## Training!
The `train_model` function below implements the main PyTorch training loop.
Every epoch it calls `compute_accuracy`, which computes accuracy on the validation set; implementing that function is left to you.
```
# This is how to implement the same main train loop in PyTorch. Pretty easy, right?
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
model.to(device)
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x = x.to(device)
y = y.to(device)
prediction = model(x)
loss_value = loss(prediction, y)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y)
total_samples += y.shape[0]
            loss_accum += loss_value.item()  # .item() detaches the scalar so the graph isn't kept alive
ave_loss = loss_accum / (i_step + 1)
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
trueAnswerCounter = 0
totalAnswerCounter = 0
with torch.no_grad():
for i_step, (x, y) in enumerate(loader):
x = x.to(device)
y = y.to(device)
            prediction = torch.argmax(model(x), 1)
            trueAnswerCounter += (prediction == y).sum().item()
            totalAnswerCounter += len(prediction)
    return float(trueAnswerCounter) / totalAnswerCounter
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 3)
```
## After the main loop
Let's look at other features and optimizations that PyTorch provides.
Add another hidden layer of 100 neurons to the model.
```
# Since it's so easy to add layers, let's add some!
# TODO: Implement a model with 2 hidden layers of the size 100
nn_model = nn.Sequential(
Flattener(),
nn.Linear(3*32*32, 100),
nn.ReLU(inplace=True),
nn.Linear(100, 100),
nn.ReLU(inplace=True),
nn.Linear(100, 10),
)
nn_model.type(torch.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-2, weight_decay=1e-1)
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
```
Add a Batch Normalization layer.
```
# We heard batch normalization is powerful, let's use it!
# TODO: Add batch normalization after each of the hidden layers of the network, before or after non-linearity
# Hint: check out torch.nn.BatchNorm1d
nn_model = nn.Sequential(
Flattener(),
nn.Linear(3*32*32, 100),
nn.BatchNorm1d(100),
nn.ReLU(inplace=True),
nn.Linear(100, 100),
nn.BatchNorm1d(100),
nn.ReLU(inplace=True),
nn.Linear(100, 10),
nn.BatchNorm1d(10),
)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-3, weight_decay=1e-1)
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
def my_train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler):
loss_history = []
train_history = []
val_history = []
model.to(device)
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x = x.to(device)
y = y.to(device)
prediction = model(x)
loss_value = loss(prediction, y)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y)
total_samples += y.shape[0]
            loss_accum += loss_value.item()  # .item() detaches the scalar so the graph isn't kept alive
ave_loss = loss_accum / (i_step + 1)
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
scheduler.step(ave_loss)
del loss_accum,correct_samples, total_samples
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
```
Add learning rate decay over the course of training.
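The schedule requested in the comment below halves the learning rate every two epochs; as a plain-Python sketch of that step decay (the cell itself instead uses `ReduceLROnPlateau`, which reacts to a plateauing loss rather than the epoch count):

```python
def stepped_lr(base_lr, epoch, drop=0.5, epochs_per_drop=2):
    # Multiply the base rate by `drop` once every `epochs_per_drop` epochs.
    return base_lr * drop ** (epoch // epochs_per_drop)

print([stepped_lr(1e-3, e) for e in range(6)])
# epochs 0-1: 1e-3, epochs 2-3: 5e-4, epochs 4-5: 2.5e-4
```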
```
# Learning rate annealing
# Reduce your learning rate 2x every 2 epochs
# Hint: look up learning rate schedulers in PyTorch. You might need to extend train_model function a little bit too!
nn_model = nn.Sequential(
Flattener(),
nn.Linear(3*32*32, 100),
nn.BatchNorm1d(100),
nn.ReLU(inplace=True),
nn.Linear(100, 10),
nn.BatchNorm1d(10)
)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-3, weight_decay=1e-1)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=3, verbose=True)
loss_history, train_history, val_history = my_train_model(nn_model, train_loader, val_loader, loss, optimizer, 5, scheduler)
```
# Visualizing the model's mistakes
Let's look at the images our model gets wrong.
To do that, we obtain all model predictions on the validation set and compare them with the true labels (ground truth).
The first part is to write PyTorch code that computes all model predictions on the validation set.
To help with this, we provide `SubsetSampler`, which simply walks over the given indices sequentially and groups them into batches.
Implement the `evaluate_model` function, which runs the model over every sample of the validation set and records the model's predictions and the true labels.
```
class SubsetSampler(Sampler):
r"""Samples elements with given indices sequentially
Arguments:
indices (ndarray): indices of the samples to take
"""
def __init__(self, indices):
self.indices = indices
def __iter__(self):
return (self.indices[i] for i in range(len(self.indices)))
def __len__(self):
return len(self.indices)
def evaluate_model(model, dataset, indices):
"""
Computes predictions and ground truth labels for the indices of the dataset
Returns:
predictions: np array of ints - model predictions
        ground_truth: np array of ints - actual labels of the dataset
"""
model.eval() # Evaluation mode
model.to(device)
val_set = torch.utils.data.DataLoader(dataset, batch_size=len(indices),sampler=SubsetSampler(indices))
# TODO: Evaluate model on the list of indices and capture predictions
# and ground truth labels
# Hint: SubsetSampler above could be useful!
with torch.no_grad():
for i_step, (x, y) in enumerate(val_set):
x = x.to(device)
y = y.to(device)
predictions = torch.argmax(model(x) , 1)
ground_truth = y
return np.array(predictions.cpu()), np.array(ground_truth.cpu())
# Evaluate model on validation
predictions, gt = evaluate_model(nn_model, data_train, val_indices)
assert len(predictions) == len(val_indices)
assert len(gt) == len(val_indices)
assert gt[100] == data_train[val_indices[100]][1]
assert np.any(np.not_equal(gt, predictions))
```
## Confusion matrix
The first part of the visualization is to plot a confusion matrix (https://en.wikipedia.org/wiki/Confusion_matrix).
A confusion matrix is a matrix where each row corresponds to a predicted class and each column to a ground-truth class. The entry at coordinates `i,j` is the number of samples of class `j` that the model classified as class `i`.

To make your job easier, the `visualize_confusion_matrix` function below, which plots such a matrix, is already implemented.
It remains for you to implement `build_confusion_matrix`, which computes it.
The result should be a 10x10 matrix.
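The counting rule can be checked on a toy 3-class case (the class indices below are made up for illustration):

```python
import numpy as np

predictions = np.array([0, 0, 2, 1])
ground_truth = np.array([0, 1, 2, 1])

cm = np.zeros((3, 3), int)
for p, t in zip(predictions, ground_truth):
    cm[p][t] += 1  # row = predicted class, column = true class

print(cm)
# [[1 1 0]   one 0 predicted correctly, one 1 mispredicted as 0
#  [0 1 0]   one 1 predicted correctly
#  [0 0 1]]  one 2 predicted correctly
```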
```
def visualize_confusion_matrix(confusion_matrix):
"""
Visualizes confusion matrix
confusion_matrix: np array of ints, x axis - predicted class, y axis - actual class
[i][j] should have the count of samples that were predicted to be class i,
but have j in the ground truth
"""
# Adapted from
# https://stackoverflow.com/questions/2897826/confusion-matrix-with-number-of-classified-misclassified-instances-on-it-python
assert confusion_matrix.shape[0] == confusion_matrix.shape[1]
size = confusion_matrix.shape[0]
fig = plt.figure(figsize=(10,10))
plt.title("Confusion matrix")
plt.ylabel("predicted")
plt.xlabel("ground truth")
res = plt.imshow(confusion_matrix, cmap='GnBu', interpolation='nearest')
cb = fig.colorbar(res)
plt.xticks(np.arange(size))
plt.yticks(np.arange(size))
for i, row in enumerate(confusion_matrix):
for j, count in enumerate(row):
plt.text(j, i, count, fontsize=14, horizontalalignment='center', verticalalignment='center')
def build_confusion_matrix(predictions, ground_truth):
"""
Builds confusion matrix from predictions and ground truth
predictions: np array of ints, model predictions for all validation samples
ground_truth: np array of ints, ground truth for all validation samples
Returns:
np array of ints, (10,10), counts of samples for predicted/ground_truth classes
"""
    confusion_matrix = np.zeros((10, 10), int)
    for i in range(predictions.shape[0]):
        confusion_matrix[predictions[i]][ground_truth[i]] += 1
    return confusion_matrix
confusion_matrix = build_confusion_matrix(predictions, gt)
visualize_confusion_matrix(confusion_matrix)
```
Finally, let's look at the images corresponding to some entries of this matrix.
As before, you are given a `visualize_images` function to use when implementing `visualize_predicted_actual`. That function should display several examples corresponding to a given entry of the matrix.
Visualize the most frequent mistakes and try to understand why the model makes them.
```
data_train_images = dset.SVHN('./data/', split='train')
def visualize_images(indices, data, title='', max_num=10):
"""
Visualizes several images from the dataset
indices: array of indices to visualize
data: torch Dataset with the images
title: string, title of the plot
max_num: int, max number of images to display
"""
to_show = min(len(indices), max_num)
fig = plt.figure(figsize=(10,1.5))
fig.suptitle(title)
for i, index in enumerate(indices[:to_show]):
plt.subplot(1,to_show, i+1)
plt.axis('off')
sample = data[index][0]
plt.imshow(sample)
def visualize_predicted_actual(predicted_class, gt_class, predictions, ground_truth, val_indices, data):
    """
    Visualizes images of a ground truth class which were predicted as another class
    predicted_class: int 0-9, index of the predicted class
    gt_class: int 0-9, index of the ground truth class
    predictions: np array of ints, model predictions for all validation samples
    ground_truth: np array of ints, ground truth for all validation samples
    val_indices: np array of ints, indices of validation samples
    """
    # TODO: Implement visualization using visualize_images above
    # predictions and ground_truth are provided for validation set only, defined by val_indices
    # Hint: numpy index arrays might be helpful
    # https://docs.scipy.org/doc/numpy/user/basics.indexing.html#index-arrays
    # Please make the title meaningful!
    indices = np.where((predictions == predicted_class) & (ground_truth == gt_class))
    visualize_images(val_indices[indices], data,
                     title="Predicted %d, actually %d" % (predicted_class, gt_class))
visualize_predicted_actual(6, 8, predictions, gt, np.array(val_indices), data_train_images)
visualize_predicted_actual(1, 7, predictions, gt, np.array(val_indices), data_train_images)
```
# On to the free-form exercises!
Train the model as well as you can - experiment on your own!
Things you should definitely try:
- hyperparameter search using the validation set
- optimizers other than SGD
- changing the number of layers and their sizes
- adding Batch Normalization
But don't stop there!
Test-set accuracy should be brought up to **80%**.
```
nn_model = nn.Sequential(
Flattener(),
nn.Linear(3*32*32, 100),
nn.BatchNorm1d(100),
nn.ReLU(inplace=True),
nn.Linear(100, 100),
nn.BatchNorm1d(100),
nn.ReLU(inplace=True),
nn.Linear(100, 10)
)
optimizer = optim.Adam(nn_model.parameters(), lr=0.8e-3, weight_decay=1e-4)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=3, verbose=True)
loss_history, train_history, val_history = my_train_model(nn_model, train_loader, val_loader, loss, optimizer, 30, scheduler)
# As always, we evaluate on the test set at the end
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
test_accuracy = compute_accuracy(nn_model, test_loader)
print("Test accuracy: %2.4f" % test_accuracy)
```
# Regularization
Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that **overfitting can be a serious problem**, if the training dataset is not big enough. Sure it does well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen!
**You will learn to:** Use regularization in your deep learning models.
Let's first import the packages you are going to use.
```
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
```
**Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head.
<img src="images/field_kiank.png" style="width:600px;height:350px;">
<caption><center> <u> **Figure 1** </u>: **Football field**<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>
They give you the following 2D dataset from France's past 10 games.
```
train_X, train_Y, test_X, test_Y = load_2D_dataset()
```
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
**Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.
**Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well.
You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem.
## 1 - Non-regularized model
You will use the following neural network (already implemented for you below). This model can be used:
- in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use "`lambd`" instead of "`lambda`" because "`lambda`" is a reserved keyword in Python.
- in *dropout mode* -- by setting the `keep_prob` to a value less than one
You will first try the model without any regularization. Then, you will implement:
- *L2 regularization* -- functions: "`compute_cost_with_regularization()`" and "`backward_propagation_with_regularization()`"
- *Dropout* -- functions: "`forward_propagation_with_dropout()`" and "`backward_propagation_with_dropout()`"
In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
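Inverted dropout, which `forward_propagation_with_dropout()` will implement later, can be sketched in NumPy: sample a keep mask, zero out the dropped units, and rescale by `keep_prob` so the expected activation is unchanged. The seed and shapes below are illustrative only:

```python
import numpy as np

np.random.seed(1)
keep_prob = 0.8
A = np.ones((3, 4))                       # stand-in for a layer's activations

D = np.random.rand(*A.shape) < keep_prob  # keep mask: True with prob keep_prob
A_drop = (A * D) / keep_prob              # zero dropped units, rescale the rest

# Kept entries become 1/keep_prob, dropped entries become 0,
# so E[A_drop] equals A in expectation.
```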
```
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
"""
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
```
Let's train the model without any regularization, and observe the accuracy on the train/test sets.
```
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
```
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Let's now look at two techniques to reduce overfitting.
## 2 - L2 Regularization
The standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$
To:
$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$
Let's modify your cost and observe the consequences.
**Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :
```python
np.sum(np.square(Wl))
```
Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
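The penalty term can be checked numerically on tiny made-up weight matrices before implementing the graded function:

```python
import numpy as np

lambd, m = 0.1, 5
W1 = np.array([[1.0, 2.0]])
W2 = np.array([[3.0]])

# (lambda / (2m)) * sum of squared entries over all weight matrices
penalty = (lambd / (2 * m)) * (np.sum(np.square(W1)) + np.sum(np.square(W2)))
# 0.01 * (1 + 4 + 9) = 0.14
```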
```
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
"""
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = (lambd/(2*m)) * (np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3)))
    ### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
```
**Expected Output**:
<table>
<tr>
<td>
**cost**
</td>
<td>
1.78648594516
</td>
</tr>
</table>
Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost.
**Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
```
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
"""
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + lambd/m * W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + lambd/m * W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + lambd/m * W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = "+ str(grads["dW1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("dW3 = "+ str(grads["dW3"]))
```
**Expected Output**:
<table>
<tr>
<td>
**dW1**
</td>
<td>
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
</td>
</tr>
<tr>
<td>
**dW2**
</td>
<td>
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
</td>
</tr>
<tr>
<td>
**dW3**
</td>
<td>
[[-1.77691347 -0.11832879 -0.09397446]]
</td>
</tr>
</table>
Let's now run the model with L2 regularization $(\lambda = 0.7)$. The `model()` function will call:
- `compute_cost_with_regularization` instead of `compute_cost`
- `backward_propagation_with_regularization` instead of `backward_propagation`
```
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
```
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Observations**:
- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.
- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.
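The shrinking effect of $\lambda$ can be checked in a toy, self-contained setting. The sketch below fits a ridge regression (made-up data, not the assignment's network) and shows that the norm of the learned weights falls as $\lambda$ grows — which is what "oversmoothing" with a too-large $\lambda$ looks like in this simple case:

```python
import numpy as np

# Toy ridge regression on made-up data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=50)

def ridge_weights(X, y, lam):
    # Closed-form minimizer of ||Xw - y||^2 + lam * ||w||^2
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Larger lambda -> smaller weight norm (a flatter, "smoother" fit).
norms = [np.linalg.norm(ridge_weights(X, y, lam)) for lam in (0.0, 10.0, 1000.0)]
print(norms)
```

The same trade-off applies to the network: some $\lambda$ reduces variance, but an overly large one shrinks the weights so much that the model underfits (high bias).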
**What is L2-regularization actually doing?**:
L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. Large weights simply become too costly! This leads to a smoother model in which the output changes more slowly as the input changes.
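This pull toward zero can also be seen directly in the update rule. A small numpy check (with made-up values for `lr`, `lambd` and `m`) that adding the $\frac{\lambda}{m} W$ gradient term is the same as first shrinking $W$ by a constant factor — hence the name "weight decay":

```python
import numpy as np

np.random.seed(0)
W = np.random.randn(3, 3)          # some weight matrix (illustrative)
dW_data = np.random.randn(3, 3)    # gradient of the cross-entropy part only
lr, lambd, m = 0.1, 0.7, 50        # made-up hyperparameters

# Gradient descent with the extra lambda/m * W regularization term ...
W_reg = W - lr * (dW_data + (lambd / m) * W)
# ... equals shrinking W by a constant factor first ("weight decay"),
# then taking the ordinary data-gradient step.
W_decay = (1 - lr * lambd / m) * W - lr * dW_data

assert np.allclose(W_reg, W_decay)
```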
<font color='blue'>
**What you should remember** -- the implications of L2-regularization on:
- The cost computation:
- A regularization term is added to the cost
- The backpropagation function:
- There are extra terms in the gradients with respect to weight matrices
- Weights end up smaller ("weight decay"):
- Weights are pushed to smaller values.
## 3 - Dropout
Finally, **dropout** is a widely used regularization technique that is specific to deep learning.
**It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!
<!--
To understand drop-out, consider this conversation with a friend:
- Friend: "Why do you need all these neurons to train your network and classify images?".
- You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more features my model learns!"
- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"
- You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitely possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."
-->
<center>
<video width="620" height="440" src="images/dropout1_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<br>
<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in either the forward or the backward propagation of that iteration. </center></caption>
<center>
<video width="620" height="440" src="images/dropout2_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time.
### 3.1 - Forward propagation with dropout
**Exercise**: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer.
**Instructions**:
You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 0 with probability (`1-keep_prob`) or 1 with probability (`keep_prob`), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 0 (if entry is less than 0.5) or 1 (if entry is more than 0.5) you would do: `X = (X < 0.5)`. Note that 0 and 1 are respectively equivalent to False and True.
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by `keep_prob`. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
```
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
"""
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
"""
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0], A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = D1 < keep_prob # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = A1*D1 # Step 3: shut down some neurons of A1
A1 = A1/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0],A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = (D2 < keep_prob)
# Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = A2*D2 # Step 3: shut down some neurons of A2
A2 = A2/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
```
**Expected Output**:
<table>
<tr>
<td>
**A3**
</td>
<td>
[[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
</td>
</tr>
</table>
### 3.2 - Backward propagation with dropout
**Exercise**: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache.
**Instruction**:
Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:
1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`.
2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).
```
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
"""
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = dA2*D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = dA1*D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = " + str(gradients["dA1"]))
print ("dA2 = " + str(gradients["dA2"]))
```
**Expected Output**:
<table>
<tr>
<td>
**dA1**
</td>
<td>
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
</td>
</tr>
<tr>
<td>
**dA2**
</td>
<td>
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
</td>
</tr>
</table>
Let's now run the model with dropout (`keep_prob = 0.86`). It means at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function `model()` will now call:
- `forward_propagation_with_dropout` instead of `forward_propagation`.
- `backward_propagation_with_dropout` instead of `backward_propagation`.
```
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary.
```
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Note**:
- A **common mistake** when using dropout is to use it both in training and testing. You should use dropout (randomly eliminate nodes) only in training.
- Deep learning frameworks like [tensorflow](https://www.tensorflow.org/api_docs/python/tf/nn/dropout), [PaddlePaddle](http://doc.paddlepaddle.org/release_doc/0.9.0/doc/ui/api/trainer_config_helpers/attrs.html), [keras](https://keras.io/layers/core/#dropout) or [caffe](http://caffe.berkeleyvision.org/tutorial/layers/dropout.html) come with a dropout layer implementation. Don't stress - you will soon learn some of these frameworks.
<font color='blue'>
**What you should remember about dropout:**
- Dropout is a regularization technique.
- You only use dropout during training. Don't use dropout (randomly eliminate nodes) during test time.
- Apply dropout both during forward and backward propagation.
- During training time, divide each dropout layer by keep_prob to keep the same expected value for the activations. For example, if keep_prob is 0.5, then we will on average shut down half the nodes, so the output will be scaled by 0.5 since only the remaining half are contributing to the solution. Dividing by 0.5 is equivalent to multiplying by 2. Hence, the output now has the same expected value. You can check that this works even when keep_prob is other values than 0.5.
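The expected-value claim is easy to check numerically. A minimal numpy sketch (with an illustrative stand-in activation matrix, not the assignment's network):

```python
import numpy as np

np.random.seed(1)
keep_prob = 0.5
A = np.random.rand(1000, 1000)             # stand-in activations (illustrative)

D = np.random.rand(*A.shape) < keep_prob   # keep each unit with prob keep_prob
A_drop = (A * D) / keep_prob               # inverted dropout: mask, then rescale

# The rescaling keeps the mean activation roughly unchanged,
# so no extra scaling is needed at test time.
print(A.mean(), A_drop.mean())
```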
## 4 - Conclusions
**Here are the results of our three models**:
<table>
<tr>
<td>
**model**
</td>
<td>
**train accuracy**
</td>
<td>
**test accuracy**
</td>
</tr>
<td>
3-layer NN without regularization
</td>
<td>
95%
</td>
<td>
91.5%
</td>
<tr>
<td>
3-layer NN with L2-regularization
</td>
<td>
94%
</td>
<td>
93%
</td>
</tr>
<tr>
<td>
3-layer NN with dropout
</td>
<td>
93%
</td>
<td>
95%
</td>
</tr>
</table>
Note that regularization hurts training set performance! This is because it limits the ability of the network to overfit to the training set. But since it ultimately gives better test accuracy, it is helping your system.
Congratulations for finishing this assignment! And also for revolutionizing French football. :-)
<font color='blue'>
**What we want you to remember from this notebook**:
- Regularization will help you reduce overfitting.
- Regularization will drive your weights to lower values.
- L2 regularization and Dropout are two very effective regularization techniques.
# Table of Contents
* [1c. Fixed flux spinodal decomposition on a T shaped domain](#1c.-Fixed-flux-spinodal-decomposition-on-a-T-shaped-domain)
* [Use Binder For Live Examples](#Use-Binder-For-Live-Examples)
* [Define $f_0$](#Define-$f_0$)
* [Define the Equation](#Define-the-Equation)
* [Solve the Equation](#Solve-the-Equation)
* [Run the Example Locally](#Run-the-Example-Locally)
* [Movie of Evolution](#Movie-of-Evolution)
# 1c. Fixed flux spinodal decomposition on a T shaped domain
## Use Binder For Live Examples
[](http://mybinder.org/repo/wd15/fipy-hackathon1)
The free energy is given by,
$$ f_0\left[ c \left( \vec{r} \right) \right] =
- \frac{A}{2} \left(c - c_m\right)^2
+ \frac{B}{4} \left(c - c_m\right)^4
+ \frac{c_{\alpha}}{4} \left(c - c_{\alpha} \right)^4
+ \frac{c_{\beta}}{4} \left(c - c_{\beta} \right)^4 $$
In FiPy we write the evolution equation as
$$ \frac{\partial c}{\partial t} = \nabla \cdot \left[
D \left( c \right) \left( \frac{ \partial^2 f_0 }{ \partial c^2} \nabla c - \kappa \nabla \nabla^2 c \right)
\right] $$
Let's start by calculating $ \frac{ \partial^2 f_0 }{ \partial c^2} $ using sympy. It's easy for this case, but useful in the general case for taking care of difficult bookkeeping in phase field problems.
```
%matplotlib inline
import sympy
import fipy as fp
import numpy as np
A, c, c_m, B, c_alpha, c_beta = sympy.symbols("A c_var c_m B c_alpha c_beta")
f_0 = - A / 2 * (c - c_m)**2 + B / 4 * (c - c_m)**4 + c_alpha / 4 * (c - c_alpha)**4 + c_beta / 4 * (c - c_beta)**4
print(f_0)
sympy.diff(f_0, c, 2)
```
The first step in implementing any problem in FiPy is to define the mesh. For this problem the solution domain is T-shaped, so the mesh is built by adding together two `Grid2D` objects, with the second grid shifted to form the cross-bar of the T. The fixed (zero) flux boundary conditions are FiPy's defaults, so no other boundary conditions are required.
```
mesh = fp.Grid2D(dx=0.5, dy=0.5, nx=40, ny=200) + (fp.Grid2D(dx=0.5, dy=0.5, nx=200, ny=40) + [[-40],[100]])
```
The next step is to define the parameters and create a solution variable.
```
c_alpha = 0.05
c_beta = 0.95
A = 2.0
kappa = 2.0
c_m = (c_alpha + c_beta) / 2.
B = A / (c_alpha - c_m)**2
D = D_alpha = D_beta = 2. / (c_beta - c_alpha)
c_0 = 0.45
q = np.sqrt((2., 3.))
epsilon = 0.01
c_var = fp.CellVariable(mesh=mesh, name=r"$c$", hasOld=True)
```
Now we need to define the initial conditions given by,
Set $c\left(\vec{r}, t\right)$ such that
$$ c\left(\vec{r}, 0\right) = \bar{c}_0 + \epsilon \cos \left( \vec{q} \cdot \vec{r} \right) $$
```
r = np.array((mesh.x, mesh.y))
c_var[:] = c_0 + epsilon * np.cos((q[:, None] * r).sum(0))
viewer = fp.Viewer(c_var)
```
## Define $f_0$
To define the equation with FiPy first define `f_0` in terms of FiPy. Recall `f_0` from above calculated using Sympy. Here we use the string representation and set it equal to `f_0_var` using the `exec` command.
```
out = sympy.diff(f_0, c, 2)
exec("f_0_var = " + repr(out))
#f_0_var = -A + 3*B*(c_var - c_m)**2 + 3*c_alpha*(c_var - c_alpha)**2 + 3*c_beta*(c_var - c_beta)**2
f_0_var
```
## Define the Equation
```
eqn = fp.TransientTerm(coeff=1.) == fp.DiffusionTerm(D * f_0_var) - fp.DiffusionTerm((D, kappa))
eqn
```
## Solve the Equation
To solve the equation a simple time stepping scheme is used which is decreased or increased based on whether the residual decreases or increases. A time step is recalculated if the required tolerance is not reached.
```
elapsed = 0.0
steps = 0
dt = 0.01
total_sweeps = 2
tolerance = 1e-1
total_steps = 10
c_var[:] = c_0 + epsilon * np.cos((q[:, None] * r).sum(0))
c_var.updateOld()
from fipy.solvers.pysparse import LinearLUSolver as Solver
solver = Solver()
while steps < total_steps:
res0 = eqn.sweep(c_var, dt=dt, solver=solver)
for sweeps in range(total_sweeps):
res = eqn.sweep(c_var, dt=dt, solver=solver)
if res < res0 * tolerance:
steps += 1
elapsed += dt
dt *= 1.1
c_var.updateOld()
else:
dt *= 0.8
c_var[:] = c_var.old
viewer.plot()
    print('elapsed_time:', elapsed)
```
## Run the Example Locally
The following cell will dump a file called `fipy_hackathon_1c.py` to the local file system to be run. The images are saved out at each time step.
```
%%writefile fipy_hackathon_1c.py
import fipy as fp
import numpy as np
mesh = fp.Grid2D(dx=0.5, dy=0.5, nx=40, ny=200) + (fp.Grid2D(dx=0.5, dy=0.5, nx=200, ny=40) + [[-40],[100]])
c_alpha = 0.05
c_beta = 0.95
A = 2.0
kappa = 2.0
c_m = (c_alpha + c_beta) / 2.
B = A / (c_alpha - c_m)**2
D = D_alpha = D_beta = 2. / (c_beta - c_alpha)
c_0 = 0.45
q = np.sqrt((2., 3.))
epsilon = 0.01
c_var = fp.CellVariable(mesh=mesh, name=r"$c$", hasOld=True)
r = np.array((mesh.x, mesh.y))
c_var[:] = c_0 + epsilon * np.cos((q[:, None] * r).sum(0))
f_0_var = -A + 3*B*(c_var - c_m)**2 + 3*c_alpha*(c_var - c_alpha)**2 + 3*c_beta*(c_var - c_beta)**2
eqn = fp.TransientTerm(coeff=1.) == fp.DiffusionTerm(D * f_0_var) - fp.DiffusionTerm((D, kappa))
elapsed = 0.0
steps = 0
dt = 0.01
total_sweeps = 2
tolerance = 1e-1
total_steps = 600
c_var[:] = c_0 + epsilon * np.cos((q[:, None] * r).sum(0))
c_var.updateOld()
from fipy.solvers.pysparse import LinearLUSolver as Solver
solver = Solver()
viewer = fp.Viewer(c_var)
while steps < total_steps:
res0 = eqn.sweep(c_var, dt=dt, solver=solver)
for sweeps in range(total_sweeps):
res = eqn.sweep(c_var, dt=dt, solver=solver)
        print(' ')
        print('steps', steps)
        print('res', res)
        print('sweeps', sweeps)
        print('dt', dt)
if res < res0 * tolerance:
steps += 1
elapsed += dt
dt *= 1.1
if steps % 1 == 0:
viewer.plot('image{0}.png'.format(steps))
c_var.updateOld()
else:
dt *= 0.8
c_var[:] = c_var.old
```
## Movie of Evolution
The movie of the evolution for 600 steps.
The movie was generated with the output files of the form `image*.png` using the following commands,
$ rename 's/\d+/sprintf("%05d",$&)/e' image*
$ ffmpeg -f image2 -r 6 -i 'image%05d.png' output.mp4
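If the `rename` utility is not available, the same zero-padding can be done with a short Python helper (assuming the `image<step>.png` names written by the script above):

```python
import os
import re

def zero_pad_frames(directory='.'):
    """Rename image<step>.png to image<00step>.png so that ffmpeg's
    image%05d.png pattern picks the frames up in numeric order."""
    for name in sorted(os.listdir(directory)):
        m = re.fullmatch(r'image(\d+)\.png', name)
        if m:
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, 'image{:05d}.png'.format(int(m.group(1)))))
```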
```
from IPython.display import YouTubeVideo
scale = 1.5
YouTubeVideo('aZk38E7OxcQ', width=420 * scale, height=315 * scale, rel=0)
```
# IMPORTS
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
```
# READ THE DATA
```
data = pd.read_csv('./input/laptops.csv', encoding='latin-1')
data.head(10)
```
# MAIN EDA BLOCK
```
print(f'Data Shape\nRows: {data.shape[0]}\nColumns: {data.shape[1]}')
print('=' * 30)
data.info()
data.describe()
data['Product'] = data['Product'].str.split('(').apply(lambda x: x[0])
data['Cpu_Speed'] = data['Cpu'].str.split(' ').apply(lambda x: x[-1]).str.replace('GHz', '')
data['Cpu_Vender'] = data['Cpu'].str.split(' ').apply(lambda x: x[0])
data['Cpu_Type'] = data['Cpu'].str.split(' ').apply(lambda x: x[1:4] if x[1] in ('Celeron', 'Pentium', 'Xeon') else (x[1:3] if (x[1]=='Core' or x[0]=='AMD') else x[0:1]))
data['Cpu_Type'] = data['Cpu_Type'].apply(lambda x: ' '.join(x))
data['Cpu_Type']
data.head(10)
split_mem = data['Memory'].str.split(' ', 1, expand=True)
data['Storage_Type'] = split_mem[1]
data['Memory'] = split_mem[0]
data['Memory'].unique()
data.head(10)
data['Ram'] = data['Ram'].str.replace('GB', '')
df_mem = data['Memory'].str.split('(\d+)', expand=True)
data['Memory'] = pd.to_numeric(df_mem[1])
data.rename(columns={'Memory': 'Memory (GB or TB)'}, inplace=True)
def mem(x):
    # Convert 1 TB / 2 TB entries to their size in GB; leave plain GB sizes unchanged
    if x == 1:
        return 1024
    elif x == 2:
        return 2048
    return x
data['Memory (GB or TB)'] = data['Memory (GB or TB)'].apply(mem)
data.rename(columns={'Memory (GB or TB)': 'Storage (GB)'}, inplace=True)
data.head(10)
data['Weight'] = data['Weight'].str.replace('kg', '')
data.head(10)
gpu_distr_list = data['Gpu'].str.split(' ')
data['Gpu_Vender'] = data['Gpu'].str.split(' ').apply(lambda x: x[0])
data['Gpu_Type'] = data['Gpu'].str.split(' ').apply(lambda x: x[1:])
data['Gpu_Type'] = data['Gpu_Type'].apply(lambda x: ' '.join(x))
data.head(10)
data['Touchscreen'] = data['ScreenResolution'].apply(lambda x: 1 if 'Touchscreen' in x else 0)
data['Ips'] = data['ScreenResolution'].apply(lambda x: 1 if 'IPS' in x else 0)
def cat_os(op_s):
if op_s =='Windows 10' or op_s == 'Windows 7' or op_s == 'Windows 10 S':
return 'Windows'
elif op_s =='macOS' or op_s == 'Mac OS X':
return 'Mac'
else:
return 'Other/No OS/Linux'
data['OpSys'] = data['OpSys'].apply(cat_os)
data = data.reindex(columns=["Company", "TypeName", "Inches", "Touchscreen",
                             "Ips", "Cpu_Vender", "Cpu_Type", "Ram", "Storage (GB)",
                             "Storage_Type", "Gpu_Vender", "Gpu_Type", "Weight", "OpSys", "Price_euros"])
data.head(10)
data['Ram'] = data['Ram'].astype('int')
data['Storage (GB)'] = data['Storage (GB)'].astype('int')
data['Weight'] = data['Weight'].astype('float')
data.info()
sns.set(rc={'figure.figsize': (9,5)})
data['Company'].value_counts().plot(kind='bar')
sns.barplot(x=data['Company'], y=data['Price_euros'])
data['TypeName'].value_counts().plot(kind='bar')
sns.barplot(x=data['TypeName'], y=data['Price_euros'])
cpu_distr = data['Cpu_Type'].value_counts()[:10].reset_index()
cpu_distr
sns.barplot(x='index', y='Cpu_Type', data=cpu_distr)
gpu_distr = data['Gpu_Type'].value_counts()[:10].reset_index()
gpu_distr
sns.barplot(x='index', y='Gpu_Type', data=gpu_distr)
sns.barplot(x=data['OpSys'], y=data['Price_euros'])
corr_data = data.corr()
corr_data['Price_euros'].sort_values(ascending=False)
sns.heatmap(data.corr(), annot=True)
X = data.drop(columns=['Price_euros'])
y = np.log(data['Price_euros'])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=42)
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import r2_score, mean_absolute_error
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, ExtraTreesRegressor
from xgboost import XGBRegressor
from sklearn.ensemble import VotingRegressor, StackingRegressor
step1 = ColumnTransformer(transformers=[
('col_inf', OneHotEncoder(sparse=False, handle_unknown='ignore'), [0,1,5,6,9,10,11,13])
],remainder='passthrough')
rf = RandomForestRegressor(n_estimators=350, random_state=3, max_samples=0.5, max_features=0.75, max_depth=15)
gbdt = GradientBoostingRegressor(n_estimators=100, max_features=0.5)
xgb = XGBRegressor(n_estimators=25, learning_rate=0.3, max_depth=5)
et = ExtraTreesRegressor(n_estimators=100, random_state=3, max_samples=0.5, max_features=0.75, max_depth=10)
step2 = VotingRegressor([('rf', rf), ('gbdt', gbdt), ('xgb', xgb), ('et', et)], weights=[5,1,1,1])
pipe = Pipeline([
('step1', step1),
('step2', step2)])
pipe.fit(X_train, y_train)
y_pred = pipe.predict(X_test)
print('R2 score', r2_score(y_test, y_pred))
print('MAE', mean_absolute_error(y_test, y_pred))
```
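One caveat worth keeping in mind: the target here is `np.log(Price_euros)`, so the printed MAE is in log-units. To report an error in euros, exponentiate back first — a small self-contained sketch with made-up numbers (not the notebook's actual predictions):

```python
import numpy as np

# Made-up log-space targets and predictions, for illustration only.
y_test_log = np.log(np.array([500.0, 1200.0, 2400.0]))
y_pred_log = np.array([6.3, 7.0, 7.9])

mae_log = np.mean(np.abs(y_test_log - y_pred_log))                    # log-unit MAE (what the notebook prints)
mae_eur = np.mean(np.abs(np.exp(y_test_log) - np.exp(y_pred_log)))    # error in euros

print(mae_log, mae_eur)
```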
# <center>SOUND AND VOICE ANALYSIS</center>
**Instructor**: Рыбин Сергей Витальевич
**Group**: 6304
**Student**: Белоусов Евгений Олегович
## <center>Classification of Acoustic Noises</center>
*Required result: unknown*
```
import os
import IPython
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import librosa
import librosa.display
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from tqdm.notebook import tqdm
from tensorflow.keras import losses, models, optimizers
from tensorflow.keras.activations import relu, softmax
from tensorflow.keras.callbacks import (EarlyStopping, ModelCheckpoint, TensorBoard)
from tensorflow.keras.layers import (Input, Dense, Convolution2D, BatchNormalization,
Flatten, MaxPool2D, Activation)
from tensorflow.keras.utils import Sequence
from tensorflow.keras import backend as K
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold, train_test_split
from sklearn.metrics import classification_report, confusion_matrix
# from google.colab import drive
# drive.mount('/content/drive')
# Manual part of the work: the directory with the audio files, the set of class labels, and the filename prefix variants to account for
predictions = "predictions"
directory = "./content/drive/MyDrive/Training"
labels = ["background",
"bags",
"door",
"keyboard",
"knocking_door",
"ring",
"speech",
"tool"]
num_classes = len(labels)
filename_search = {"background": ["background_"],
"bags": ["bags_", "bg_", "t_bags_"],
"door": ["door_", "d_", "t_door_"],
"keyboard": ["keyboard_", "t_keyboard_", "k_"],
"knocking_door": ["knocking_door_", "tt_kd_", "t_knocking_door_"],
"ring": ["ring_", "t_ring_"],
"speech": ["speech_"],
"tool": ["tool_"]}
# Configuration parameters for the neural network model
class Config(object):
def __init__(self,
sampling_rate=16000, audio_duration=7, n_classes=10, use_mfcc=True,
n_mfcc=20, n_folds=10, n_features=100, learning_rate=0.0001, max_epochs=50):
self.sampling_rate = sampling_rate
self.audio_duration = audio_duration
self.n_classes = n_classes
self.use_mfcc = use_mfcc
self.n_mfcc = n_mfcc
self.n_folds = n_folds
self.learning_rate = learning_rate
self.max_epochs = max_epochs
self.n_features = n_features
self.audio_length = self.sampling_rate * self.audio_duration
if self.use_mfcc:
self.dim = (self.n_mfcc, 1 + int(np.floor(self.audio_length / 512)), 1)
else:
self.dim = (self.audio_length, 1)
# Extract the noise class label from the audio file name
def get_label_from_filename(filename):
for key, value in filename_search.items():
for val in value:
if (filename.find(val) == 0):
return key
# Prepare the dataframe
def prepare_dataframe(directory):
files = ([f.path for f in os.scandir(directory) if f.is_file()])
    # Create the dataframe with the schema given in the task statement
df = pd.DataFrame(columns=["filename", "label"])
    # Iterate over all audio files in the set
for path in tqdm(files[:]):
filename = os.path.splitext(os.path.basename(path).strip())[0]
label = get_label_from_filename(filename)
        # Append the processed audio file to the dataframe
row = pd.Series([filename, label], index = df.columns)
df = df.append(row, ignore_index=True)
return df
# Extract features from the set of audio files
def prepare_data(config, directory, df):
X = np.empty(shape=(df.shape[0], config.dim[0], config.dim[1], 1))
files = ([f.path for f in os.scandir(directory) if f.is_file()])
    # Set the target audio duration
input_length = config.audio_length
i = 0
    # Iterate over all audio files in the set
for path in tqdm(files[:]):
filename = os.path.splitext(os.path.basename(path).strip())[0]
data, sr = librosa.load(path, sr=config.sampling_rate)
        # Trim/pad the audio to the duration specified in the config
if len(data) > input_length:
max_offset = len(data) - input_length
offset = np.random.randint(max_offset)
data = data[offset:(input_length+offset)]
else:
if input_length > len(data):
max_offset = input_length - len(data)
offset = np.random.randint(max_offset)
else:
offset = 0
data = np.pad(data, (offset, input_length - len(data) - offset), "constant")
        # Extract MFCC features with librosa
        data = librosa.feature.mfcc(y=data, sr=config.sampling_rate, n_mfcc=config.n_mfcc)
data = np.expand_dims(data, axis=-1)
X[i,] = data
i = i + 1
return X
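# A minimal self-contained sketch of the trim/pad step used in prepare_data
# above (names here are illustrative): long clips get a random crop, short
# clips get random zero-padding, so every clip ends up exactly input_length.
import numpy as np
def fit_length(data, input_length):
    if len(data) > input_length:
        offset = np.random.randint(len(data) - input_length)
        return data[offset:offset + input_length]
    if input_length > len(data):
        offset = np.random.randint(input_length - len(data))
    else:
        offset = 0
    return np.pad(data, (offset, input_length - len(data) - offset), "constant")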
# Convolutional neural network model
def get_2d_conv_model(config):
num_classes = config.n_classes
inp = Input(shape=(config.dim[0], config.dim[1], 1))
x = Convolution2D(32, (4,10), padding="same")(inp)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = MaxPool2D()(x)
x = Convolution2D(32, (4,10), padding="same")(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = MaxPool2D()(x)
x = Convolution2D(32, (4,10), padding="same")(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = MaxPool2D()(x)
x = Flatten()(x)
x = Dense(64)(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
    out = Dense(num_classes, activation="softmax")(x)
model = models.Model(inputs=inp, outputs=out)
opt = optimizers.Adam(config.learning_rate)
model.compile(optimizer=opt, loss=losses.SparseCategoricalCrossentropy(), metrics=['acc'])
return model
# Classification confusion matrix
def plot_confusion_matrix(predictions, y):
max_test = y
max_predictions = np.argmax(predictions, axis=1)
matrix = confusion_matrix(max_test, max_predictions)
plt.figure(figsize=(12, 8))
sns.heatmap(matrix, xticklabels=labels, yticklabels=labels, annot=True,
linewidths = 0.1, fmt="d", cmap = 'YlGnBu');
    plt.title("Confusion matrix", fontsize = 15)
    plt.ylabel("True class")
    plt.xlabel("Predicted class")
plt.show()
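# A tiny hand-computed 2-class example of what confusion_matrix returns
# (rows are true classes, columns are predicted classes); toy labels only.
import numpy as np
toy_true = np.array([0, 0, 1, 1])
toy_pred = np.array([0, 1, 1, 1])
toy_matrix = np.zeros((2, 2), dtype=int)
for t, p in zip(toy_true, toy_pred):
    toy_matrix[t, p] += 1
# toy_matrix is [[1, 1], [0, 2]]: one class-0 sample was misclassified as 1.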
# Build the dataframe
df = prepare_dataframe(directory)
df.head()
# Pickle the dataframe to save time later
df.to_pickle("./content/drive/MyDrive/SVA_lab_2_dataframe.pkl")
# Load the previously pickled dataframe
df = pd.read_pickle("./content/drive/MyDrive/SVA_lab_2_dataframe.pkl")
# Count the number of recordings in each class
df["label"].value_counts()
# Encode the class labels as integers
encode = LabelEncoder()
encoded_labels = encode.fit_transform(df['label'].to_numpy())
df = df.assign(label=encoded_labels)
df.head()
# Set the configuration parameters
config = Config(n_classes=num_classes, n_folds=10, n_mfcc=20)
X_train = prepare_data(config, directory, df)
print(X_train.shape)
# Normalize the data
mean = np.mean(X_train, axis=0)
std = np.std(X_train, axis=0)
X_train = (X_train - mean)/std
X_train
# EVALUATION ON THE TEST SET
files = ([f.path for f in os.scandir("./content/drive/MyDrive/Test") if f.is_file()])
# Create the dataframe with the schema given in the task
submission = pd.DataFrame(columns=["fname"])
# Iterate over all audio files in the set
for path in tqdm(files[:]):
filename = os.path.splitext(os.path.basename(path).strip())[0]
    # Add the audio file name to the dataframe
    row = pd.DataFrame([[filename]], columns=submission.columns)
    submission = pd.concat([submission, row], ignore_index=True)
submission.head()
X_test = prepare_data(config, "./content/drive/MyDrive/Test", submission)
# Normalize the data
mean = np.mean(X_test, axis=0)
std = np.std(X_test, axis=0)
X_test = (X_test - mean)/std
X_test
# Directory for the saved predictions
predictions_dir = "predictions"
if not os.path.exists(predictions_dir):
    os.mkdir(predictions_dir)
if os.path.exists("./content/drive/MyDrive/" + predictions_dir):
    shutil.rmtree("./content/drive/MyDrive/" + predictions_dir)
# For cross-validation we use StratifiedKFold, a variant of the KFold algorithm that returns
# stratified folds: each fold contains approximately the same percentage of samples
# of each target class as the full set.
skf = StratifiedKFold(n_splits=config.n_folds)
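# A tiny illustration with toy labels (an aside, not part of the pipeline):
# every StratifiedKFold test split keeps the overall class proportions,
# here 2:1 in each of the four folds.
from sklearn.model_selection import StratifiedKFold
import numpy as np
toy_y = np.array([0] * 8 + [1] * 4)
for _, test_idx in StratifiedKFold(n_splits=4).split(np.zeros((12, 1)), toy_y):
    counts = np.bincount(toy_y[test_idx])  # [2, 1] for every fold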
y_train = df["label"].values
y_train = np.stack(y_train[:])
model = get_2d_conv_model(config)
i = 0
for train_split, val_split in skf.split(X_train, y_train):
K.clear_session()
    # Split the available data into training and validation sets
X, y, X_val, y_val = X_train[train_split], y_train[train_split], X_train[val_split], y_train[val_split]
    # Callback functions for the Keras model
    # During training, save the weights of the best model for potential later use
checkpoint = ModelCheckpoint('best_%d.h5'%i, monitor='val_loss', verbose=1, save_best_only=True)
early = EarlyStopping(monitor="val_loss", mode="min", patience=5)
callbacks_list = [checkpoint, early]
print("#"*50)
print("Fold: ", i)
model = get_2d_conv_model(config)
history = model.fit(X, y, validation_data=(X_val, y_val), callbacks=callbacks_list, batch_size=256, epochs=config.max_epochs)
model.load_weights('best_%d.h5'%i)
    # Save the model's predictions on the training data
print("TRAIN PREDICTIONS: ", i)
predictions = model.predict(X_train, batch_size=256)
save_train_preds_path = "./predictions/train_predictions_{:d}.npy".format(i)
np.save(save_train_preds_path, predictions)
plot_confusion_matrix(predictions, y_train)
    # Save the model's predictions on the test data
print("TEST PREDICTIONS: ", i)
predictions = model.predict(X_test, batch_size=256)
save_test_preds_path = "./predictions/test_predictions_{:d}.npy".format(i)
np.save(save_test_preds_path, predictions)
    # # Build the submission file
# top_3 = np.array(labels)[np.argsort(-predictions, axis=1)[:, :3]]
# predicted_labels = [' '.join(list(x)) for x in top_3]
# df_test['label'] = predicted_labels
# save_preds_path = "./predictions/predictions_{:d}.npy".format(i)
# df_test[['label']].to_csv(save_preds_path)
    for j, prob in enumerate(predictions):
        submission.loc[j, 'score'] = prob.max()
        submission.loc[j, 'label'] = int(np.argmax(prob))
    submission['label'] = encode.inverse_transform(np.array(submission['label']).astype(int))
    save_submission_path = "./predictions/submission_{:d}.csv".format(i)
    submission.to_csv(save_submission_path, index=False)
i += 1
```
# COMP5318 - Machine Learning and Data Mining: Assignment 2
<div style="text-align: right"> Group 86 </div>
<div style="text-align: right"> tlin4302 | 470322974 | Jenny Tsai-chen Lin </div>
<div style="text-align: right"> jsun4242 | 500409987 | Jiawei Sun </div>
<div style="text-align: right"> jyan2937 | 480546614 | Jinxuan Yang </div>
## The notebook includes sections:
Section 0. Hardware and software specifications
Section 1. Library and general functions
Section 2. Data pre-processing
Section 3. Implement algorithms
3.1 AdaBoost Classifier
3.2 Convolutional Neural Network Classifier
3.3 Support-Vector-Machine Classifier
Section 4. Compare results between algorithms on the training dataset
Section 5. Best performing algorithm on the testing data (we will submit this in a separate notebook as well)
## CODE RUNNING INSTRUCTIONS:
Instruction:
Simply change the switches in **0.Switches** blocks and run all.
Dataset directory:
Same directory as the jupyter notebook, in the format of :
---[current dir]
|----[This file]
|----[dataset]
|----test
|----train
The default parameters will use the saved model to run simple tests,
and load confusion matrices, accuracies, etc. from disk and plot them
for display purposes.
### Hardware specifications
1. CPU: Intel i7-8700K @ 3.70GHz
2. RAM: 64G DDR4 3000MHz
3. Graphics: NVidia GeForce GTX 1080Ti
4. Chipset: Z370
### Software specifications
```
import os, platform
print('OS name:', os.name, ', system:', platform.system(), ', release:', platform.release())
import sys
print("Anaconda version:")
#!conda list anaconda
print("Python version: ", sys.version)
print("Python version info: ", sys.version_info)
import PIL
from PIL import Image
print("PIL version: ", PIL.__version__)
import matplotlib
import matplotlib.pyplot as plt
print("Matplotlib version: ", matplotlib.__version__)
#import tensorflow as tf
#print("Keras version:", tf.keras.__version__)
import cv2
print("OpenCV version: ", cv2.__version__)
import numpy as np
print("numpy version: ", np.__version__)
```
## Section 0. Switches (Default settings is great for demo purpose)
#### Load saved model or run training?
```
load_saved_model = True
```
#### Run preprocessing benchmark or not?
```
run_preprocessing_benchmark = True
```
#### Run test code for 3 classifiers?
```
run_test_code_for_classifiers = True
```
#### Number of threads when preprocessing images
```
g_thread_num = 6
```
#### Run hyper parameter tuning? (Slow if turned on!)
```
# Caution: Slow if turned on.
do_hyper_parameter_tuning = False
```
#### Run 10-fold cross validation? (Slow if turned on)
```
# Caution: Slow if turned on.
run_ten_fold = False
```
## Section 1. Library and general functions
```
# Go to anaconda prompt to install package imblearn
# anaconda: conda install -c glemaitre imbalanced-learn
#pip install kmeans-smote
from skimage import io, transform
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import cv2
import time
```
### global variables
```
# choose one of the two lines below depending on file location
g_dataset_dir = "./dataset/"
#g_dataset_dir = "../dataset/"
a_random_file = "./dataset/train/1b1B1b2-2pK2q1-4p1rB-7k-8-8-3B4-3rb3.jpeg"
#a_random_file = "../dataset/train/1b1B1b2-2pK2q1-4p1rB-7k-8-8-3B4-3rb3.jpeg"
saved_model_path = "./saved_model/"
abc_model_file = saved_model_path + "abc_dump.pkl"
svc_model_file = saved_model_path + "svc_dump.pkl"
cnn_model_file = saved_model_path + "cnn_weights"
ten_fold_result_path = "./ten_fold_results/"
# define global variable
g_train_dir = g_dataset_dir + "/train/"
g_test_dir = g_dataset_dir + "/test/"
g_image_size = 400
g_grid_row = 8
g_grid_col = 8
g_grid_num = g_grid_row * g_grid_col
g_grid_size = int(g_image_size / g_grid_row)
#Processing 1 - scale down
g_down_sampled_size = 200
g_down_sampled_grid_size = int(g_grid_size / (g_image_size / g_down_sampled_size))
# global instance of mapping of char vs chess pieces
# reference: Forsyth–Edwards Notation, https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation
#
# pawn = "P", knight = "N", bishop = "B", rook = "R", queen = "Q" and king = "K"
# White pieces are designated using upper-case letters ("PNBRQK") while black pieces use lowercase ("pnbrqk")
# we use 0 to note an empty grid.
# 13 items in total.
g_piece_mapping = {
"P" : "pawn",
"N" : "knight",
"B" : "bishop",
"R" : "rook",
"Q" : "queen",
"K" : "king",
"p" : "pawn",
"n" : "knight",
"b" : "bishop",
"r" : "rook",
"q" : "queen",
"k" : "king",
"0" : "empty_grid"
}
g_num_labels = len(g_piece_mapping)
g_labels = ["P",
"N",
"B",
"R",
"Q",
"K",
"p",
"n",
"b",
"r",
"q",
"k",
"0"]
```
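As a quick illustration of the Forsyth–Edwards convention described above, a minimal self-contained sketch (independent of the helper functions defined later) that expands one FEN rank into its 8 per-square labels:

```
# Expand a single FEN rank into per-square labels, using "0" for an empty
# grid as in g_piece_mapping: a digit N stands for N consecutive empty squares.
def expand_fen_rank(rank):
    squares = []
    for ch in rank:
        if ch.isdigit():
            squares.extend(["0"] * int(ch))
        else:
            squares.append(ch)
    return squares

print(expand_fen_rank("2pK2q1"))  # ['0', '0', 'p', 'K', '0', '0', 'q', '0']
```

The rank `"2pK2q1"` is taken from the sample file name above; every expanded rank has exactly 8 entries.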
### Helper codes for label & board
```
#DataHelper.py
import os
import cv2
from skimage import io
import numpy as np
import glob
import h5py
# get clean name by a path, which in our case conveniently gives us the FEN
def GetCleanNameByPath(file_name):
return os.path.splitext(os.path.basename(file_name))[0]
# get full paths to the files in a directory.
def GetFileNamesInDir(path_name, extension="*", num_return = 0):
if num_return == 0:
return glob.glob(path_name + "/*." + extension)
else:
return glob.glob(path_name + "/*." + extension)[:num_return]
# get name list
def GetCleanNamesInDir(path_name, extension = "*", num_return = 0):
names = GetFileNamesInDir(path_name, extension)
offset = len(extension) + 1
clean_names = [os.path.basename(x)[:-offset] for x in names]
if num_return == 0:
return clean_names
else:
return clean_names[:num_return]
# read dataset
def ReadImages(file_names, path = "", format = cv2.IMREAD_COLOR):
if path == "":
return [cv2.imread(f, format) for f in file_names]
else:
return [cv2.imread(path + "/" + f, format) for f in file_names]
# read image by name
def ReadImage(file_name, gray = False):
return io.imread(file_name, as_gray = gray)
# h5py functions
# read h5py file
# we assume the labels and
def ReadH5pyFile(file_name, data_name):
h5_buffer = h5py.File(file_name)
return h5_buffer[data_name].copy()
# write h5py file
def WriteH5pyFile(file_name, mat, data_name = "dataset"):
with h5py.File(file_name, 'w') as f:
f.create_dataset(data_name, data = mat)
#BoardHelper.py
import re
import string
from collections import OrderedDict
import numpy as np
import skimage.util
from skimage.util.shape import view_as_blocks
#from ChessGlobalDefs import *
#FEN TO LABELS OF SQUARES
def FENtoL(fen):
rules = {
r"-": r"",
r"1": r"0",
r"2": r"00",
r"3": r"000",
r"4": r"0000",
r"5": r"00000",
r"6": r"000000",
r"7": r"0000000",
r"8": r"00000000",
}
for key in rules.keys():
fen = re.sub(key, rules[key], fen)
return list(fen)
# Label array to char list:
def LabelArrayToL(arr):
rules = {
0 : "P",
1 : "N",
2 : "B",
3 : "R",
4 : "Q",
5 : "K",
6 : "p",
7 : "n",
8 : "b",
9 : "r",
10 : "q",
11 : "k",
12 : "0"
}
flattened = arr.flatten(order = "C")
L = []
for x in flattened:
L.append(rules[x])
return L
# char list to FEN
def LtoFEN(L):
FEN = ""
for y in range(8):
counter = 0
for x in range(8):
idx = x + y * 8
char = L[idx]
if char == "0":
counter += 1
if x == 7:
FEN += str(counter)
else:
if counter:
FEN += str(counter)
counter = 0
FEN += char
if y != 7:
FEN += "-"
return FEN
# FEN to one-hot encoding, in our case, it returns an 64 by 13 array, with each row as a one-hot to a grid.
def FENtoOneHot(fen):
# this rule is in the same format as g_piece_mapping
#rules = {
# "P" : np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
# "N" : np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
# "B" : np.array([0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
# "R" : np.array([0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
# "Q" : np.array([0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]),
# "K" : np.array([0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]),
#
# "p" : np.array([0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]),
# "n" : np.array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]),
# "b" : np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]),
# "r" : np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0]),
# "q" : np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]),
# "k" : np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0]),
#
# "0" : np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1])
#}
rules = {
"P" : 0,
"N" : 1,
"B" : 2,
"R" : 3,
"Q" : 4,
"K" : 5,
"p" : 6,
"n" : 7,
"b" : 8,
"r" : 9,
"q" : 10,
"k" : 11,
"0" : 12
}
L = FENtoL(fen)
one_hot_array = np.zeros((g_grid_num, g_num_labels), dtype = np.int32) # 64 by 13
for i, c in enumerate(L):
one_hot_array[i, rules[c]] = 1
return one_hot_array
# get 8*8 char matrix
def LtoCharMat(l):
if type(l) == list:
return np.array(l).reshape((8,8))
if type(l) == str:
return np.array([l]).reshape((8,8))
def GetBoardCell(board_image, row = 0, col = 0, size = 50):
return np.array(board_image)[row*size:(row+1)*size,col*size:(col+1)*size]
# get grids of image
def ImageToGrids(image, grid_size_x, grid_size_y):
return skimage.util.shape.view_as_blocks(image, block_shape = (grid_size_y, grid_size_x, 3)).squeeze(axis = 2)
# get grids of image
def ImageToGrids_grey(image, grid_size_x, grid_size_y):
return skimage.util.shape.view_as_blocks(image, block_shape = (grid_size_y, grid_size_x, 1)).squeeze(axis = 2)
```
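The grid-splitting in `ImageToGrids` can be pictured with a plain reshape, which is equivalent for this block-aligned case (a sketch, not the `view_as_blocks` call itself):

```
import numpy as np

# Split a 4x4 single-channel "board" into a 2x2 arrangement of 2x2 blocks,
# mirroring what ImageToGrids does for an 8x8 chess board.
image = np.arange(16).reshape(4, 4)
blocks = image.reshape(2, 2, 2, 2).swapaxes(1, 2)  # (rows, cols, block_h, block_w)
print(blocks[0, 1])  # the top-right block: [[2, 3], [6, 7]]
```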
## Section 2. Data pre-processing
### Pre-processing - generic
```
# split one board into 64 small squares
# image resized from 400x400 to 200x200, giving 64 squares of 25x25 each
def PreprocessImage(image):
image = transform.resize(image, (g_down_sampled_size, g_down_sampled_size), mode='constant')
# 1st and 2nd dim is 8
grids = ImageToGrids(image, g_down_sampled_grid_size, g_down_sampled_grid_size)
return grids.reshape(g_grid_row * g_grid_col, g_down_sampled_grid_size, g_down_sampled_grid_size, 3)
# split each board into 64 small squares
# output: x is (num_images, 64, 25, 25, 3), y is (num_images, 64, 13)
def func_generator(train_file_names):
x = []
y = []
for image_file_name in train_file_names:
img = ReadImage(image_file_name)
x.append(PreprocessImage(img))
y.append(np.array(FENtoOneHot(GetCleanNameByPath(image_file_name))))
return np.array(x), np.array(y)
# Example output for a file - generic
num_train = 1
train_file_names = GetFileNamesInDir(g_train_dir, extension = "jpeg",num_return = num_train)
x, y = func_generator(train_file_names)
print("x type :", type(x))
print("x shape:", x.shape)
print("y type :", type(y))
print("y shape:", y.shape)
print()
plt.imshow(x[0][1])
```
### Pre-processing - canny
```
# Processing image with canny
import cv2
def PreprocessImage_canny(image):
image = cv2.Canny(image,100,200)
image = transform.resize(image, (g_down_sampled_size, g_down_sampled_size), mode='constant')
# 1st and 2nd dim is 8
image = image[..., np.newaxis]
grids = ImageToGrids_grey(image, g_down_sampled_grid_size, g_down_sampled_grid_size)
return grids.reshape(g_grid_row * g_grid_col, g_down_sampled_grid_size, g_down_sampled_grid_size)
# atomic func:
def func_canny(file_name):
img = ReadImage(file_name)
x = PreprocessImage_canny(img)
y = np.array(FENtoL(GetCleanNameByPath(file_name)))
return x, y
# split each board into 64 small squares; output: x is (num_images, 64, 25, 25), y is (num_images, 64)
def func_generator_canny(image_file_names):
xs = []
ys = []
for image_file_name in image_file_names:
x, y = func_canny(image_file_name)
xs.append(x)
ys.append(y)
return xs, ys
# Example output for a file - canny
num_train = 1
train_file_names = GetFileNamesInDir(g_train_dir, extension = "jpeg",num_return = num_train)
x, y = func_generator_canny(train_file_names)
print("x type :", type(x))
print("x[0] type :", type(x[0]))
print("x[0] shape:", x[0].shape)
print("y type :", type(y))
print("y[0] type :", type(y[0]))
print("y[0] shape:", y[0].shape)
plt.imshow(x[0][1])
```
### Pre-processing - SIFT
```
# Processing image with sift
import cv2
def ExtractSIFTForGrid(board_image, row, col, center_x = 25, center_y = 25, radius = 45):
    kps = [cv2.KeyPoint(x = center_x + 50 * col, y = center_y + 50 * row, _size = radius)]
    # use whichever SIFT entry point this OpenCV build provides
    if hasattr(cv2, "SIFT_create"):
        keypoints, descriptors = cv2.SIFT_create(edgeThreshold = 0).compute(image = board_image, keypoints = kps)
    else:
        keypoints, descriptors = cv2.xfeatures2d.SIFT_create(edgeThreshold = 0).compute(image = board_image, keypoints = kps)
return keypoints[0], descriptors[0, :]
def PreprocessImage_sift(image):
# 1st and 2nd dim is 8
desc=[]
for i in range(8):
for j in range(8):
kp, d= ExtractSIFTForGrid(image,i,j)
desc.append(np.array(d))
return desc
# atomic func:
def func_sift(file_name):
img = ReadImage(file_name)
x = PreprocessImage_sift(img)
y = np.array(FENtoL(GetCleanNameByPath(file_name)))
return x, y
# split each board into 64 small squares; output: x is (num_images, 64, 128), y is (num_images, 64)
def func_generator_sift(image_file_names):
xs = []
ys = []
for image_file_name in image_file_names:
x, y = func_sift(image_file_name)
xs.append(np.array(x))
ys.append(np.array(y))
return xs, ys
# Example output for a file - SIFT
num_train = 1
train_file_names = GetFileNamesInDir(g_train_dir, extension = "jpeg",num_return = num_train)
x, y = func_generator_sift(train_file_names)
print("x type :", type(x))
print("x[0] type :", type(x[0]))
print("x[0] shape:", x[0].shape)
print("y type :", type(y))
print("y[0] type :", type(y[0]))
print("y[0] shape:", y[0].shape)
plt.bar(x = range(128), height = x[0][1])
plt.title(y[0][1])
plt.xticks(x = range(128))
plt.show()
```
### Example of image input, canny, SIFT
```
import cv2
from skimage.filters import sobel
#print("Sift: decriptor size:", cv2.SIFT_create().descriptorSize())
img = ReadImage(a_random_file)
img1 = cv2.Canny(img,100,200)
img2= sobel(img[:,:,0])
print(img.shape)
print(img1.shape)
print(img2.shape)
kp, desc = ExtractSIFTForGrid(img, 0, 1)
kp2, desc2 = ExtractSIFTForGrid(img, 0, 3)
kp3, desc3 = ExtractSIFTForGrid(img, 0, 5)
kp4, desc4 = ExtractSIFTForGrid(img, 1, 3)
img_kp = cv2.drawKeypoints(img, [kp, kp2,kp3,kp4], img, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
img_kp1 = cv2.drawKeypoints(img1, [kp, kp2,kp3,kp4], img1, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
print('file name:',a_random_file)
plt.figure(figsize=(18,6))
plt.suptitle('Image processing output', fontsize=16)
plt.subplot(1, 3, 1)
plt.imshow(img_kp, aspect='auto')
plt.title('original image with keypoint')
plt.subplot(1, 3, 2)
plt.imshow(img2, aspect='auto')
plt.title('Sobel image')
plt.subplot(1, 3, 3)
plt.imshow(img1, aspect='auto')
plt.title('Canny image')
plt.show()
plt.figure(figsize=(15,6))
plt.suptitle('Sift output for original image at squares', fontsize=16)
#plt.tight_layout()
plt.subplot(2, 2, 1)
plt.title('square 0,1(b)')
plt.bar(x = range(128), height = desc)
plt.xticks(x = range(128))
plt.subplot(2,2, 2)
plt.title('square 0,3(B)')
plt.bar(x = range(128), height = desc2)
plt.xticks(x = range(128))
plt.subplot(2,2,3)
plt.title('square 0,5(b)')
plt.bar(x = range(128), height = desc3)
plt.xticks(x = range(128))
plt.subplot(2,2,4)
plt.title('square 1,3(K)')
plt.bar(x = range(128), height = desc4)
plt.xticks(x = range(128))
plt.show()
```
### Read image - generic, canny, sift - run time
```
if run_preprocessing_benchmark:
start_time = time.time()
num_train = 100
train_file_names = GetFileNamesInDir(g_train_dir, extension = "jpeg",num_return = num_train)
X_org,Y_org = func_generator(train_file_names)
    print('running time for generic 100 images')
    print('--- {} seconds ---'.format(time.time() - start_time))
    start_time = time.time()
    X_sift,Y_sift = func_generator_sift(train_file_names)
    print('running time for sift 100 images')
    print('--- {} seconds ---'.format(time.time() - start_time))
start_time = time.time()
X_canny,Y_canny = func_generator_canny(train_file_names)
    print('running time for canny 100 images')
print('--- {} seconds ---'.format(time.time() - start_time))
```
### Subset train data - high quality (image level)
```
#https://www.researchgate.net/post/How_to_use_clustering_to_reduce_data_set_samples
#subset data of high quality image (at image level)
# Output: the file names whose boards contain at least min_piece pieces
def file_names_highquality(image_file_names, min_piece =15):
names = np.array(image_file_names)
idx_sub = []
for idx in range(len(image_file_names)):
y = FENtoL(GetCleanNameByPath(image_file_names[idx]))
piece_count = 64 - y.count('0')
if piece_count >= min_piece:
idx_sub.append(idx)
return np.array(image_file_names)[idx_sub]
# test
if run_preprocessing_benchmark:
file_name_reduced = file_names_highquality(train_file_names, min_piece =15)
print('reduced file number from',len(train_file_names),'to',len(file_name_reduced))
```
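The filtering criterion above boils down to counting non-empty squares; a toy sketch (the board labels here are invented for illustration):

```
# Count the pieces on a board from its 64 per-square labels ("0" = empty),
# the same criterion file_names_highquality applies via y.count('0').
def piece_count(labels):
    return len(labels) - labels.count("0")

board = ["0"] * 60 + ["K", "k", "Q", "q"]
print(piece_count(board))        # 4
print(piece_count(board) >= 15)  # False: this board would be filtered out
```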
### Undersampling - square (grid level)
```
# install the package if needed.
if run_preprocessing_benchmark:
!pip install imblearn
#subset data (in square level)
#ref: #https://www.researchgate.net/post/How_to_use_clustering_to_reduce_data_set_samples
# implement using https://imbalanced-learn.readthedocs.io/en/stable/generated/imblearn.under_sampling.ClusterCentroids.html
if run_preprocessing_benchmark:
from imblearn.under_sampling import ClusterCentroids
# output x: number of grid x 125, y: number of grid
def undersampling_ClusterCentroids_canny(X,Y):
trans = ClusterCentroids(random_state=0)
length=len(np.array(Y))
X= np.array(X).reshape(length*64,25*25)
Y = np.array(Y).reshape(length*64)
        X_resampled, y_resampled = trans.fit_resample(X, Y)
return X_resampled, y_resampled
# output x: number of grid x 128, y: number of grid
def undersampling_ClusterCentroids_sift(X,Y):
trans = ClusterCentroids(random_state=0)
length=len(np.array(Y))
X= np.array(X).reshape(length*64,128)
Y = np.array(Y).reshape(length*64)
        X_resampled, y_resampled = trans.fit_resample(X, Y)
return X_resampled, y_resampled
if run_preprocessing_benchmark:
# test canny -resampled
start_time = time.time()
X_resampled_canny, Y_resampled_canny = undersampling_ClusterCentroids_canny(X_canny,Y_canny)
print('--- {} seconds ---'.format(time.time() - start_time))
print('reduce grid number from',len(np.array(Y_canny))*64,'to',len(np.array(Y_resampled_canny)))
# test sift -resampled
start_time = time.time()
X_resampled_sift, Y_resampled_sift = undersampling_ClusterCentroids_sift(X_sift,Y_sift)
print('--- {} seconds ---'.format(time.time() - start_time))
print('reduce grid number from',len(np.array(Y_sift))*64,'to',len(np.array(Y_resampled_sift)))
```
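In terms of class counts, what `ClusterCentroids` achieves can be sketched with a naive random undersampler (this keeps real samples rather than synthesising cluster centroids, so it is only an analogy):

```
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
y = np.array(["0"] * 90 + ["K"] * 10)  # heavily imbalanced toy labels
X = rng.random((100, 4))

# Keep min-class-size samples from every class so all classes end up equal.
n_min = min(Counter(y).values())
keep = np.concatenate([rng.choice(np.flatnonzero(y == c), n_min, replace=False)
                       for c in np.unique(y)])
X_resampled, y_resampled = X[keep], y[keep]
print(Counter(y_resampled))  # 10 of "0" and 10 of "K"
```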
### Read Image - parallel version
```
from joblib import Parallel, delayed
# note: functions are first-class objects in Python. we pass it directly as parameter.
def Preprocess_parallel(func, file_names, job_count = 6):
result = Parallel(n_jobs=job_count)(delayed(func)(file_name) for file_name in file_names)
return zip(*result)
if run_preprocessing_benchmark:
start_time = time.time()
num_train = 100
train_file_names = GetFileNamesInDir(g_train_dir, extension = "jpeg",num_return = num_train)
xs, ys = Preprocess_parallel(func_canny, train_file_names)
print('--- {} seconds ---'.format(time.time() - start_time))
```
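The `zip(*result)` unzip step in `Preprocess_parallel` is worth seeing in isolation; here a plain list comprehension stands in for the `Parallel(...)` call (same return shape, no worker pool):

```
def per_file(n):
    # stand-in for func_canny / func_sift: returns one (x, y) pair per input
    return n * n, 2 * n

result = [per_file(n) for n in range(4)]  # the list of pairs Parallel(...) yields
xs, ys = zip(*result)                     # unzip into two parallel tuples
print(xs)  # (0, 1, 4, 9)
print(ys)  # (0, 2, 4, 6)
```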
### Read file names
```
if do_hyper_parameter_tuning:
# Using FEN to identify grid with chess
    num_train = 500 # small batch for training
    num_test= 500 # small batch for testing
# Reading lables
train_file_names = GetFileNamesInDir(g_train_dir, extension = "jpeg",num_return = num_train)
test_file_names = GetFileNamesInDir(g_test_dir, extension = "jpeg",num_return = num_test)
```
## Section 3. Implement algorithms
### Section 3.0 AdaBoostClassifier (ABC) Prototype
#### Import data - SIFT output ( n x 128)
```
if do_hyper_parameter_tuning:
# import data - train
start_time = time.time()
#xs_train, ys_train = Preprocess_parallel(train_file_names)
xs_train_sift, ys_train_sift = Preprocess_parallel(func_sift, train_file_names) #[JL - I broke Preprocess_parallel. use this instead ]
print('xs_train_sift, ys_train_sift generated:',len(xs_train_sift))
print(np.array(xs_train_sift).shape, np.array(ys_train_sift).shape)
print('--- {} seconds ---'.format(time.time() - start_time))
# import data - test
start_time = time.time()
#xs_train, ys_train = Preprocess_parallel(train_file_names)
xs_test_sift, ys_test_sift = Preprocess_parallel(func_sift, test_file_names) #[JL - I broke Preprocess_parallel. use this instead ]
print('xs_test_sift, ys_test_sift generated:',len(xs_test_sift))
print(np.array(xs_test_sift).shape, np.array(ys_test_sift).shape)
print('--- {} seconds ---'.format(time.time() - start_time))
```
#### Initial prediction accuracy on SIFT data
```
if do_hyper_parameter_tuning:
xs_train_sift2= np.array(xs_train_sift).reshape(500*64,128)
ys_train_sift2 = np.array(ys_train_sift).reshape(500*64)
print('shape of train data:',np.array(xs_train_sift2).shape, np.array(ys_train_sift2).shape)
# ABC classifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
start_time = time.time()
ada = AdaBoostClassifier(n_estimators=50, learning_rate=0.5, random_state=42)
ada.fit(xs_train_sift2,ys_train_sift2)
y_pred= ada.predict(xs_train_sift2)
#Evaluate its performance on the training and test set
    print("AdaBoost- accuracy on training set:", accuracy_score(ys_train_sift2 ,y_pred))
print('--- {} seconds ---'.format(time.time() - start_time))
```
#### Import data - canny output ( n x 25 x 25 )
```
if do_hyper_parameter_tuning:
# import data - train 500
start_time = time.time()
#xs_train, ys_train = Preprocess_parallel(train_file_names)
xs_train_canny, ys_train_canny = func_generator_canny(train_file_names) #[JL - I broke Preprocess_parallel. use this instead ]
print('xs_train_canny, ys_train_canny generated:',len(xs_train_canny))
print(np.array(xs_train_canny).shape, np.array(ys_train_canny).shape)
print('--- {} seconds ---'.format(time.time() - start_time))
# import data - test 500
start_time = time.time()
#xs_train, ys_train = Preprocess_parallel(train_file_names)
xs_test_canny, ys_test_canny = func_generator_canny(test_file_names) #[JL - I broke Preprocess_parallel. use this instead ]
print('xs_test_canny, ys_test_canny generated:',len(xs_test_canny))
print(np.array(xs_test_canny).shape, np.array(ys_test_canny).shape)
print('--- {} seconds ---'.format(time.time() - start_time))
```
#### Initial prediction accuracy on canny data
```
if do_hyper_parameter_tuning:
xs_train_canny2= np.array(xs_train_canny).reshape(500*64,25*25)
ys_train_canny2 = np.array(ys_train_canny).reshape(500*64)
print('shape of train data:',np.array(xs_train_canny2).shape, np.array(ys_train_canny2).shape)
# ABC classifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
start_time = time.time()
ada = AdaBoostClassifier(n_estimators=50, learning_rate=0.5, random_state=42)
ada.fit(xs_train_canny2,ys_train_canny2)
y_pred= ada.predict(xs_train_canny2)
#Evaluate its performance on the training and test set
    print("AdaBoost- accuracy on training set:", accuracy_score(ys_train_canny2 ,y_pred))
print('--- {} seconds ---'.format(time.time() - start_time))
```
### Proceed with canny data
#### Split train to train and validation
```
if do_hyper_parameter_tuning:
#split 500 training data
start_time = time.time()
test_size=0.33
from sklearn.model_selection import train_test_split
X_train, X_val, Y_train, Y_val = train_test_split(
xs_train_canny,ys_train_canny, test_size=0.33, random_state=0)
print(np.array(X_train).shape, np.array(X_val).shape, np.array(Y_train).shape, np.array(Y_val).shape)
print('--- {} seconds ---'.format(time.time() - start_time))
```
#### Undersampling the training set
```
if do_hyper_parameter_tuning:
    # resample the canny training data
start_time = time.time()
X_resampled, Y_resampled = undersampling_ClusterCentroids_canny(X_train,Y_train)
print('--- {} seconds ---'.format(time.time() - start_time))
print('reduce grid number from',len(np.array(Y_train))*64,'to',len(np.array(Y_resampled)))
print('shape of resampled data:',np.array(X_resampled).shape, np.array(Y_resampled).shape)
if do_hyper_parameter_tuning:
    # reshape the train, validation, and test data to match the resampled format
X_train= np.array(X_train).reshape(335*64,25*25)
Y_train = np.array(Y_train).reshape(335*64)
print('shape of train data:',np.array(X_train).shape, np.array(Y_train).shape)
X_val= np.array(X_val).reshape(165*64,25*25)
Y_val = np.array(Y_val).reshape(165*64)
print('shape of validation data:',np.array(X_val).shape, np.array(Y_val).shape)
xs_test_canny= np.array(xs_test_canny).reshape(num_test*64,25*25)
ys_test_canny = np.array(ys_test_canny).reshape(num_test*64)
print('shape of test data:',np.array(xs_test_canny).shape, np.array(ys_test_canny).shape)
```
#### Base classifier- Tree (with canny & canny resampling)
```
if do_hyper_parameter_tuning:
# Tree classifier - Canny data
from sklearn.tree import DecisionTreeClassifier
start_time = time.time()
#Apply pre-pruning by limiting the depth of the tree - max_depth=2
tree = DecisionTreeClassifier(criterion='gini', max_depth=5)
tree.fit(X_train, Y_train)
#Evaluate its performance on the training and test set
print("Accuracy on training set: {:.3f}".format(tree.score(X_train, Y_train)))
print("Accuracy on validation set: {:.3f}".format(tree.score(X_val, Y_val)))
print("Accuracy on testing set: {:.3f}".format(tree.score(xs_test_canny, ys_test_canny)))
print('--- {} seconds ---'.format(time.time() - start_time))
if do_hyper_parameter_tuning:
# Tree classifier - Canny & resampling data
from sklearn.tree import DecisionTreeClassifier
start_time = time.time()
#Apply pre-pruning by limiting the depth of the tree
tree = DecisionTreeClassifier(criterion='gini', max_depth=5)
tree.fit(X_resampled, Y_resampled)
#Evaluate its performance on the training and test set
print("Accuracy on training set: {:.3f}".format(tree.score(X_train, Y_train)))
print("Accuracy on validation set: {:.3f}".format(tree.score(X_val, Y_val)))
print("Accuracy on testing set: {:.3f}".format(tree.score(xs_test_canny, ys_test_canny)))
print('--- {} seconds ---'.format(time.time() - start_time))
```
#### AdaBoostClassifier- Tree (with canny & canny resampling)
```
if do_hyper_parameter_tuning:
# ABC classifier- Canny data
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
start_time = time.time()
ada = AdaBoostClassifier(n_estimators=50,
base_estimator = DecisionTreeClassifier(criterion='gini', max_depth=5),
learning_rate=0.5,
random_state=42)
ada.fit(X_train, Y_train)
y_pred_val = ada.predict(X_val)
y_pred_test = ada.predict(xs_test_canny)
#Evaluate its performance on the training and test set
print("AdaBoost- accuracy on validation set:", accuracy_score(Y_val, y_pred_val))
print("AdaBoost- accuracy on testing set:", accuracy_score(ys_test_canny, y_pred_test))
print('--- {} seconds ---'.format(time.time() - start_time))
if do_hyper_parameter_tuning:
# ABC classifier- Canny & resampling data
start_time = time.time()
ada = AdaBoostClassifier(n_estimators=50,
base_estimator = DecisionTreeClassifier(criterion='gini', max_depth=5),
learning_rate=0.5,
random_state=42)
ada.fit(X_resampled, Y_resampled)
y_pred_val = ada.predict(X_val)
y_pred_test = ada.predict(xs_test_canny)
#Evaluate its performance on the training and test set
print("AdaBoost- accuracy on validation set:", accuracy_score(Y_val, y_pred_val))
print("AdaBoost- accuracy on testing set:", accuracy_score(ys_test_canny, y_pred_test))
print('--- {} seconds ---'.format(time.time() - start_time))
```
#### Hyper-parameter tuning
```
###https://machinelearningmastery.com/adaboost-ensemble-in-python/
if do_hyper_parameter_tuning:
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV
# AdaBoost
param_grid = [{'n_estimators': np.arange(10,50,5),
'learning_rate': [0.01, 0.05, 0.1, 1,5,10]
}]
start_time = time.time()
abc = GridSearchCV(AdaBoostClassifier(random_state=42), param_grid)
abc.fit(X_resampled, Y_resampled)
print('--- {} seconds ---'.format(time.time() - start_time))
# SVC
param_grid = [{"kernel" : ["linear", "poly", "rbf", "sigmoid"],
"C" : [0.01, 1, 10, 100]
}]
start_time = time.time()
svc = GridSearchCV(svm.SVC(random_state=42), param_grid)
svc.fit(X_resampled, Y_resampled)
print('--- {} seconds ---'.format(time.time() - start_time))
else:
print("Hyper parameter tuning skipped.")
```
#### GridSearchCV Result
```
if do_hyper_parameter_tuning:
# Print grid search results
from sklearn.metrics import classification_report
means = abc.cv_results_['mean_test_score']
stds = abc.cv_results_['std_test_score']
params = abc.cv_results_['params']
print('Grid search mean and stdev:\n')
for mean, std, p in zip(means, stds, params):
print("%0.3f (+/-%0.03f) for %r"% (mean, std * 2, p))
# Print best params
print('\nBest parameters:', abc.best_params_)
print("Detailed classification report:")
print()
print(classification_report(Y_val, abc.predict(X_val)))
print()
else:
print("Adaboost: Hyper parameter report skipped.")
```
## Section 3.1 SVM Classifier (SVC)
#### Base class for all classifiers
```
import abc
# interface of the classifiers
class IClassifier:
# this method should accept a list of file names of the training data
@abc.abstractmethod
def Train(self, train_file_names):
raise NotImplementedError()
# this should accept a 400 * 400 * 3 numpy array as query data, and return the FEN notation of the board.
@abc.abstractmethod
def Predict(self, query_data):
raise NotImplementedError()
# this should accept a list of file names, and return the predicted and true labels as 2d numpy arrays.
@abc.abstractmethod
def PredictMultiple(self, file_names):
raise NotImplementedError()
```
#### Class definition for SVC
```
from sklearn import svm
import numpy as np
# image io and plotting
from skimage import io, transform
import skimage.util
from skimage.util.shape import view_as_blocks
from matplotlib import pyplot as plt
# parallel processing
from joblib import Parallel, delayed
# model save and load
import pickle
import os
# profiling
import time
# joblib needs the kernel to be a top-level function, so we defined it here.
def PreprocessKernel(name):
img = ReadImage(name, gray = True)
grids = SVCClassifier.SVCPreprocess(img)
labels = np.array(FENtoOneHot(GetCleanNameByPath(name))).argmax(axis=1)
return grids, labels
# SVM Classifier
class SVCClassifier(IClassifier):
def __init__(self):
self.__svc__ = svm.SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape='ovr', degree=3, gamma=0.001, kernel='rbf',
max_iter=-1, probability=False, random_state=None, shrinking=True,
tol=0.001, verbose=False)
# this method should accept a list of file names of the training data
def Train(self, train_file_names):
print("svc: reading image.")
start_time = time.time()
xs, ys = SVCClassifier.PreprocessParallelWrapperFunc(train_file_names)
print("svc: finished reading image, {} sec.".format(time.time() - start_time))
# train
print("svc: start training.")
start_time = time.time()
self.__svc__.fit(xs, ys)
print("svc: finished. {} sec.".format(time.time() - start_time))
# this should accept a 400 * 400 * 3 numpy array as query data, and returns the fen notation of the board.
def Predict(self, query_data):
grids = SVCClassifier.SVCPreprocess(query_data)
y_pred = self.__svc__.predict(grids)
return LabelArrayToL(y_pred)
# predict by file name:
def PredictMultiple(self, file_names):
preds = []
truth = []
for f in file_names:
img = ReadImage(f, gray = True)
y_pred = self.Predict(img)
y_true = FENtoL(GetCleanNameByPath(f))
preds.append(y_pred)
truth.append(y_true)
all_pred = np.vstack(preds)
all_truth = np.vstack(truth)
return all_pred, all_truth
# parallel pre-process wrapper:
@staticmethod
def PreprocessParallelWrapperFunc(file_names, num_thread = g_thread_num):
result = Parallel(n_jobs = num_thread)(delayed(PreprocessKernel)(file_name) for file_name in file_names)
xs, ys = zip(*result)
xs = np.concatenate(xs, axis=0)
ys = np.concatenate(ys)
return xs, ys
@staticmethod
def SVCPreprocess(img):
img = transform.resize(img, (g_down_sampled_size, g_down_sampled_size), mode='constant')
grids = skimage.util.shape.view_as_blocks(img, block_shape = (g_down_sampled_grid_size, g_down_sampled_grid_size))
grids = grids.reshape((-1, grids.shape[3], grids.shape[3]))
grids = grids.reshape((grids.shape[0], grids.shape[1] * grids.shape[1]))
return grids
def SaveModel(self, save_file_name):
os.makedirs(os.path.dirname(save_file_name), exist_ok = True)
with open(save_file_name, 'wb') as file:
pickle.dump(self.__svc__, file)
def LoadModel(self, load_file_name):
with open(load_file_name, 'rb') as file:
self.__svc__ = pickle.load(file)
```
#### Test code for SVC
```
if run_test_code_for_classifiers:
svc = SVCClassifier()
train_names = GetFileNamesInDir(g_train_dir)
if load_saved_model:
print("svc: loading model from " + svc_model_file)
svc.LoadModel(svc_model_file)
else:
svc.Train(train_names[:500])
y_truth = FENtoL(GetCleanNameByPath(a_random_file))
img = ReadImage(a_random_file, gray = True)
pred = svc.Predict(img)
print("truth: ", ''.join(y_truth))
print("pred : ", ''.join(pred))
# save model
if not load_saved_model:
print("svc: saving model to " + svc_model_file)
svc.SaveModel(svc_model_file)
```
### Section 3.2 Convolutional Neural Network Classifier (CNN)
#### Class definition for CNN
```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import cv2
from skimage import io, transform
import numpy as np
import os
#import tensorflow as tf
#from tensorflow import keras
#from tf.keras.models import Sequential
#from tf.keras.layers.core import Flatten, Dense, Dropout, Activation
#from tf.keras.layers.convolutional import Convolution2D
class CNNClassifier(IClassifier):
# the file name format does not accept batch as parameter. link:
# https://github.com/tensorflow/tensorflow/issues/38668
s_check_point_file_name = "./CNN_training_checkpoint/cp_{epoch:02d}-{accuracy:.2f}.ckpt"
s_check_point_path = os.path.dirname(s_check_point_file_name)
s_save_frequence = 10000 # save a checkpoint every s_save_frequence batches
def __init__(self):
#tf.config.threading.set_inter_op_parallelism_threads(3)
#tf.config.threading.set_intra_op_parallelism_threads(3)
# define our model
self.__model__ = keras.Sequential(
[
layers.Convolution2D(32, (3, 3), input_shape = (g_down_sampled_grid_size, g_down_sampled_grid_size, 3)),
layers.Activation('relu'),
layers.Dropout(0.1),
layers.Convolution2D(32, (3, 3)),
layers.Activation('relu'),
layers.Convolution2D(32, (3, 3)),
layers.Activation('relu'),
layers.Flatten(),
layers.Dense(128),
layers.Activation('relu'),
layers.Dropout(0.3),
layers.Dense(13),
layers.Activation("softmax")
]
)
self.__model__.compile(loss = "categorical_crossentropy", optimizer = 'adam', metrics = ["accuracy"])
self.__save_check_point_callback__ = tf.keras.callbacks.ModelCheckpoint(
filepath = CNNClassifier.s_check_point_file_name,
monitor='val_accuracy',
save_weights_only = True,
save_freq = CNNClassifier.s_save_frequence,
verbose = 1
)
# generator
@staticmethod
def func_generator(train_file_names):
for image_file_name in train_file_names:
img = ReadImage(image_file_name)
x = CNNClassifier.PreprocessImage(img)
y = np.array(FENtoOneHot(GetCleanNameByPath(image_file_name)))
yield x, y
# this method should accept a list of file names of the training data.
def Train(self, train_data_names):
train_size = len(train_data_names)
## try load last checkpoint
#if not self.LoadMostRecentModel():
# os.makedirs(CNNClassifier.s_check_point_path, exist_ok = True)
# train
self.__model__.fit(CNNClassifier.func_generator(train_data_names),
use_multiprocessing = False,
#batch_size = 1000,
steps_per_epoch = train_size // 20,
epochs = 2,
#callbacks = [self.__save_check_point_callback__],
verbose = 1)
# this should accept a 400 * 400 * 3 numpy array as query data, and return the predicted labels as a 1d numpy array.
def Predict(self, query_data):
grids = CNNClassifier.PreprocessImage(query_data)
y_pred = self.__model__.predict(grids).argmax(axis=1)
return y_pred
# predict by file name:
def PredictMultiple(self, file_names):
preds = []
truth = []
for f in file_names:
img = ReadImage(f, gray = False)
y_pred = LabelArrayToL(self.Predict(img))
y_true = FENtoL(GetCleanNameByPath(f))
preds.append(y_pred)
truth.append(y_true)
all_pred = np.vstack(preds)
all_truth = np.vstack(truth)
return all_pred, all_truth
def LoadModel(self, name):
self.__model__.load_weights(name)
def SaveModel(self, name):
os.makedirs(os.path.dirname(name), exist_ok = True)
self.__model__.save_weights(name)
def PrintModel(self):
self.__model__.summary()
def LoadMostRecentModel(self):
return self.LoadMostRecentModelFromDirectory(CNNClassifier.s_check_point_path)
def LoadMostRecentModelFromDirectory(self, path):
try:
last_cp = tf.train.latest_checkpoint(path)
self.__model__.load_weights(last_cp)
print("Loaded checkpoint from " + last_cp)
return True
except:
print("No checkpoint is loaded.")
return False
def TestAccuracy(self, test_file_names):
num_files = len(test_file_names)
predict_result = self.__model__.predict(CNNClassifier.func_generator(test_file_names)).argmax(axis=1)
predict_result = predict_result.reshape(num_files, -1)
predicted_fen_arr = np.array([LtoFEN(LabelArrayToL(labels)) for labels in predict_result])
test_fens = np.array([GetCleanNameByPath(file_name) for file_name in test_file_names])
final_accuracy = (predicted_fen_arr == test_fens).astype(float).mean()
return final_accuracy
@staticmethod
def PreprocessImage(image):
image = transform.resize(image, (g_down_sampled_size, g_down_sampled_size), mode='constant')
# 1st and 2nd dim is 8
grids = ImageToGrids(image, g_down_sampled_grid_size, g_down_sampled_grid_size)
# debug
#plt.imshow(grids[0][3])
#plt.show()
return grids.reshape(g_grid_row * g_grid_col, g_down_sampled_grid_size, g_down_sampled_grid_size, 3)
```
#### test code for CNN
```
if run_test_code_for_classifiers:
if not load_saved_model:
cnn = CNNClassifier()
train_names = GetFileNamesInDir(g_train_dir)
cnn.Train(train_names)
cnn.SaveModel(cnn_model_file)
else:
cnn = CNNClassifier()
cnn.PrintModel()
print("cnn: loading model from " + cnn_model_file)
cnn.LoadModel(cnn_model_file)
predicted_label = cnn.Predict(ReadImage(a_random_file))
L = predicted_label
FEN = LtoFEN(LabelArrayToL(L))
print("predicted: " + FEN)
print("Original: " + GetCleanNameByPath(a_random_file))
#test_file_names = GetFileNamesInDir(g_test_dir)[:1000]
#print("CNN: Testing accuracy for {} board images.".format(len(test_file_names)))
#accuracy = cnn.TestAccuracy(test_file_names)
#print("CNN: Final accuracy: {}".format(accuracy))
labels = cnn.PredictMultiple(GetFileNamesInDir(g_test_dir)[:100])
```
### Section 3.3 AdaBoost Classifier
#### class definition
```
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
import numpy as np
# image io and plotting
from skimage import io, transform
import skimage.util
from skimage.util.shape import view_as_blocks
from matplotlib import pyplot as plt
# parallel processing
from joblib import Parallel, delayed
# model save and load
import pickle
import os
# profiling
import time
# joblib needs the kernel to be a top-level function, so we defined it here.
def PreprocessKernel(name):
img = ReadImage(name, gray = True)
grids = ABClassifier.ABCPreprocess(img)
labels = np.array(FENtoOneHot(GetCleanNameByPath(name))).argmax(axis=1)
return grids, labels
# Adaboost Classifier
class ABClassifier(IClassifier):
def __init__(self):
self.__abc__ = AdaBoostClassifier(n_estimators=30,
base_estimator = DecisionTreeClassifier(criterion='gini', max_depth=5),
learning_rate=0.5)
# this method should accept a list of file names of the training data
def Train(self, train_file_names):
print("abc: reading image.")
start_time = time.time()
xs, ys = ABClassifier.PreprocessParallelWrapperFunc(train_file_names)
print("abc: finished reading image, {} sec.".format(time.time() - start_time))
# train
print("abc: start training.")
start_time = time.time()
self.__abc__.fit(xs, ys)
print("abc: finished. {} sec.".format(time.time() - start_time))
# this should accept a 400 * 400 * 3 numpy array as query data, and returns the fen notation of the board.
def Predict(self, query_data):
grids = ABClassifier.ABCPreprocess(query_data)
y_pred = self.__abc__.predict(grids)
return LabelArrayToL(y_pred)
# parallel pre-process wrapper:
@staticmethod
def PreprocessParallelWrapperFunc(file_names, num_thread = g_thread_num):
result = Parallel(n_jobs = num_thread)(delayed(PreprocessKernel)(file_name) for file_name in file_names)
xs, ys = zip(*result)
xs = np.concatenate(xs, axis=0)
ys = np.concatenate(ys)
return xs, ys
@staticmethod
def ABCPreprocess(img):
img = transform.resize(img, (g_down_sampled_size, g_down_sampled_size), mode='constant')
grids = skimage.util.shape.view_as_blocks(img, block_shape = (g_down_sampled_grid_size, g_down_sampled_grid_size))
grids = grids.reshape((-1, grids.shape[3], grids.shape[3]))
grids = grids.reshape((grids.shape[0], grids.shape[1] * grids.shape[1]))
return grids
def SaveModel(self, save_file_name):
os.makedirs(os.path.dirname(save_file_name), exist_ok = True)
with open(save_file_name, 'wb') as file:
pickle.dump(self.__abc__, file)
def LoadModel(self, load_file_name):
with open(load_file_name, 'rb') as file:
self.__abc__ = pickle.load(file)
# predict by file name:
def PredictMultiple(self, file_names):
preds = []
truth = []
for f in file_names:
img = ReadImage(f, gray = True)
y_pred = self.Predict(img)
y_true = FENtoL(GetCleanNameByPath(f))
preds.append(y_pred)
truth.append(y_true)
all_pred = np.vstack(preds)
all_truth = np.vstack(truth)
return all_pred, all_truth
```
#### Test code for ABC
```
if run_test_code_for_classifiers:
abc = ABClassifier()
train_names = GetFileNamesInDir(g_train_dir)
if load_saved_model:
print("abc: loading model from " + abc_model_file)
abc.LoadModel(abc_model_file)
else:
abc.Train(train_names)
y_truth = FENtoL(GetCleanNameByPath(a_random_file))
img = ReadImage(a_random_file, gray = True)
pred = abc.Predict(img)
print("truth: ", ''.join(y_truth))
print("pred : ", ''.join(pred))
# save model
if not load_saved_model:
print("abc: saving model to " + abc_model_file)
abc.SaveModel(abc_model_file)
```
## 10-fold cross-validation for 3 classifiers
CV reference: https://scikit-learn.org/stable/modules/cross_validation.html
Options for 10-fold:
1. https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html
2. https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html (preferred)
"StratifiedKFold is a variation of k-fold which returns stratified folds: each set contains approximately the same percentage of samples of each target class as the complete set."
### helper functions
```
# filters accepts a list of file names, and return the data matrix and labels
import random
from sklearn.metrics import confusion_matrix
# get balanced accuracy from confusion matrix
def BalancedAccuracyFromConfusionMatrix(cm):
ret = np.empty((cm.shape[0]))
for idx, row in enumerate(cm):
ret[idx] = row[idx] / row.sum()
return ret.mean()
# dummy filter to return all files
def DefaultFilter(file_names, rate = 1):
return file_names
# filter using random_sampling:
def RandomFilter(file_names, rate = 1):
# we fix the random part to assure the results are consistent
random_seed = 4242
random.seed(random_seed)
return random.sample(file_names, k = int(len(file_names) * rate))
def ConfusionMatrix(classifier, test_file_names, filter = RandomFilter, sampling_rate = 0.001):
confusion_matrices = []
accuracies = []
accuracies_balanced = []
train_time_cost = []
validation_time_cost = []
# split name list into 10 equal parts
division = len(test_file_names) / float(10)
complete_name_folds = [ test_file_names[int(round(division * i)): int(round(division * (i + 1)))] for i in range(10) ]
filtered_name_folds = complete_name_folds.copy()
for i in range(10):
filtered_name_folds[i] = filter(complete_name_folds[i], rate = sampling_rate)
# we use filtered name folds to train, and validation.
for iv in range(10):
# merge the 9 folds:
train_names = []
validation_names = []
for i in range(10):
if i != iv:
train_names.extend(filtered_name_folds[i])
else:
# validation_names = complete_name_folds[i].copy()
validation_names = filtered_name_folds[i].copy()
# train the classifier:
print("training started: ", type(classifier).__name__, "for fold #", iv, "# train files:", len(train_names))
t = time.time()
classifier.Train(train_names)
train_time_cost.append(time.time() - t)
print("training finished: ", type(classifier).__name__, "for fold #", iv,
"time: {}s".format(time.time() - t))
print("predicting started: ", type(classifier).__name__, "for fold #", iv)
t = time.time()
ypreds, y_true = classifier.PredictMultiple(validation_names)
validation_time_cost.append(time.time() - t)
ypreds = ypreds.reshape((-1, 1))
y_true = y_true.reshape((-1, 1))
conf_mat = confusion_matrix(y_true, ypreds, labels = g_labels)
confusion_matrices.append(conf_mat)
accuracy = np.trace(conf_mat) / float(np.sum(conf_mat))
accuracies.append(accuracy)
accuracy_balanced = BalancedAccuracyFromConfusionMatrix(conf_mat)
accuracies_balanced.append(accuracy_balanced)
print("predicting finished: ", type(classifier).__name__, "for fold #", iv,
"time: {}s".format(time.time() - t), " accuracy: ", accuracy, " balanced_accuracy:", accuracy_balanced)
return confusion_matrices, accuracies, accuracies_balanced, train_time_cost, validation_time_cost
```
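For instance, on a small confusion matrix the balanced-accuracy helper above reduces to the mean of per-class recalls (a sketch with made-up counts, not results from this notebook):

```python
import numpy as np

# Hypothetical 2-class confusion matrix: rows = true class, cols = predicted class.
cm = np.array([[8, 2],
               [1, 4]])

# Per-class recall: diagonal count divided by the row (true-class) total.
recalls = cm.diagonal() / cm.sum(axis=1)   # [0.8, 0.8]
balanced_accuracy = recalls.mean()
print(balanced_accuracy)  # 0.8
```

This is why balanced accuracy is reported alongside plain accuracy: empty squares dominate the boards, so plain accuracy can look high even when rare piece classes are misclassified.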
### 10-fold routine
```
if run_ten_fold:
# 10-fold for ABC
train_file_names = GetFileNamesInDir(g_train_dir, extension = "jpeg")
# random sampling rate of each fold in the 10-fold CV
abc_random_sampling_rate = 0.005
abc_tf = ABClassifier()
confusion_matrices_abc, accuracies_abc, accuracies_balanced_abc, train_time_cost_abc, validation_time_cost_abc = \
ConfusionMatrix(abc_tf, train_file_names, RandomFilter, sampling_rate = abc_random_sampling_rate)
if run_ten_fold:
# 10-fold for CNN
# random sampling rate of each fold in the 10-fold CV
cnn_random_sampling_rate = 0.5
train_file_names = GetFileNamesInDir(g_train_dir, extension = "jpeg")
cnn_tf = CNNClassifier()
confusion_matrices_cnn, accuracies_cnn, accuracies_balanced_cnn, train_time_cost_cnn, validation_time_cost_cnn = \
ConfusionMatrix(cnn_tf, train_file_names, RandomFilter, sampling_rate = cnn_random_sampling_rate)
if run_ten_fold:
# 10-fold for SVM
train_file_names = GetFileNamesInDir(g_train_dir, extension = "jpeg")
# random sampling rate of each fold in the 10-fold CV
svc_random_sampling_rate = 0.01
svc_tf = SVCClassifier()
confusion_matrices_svc, accuracies_svc, accuracies_balanced_svc, train_time_cost_svc, validation_time_cost_svc = \
ConfusionMatrix(svc_tf, train_file_names, RandomFilter, sampling_rate = svc_random_sampling_rate)
```
### Serialize the results (export to hard drive)
```
if run_ten_fold:
# dump the matrices for report.
os.makedirs(os.path.dirname(ten_fold_result_path), exist_ok = True)
np.save(ten_fold_result_path + "confusion_matrices_abc.npy", confusion_matrices_abc)
np.save(ten_fold_result_path + "accuracies_abc.npy", accuracies_abc)
np.save(ten_fold_result_path + "accuracies_balanced_abc.npy", accuracies_balanced_abc)
np.save(ten_fold_result_path + "train_time_cost_abc.npy", train_time_cost_abc)
np.save(ten_fold_result_path + "validation_time_cost_abc.npy", validation_time_cost_abc)
np.save(ten_fold_result_path + "confusion_matrices_cnn.npy", confusion_matrices_cnn)
np.save(ten_fold_result_path + "accuracies_cnn.npy", accuracies_cnn)
np.save(ten_fold_result_path + "accuracies_balanced_cnn.npy", accuracies_balanced_cnn)
np.save(ten_fold_result_path + "train_time_cost_cnn.npy", train_time_cost_cnn)
np.save(ten_fold_result_path + "validation_time_cost_cnn.npy", validation_time_cost_cnn)
np.save(ten_fold_result_path + "confusion_matrices_svc.npy", confusion_matrices_svc)
np.save(ten_fold_result_path + "accuracies_svc.npy", accuracies_svc)
np.save(ten_fold_result_path + "accuracies_balanced_svc.npy", accuracies_balanced_svc)
np.save(ten_fold_result_path + "train_time_cost_svc.npy", train_time_cost_svc)
np.save(ten_fold_result_path + "validation_time_cost_svc.npy", validation_time_cost_svc)
svc_tf.SaveModel(svc_model_file)
abc_tf.SaveModel(abc_model_file)
cnn_tf.SaveModel(cnn_model_file)
```
### Read the results from hard drive
```
if not run_ten_fold:
import numpy as np
confusion_matrices_abc = np.load(ten_fold_result_path + "confusion_matrices_abc.npy")
accuracies_abc = np.load(ten_fold_result_path + "accuracies_abc.npy")
accuracies_balanced_abc = np.load(ten_fold_result_path + "accuracies_balanced_abc.npy")
train_time_cost_abc = np.load(ten_fold_result_path + "train_time_cost_abc.npy")
validation_time_cost_abc = np.load(ten_fold_result_path + "validation_time_cost_abc.npy")
confusion_matrices_cnn = np.load(ten_fold_result_path + "confusion_matrices_cnn.npy")
accuracies_cnn = np.load(ten_fold_result_path + "accuracies_cnn.npy")
accuracies_balanced_cnn = np.load(ten_fold_result_path + "accuracies_balanced_cnn.npy")
train_time_cost_cnn = np.load(ten_fold_result_path + "train_time_cost_cnn.npy")
validation_time_cost_cnn = np.load(ten_fold_result_path + "validation_time_cost_cnn.npy")
confusion_matrices_svc = np.load(ten_fold_result_path + "confusion_matrices_svc.npy")
accuracies_svc = np.load(ten_fold_result_path + "accuracies_svc.npy")
accuracies_balanced_svc = np.load(ten_fold_result_path + "accuracies_balanced_svc.npy")
train_time_cost_svc = np.load(ten_fold_result_path + "train_time_cost_svc.npy")
validation_time_cost_svc = np.load(ten_fold_result_path + "validation_time_cost_svc.npy")
```
### Plot the results
```
import matplotlib.pyplot as plt
def plot_accuracy(mat_abc, mat_cnn, mat_svc, title):
line, = plt.plot([i for i in range(len(mat_abc))],mat_abc)
line.set_label('AdaBoost')
line, = plt.plot([i for i in range(len(mat_cnn))],mat_cnn)
line.set_label('CNN')
line, = plt.plot([i for i in range(len(mat_svc))],mat_svc)
line.set_label('SVM')
plt.title(title)
plt.xlabel('10 - folds')
plt.ylabel('accuracy')
plt.ylim(0.5, 1.1)
plt.legend()
plt.show()
plot_accuracy(accuracies_abc, accuracies_cnn, accuracies_svc, "Accuracy of 3 Classifiers")
plot_accuracy(accuracies_balanced_abc, accuracies_balanced_cnn, accuracies_balanced_svc, "Balanced Accuracy of 3 Classifiers")
import seaborn as sns
def plot_confusion_mat(conf_mat, title = ""):
plt.figure(figsize=(10,7))
ax = sns.heatmap(conf_mat, annot=True, fmt="d")
plt.ylabel('True')
plt.xlabel('Predicted')
plt.title(title)
plt.show()
plot_confusion_mat(sum(confusion_matrices_abc), title = "AdaBoost Confusion Matrix" )
plot_confusion_mat(sum(confusion_matrices_cnn), title = "CNN Confusion Matrix" )
plot_confusion_mat(sum(confusion_matrices_svc), title = "SVM Confusion Matrix" )
```
### Bonus: GUI: see GUI_with_Classifiers.ipynb
# Visualizing invasive and non-invasive EEG data
[Liberty Hamilton, PhD](https://csd.utexas.edu/research/hamilton-lab)
Assistant Professor, University of Texas at Austin
Department of Speech, Language, and Hearing Sciences
and Department of Neurology, Dell Medical School
Welcome! In this notebook we will be discussing how to look at time series electrophysiological 🧠 data that is recorded noninvasively at the scalp (scalp electroencephalography or EEG), or invasively in patients who are undergoing surgical treatment for epilepsy (sometimes called intracranial EEG or iEEG, also called stereo EEG/sEEG, or electrocorticography/ECoG).
### Python libraries you will be using in this tutorial:
* MNE-python
* matplotlib
* numpy

MNE-python is open-source Python software for exploring and analyzing human neurophysiological data (EEG/MEG/iEEG).
### What you will learn to do
* Load some sample EEG data
* Load some sample intracranial EEG data
* Plot the raw EEG data/iEEG data
* Plot the power spectrum of your data
* Epoch data according to specific task conditions (sentences)
* Plot all epochs and averaged evoked activity
* Plot average evoked activity in response to specific task conditions (ERPs)
* Plot by channel as well as averaging across channels
* Plot EEG activity at specific time points on the scalp (topomaps)
* Customize your plots
### Other Resources:
* [MNE-python tutorials](https://mne.tools/stable/auto_tutorials/index.html) -- This has many additional resources above and beyond that also include how to preprocess your data, remove artifacts, and more!
<a id="basics1"></a>
# 1. The basics: loading in your data
```
!pip install matplotlib==3.2
import mne # This is the mne library
import numpy as np # This gives us the power of numpy, which is just generally useful for array manipulation
%matplotlib inline
from matplotlib import pyplot as plt
from matplotlib import cm
datasets = {'ecog': '/home/jovyan/data/we_eeg_viz_data/ecog/sub-S0006/S0006_ecog_hg.fif',
'eeg': '/home/jovyan/data/we_eeg_viz_data/eeg/sub-MT0002/MT0002-eeg.fif'}
event_files = {'ecog': '/home/jovyan/data/we_eeg_viz_data/ecog/sub-S0006/S0006_eve.txt',
'eeg': '/home/jovyan/data/we_eeg_viz_data/eeg/sub-MT0002/MT0002_eve.txt'}
stim_file = '/home/jovyan/data/we_eeg_viz_data/stimulus_list.csv'
# Get some information about the stimuli (here, the names of the sound files that were played)
ev_names=np.genfromtxt(stim_file, skip_header=1, delimiter=',',dtype=str, usecols=[1],encoding='utf-8')
ev_nums=np.genfromtxt(stim_file, skip_header=1, delimiter=',',dtype=int, usecols=[0], encoding='utf-8')
event_id = dict()
for i, ev_name in enumerate(ev_names):
event_id[ev_name] = ev_nums[i]
```
## 1.1. Choose which dataset to look at (start with EEG)
For the purposes of this tutorial, we'll be looking at some scalp EEG and intracranial EEG datasets from my lab. Participants provided written informed consent for participation in our research. These data were collected from two distinct participants listening to sentences from the [TIMIT acoustic-phonetic corpus](https://catalog.ldc.upenn.edu/LDC93S1). This is a database of English sentences spoken by multiple talkers from throughout the United States, and has been used in speech recognition research, neuroscience research, and more!
The list of stimuli is in the `stimulus_list.csv` file. Each stimulus starts with either an "f" or an "m" to indicate a female or male talker. The rest of the alphanumeric string has to do with other characteristics of the talkers that we won't go into here. The stimulus timings have been provided for you in the event files (ending with the suffix `_eve.txt`). We'll talk about those more later.
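For example, the talker's gender can be read straight off the stimulus name (a sketch; `talker_gender` is a hypothetical helper, and the example names below just follow the "f"/"m" prefix convention described above):

```python
def talker_gender(stim_name):
    """Return 'female' or 'male' from the leading character of a TIMIT stimulus name."""
    return 'female' if stim_name.startswith('f') else 'male'

# Hypothetical stimulus names following the prefix convention:
print(talker_gender('fadg0_si1279'))  # female
print(talker_gender('mabw0_si2294'))  # male
```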
### EEG Data
The EEG data was recorded with a 64-channel [BrainVision ActiCHamp](https://www.brainproducts.com/productdetails.php?id=74) system. These data are part of an ongoing project in our lab and are unpublished. You can find similar (larger) datasets from [Broderick et al.](https://datadryad.org/stash/dataset/doi:10.5061/dryad.070jc), or Bradley Voytek's lab has a list of [Open Electrophysiology datasets](https://github.com/openlists/ElectrophysiologyData).
### The ECoG Data
The ECoG data was recorded from 106 electrodes across multiple regions of the brain while our participant listened to TIMIT sentences. This is a smaller subset of sentences than the EEG dataset and so is a bit faster to load. The areas we recorded from are labeled according to a clinical montage. For iEEG and ECoG datasets, these names are rarely standardized, so it can be hard to know exactly what is what without additional information. Here, each channel is named according to the general location of the electrode probe to which it belongs.
| Device | General location |
|---|---|
| RAST | Right anterior superior temporal |
| RMST | Right middle superior temporal |
| RPST | Right posterior superior temporal |
| RPPST | Right posterior parietal/superior temporal |
| RAIF | Right anterior insula |
| RPI | Right posterior insula |
| ROF | Right orbitofrontal |
| RAC | Right anterior cingulate |
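Because the channel names encode the probe, you can group them by device with a little string matching (a sketch; the channel list here is hypothetical, not the actual 106-channel montage):

```python
import re
from collections import defaultdict

# Hypothetical ECoG channel names: device prefix followed by a contact number.
ch_names = ['RAST1', 'RAST2', 'RMST1', 'RPST1', 'RPST2', 'ROF1']

by_device = defaultdict(list)
for name in ch_names:
    # The leading run of capital letters is the device prefix.
    device = re.match(r'[A-Z]+', name).group(0)
    by_device[device].append(name)

print(dict(by_device))
# {'RAST': ['RAST1', 'RAST2'], 'RMST': ['RMST1'], 'RPST': ['RPST1', 'RPST2'], 'ROF': ['ROF1']}
```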
```
data_type = 'eeg' # Can choose from 'eeg' or 'ecog'
```
## 1.2. Load the data
This next command loads the data from our fif file of interest. The `preload=True` flag means that the data will be loaded (necessary for some operations). If `preload=False`, you can still perform some aspects of this tutorial, and this is a great option if you have a large dataset and would like to look at some of the header information and metadata before you start to analyze it.
```
raw = mne.io.read_raw_fif(datasets[data_type], preload=True)
```
There is a lot of useful information in the info structure. For example, we can get the sampling frequency (`raw.info['sfreq']`), the channel names (`raw.info['ch_names']`), the channel types and locations (in `raw.info['chs']`), and whether any filtering operations have been performed already (`raw.info['highpass']` and `raw.info['lowpass']` show the cut-offs for the data).
```
print(raw.info)
sampling_freq = raw.info['sfreq']
nchans = raw.info['nchan']
print('The sampling frequency of our data is %d'%(sampling_freq))
print('Here is our list of %d channels: '%nchans)
print(raw.ch_names)
eeg_colors = {'eeg': 'k', 'eog': 'steelblue'}
fig = raw.plot(show=False, color=eeg_colors, scalings='auto');
fig.set_figwidth(8)
fig.set_figheight(4)
```
<a id="plots2"></a>
# 2. Let's make some plots!
MNE-python makes creating some plots *super easy*, which is great for data quality checking, exploration, and eventually manuscript figure generation. For example, one might wish to plot the power spectral density (PSD), which shows how the signal's power is distributed across frequencies.
## 2.2. Power spectral density
```
raw.plot_psd();
```
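Under the hood a PSD is just a frequency-domain estimate of signal power. As a sanity check, you can compute one directly with SciPy's Welch estimator on a plain array (a sketch with synthetic data, not this notebook's recording):

```python
import numpy as np
from scipy.signal import welch

np.random.seed(0)
sfreq = 128.0
t = np.arange(0, 4, 1 / sfreq)           # 4 seconds of samples
# Synthetic channel: a 10 Hz sine plus a little white noise.
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)

freqs, psd = welch(x, fs=sfreq, nperseg=256)
print('peak frequency: %.1f Hz' % freqs[np.argmax(psd)])  # ~10 Hz
```

`raw.plot_psd()` does essentially this for every channel (plus averaging and dB scaling), which is why narrowband artifacts like line noise show up as sharp peaks.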
## 2.3. Sensor positions (for EEG)
For EEG, MNE-python also has convenient functions for showing the location of the sensors used. Here, we have a 64-channel montage. You can also use this information to help interpret some of your plots if you're plotting a single channel or a group of channels.
For ECoG, we will not be plotting sensors in this way. If you would like read more about that process, please see [this tutorial](https://mne.tools/stable/auto_tutorials/misc/plot_ecog.html). You can also check out [Noah Benson's session](https://neurohackademy.org/course/introduction-to-the-geometry-and-structure-of-the-human-brain/) (happening in parallel with this tutorial!) for plotting 3D brains.
```
if data_type == 'eeg':
raw.plot_sensors(kind='topomap',show_names=True);
```
Ok, awesome! So now we know where the sensors are, how densely they tile the space, and what their names are. *Knowledge = Power!*
So what if we wanted to look at the power spectral density plot we saw above by channel? We can use `plot_psd_topo` for that! There are also customizable options for playing with the colors.
```
if data_type == 'eeg':
raw.plot_psd_topo(fig_facecolor='w', axis_facecolor='w', color='k');
```
Finally, this one works for both EEG and ECoG. Here we are looking at the power spectral density plot again, but taking the average across trials and showing +/- 1 standard deviation from the mean across channels.
```
raw.plot_psd(area_mode='std', average=True);
```
We can also plot these same figures using a narrower frequency range, and looking at a smaller set of channels using `picks`. For `plot_psd` and other functions, `picks` is a list of integer indices corresponding to your channels of interest. You can choose these by their number, or you can use the convenient `mne.pick_channels` function to choose them by name. For example, in EEG, we often see strong responses to auditory stimuli at the top of the head, so here we will restrict our EEG channels to a few at the top of the head at the midline. For ECoG, we are more likely to see responses to auditory stimuli in temporal lobe electrodes (potentially RPPST, RPST, RMST, RAST), so we'll try those.
```
if data_type == 'eeg':
picks = mne.pick_channels(raw.ch_names, include=['Pz','CPz','Cz','FCz','Fz','C1','C2','FC1','FC2','CP1','CP2'])
elif data_type == 'ecog':
picks = mne.pick_channels(raw.ch_names, include=['RPPST9','RPPST10','RPPST11'])
raw.plot_psd(picks = picks, fmin=1, fmax=raw.info['sfreq']/2, xscale='log');
```
## Plotting responses to events
Ok, so this is all well and good. We can plot our raw data, the power spectrum, and the locations of the sensors. But what if we care about responses to the stimuli we described above? What if we want to look at responses to specific sentences, or the average response across all sentences, or something else? How can we determine which EEG sensors or ECoG electrodes respond to the speech stimuli?
Enter.... *Epoching!* MNE-python gives you a very convenient way of rearranging your data according to events of interest. These can actually even be found automatically from a stimulus channel, if you have one (using [`mne.find_events`](https://mne.tools/stable/generated/mne.find_events.html)), which we won't use here because we already have the timings from another procedure. You can also find other types of epochs, like those based on EMG or [eye movements (EOG)](https://mne.tools/stable/generated/mne.preprocessing.find_eog_events.html).
Here, we will load our event files (ending with `_eve.txt`). These contain information about the start sample, stop sample, and event ID for each stimulus. Each row in the file is one stimulus. The timings are in samples rather than in seconds, so if you are creating these on your own, pay attention to your sampling rate (in `raw.info['sfreq']`).
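As a quick illustration of the samples-vs-seconds distinction (with made-up numbers, not values from our event files), dividing an onset sample by the sampling rate gives the onset time in seconds:

```python
# Hypothetical event rows in MNE's format: (onset sample, previous value, event ID).
sfreq = 1000.0  # assumed sampling rate in Hz; the real value lives in raw.info['sfreq']
events = [(12000, 0, 5), (45500, 0, 12)]
onsets_sec = [sample / sfreq for sample, _, _ in events]
print(onsets_sec)  # [12.0, 45.5]
```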
```
# Load some events. The format of these is start sample, end sample, and event ID.
events = mne.read_events(event_files[data_type])
print(events)
num_events = len(events)
unique_stimuli = np.unique(np.array(events)[:,2])
num_unique = len(unique_stimuli)
print('There are %d total events, corresponding to %d unique stimuli'%(num_events, num_unique))
```
## Epochs
Great. So now that we have the events, we will "epoch" our data, which basically uses these timings to split up our data into trials of a given length. We will also set some parameters for data rejection to get rid of noisy trials.
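Conceptually (this is a toy pure-Python sketch, not what `mne.Epochs` does internally), epoching slices the continuous recording into fixed windows around each event onset, and peak-to-peak rejection then drops any window whose amplitude range exceeds the threshold:

```python
# A fake one-channel recording, one value per sample.
signal = list(range(100))
event_onsets = [10, 40, 70]    # hypothetical event sample indices
tmin_samp, tmax_samp = -2, 5   # window around each onset, in samples
epochs = [signal[on + tmin_samp: on + tmax_samp + 1] for on in event_onsets]
# Peak-to-peak rejection: keep windows whose max - min stays under a threshold.
clean = [ep for ep in epochs if max(ep) - min(ep) <= 60]
print(epochs[0])   # [8, 9, 10, 11, 12, 13, 14, 15]
print(len(clean))  # 3
```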
```
# Set some rejection criteria. This will be based on the peak-to-peak
# amplitude of your data.
if data_type=='eeg':
reject = {'eeg': 60e-6} # Higher than peak to peak amplitude of 60 µV will be rejected
scalings = None
units = None
elif data_type=='ecog':
reject = {'ecog': 10} # Higher than Z-score of 10 will be rejected
scalings = {'ecog': 1} # Don't rescale these as if they should be in µV
units = {'ecog': 'Z-score'}
tmin = -0.2
tmax = 1.0
epochs = mne.Epochs(raw, events, tmin=tmin, tmax=tmax, baseline=(None, 0), reject=reject, verbose=True)
```
So what's in this epochs data structure? If we look at it, we can see that we have an entry for each event ID, and we can see how many times that stimulus was played. You can also see whether baseline correction was done and for what time period, and whether any data was rejected.
```
epochs
```
Now, you could decide at this point that you just want to work with the data directly as a numpy array. Luckily, that's super easy to do! We can just call `get_data()` on our epochs data structure, and this will output a matrix of `[events x channels x time points]`. If you do not limit the channel type, you will get all of them (including any EOG, stimulus channels, or other non-EEG/ECoG channels).
```
ep_data = epochs.get_data()
print(ep_data.shape)
```
## Plotting Epoched data
Ok... so we are getting ahead of ourselves. MNE-python provides a lot of ways to plot our data so that we don't have to deal with writing functions to do this ourselves! For example, if we'd like to plot the EEG/ECoG for all of the single trials we just loaded, along with an average across all of these trials (and channels of interest), we can do that easily with `epochs.plot_image()`.
```
epochs.plot_image(combine='mean', scalings=scalings, units=units)
```
As before, we can choose specific channels to look at instead of looking at all of them at once. For which method do you think this would make the most difference? Why?
```
if data_type == 'eeg':
picks = mne.pick_channels(raw.ch_names, include=['Fz','FCz','Cz','CPz','Pz'])
elif data_type == 'ecog':
picks = mne.pick_channels(raw.ch_names, include=['RPPST9','RPPST10','RPPST11'])
epochs.plot_image(picks = picks, combine='mean', scalings=scalings, units=units)
```
We can also sort the trials, if we would like. This can be very convenient if you have reaction times or some other portion of the trial where reordering would make sense. Here, we'll just pick a channel and order by the mean activity within each trial.
```
if data_type == 'eeg':
picks = mne.pick_channels(raw.ch_names, include=['CP6'])
elif data_type == 'ecog':
picks = mne.pick_channels(raw.ch_names, include=['RPPST2'])
# Get the data as a numpy array
eps_data = epochs.get_data()
# Sort the data
new_order = eps_data[:,picks[0],:].mean(1).argsort(0)
epochs.plot_image(picks=picks, order=new_order, scalings=scalings, units=units)
```
## Other ways to view epoched data
For EEG, another way to view these epochs by trial is using the scalp topography information. This allows us to quickly assess differences across the scalp in response to the stimuli. What do you notice about the responses?
```
if data_type == 'eeg':
epochs.plot_topo_image(vmin=-30, vmax=30, fig_facecolor='w',font_color='k');
```
## Comparing epochs of different trial types
So far we have just shown averages of activity across many different sentences. However, as mentioned above, the sentences come from multiple male and female talkers. So -- one quick split we could try is just to compare the responses to female vs. male talkers. This is relatively simple with the TIMIT stimuli because their file name starts with "f" or "m" to indicate this.
```
# Make lists of the event ID numbers corresponding to "f" and "m" sentences
f_evs = []
m_evs = []
for k in event_id.keys():
if k[0] == 'f':
f_evs.append(event_id[k])
elif k[0] == 'm':
m_evs.append(event_id[k])
print(unique_stimuli)
f_evs_new = [v for v in f_evs if v in unique_stimuli]
m_evs_new = [v for v in m_evs if v in unique_stimuli]
# Epoch the data separately for "f" and "m" epochs
f_epochs = mne.Epochs(raw, events, event_id=f_evs_new, tmin=tmin, tmax=tmax, reject=reject)
m_epochs = mne.Epochs(raw, events, event_id=m_evs_new, tmin=tmin, tmax=tmax, reject=reject)
```
Now we can plot the epochs just as we did above.
```
f_epochs.plot_image(combine='mean', show=False, scalings=scalings, units=units)
m_epochs.plot_image(combine='mean', show=False, scalings=scalings, units=units)
```
Cool! So now we have a separate plot for the "f" and "m" talkers. However, it's not super convenient to compare the traces this way... we kind of want them on the same axis. MNE easily allows us to do this too! Instead of using the epochs, we can create `evoked` data structures, which are averaged epochs. You can [read more about evoked data structures here](https://mne.tools/dev/auto_tutorials/evoked/plot_10_evoked_overview.html).
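Under the hood, an evoked response is just the mean across trials at each time point. A toy sketch with made-up numbers:

```python
# trials x time points (fake data)
epochs_data = [[1.0, 2.0, 3.0],
               [3.0, 2.0, 1.0],
               [2.0, 2.0, 2.0]]
# Average over trials at each time point to get the "evoked" trace.
evoked = [sum(col) / len(col) for col in zip(*epochs_data)]
print(evoked)  # [2.0, 2.0, 2.0]
```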
## Compare evoked data
```
evokeds = {'female': f_epochs.average(), 'male': m_epochs.average()}
mne.viz.plot_compare_evokeds(evokeds, show_sensors='upper right',picks=picks);
```
If we actually want errorbars on this plot, we need to do this a bit differently. We can use the `iter_evoked()` method on our epochs structures to create a dictionary of conditions for which we will plot our comparisons with `plot_compare_evokeds`.
```
evokeds = {'f':list(f_epochs.iter_evoked()), 'm':list(m_epochs.iter_evoked())}
mne.viz.plot_compare_evokeds(evokeds, picks=picks);
```
## Plotting scalp topography
For EEG, another common plot you may see is a topographic map showing activity (or other data like p-values, or differences between conditions). In this example, we'll show the activity at -0.2, 0, 0.1, 0.2, 0.3, and 1 second. You can also of course choose just one time to look at.
```
if data_type == 'eeg':
times=[tmin, 0, 0.1, 0.2, 0.3, tmax]
epochs.average().plot_topomap(times, ch_type='eeg', cmap='PRGn', res=32,
outlines='skirt', time_unit='s');
```
We can also plot arbitrary data using `mne.viz.plot_topomap`, and passing in a vector of data matching the number of EEG channels, and `raw.info` to give specifics on those channel locations.
```
if data_type == 'eeg':
chans = mne.pick_types(raw.info, eeg=True)
data = np.random.randn(len(chans),)
plt.figure()
mne.viz.plot_topomap(data, raw.info, show=True)
```
We can even animate these topo maps! This won't work well in jupyterhub, but feel free to try on your own!
```
if data_type == 'eeg':
fig,anim=epochs.average().animate_topomap(blit=False, times=np.linspace(tmin, tmax, 100))
```
## A few more fancy EEG plots
If we want to get especially fancy, we can also use `plot_joint` with our evoked data (or averaged epoched data, as shown below). This allows us to combine the ERPs for individual channels with topographic maps at time points that we specify. Pretty awesome!
```
if data_type == 'eeg':
epochs.average().plot_joint(picks='eeg', times=[0.1, 0.2, 0.3])
```
# What if I need more control? - matplotlib alternatives
If you feel you need more specific control over your plots, it's easy to get the data into a usable format for plotting with matplotlib. You can export both the raw and epoched data using the `get_data()` function, which will allow you to save your data as a numpy array `[ntrials x nchannels x ntimepoints]`.
Then, you can do whatever you want with the data! Throw it into matplotlib, use seaborn, or whatever your heart desires!
```
if data_type == 'eeg':
picks = mne.pick_channels(raw.ch_names, include=['Fz','FCz','Cz','CPz','Pz'])
elif data_type == 'ecog':
picks = mne.pick_channels(raw.ch_names, include=['RPPST9','RPPST10','RPPST11'])
f_data = f_epochs.get_data(picks=picks)
m_data = m_epochs.get_data(picks=picks)
times = f_epochs.times
print(f_data.shape)
```
## Plot evoked data with errorbars
We can recreate some similar plots to those in MNE-python with some of the matplotlib functions. Here we'll create something similar to what was plotted in `plot_compare_evokeds`.
```
def plot_errorbar(x, ydata, label=None, axlines=True, alpha=0.5, **kwargs):
'''
Plot the mean +/- standard error of ydata.
Inputs:
x : vector of x values
ydata : matrix of your data (this will be averaged along the 0th dimension)
label : A string containing the label for this plot
axlines : [bool], whether to draw the horizontal and vertical axes
alpha: opacity of the standard error area
'''
ymean = ydata.mean(0)
ystderr = ydata.std(0)/np.sqrt(ydata.shape[0])
plt.plot(x, ydata.mean(0), label=label, **kwargs)
plt.fill_between(x, ymean+ystderr, ymean-ystderr, alpha=alpha, **kwargs)
if axlines:
plt.axvline(0, color='k', linestyle='--')
plt.axhline(0, color='k', linestyle='--')
plt.gca().set_xlim([x.min(), x.max()])
plt.figure()
plot_errorbar(times, f_data.mean(0), label='female')
plot_errorbar(times, m_data.mean(0), label='male')
plt.xlabel('Time (s)')
plt.ylabel('Z-scored high gamma')
plt.legend()
```
## ECoG Exercise:
1. If you wanted to look at each ECoG electrode individually to find which ones have responses to the speech data, how would you do this?
2. Can you plot the comparison between "f" and "m" trials for each electrode as a subplot (try using `plt.subplot()` from `matplotlib`)
```
# Get the data for f trials
# Get the data for m trials
# Loop through each channel, and create a set of subplots for each
```
# Hooray, the End!
You did it! Go forth and use MNE-python in your own projects, or even contribute to the code! 🧠
```
# all_no_testing
# default_exp models.binaryClassification
# default_cls_lvl 2
```
# Binary Horse Poo Model
> Simple model to detect HorsePoo vs noHorsePoo
## export data
```
%load_ext autoreload
%autoreload 2
#!rm -R data/tmp/horse_poo/ && rm -R data/tmp/no_horse_poo/
#!prodigy db-out binary_horse_poo ./data/tmp
```
## Description
With this model we will start off with a very simple binary classification. We will try to use most of the default settings from fastai. This will also be our benchmark model for further investigations.
```
#export
from fastai.vision import *
from fastai.callbacks import EarlyStoppingCallback
from prodigy.util import read_jsonl, write_jsonl
from prodigy.components.db import connect
from PooDetector.dataset_operations import extract_jsonl_to_binary_folders
import os
import shutil
from fastscript import *
#export
def prepare_data(fld_input:str='data/tmp', bs=256):
"""function to get a fastai databunch which can be used for training"""
#tfms = get_transforms(do_flip=False, max_zoom=1, max_warp=None)
#t_tfms = []
#t_tfms.append(flip_lr(p=0.5))
#t_tfms.append(symmetric_warp(magnitude=(-0.2,0.2), p=0.75))
#t_tfms.append(rotate(degrees=(-10,10), p=0.75))
#t_tfms.append(rand_zoom(scale=(1.,1.1), p=0.75))
#t_tfms.append(brightness(change=(0.5*(1-0.2), 0.5*(1+0.2)), p=0.75))
#t_tfms.append(contrast(scale=(1-0.2, 1/(1-0.2)), p=0.75))
#tfms = (t_tfms , [])
tfms = get_transforms()
return (ImageList.from_folder(fld_input)
.split_by_rand_pct(0.2)
.label_from_folder()
.transform(tfms, size=224)
.databunch(bs=bs)
.normalize(imagenet_stats))
#no_testing
data = prepare_data(fld_input='test_data/', bs=16)
#no_testing
data.show_batch()
#export
def get_learner(data:ImageDataBunch=None, model:Module=None):
    """get a learner object for training"""
if data is None:
data = prepare_data()
if model is None:
model = models.resnet50
early_stopping = partial(EarlyStoppingCallback, min_delta=0.005, patience=8)
return cnn_learner(data, base_arch=model, callback_fns=[early_stopping])
#no_testing
learn = get_learner(data=data)
#no_testing
learn.fit_one_cycle(2, 5e-2)
#learn.fit_one_cycle(2, 5e-2)
learn.save('stage1')
#no_testing
learn.export()
#export
@call_parse
def train_model(path_jsonl:Param("path to jsonl file", str)='test_data/binary_horse_poo.jsonl',
cycles_to_fit:Param("number of cycles to fit", int)=10,
bs:Param("batch size", int)=128,
label:Param("positive label for binary classification", str)="horse_poo"
):
"""start training a new model with early stopping and export it"""
path_jsonl = Path(path_jsonl)
if path_jsonl.exists():
path_jsonl.unlink()
db = connect() # uses settings from your prodigy.json
images = db.get_dataset("binary_horse_poo")
write_jsonl(path_jsonl, images)
remove_subfolders(str(path_jsonl.parent))
extract_jsonl_to_binary_folders(str(path_jsonl), label)
data = prepare_data(path_jsonl.parent, bs=bs)
learn = get_learner(data)
learn.fit_one_cycle(cycles_to_fit, 5e-2)
learn.export()
return learn
#export
def remove_subfolders(path_parent:[Path, str]):
    """remove all subfolders"""
path_parent = Path(path_parent)
for root, dirs, files in os.walk(str(path_parent), topdown=False):
for directory in dirs:
print(f"remove {str(Path(root) / Path(directory))}")
shutil.rmtree(str(Path(root) / Path(directory)))
#no_testing
path = Path('test_data/tmp/')
if os.path.exists(str(path)) is False:
os.mkdir(str(path))
if os.path.exists(str(path / 'horse')) is False:
os.mkdir(str(path / 'horse'))
if os.path.exists(str(path / 'no_horse')) is False:
os.mkdir(str(path / 'no_horse'))
assert os.path.exists(str(path))
assert os.path.exists(str(path / 'horse'))
assert os.path.exists(str(path / 'no_horse'))
remove_subfolders(str(path))
assert not os.path.exists(str(path / 'horse'))
assert not os.path.exists(str(path / 'no_horse'))
# prepare test
path_jsonl = 'test_data/binary_horse_poo.jsonl'
path_jsonl = Path(path_jsonl)
if os.path.exists('test_data/tmp') is False:
os.mkdir('test_data/tmp')
path_fld_target = path_jsonl.parent / 'tmp'
shutil.copy(str(path_jsonl), str(path_fld_target) )
path_jsonl = path_fld_target / path_jsonl.name
assert os.path.exists(path_jsonl)
#test
#learn = train_model(path_jsonl=path_jsonl, cycles_to_fit=2, bs=4)
#assert os.path.exists(str(path_jsonl.parent / 'export.pkl'))
#no_testing
#!prodigy db-out binary_horse_poo > data/tmp/binary_horse_poo.jsonl
path_jsonl = 'data/tmp/binary_horse_poo.jsonl'
learn = train_model(path_jsonl=path_jsonl, cycles_to_fit=15, bs=128)
#no_testing
learn.unfreeze()
#no_testing
learn.fit_one_cycle(8)
#no_testing
learn.fit_one_cycle(8)
#no_testing
learn.export()
```
# MDN-transformer with examples
- What kind of data can be predicted by a mixture density network Transformer?
- Continuous sequential data
    - Drawing data and RoboJam touch screen data would be good examples for this; continuous values yield high resolution in 2D space.
# 1. Kanji Generation
- Firstly, let's try modelling some drawing data for Kanji writing using an MDN-Transformer.
- This work is inspired by previous work "MDN-RNN for Kanji Generation", hardmaru's Kanji tutorial and the original Sketch-RNN repository:
- http://blog.otoro.net/2015/12/28/recurrent-net-dreams-up-fake-chinese-characters-in-vector-format-with-tensorflow/
- https://github.com/hardmaru/sketch-rnn
- The idea is to learn how to draw kanji characters from a dataset of vector representations.
- This means learning how to move a pen in 2D space.
- The data consists of a sequence of pen movements (locations in 2D) and whether the pen is up or down.
- In this example, we will use one 3D MDN to model everything!
We will end up with a system that continues writing Kanji when given a short starting sequence.
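To make the data format concrete, here is a toy stdlib sketch (values made up) of how absolute pen positions are recovered from relative `(dx, dy, pen)` rows by cumulative summation:

```python
# Each row is a relative pen movement: (dx, dy, pen_down flag).
strokes = [(5, 0, 1), (0, 5, 1), (-5, 0, 0), (2, 2, 1)]
x = y = 0
points = []
for dx, dy, pen in strokes:
    # Accumulate the offsets to get the absolute pen position.
    x, y = x + dx, y + dy
    points.append((x, y, pen))
print(points)  # [(5, 0, 1), (5, 5, 1), (0, 5, 0), (2, 7, 1)]
```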
```
# Setup and modules
import sys
!{sys.executable} -m pip install keras-mdn-layer
!{sys.executable} -m pip install tensorflow
!{sys.executable} -m pip install tensorflow-probability
!{sys.executable} -m pip install matplotlib
!{sys.executable} -m pip install pandas
!{sys.executable} -m pip install svgwrite
import mdn
import numpy as np
import random
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
%matplotlib notebook
# Only for GPU use:
gpus = tf.config.list_physical_devices('GPU')
if gpus:
try:
# Currently, memory growth needs to be the same across GPUs
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
logical_gpus = tf.config.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Memory growth must be set before GPUs have been initialized
print(e)
```
### Download and process the data set
```
# Train from David Ha's Kanji dataset from Sketch-RNN: https://github.com/hardmaru/sketch-rnn-datasets
# Other datasets in "Sketch 3" format should also work.
import urllib.request
url = 'https://github.com/hardmaru/sketch-rnn-datasets/raw/master/kanji/kanji.rdp25.npz'
urllib.request.urlretrieve(url, './kanji.rdp25.npz')
```
### Dataset
Includes about 11000 handwritten kanji characters divided into training, validation, and testing sets.
```
with np.load('./kanji.rdp25.npz', allow_pickle=True) as data:
train_set = data['train']
valid_set = data['valid']
test_set = data['test']
print("Training kanji:", len(train_set))
print("Validation kanji:", len(valid_set))
print("Testing kanji:", len(test_set))
# Functions for slicing up data
def slice_sequence_examples(sequence, num_steps):
xs = []
for i in range(len(sequence) - num_steps - 1):
example = sequence[i: i + num_steps]
xs.append(example)
return xs
def seq_to_singleton_format(examples):
xs = []
ys = []
for ex in examples:
xs.append(ex[:SEQ_LEN])
ys.append(ex)
return xs, ys
# Functions for making the data set
def format_dataset(x, y):
return ({
"input": x,
"target": y[:, :-1, :],
}, y[:, 1:, :])
def make_dataset(X, y):
dataset = tf.data.Dataset.from_tensor_slices((X, y))
dataset = dataset.batch(batch_size)
dataset = dataset.map(format_dataset)
return dataset.shuffle(2048).prefetch(16).cache()
# Data shapes
NUM_FEATS = 3
SEQ_LEN = 20
gap_len = 1
batch_size = 128
# Prepare training data as X and Y.
slices = []
for seq in train_set:
slices += slice_sequence_examples(seq, SEQ_LEN+gap_len)
X, y = seq_to_singleton_format(slices)
X = np.array(X)
y = np.array(y)
train_ds = make_dataset(X, y)
print("Number of training examples:")
print("X:", X.shape)
print("y:", y.shape)
print(train_ds)
```
### Constructing the MDN Transformer
Our MDN Transformer has the following settings:
- an embedding layer with positional embedding
- a transformer encoder
- a transformer decoder
- a three-dimensional mixture layer with 10 mixtures
- train for sequence length ___
- training for ___ epochs with a batch size of ___
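For reference, the mixture layer is trained with the usual MDN negative log-likelihood, sketched here assuming the diagonal-covariance Gaussians used by `keras-mdn-layer`:

```latex
\mathcal{L} = -\sum_{t} \log \sum_{k=1}^{K} \pi_k(x_t)\,
\mathcal{N}\!\left(y_t \,\middle|\, \mu_k(x_t), \operatorname{diag}\bigl(\sigma_k^2(x_t)\bigr)\right),
\qquad K = 10
```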
```
class PositionalEmbedding(layers.Layer):
def __init__(self, sequence_length, input_dim, output_dim, **kwargs):
super().__init__(**kwargs)
self.token_embeddings = layers.Dense(output_dim)
self.position_embeddings = layers.Embedding(
input_dim=sequence_length, output_dim=output_dim)
self.sequence_length = sequence_length
self.input_dim = input_dim
self.output_dim = output_dim
def call(self, inputs, padding_mask=None):
length = inputs.shape[1]
positions = tf.range(start=0, limit=length, delta=1)
embedded_tokens = self.token_embeddings(inputs)
embedded_positions = self.position_embeddings(positions)
return embedded_tokens + embedded_positions
def get_config(self):
config = super().get_config()
config.update({
"output_dim": self.output_dim,
"sequence_length": self.sequence_length,
"input_dim": self.input_dim,
})
return config
class TransformerEncoder(layers.Layer):
def __init__(self, embed_dim, dense_dim, num_heads, **kwargs):
super().__init__(**kwargs)
self.embed_dim = embed_dim
self.dense_dim = dense_dim
self.num_heads = num_heads
self.attention = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim)
self.dense_proj = keras.Sequential(
[layers.Dense(dense_dim, activation="relu"),
layers.Dense(embed_dim),]
)
self.layernorm_1 = layers.LayerNormalization()
self.layernorm_2 = layers.LayerNormalization()
def call(self, inputs, mask=None):
attention_output = self.attention(inputs, inputs, attention_mask=mask)
proj_input = self.layernorm_1(inputs + attention_output)
proj_output = self.dense_proj(proj_input)
return self.layernorm_2(proj_input + proj_output)
def get_config(self):
config = super().get_config()
config.update({
"embed_dim": self.embed_dim,
"num_heads": self.num_heads,
"dense_dim": self.dense_dim,
})
return config
class TransformerDecoder(layers.Layer):
def __init__(self, embed_dim, dense_dim, num_heads, **kwargs):
super().__init__(**kwargs)
self.embed_dim = embed_dim
self.dense_dim = dense_dim
self.num_heads = num_heads
self.attention_1 = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim)
self.attention_2 = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim)
self.dense_proj = keras.Sequential(
[layers.Dense(dense_dim, activation="relu"),
layers.Dense(embed_dim),]
)
self.layernorm_1 = layers.LayerNormalization()
self.layernorm_2 = layers.LayerNormalization()
self.layernorm_3 = layers.LayerNormalization()
self.supports_masking = True
def get_causal_attention_mask(self, inputs):
input_shape = tf.shape(inputs)
batch_size, sequence_length = input_shape[0], input_shape[1]
i = tf.range(sequence_length)[:, tf.newaxis]
j = tf.range(sequence_length)
mask = tf.cast(i >= j, dtype="int32")
mask = tf.reshape(mask, (1, input_shape[1], input_shape[1]))
mult = tf.concat(
[tf.expand_dims(batch_size, -1),
tf.constant([1, 1], dtype=tf.int32)], axis=0)
return tf.tile(mask, mult)
def call(self, inputs, encoder_outputs, padding_mask=None):
causal_mask = self.get_causal_attention_mask(inputs)
attention_output_1 = self.attention_1(
query=inputs,
value=inputs,
key=inputs,
attention_mask=causal_mask)
attention_output_1 = self.layernorm_1(inputs + attention_output_1)
if padding_mask==None:
attention_output_2 = self.attention_2(
query=attention_output_1,
value=encoder_outputs,
key=encoder_outputs)
else:
attention_output_2 = self.attention_2(
query=attention_output_1,
value=encoder_outputs,
key=encoder_outputs,
attention_mask=padding_mask)
attention_output_2 = self.layernorm_2(
attention_output_1 + attention_output_2)
proj_output = self.dense_proj(attention_output_2)
return self.layernorm_3(attention_output_2 + proj_output)
def get_config(self):
config = super().get_config()
config.update({
"embed_dim": self.embed_dim,
"num_heads": self.num_heads,
"dense_dim": self.dense_dim,
})
return config
# Training Hyperparameters:
input_dim = 3
sequence_length = 20
target_length = 20
embed_dim = 256
dense_dim = 128
num_heads = 2
output_dim = 3
number_mixtures = 10
EPOCHS = 20
SEED = 2345 # set random seed for reproducibility
random.seed(SEED)
np.random.seed(SEED)
encoder_inputs = keras.Input(shape=(sequence_length, input_dim), dtype="float64", name="input")
x = PositionalEmbedding(sequence_length, input_dim, embed_dim)(encoder_inputs)
encoder_outputs = TransformerEncoder(embed_dim, dense_dim, num_heads)(x)
print(encoder_outputs.shape)
# encoder_outputs2 = TransformerEncoder(embed_dim, dense_dim, num_heads)(encoder_outputs)
decoder_inputs = keras.Input(shape=(target_length, input_dim), dtype="float64", name="target")
x = PositionalEmbedding(target_length, input_dim, embed_dim)(decoder_inputs)
x = TransformerDecoder(embed_dim, dense_dim, num_heads)(x, encoder_outputs)
# x = TransformerDecoder(embed_dim, dense_dim, num_heads)(x, encoder_outputs2)
x = layers.Dropout(0.2)(x)
decoder_outputs = layers.Dense(input_dim, activation="softmax")(x)
outputs = mdn.MDN(output_dim, number_mixtures) (decoder_outputs)
model = keras.Model([encoder_inputs, decoder_inputs], outputs)
model.compile(loss=mdn.get_mixture_loss_func(output_dim,number_mixtures),
optimizer=keras.optimizers.Adam())
model.summary()
callbacks = [
keras.callbacks.ModelCheckpoint("full_transformer.keras",
save_best_only=True)
]
history=model.fit(train_ds, batch_size=batch_size, epochs=EPOCHS, callbacks=callbacks)
!mkdir -p saved_model
model.save('my_model_100_128_256_128_2_02')
# print(f"Test acc: {model.evaluate(int_test_ds)[1]:.3f}")
plt.figure()
plt.plot(history.history['loss'])
plt.show()
ls saved_model/my_model6
model = keras.models.load_model('saved_model/my_model6')
```
## Generating drawings
First need some helper functions to view the output.
```
def zero_start_position():
"""A zeroed out start position with pen down"""
out = np.zeros((1, 1, 3), dtype=np.float32)
out[0, 0, 2] = 1 # set pen down.
return out
def generate_sketch(model, start_pos, num_points=100):
    # TODO: not implemented; generation is done inline further below.
    return None
def cutoff_stroke(x):
return np.greater(x,0.5) * 1.0
def plot_sketch(sketch_array):
"""Plot a sketch quickly to see what it looks like."""
sketch_df = pd.DataFrame({'x':sketch_array.T[0],'y':sketch_array.T[1],'z':sketch_array.T[2]})
sketch_df.x = sketch_df.x.cumsum()
sketch_df.y = -1 * sketch_df.y.cumsum()
# Do the plot
fig = plt.figure(figsize=(8, 8))
ax1 = fig.add_subplot(111)
#ax1.scatter(sketch_df.x,sketch_df.y,marker='o', c='r', alpha=1.0)
# Need to do something with sketch_df.z
ax1.plot(sketch_df.x,sketch_df.y,'r-')
plt.show()
```
## SVG Drawing Function
Here are Hardmaru's drawing functions from _write-rnn-tensorflow_. Big hat tip to Hardmaru for this!
Here's the source: https://github.com/hardmaru/write-rnn-tensorflow/blob/master/utils.py
```
import svgwrite
from IPython.display import SVG, display
def get_bounds(data, factor):
min_x = 0
max_x = 0
min_y = 0
max_y = 0
abs_x = 0
abs_y = 0
for i in range(len(data)):
x = float(data[i, 0]) / factor
y = float(data[i, 1]) / factor
abs_x += x
abs_y += y
min_x = min(min_x, abs_x)
min_y = min(min_y, abs_y)
max_x = max(max_x, abs_x)
max_y = max(max_y, abs_y)
return (min_x, max_x, min_y, max_y)
def draw_strokes(data, factor=1, svg_filename='sample.svg'):
min_x, max_x, min_y, max_y = get_bounds(data, factor)
dims = (50 + max_x - min_x, 50 + max_y - min_y)
dwg = svgwrite.Drawing(svg_filename, size=dims)
dwg.add(dwg.rect(insert=(0, 0), size=dims, fill='white'))
lift_pen = 1
abs_x = 25 - min_x
abs_y = 25 - min_y
p = "M%s,%s " % (abs_x, abs_y)
command = "m"
for i in range(len(data)):
if (lift_pen == 1):
command = "m"
elif (command != "l"):
command = "l"
else:
command = ""
x = float(data[i, 0]) / factor
y = float(data[i, 1]) / factor
lift_pen = data[i, 2]
p += command + str(x) + "," + str(y) + " "
the_color = "black"
stroke_width = 1
dwg.add(dwg.path(p).stroke(the_color, stroke_width).fill("none"))
dwg.save()
display(SVG(dwg.tostring()))
original = valid_set[0]
x0 = np.array([valid_set[0][:SEQ_LEN]])
y0 = x0
# y0 = np.array([valid_set[0][:(SEQ_LEN+9)]])
# Predict a character and plot the result.
pi_temperature = 3  # a rather high pi temperature seems to work well
sigma_temp = 0.1 # seems to work well with low temp
### Generation using one example from the validation set as the seed
p = x0
sketch = p
for i in range(100):
params = model.predict([p, p])
    # sample from the mixture parameters of the final predicted timestep
    out = mdn.sample_from_output(params[0][-1], output_dim, number_mixtures, temp=pi_temperature, sigma_temp=sigma_temp)
p = np.concatenate((p[:,1:],np.array([out])), axis=1)
sketch = np.concatenate((sketch, np.array([out])), axis=1)
sketch.T[2] = cutoff_stroke(sketch.T[2])
draw_strokes(sketch[0], factor=0.5)
draw_strokes(x0[0], factor=0.5)
```
### Imports
```
import pandas as pd
import numpy as np
#Python Standard Libs Imports
import json
import urllib2
import sys
from datetime import datetime
from os.path import isfile, join, splitext
from glob import glob
#Imports to enable visualizations
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
```
### Functions
#### Basic Functions
#### OTP Functions
#### Analysis Functions
### Main Code
#### Reading itinerary alternatives data
```
all_itineraries = pd.read_csv('/local/tarciso/data/its/itineraries/all_itineraries.csv', parse_dates=['planned_start_time','actual_start_time','exec_start_time'])
all_itineraries.head()
all_itineraries.dtypes
len(all_itineraries)
len(all_itineraries.user_trip_id.unique())
```
#### Adding metadata for further analysis
```
def get_trip_len_bucket(trip_duration):
if (trip_duration < 10):
return '<10'
elif (trip_duration < 20):
return '10-20'
elif (trip_duration < 30):
return '20-30'
elif (trip_duration < 40):
return '30-40'
elif (trip_duration < 50):
return '40-50'
elif (trip_duration >= 50):
return '50+'
else:
return 'NA'
def get_day_type(trip_start_time):
trip_weekday = trip_start_time.weekday()
if ((trip_weekday == 0) | (trip_weekday == 4)):
return 'MON/FRI'
elif ((trip_weekday > 0) & (trip_weekday < 4)):
return 'TUE/WED/THU'
elif (trip_weekday > 4):
return 'SAT/SUN'
else:
return 'NA'
all_itineraries['trip_length_bucket'] = all_itineraries['exec_duration_mins'].apply(get_trip_len_bucket)
all_itineraries['hour_of_day'] = all_itineraries['exec_start_time'].dt.hour
period_of_day_list = [('hour_of_day', [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]),
('period_of_day', ['very_late_night','very_late_night','very_late_night','very_late_night','early_morning','early_morning','early_morning','morning','morning','morning','morning','midday','midday','midday','afternoon','afternoon','afternoon','evening','evening','evening','night','night','late_night','late_night'])]
period_of_day_df = pd.DataFrame(dict(period_of_day_list))  # DataFrame.from_items was removed in pandas 1.0
period_of_day_df.period_of_day = period_of_day_df.period_of_day.astype(pd.CategoricalDtype(ordered=True))
all_itineraries = all_itineraries.merge(period_of_day_df, how='inner', on='hour_of_day')
all_itineraries['weekday'] = all_itineraries['exec_start_time'].apply(lambda x: x.weekday() < 5)
all_itineraries['day_type'] = all_itineraries['exec_start_time'].apply(get_day_type)
all_itineraries
```
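To make the bucketing behaviour concrete, here is the same threshold logic applied to a few sample durations (a standalone copy of `get_trip_len_bucket` so the snippet runs on its own):

```python
def get_trip_len_bucket(trip_duration):
    # Same thresholds as above: half-open 10-minute buckets
    if trip_duration < 10:
        return '<10'
    elif trip_duration < 20:
        return '10-20'
    elif trip_duration < 30:
        return '20-30'
    elif trip_duration < 40:
        return '30-40'
    elif trip_duration < 50:
        return '40-50'
    elif trip_duration >= 50:
        return '50+'
    else:
        return 'NA'  # reached only for NaN, which fails every comparison

for dur in [5, 10, 19.9, 20, 50, 75]:
    print(dur, '->', get_trip_len_bucket(dur))
# Each boundary belongs to the upper bucket: 10 -> '10-20', 20 -> '20-30'
```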
#### Filtering trips for whose executed itineraries there is no schedule information
```
def filter_trips_alternatives(trips_alternatives):
min_trip_dur = 10
max_trip_dur = 50
max_trip_start_diff = 20
return trips_alternatives[(trips_alternatives['actual_duration_mins'] >= min_trip_dur) & (trips_alternatives['actual_duration_mins'] <= max_trip_dur)] \
.assign(start_diff = lambda x: np.absolute(x['exec_start_time'] - x['actual_start_time'])/pd.Timedelta(minutes=1)) \
[lambda x: x['start_diff'] <= max_trip_start_diff]
def filter_trips_with_insufficient_alternatives(trips_alternatives):
num_trips_alternatives = trips_alternatives.groupby(['date','user_trip_id']).size().reset_index(name='num_alternatives')
trips_with_executed_alternative = trips_alternatives[trips_alternatives['itinerary_id'] == 0][['date','user_trip_id']]
return trips_alternatives.merge(trips_with_executed_alternative, on=['date','user_trip_id'], how='inner') \
.merge(num_trips_alternatives, on=['date','user_trip_id'], how='inner') \
[lambda x: x['num_alternatives'] > 1] \
.sort_values(['user_trip_id','itinerary_id'])
clean_itineraries = filter_trips_with_insufficient_alternatives(filter_trips_alternatives(all_itineraries))
clean_itineraries.head()
sns.distplot(clean_itineraries['start_diff'])
len(clean_itineraries)
len(clean_itineraries.user_trip_id.unique())
exec_itineraries_with_scheduled_info = all_itineraries[(all_itineraries['itinerary_id'] == 0) & (pd.notnull(all_itineraries['planned_duration_mins']))][['date','user_trip_id']]
clean_itineraries2 = filter_trips_with_insufficient_alternatives(filter_trips_alternatives(all_itineraries.merge(exec_itineraries_with_scheduled_info, on=['date','user_trip_id'], how='inner')))
clean_itineraries2.head()
len(clean_itineraries2)
len(clean_itineraries2.user_trip_id.unique())
```
## Compute Inefficiency Metrics

```
def select_best_itineraries(trips_itineraries,metric_name):
return trips_itineraries.sort_values([metric_name]) \
.groupby(['date','user_trip_id']) \
.nth(0) \
.reset_index()
```
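The sort-then-`nth(0)` idiom above is worth unpacking: sorting the whole frame by the metric first means that the first row of every `(date, user_trip_id)` group is the alternative that minimizes it. A toy example with invented trips:

```python
import pandas as pd

# Two trips, each with several itinerary alternatives
df = pd.DataFrame({
    'date':                 ['d1', 'd1', 'd1', 'd1', 'd1'],
    'user_trip_id':         [1, 1, 1, 2, 2],
    'itinerary_id':         [0, 1, 2, 0, 1],
    'actual_duration_mins': [30, 25, 40, 15, 18],
})

# Sort by the metric, then keep the first row of each group:
# exactly one row per trip, the alternative with the lowest duration
best = (df.sort_values('actual_duration_mins')
          .groupby(['date', 'user_trip_id'])
          .nth(0)
          .reset_index())
print(best[['user_trip_id', 'itinerary_id', 'actual_duration_mins']])
```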
### Observed Inefficiency
```
#Choose best itinerary for each trip by selecting the ones with lower actual duration
best_trips_itineraries = select_best_itineraries(clean_itineraries,'actual_duration_mins')
best_trips_itineraries.head()
trips_inefficiency = best_trips_itineraries \
.assign(dur_diff = lambda x: x['exec_duration_mins'] - x['actual_duration_mins']) \
.assign(observed_inef = lambda x: x['dur_diff']/x['exec_duration_mins'])
trips_inefficiency.head(10)
sns.distplot(trips_inefficiency.observed_inef)
sns.violinplot(trips_inefficiency.observed_inef)
pos_trips_inefficiency = trips_inefficiency[trips_inefficiency['dur_diff'] > 1]
pos_trips_inefficiency.head()
```
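In other words, observed inefficiency is the fraction of the executed trip's duration that the best observed alternative would have saved. A standalone sketch of the same arithmetic:

```python
def observed_inefficiency(exec_duration, best_alternative_duration):
    """Fraction of the executed trip's duration the best alternative
    would have saved: (exec - best) / exec."""
    dur_diff = exec_duration - best_alternative_duration
    return dur_diff / exec_duration

# A 40-minute trip with a 30-minute alternative wastes 25% of its duration
print(observed_inefficiency(40, 30))  # 0.25
# A trip that already used the fastest alternative has zero inefficiency
print(observed_inefficiency(30, 30))  # 0.0
```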
#### Number of Trips with/without improvement per Trip Length Bucket
```
trips_per_length = trips_inefficiency.groupby('trip_length_bucket').size().reset_index(name='total')
trips_per_length
trips_per_length_improved = pos_trips_inefficiency.groupby('trip_length_bucket').size().reset_index(name='total')
trips_per_length_improved
sns.set_style("whitegrid")
#Plot 1 - background - "total" (top) series
ax = sns.barplot(x = trips_per_length.trip_length_bucket, y = trips_per_length.total, color = "#66c2a5")
#Plot 2 - overlay - "bottom" series
bottom_plot = sns.barplot(x = trips_per_length_improved.trip_length_bucket, y = trips_per_length_improved.total, color = "#fc8d62")
bottom_plot.set(xlabel='Trip Length (minutes)',ylabel='Number of Trips')
topbar = plt.Rectangle((0,0),1,1,fc="#66c2a5", edgecolor = 'none')
bottombar = plt.Rectangle((0,0),1,1,fc='#fc8d62', edgecolor = 'none')
l = plt.legend([bottombar, topbar], ['Not Optimal', 'Optimal'], bbox_to_anchor=(0, 1.2), loc=2, ncol = 2, prop={'size':10})
l.draw_frame(False)
fig = ax.get_figure()
fig.set_size_inches(4.5, 3)
#fig.savefig('/local/tarciso/masters/data/results/trip_length_by_optimality.pdf', bbox_inches='tight')
```
#### Per Trip Length Bucket
```
trip_len_order=['10-20','20-30','30-40','40-50']
ax = sns.boxplot(x='observed_inef',y='trip_length_bucket', orient='h', data=pos_trips_inefficiency, order=trip_len_order, color='#fc8d62')
ax.set(xlabel='Inefficiency (%)',ylabel='Trip Length')
fig = ax.get_figure()
fig.set_size_inches(4.5, 3)
#fig.savefig('/local/tarciso/masters/data/results/imp_capacity_per_trip_length.pdf', bbox_inches='tight')
```
#### Per Period of Day
```
period_of_day_order = ['very_late_night','early_morning','morning','midday','afternoon','evening','night','late_night']
ax = sns.boxplot(x='observed_inef',y='period_of_day', data=pos_trips_inefficiency, order=period_of_day_order, color='#fc8d62')
ax.set(xlabel='Inefficiency (%)',ylabel='Period of Day')
fig = ax.get_figure()
fig.set_size_inches(4.5, 3)
#fig.savefig('/local/tarciso/masters/data/results/imp_capacity_per_day_period.pdf', bbox_inches='tight')
```
#### Per Weekday/Weekend
```
ax = sns.barplot(x='trip_length_bucket',y='observed_inef', hue='weekday', data=pos_trips_inefficiency, color='#fc8d62')
ax.set(xlabel='Trip Length',ylabel='Inefficiency (%)')
fig = ax.get_figure()
fig.set_size_inches(4.5, 3)
#fig.savefig('/local/tarciso/masters/data/results/imp_capacity_per_day_period.pdf', bbox_inches='tight')
```
#### Per Day Type
```
ax = sns.barplot(x='trip_length_bucket',y='observed_inef', hue='day_type', data=pos_trips_inefficiency, color='#fc8d62')
ax.set(xlabel='Trip Length',ylabel='Inefficiency (%)')
fig = ax.get_figure()
fig.set_size_inches(4.5, 3)
#fig.savefig('/local/tarciso/masters/data/results/imp_capacity_per_day_period.pdf', bbox_inches='tight')
```
### Schedule Inefficiency
```
shortest_planned_itineraries = select_best_itineraries(clean_itineraries[pd.notnull(clean_itineraries['planned_duration_mins'])],'planned_duration_mins') \
[['date','user_trip_id','planned_duration_mins','actual_duration_mins']] \
.rename(index=str,columns={'planned_duration_mins':'shortest_scheduled_planned_duration',
'actual_duration_mins':'shortest_scheduled_observed_duration'})
shortest_planned_itineraries.head()
sched_inef = best_trips_itineraries \
.rename(index=str,columns={'actual_duration_mins':'shortest_observed_duration'}) \
.merge(shortest_planned_itineraries, on=['date','user_trip_id'], how='inner') \
.assign(sched_dur_diff = lambda x: x['shortest_scheduled_observed_duration'] - x['shortest_observed_duration']) \
.assign(sched_inef = lambda x: x['sched_dur_diff']/x['shortest_scheduled_observed_duration'])
sched_inef.head()
sns.distplot(sched_inef.sched_inef)
sns.violinplot(sched_inef.sched_inef)
pos_sched_inef = sched_inef[sched_inef['sched_dur_diff'] > 1]
sns.distplot(pos_sched_inef.sched_inef)
```
#### Number of Trips with/without improvement per Trip Length Bucket
```
trips_per_length = sched_inef.groupby('trip_length_bucket').size().reset_index(name='total')
trips_per_length
trips_per_length_improved = pos_sched_inef.groupby('trip_length_bucket').size().reset_index(name='total')
trips_per_length_improved
sns.set_style("whitegrid")
#Plot 1 - background - "total" (top) series
ax = sns.barplot(x = trips_per_length.trip_length_bucket, y = trips_per_length.total, color = "#66c2a5")
#Plot 2 - overlay - "bottom" series
bottom_plot = sns.barplot(x = trips_per_length_improved.trip_length_bucket, y = trips_per_length_improved.total, color = "#fc8d62")
bottom_plot.set(xlabel='Trip Length (minutes)',ylabel='Number of Trips')
topbar = plt.Rectangle((0,0),1,1,fc="#66c2a5", edgecolor = 'none')
bottombar = plt.Rectangle((0,0),1,1,fc='#fc8d62', edgecolor = 'none')
l = plt.legend([bottombar, topbar], ['Not Optimal', 'Optimal'], bbox_to_anchor=(0, 1.2), loc=2, ncol = 2, prop={'size':10})
l.draw_frame(False)
fig = ax.get_figure()
fig.set_size_inches(4.5, 3)
#fig.savefig('/local/tarciso/masters/data/results/trip_length_by_optimality.pdf', bbox_inches='tight')
```
#### Per Trip Length Bucket
```
trip_len_order=['10-20','20-30','30-40','40-50']
ax = sns.boxplot(x='sched_inef',y='trip_length_bucket', orient='h', data=pos_sched_inef, order=trip_len_order, color='#fc8d62')
ax.set(xlabel='Schedule Inefficiency (%)',ylabel='Trip Length')
fig = ax.get_figure()
fig.set_size_inches(4.5, 3)
#fig.savefig('/local/tarciso/masters/data/results/imp_capacity_per_trip_length.pdf', bbox_inches='tight')
```
#### Per Period of Day
```
period_of_day_order = ['very_late_night','early_morning','morning','midday','afternoon','evening','night','late_night']
ax = sns.boxplot(x='sched_inef',y='period_of_day', data=pos_sched_inef, order=period_of_day_order, color='#fc8d62')
ax.set(xlabel='Schedule Inefficiency (%)',ylabel='Period of Day')
fig = ax.get_figure()
fig.set_size_inches(4.5, 3)
#fig.savefig('/local/tarciso/masters/data/results/imp_capacity_per_day_period.pdf', bbox_inches='tight')
```
#### Per Weekday/Weekend
```
ax = sns.barplot(x='trip_length_bucket',y='sched_inef', hue='weekday', data=pos_sched_inef, color='#fc8d62')
ax.set(xlabel='Trip Length',ylabel='Schedule Inefficiency (%)')
fig = ax.get_figure()
fig.set_size_inches(4.5, 3)
#fig.savefig('/local/tarciso/masters/data/results/imp_capacity_per_day_period.pdf', bbox_inches='tight')
```
#### Per Day Type
```
ax = sns.barplot(x='trip_length_bucket',y='sched_inef', hue='day_type', data=pos_sched_inef, color='#fc8d62')
ax.set(xlabel='Trip Length',ylabel='Schedule Inefficiency (%)')
fig = ax.get_figure()
fig.set_size_inches(4.5, 3)
#fig.savefig('/local/tarciso/masters/data/results/imp_capacity_per_day_period.pdf', bbox_inches='tight')
```
### User choice plan inefficiency
```
best_scheduled_itineraries = select_best_itineraries(clean_itineraries2,'planned_duration_mins') \
[['date','user_trip_id','planned_duration_mins']] \
.rename(index=str,columns={'planned_duration_mins':'best_planned_duration_mins'})
best_scheduled_itineraries.head()
plan_inef = clean_itineraries2.merge(best_scheduled_itineraries, on=['date','user_trip_id'], how='inner') \
[lambda x: x['itinerary_id'] == 0] \
.assign(plan_dur_diff = lambda x: x['planned_duration_mins'] - x['best_planned_duration_mins']) \
.assign(plan_inef = lambda x: x['plan_dur_diff']/x['planned_duration_mins'])
sns.distplot(plan_inef.plan_inef)
pos_plan_inef = plan_inef[plan_inef['plan_dur_diff'] > 1]
sns.distplot(pos_plan_inef.plan_inef)
```
#### Number of Trips with/without improvement per Trip Length Bucket
```
trips_per_length = plan_inef.groupby('trip_length_bucket').size().reset_index(name='total')
trips_per_length
trips_per_length_improved = pos_plan_inef.groupby('trip_length_bucket').size().reset_index(name='total')
trips_per_length_improved
sns.set_style("whitegrid")
#Plot 1 - background - "total" (top) series
ax = sns.barplot(x = trips_per_length.trip_length_bucket, y = trips_per_length.total, color = "#66c2a5")
#Plot 2 - overlay - "bottom" series
bottom_plot = sns.barplot(x = trips_per_length_improved.trip_length_bucket, y = trips_per_length_improved.total, color = "#fc8d62")
bottom_plot.set(xlabel='Trip Length (minutes)',ylabel='Number of Trips')
topbar = plt.Rectangle((0,0),1,1,fc="#66c2a5", edgecolor = 'none')
bottombar = plt.Rectangle((0,0),1,1,fc='#fc8d62', edgecolor = 'none')
l = plt.legend([bottombar, topbar], ['Not Optimal', 'Optimal'], bbox_to_anchor=(0, 1.2), loc=2, ncol = 2, prop={'size':10})
l.draw_frame(False)
fig = ax.get_figure()
fig.set_size_inches(4.5, 3)
#fig.savefig('/local/tarciso/masters/data/results/trip_length_by_optimality.pdf', bbox_inches='tight')
```
#### Per Trip Length Bucket
```
trip_len_order=['10-20','20-30','30-40','40-50']
ax = sns.boxplot(x='plan_inef',y='trip_length_bucket', orient='h', data=pos_plan_inef, order=trip_len_order, color='#fc8d62')
ax.set(xlabel='Plan Inefficiency (%)',ylabel='Trip Length')
fig = ax.get_figure()
fig.set_size_inches(4.5, 3)
#fig.savefig('/local/tarciso/masters/data/results/imp_capacity_per_trip_length.pdf', bbox_inches='tight')
```
#### Per Period of Day
```
period_of_day_order = ['very_late_night','early_morning','morning','midday','afternoon','evening','night','late_night']
ax = sns.boxplot(x='plan_inef',y='period_of_day', data=pos_plan_inef, order=period_of_day_order, color='#fc8d62')
ax.set(xlabel='Plan Inefficiency (%)',ylabel='Period of Day')
fig = ax.get_figure()
fig.set_size_inches(4.5, 3)
#fig.savefig('/local/tarciso/masters/data/results/imp_capacity_per_day_period.pdf', bbox_inches='tight')
```
#### Per Weekday/Weekend
```
ax = sns.barplot(x='trip_length_bucket',y='plan_inef', hue='weekday', data=pos_plan_inef, color='#fc8d62')
ax.set(xlabel='Trip Length',ylabel='Plan Inefficiency (%)')
fig = ax.get_figure()
fig.set_size_inches(4.5, 3)
#fig.savefig('/local/tarciso/masters/data/results/imp_capacity_per_day_period.pdf', bbox_inches='tight')
```
#### Per Day Type
```
ax = sns.barplot(x='trip_length_bucket',y='plan_inef', hue='day_type', data=pos_plan_inef, color='#fc8d62')
ax.set(xlabel='Trip Length',ylabel='Plan Inefficiency (%)')
fig = ax.get_figure()
fig.set_size_inches(4.5, 3)
#fig.savefig('/local/tarciso/masters/data/results/imp_capacity_per_day_period.pdf', bbox_inches='tight')
```
#### System Schedule Deviation
$$
\begin{equation*}
O_e - O_p
\end{equation*}
$$
```
sched_deviation = clean_itineraries[clean_itineraries['itinerary_id'] > 0] \
.assign(sched_dev = lambda x: x['actual_duration_mins'] - x['planned_duration_mins'])
sched_deviation.head()
sns.distplot(sched_deviation.sched_dev)
```
#### User stop waiting time offset
$$
\begin{equation*}
\mathrm{start}(O_e) - \mathrm{start}(O_p)
\end{equation*}
$$
```
user_boarding_timediff = clean_itineraries[clean_itineraries['itinerary_id'] > 0] \
.assign(boarding_timediff = lambda x: (x['actual_start_time'] - x['planned_start_time'])/pd.Timedelta(minutes=1))
user_boarding_timediff.head()
sns.distplot(user_boarding_timediff.boarding_timediff)
```
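The `Timedelta` division used above is a handy pandas idiom: dividing a time difference by a one-minute `Timedelta` yields a plain float in minutes, which is what the `boarding_timediff` column stores.

```python
import pandas as pd

planned = pd.Timestamp('2017-05-01 08:00:00')
actual  = pd.Timestamp('2017-05-01 08:07:30')

# Timedelta / Timedelta gives a unitless float, here in minutes
offset_min = (actual - planned) / pd.Timedelta(minutes=1)
print(offset_min)  # 7.5
```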
|
github_jupyter
|
<a href="https://colab.research.google.com/github/valentina-s/Oceans19-data-science-tutorial/blob/master/notebooks/1_data_loading_colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Whale Sound Exploration
In this tutorial we will explore some data which contain right whale up-calls. The dataset was shared as part of a [2013 Kaggle competition](https://www.kaggle.com/c/whale-detection-challenge). Our goal is not to show the best winning algorithm for detecting a call, but to share a simple pipeline for processing oscillatory data that can potentially be applied to a wide range of time series.
Objectives:
* read and extract features from audio data
* apply dimensionality reduction techniques
* perform supervised classification
* learn how to evaluate machine learning models
* train a neural network to detect whale calls
### Data Loading and Exploration
---
```
# ignore warnings
import warnings
warnings.filterwarnings("ignore")
# importing multiple visualization libraries
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import mlab
import pylab as pl
import seaborn
# importing libraries to manipulate the data files
import os
from glob import glob
# importing scientific python packages
import numpy as np
# import a library to read the .aiff format
import aifc
```
The `train` folder contains many `.aiff` files (2 second snippets) and we have `.csv` document which contains the corresponding labels.
```
!wget http://oceanhackweek2018.s3.amazonaws.com/oceans_data/train.tar.gz
!wget http://oceanhackweek2018.s3.amazonaws.com/oceans_data/labels.csv
!tar xvzf train.tar.gz
# read the audio filenames
filenames = sorted(glob(os.path.join('train','*.aiff')))
print('There are '+str(len(filenames))+' files.' )
# read the labels
import pandas as pd
labels = pd.read_csv(os.path.join('labels.csv'), index_col = 0)
```
The format of the labels is:
```
labels.head(10)
```
Let's look at one of those files.
```
# reading the file info
whale_sample_file = 'train00006.aiff'
whale_aiff = aifc.open(os.path.join('train',whale_sample_file),'r')
print ("Frames:", whale_aiff.getnframes() )
print ("Frame rate (frames per second):", whale_aiff.getframerate())
# reading the data
whale_strSig = whale_aiff.readframes(whale_aiff.getnframes())
whale_array = np.frombuffer(whale_strSig, np.short).byteswap()
plt.plot(whale_array)
plt.xlabel('Frame number')
signal = whale_array.astype('float64')
# playing a whale upcall in the notebook
from IPython.display import Audio
# Audio(signal, rate=3000, autoplay=True)  # the widget seems to require a rate of at least 3000 to play
```
Working directly with the raw signals is hard: much of the useful information lives in the frequency domain. Let's calculate the spectrograms for each of the signals and use them as features.
```
# a function for plotting spectrograms
def PlotSpecgram(P, freqs, bins):
"""Spectrogram"""
Z = np.flipud(P) # flip rows so that top goes to bottom, bottom to top, etc.
xextent = 0, np.amax(bins)
xmin, xmax = xextent
extent = xmin, xmax, freqs[0], freqs[-1]
im = pl.imshow(Z, extent=extent,cmap = 'plasma')
pl.axis('auto')
pl.xlim([0.0, bins[-1]])
pl.ylim([0, freqs[-1]])
params = {'NFFT':256, 'Fs':2000, 'noverlap':192}
P, freqs, bins = mlab.specgram(whale_array, **params)
PlotSpecgram(P, freqs, bins)
plt.title('Spectrogram with an Upcall')
```
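For reference, the spectrogram geometry implied by these parameters can be checked by hand (assuming a 2-second clip at the 2 kHz frame rate printed above, i.e. 4000 samples):

```python
# Spectrogram geometry for the parameters used above (a sanity check, not library code)
NFFT, Fs, noverlap = 256, 2000, 192
hop = NFFT - noverlap                          # 64 samples between successive windows
n_freq_bins = NFFT // 2 + 1                    # one-sided FFT: 129 frequency rows
n_samples = 2 * Fs                             # 2-second clip at 2 kHz -> 4000 samples
n_time_bins = (n_samples - NFFT) // hop + 1    # 59 columns

print(hop, n_freq_bins, n_time_bins)           # 64 129 59
# Frequency resolution is Fs / NFFT = 7.8125 Hz per row, so keeping only the
# lowest 60 rows (as done in the feature-extraction step below) retains
# roughly the 0-470 Hz band, which covers the low-frequency upcalls
```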
### Feature Extraction
---
We will go through the files and extract the spectrograms from each of them. We will do it for the first N files.
```
N = 10000 #number of files to use
# create a dictionary which contains all the spectrograms, labeled by the filename
spec_dict = {}
# threshold to cut higher frequencies
m = 60
# loop through all the files
for filename in filenames[:N]:
# read the file
aiff = aifc.open(filename,'r')
whale_strSig = aiff.readframes(aiff.getnframes())
whale_array = np.frombuffer(whale_strSig, np.short).byteswap()
# create the spectrogram
P, freqs, bins = mlab.specgram(whale_array, **params)
spec_dict[filename] = P[:m,:]
# save the dimensions of the spectrogram
spec_dim = P[:m,:].shape
```
Most machine learning algorithms in Python expect the data to come in a format **observations** x **features**. In order to get the data in this format we need to convert the two-dimensional spectrogram into a long vector. For that we will use the `ravel` function.
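A minimal illustration of that flattening step:

```python
import numpy as np

# A tiny 3 x 4 "spectrogram" flattened into a length-12 feature vector
P = np.arange(12).reshape(3, 4)
v = P.ravel()

print(P.shape, '->', v.shape)   # (3, 4) -> (12,)
# ravel flattens row by row (C order), so the original layout is recoverable
assert np.array_equal(v.reshape(3, 4), P)
```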
```
# We will put the data in a dictionary
feature_dict = {}
for key in filenames[:N]:
# vectorize the spectrogram
feature_dict[key.split('/')[-1]] = spec_dict[key].ravel()
# convert to a pandas dataframe
X = pd.DataFrame(feature_dict).T
X.head(5)
# we do not need these objects anymore so let's release them from memory
del feature_dict
del spec_dict
# let's save these variables for reuse
np.save('X.npy',X)
np.save('y.npy',np.array(labels['label'][X.index])[:N])
```
### References:
https://www.kaggle.com/c/whale-detection-challenge
https://github.com/jaimeps/whale-sound-classification
|
github_jupyter
|
```
import os
import h5py
import numpy as np
# -- local --
from feasibgs import util as UT
from feasibgs import catalogs as Cat
from feasibgs import forwardmodel as FM
import matplotlib as mpl
import matplotlib.pyplot as pl
mpl.rcParams['text.usetex'] = True
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['axes.linewidth'] = 1.5
mpl.rcParams['axes.xmargin'] = 1
mpl.rcParams['xtick.labelsize'] = 'x-large'
mpl.rcParams['xtick.major.size'] = 5
mpl.rcParams['xtick.major.width'] = 1.5
mpl.rcParams['ytick.labelsize'] = 'x-large'
mpl.rcParams['ytick.major.size'] = 5
mpl.rcParams['ytick.major.width'] = 1.5
mpl.rcParams['legend.frameon'] = False
%matplotlib inline
cata = Cat.GamaLegacy()
gleg = cata.Read()
r_mag = UT.flux2mag(gleg['legacy-photo']['apflux_r'][:,1], method='log')
float(np.sum(np.isfinite(r_mag)))/float(len(r_mag))
no_rmag = np.invert(np.isfinite(r_mag))
fig = plt.figure()
sub = fig.add_subplot(111)
_ = sub.hist(gleg['gama-photo']['modelmag_r'], color='C0', histtype='stepfilled', range=(16, 21), bins=40)
_ = sub.hist(gleg['gama-photo']['modelmag_r'][no_rmag], color='C1', histtype='stepfilled', range=(16, 21), bins=40)
sub.set_xlabel(r'$r$ band model magnitude', fontsize=20)
sub.set_xlim([16., 21.])
fig = plt.figure()
sub = fig.add_subplot(111)
_ = sub.hist(gleg['gama-photo']['modelmag_r'], color='C0', histtype='stepfilled', range=(16, 21), bins=40, normed=True)
_ = sub.hist(gleg['gama-photo']['modelmag_r'][no_rmag], color='C1', histtype='stepfilled', range=(16, 21), bins=40, normed=True)
sub.set_xlabel(r'$r$ band model magnitude', fontsize=20)
sub.set_xlim([16., 21.])
fig = plt.figure()
sub = fig.add_subplot(111)
sub.scatter(gleg['gama-photo']['modelmag_r'], r_mag, s=2, c='k')
sub.scatter(gleg['gama-photo']['modelmag_r'], UT.flux2mag(gleg['legacy-photo']['apflux_r'][:,1]), s=1, c='C0')
sub.plot([0., 25.], [0., 25.], c='C1', lw=1, ls='--')
sub.set_xlabel('$r$-band model magnitude', fontsize=20)
sub.set_xlim([13, 21])
sub.set_ylabel('$r$-band apflux magnitude', fontsize=20)
sub.set_ylim([15, 25])
fig = plt.figure()
sub = fig.add_subplot(111)
sub.scatter(gleg['gama-photo']['modelmag_r'], UT.flux2mag(gleg['legacy-photo']['flux_r']), s=2, c='C0',
label='Legacy flux')
sub.scatter(gleg['gama-photo']['modelmag_r'], r_mag, s=0.5, c='C1',
label='Legacy apflux (fiber)')
sub.plot([0., 25.], [0., 25.], c='k', lw=1, ls='--')
sub.legend(loc='upper left', markerscale=5, handletextpad=0., prop={'size':15})
sub.set_xlabel('$r$-band GAMA model magnitude', fontsize=20)
sub.set_xlim([13, 20])
sub.set_ylabel('$r$-band Legacy photometry', fontsize=20)
sub.set_ylim([13, 23])
```
|
github_jupyter
|
# Multi-Layer Perceptron, MNIST
---
In this notebook, we will train an MLP to classify images from the [MNIST database](http://yann.lecun.com/exdb/mnist/) hand-written digit database.
The process will be broken down into the following steps:
>1. Load and visualize the data
2. Define a neural network
3. Train the model
4. Evaluate the performance of our trained model on a test dataset!
Before we begin, we have to import the necessary libraries for working with data and PyTorch.
```
# import libraries
import torch
import numpy as np
```
---
## Load and Visualize the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)
Downloading may take a few moments, and you should see your progress as the data is loading. You may also choose to change the `batch_size` if you want to load more data at a time.
This cell will create DataLoaders for each of our datasets.
```
# The MNIST datasets are hosted on yann.lecun.com, which has moved under CloudFlare protection
# Run this script to enable the datasets download
# Reference: https://github.com/pytorch/vision/issues/1938
from six.moves import urllib
opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
urllib.request.install_opener(opener)
from torchvision import datasets
import torchvision.transforms as transforms
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# choose the training and test datasets
train_data = datasets.MNIST(root='data', train=True,
download=True, transform=transform)
test_data = datasets.MNIST(root='data', train=False,
download=True, transform=transform)
# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
```
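A quick sanity check on what these loaders yield: with MNIST's 60,000 training and 10,000 test images, each epoch iterates over `ceil(N / batch_size)` batches.

```python
import math

# batches per epoch for the loaders above
batch_size = 20
print(math.ceil(60000 / batch_size))  # 3000 training batches
print(math.ceil(10000 / batch_size))  # 500 test batches
```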
### Visualize a Batch of Training Data
The first step in a classification task is to take a look at the data, make sure it is loaded in correctly, then make any initial observations about patterns in that data.
```
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy()
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
ax.imshow(np.squeeze(images[idx]), cmap='gray')
# print out the correct label for each image
# .item() gets the value contained in a Tensor
ax.set_title(str(labels[idx].item()))
```
### View an Image in More Detail
```
img = np.squeeze(images[1])
fig = plt.figure(figsize = (12,12))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if img[x][y]<thresh else 'black')
```
---
## Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)
The architecture will be responsible for seeing as input a 784-dim Tensor of pixel values for each image, and producing a Tensor of length 10 (our number of classes) that indicates the class scores for an input image. This particular example uses two hidden layers and dropout to avoid overfitting.
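As a back-of-the-envelope check, the size of this network follows directly from the layer widths: each `Linear(in, out)` layer carries `in * out` weights plus `out` biases.

```python
# Parameter count of the 784 -> 512 -> 256 -> 128 -> 10 MLP
layers = [(784, 512), (512, 256), (256, 128), (128, 10)]
total = sum(n_in * n_out + n_out for n_in, n_out in layers)
print(total)  # 567434 trainable parameters
```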
```
import torch.nn as nn
import torch.nn.functional as F
## TODO: Define the NN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# linear layers: 784 -> 512 -> 256 -> 128 -> 10
self.fc1 = nn.Linear(28 * 28, 512)
self.fc2 = nn.Linear(512, 256)
self.fc3 = nn.Linear(256, 128)
self.fc4 = nn.Linear(128, 10)
self.dropout = nn.Dropout(0.2)
def forward(self, x):
# flatten image input
x = x.view(-1, 28 * 28)
# add hidden layer, with relu activation function
x = F.relu(self.fc1(x))
x = self.dropout(x)
x = F.relu(self.fc2(x))
x = self.dropout(x)
x = F.relu(self.fc3(x))
x = self.dropout(x)
return self.fc4(x)
# initialize the NN
model = Net()
print(model)
```
### Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)
It's recommended that you use cross-entropy loss for classification. If you look at the documentation (linked above), you can see that PyTorch's cross entropy function applies a softmax function to the output layer *and* then calculates the log loss.
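To make that combination concrete, here is the log-softmax-plus-negative-log-likelihood computation written out in plain numpy (a sketch of what `nn.CrossEntropyLoss` computes for a single example, not the PyTorch implementation itself):

```python
import numpy as np

def cross_entropy(logits, target):
    """Log-softmax followed by negative log-likelihood for one example."""
    z = logits - logits.max()                  # shift for numerical stability
    log_softmax = z - np.log(np.exp(z).sum())
    return -log_softmax[target]

logits = np.array([2.0, 1.0, 0.1])
print(cross_entropy(logits, 0))  # ~0.417: small loss, class 0 already dominates
print(cross_entropy(logits, 2))  # ~2.317: large loss for the unlikely class
```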
```
## TODO: Specify loss and optimization functions
# specify loss function
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```
---
## Train the Network
The steps for training/learning from a batch of data are described in the comments below:
1. Clear the gradients of all optimized variables
2. Forward pass: compute predicted outputs by passing inputs to the model
3. Calculate the loss
4. Backward pass: compute gradient of the loss with respect to model parameters
5. Perform a single optimization step (parameter update)
6. Update average training loss
The following loop trains for 30 epochs; feel free to change this number. For now, we suggest somewhere between 20-50 epochs. As you train, take a look at how the values for the training loss decrease over time. We want it to decrease while also avoiding overfitting the training data.
```
# number of epochs to train the model
n_epochs = 30 # suggest training between 20-50 epochs
model.train() # prep model for training
for epoch in range(n_epochs):
# monitor training loss
train_loss = 0.0
###################
# train the model #
###################
for data, target in train_loader:
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update running training loss
train_loss += loss.item()*data.size(0)
# print training statistics
# calculate average loss over an epoch
train_loss = train_loss/len(train_loader.dataset)
print('Epoch: {} \tTraining Loss: {:.6f}'.format(
epoch+1,
train_loss
))
```
---
## Test the Trained Network
Finally, we test our best model on previously unseen **test data** and evaluate its performance. Testing on unseen data is a good way to check that our model generalizes well. It may also be useful to be granular in this analysis and look at how this model performs on each class, as well as its overall loss and accuracy.
#### `model.eval()`
`model.eval()` will set all the layers in your model to evaluation mode. This affects layers like dropout layers that turn "off" nodes during training with some probability, but should allow every node to be "on" for evaluation!
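As a rough illustration of why this switch matters: inverted dropout rescales the surviving activations by `1/(1-p)` during training so that their expected value matches the untouched activations used in eval mode (a numpy sketch of the idea, not PyTorch internals):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.2                      # drop probability, as in nn.Dropout(0.2)
x = np.ones(100_000)

# training mode: zero out ~20% of units, rescale the rest by 1/(1-p)
mask = rng.random(x.shape) >= p
train_out = x * mask / (1 - p)

# eval mode: dropout is a no-op
eval_out = x

print(train_out.mean())  # ~1.0, matching eval_out.mean() in expectation
```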
```
# initialize lists to monitor test loss and accuracy
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval() # prep model for *evaluation*
for data, target in test_loader:
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct = np.squeeze(pred.eq(target.data.view_as(pred)))
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# calculate and print avg test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
str(i), 100 * class_correct[i] / class_total[i],
class_correct[i], class_total[i]))
else:
print('Test Accuracy of %5s: N/A (no test examples)' % (str(i)))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
```
### Visualize Sample Test Results
This cell displays test images and their labels in this format: `predicted (ground-truth)`. The text will be green for accurately classified examples and red for incorrect predictions.
```
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds = torch.max(output, 1)
# prep images for display
images = images.numpy()
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
ax.imshow(np.squeeze(images[idx]), cmap='gray')
ax.set_title("{} ({})".format(str(preds[idx].item()), str(labels[idx].item())),
color=("green" if preds[idx]==labels[idx] else "red"))
```
|
github_jupyter
|
```
import pandas as pd
import os
import s3fs # for reading from S3FileSystem
import json
%matplotlib inline
import matplotlib.pyplot as plt
import torch.nn as nn
import torch
import torch.utils.model_zoo as model_zoo
import numpy as np
import torchvision.models as models # To get ResNet18
# From - https://github.com/cfotache/pytorch_imageclassifier/blob/master/PyTorch_Image_Inference.ipynb
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms
from PIL import Image
from torch.autograd import Variable
from torch.utils.data.sampler import SubsetRandomSampler
```
# Prepare the Model
```
SAGEMAKER_PATH = r'/home/ec2-user/SageMaker'
MODEL_PATH = os.path.join(SAGEMAKER_PATH, r'sidewalk-cv-assets19/pytorch_pretrained/models/20e_slid_win_no_feats_r18.pt')
os.path.exists(MODEL_PATH)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device
# Use PyTorch's ResNet18
# https://stackoverflow.com/questions/53612835/size-mismatch-for-fc-bias-and-fc-weight-in-pytorch
model = models.resnet18(num_classes=5)
model.to(device)
model.load_state_dict(torch.load(MODEL_PATH, map_location=device))  # map_location also works on CPU-only machines
model.eval()
```
# Prep Data
```
# From Galen
test_transforms = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
#device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# the dataset loads the files into pytorch vectors
#image_dataset = TwoFileFolder(dir_containing_crops, meta_to_tensor_version=2, transform=data_transform)
# the dataloader takes these vectors and batches them together for parallelization, increasing performance
#dataloader = torch.utils.data.DataLoader(image_dataset, batch_size=4, shuffle=True, num_workers=4)
# this is the number of additional features provided by the dataset
#len_ex_feats = image_dataset.len_ex_feats
#dataset_size = len(image_dataset)
# Load in the data
data_dir = 'images'
data = datasets.ImageFolder(data_dir, transform=test_transforms)
classes = data.classes
!ls -a images
!rm -f -r images/.ipynb_checkpoints/
# Examine the classes based on folders...
# Need to make sure that we don't get a .ipynb_checkpoints as a folder
# Discussion here - https://forums.fast.ai/t/how-to-remove-ipynb-checkpoint/8532/19
classes
num = 10
indices = list(range(len(data)))
print(indices)
np.random.shuffle(indices)
idx = indices[:num]
test_transforms = transforms.Compose([transforms.Resize(224),
transforms.ToTensor(),
])
#sampler = SubsetRandomSampler(idx)
loader = torch.utils.data.DataLoader(data, batch_size=num)
dataiter = iter(loader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch
len(images)
# Look at the first image
images[0]
len(labels)
labels
```
# Execute Inference on 2 Sample Images
```
# Note on how to make sure the model and the input tensors are both on cuda device (gpu)
# https://discuss.pytorch.org/t/runtimeerror-input-type-torch-cuda-floattensor-and-weight-type-torch-floattensor-should-be-the-same/21782/6
def predict_image(image, model):
image_tensor = test_transforms(image).float()
image_tensor = image_tensor.unsqueeze_(0)
input = image_tensor.to(device)  # Variable is deprecated; tensors can be passed to the model directly
output = model(input)
index = output.data.cpu().numpy().argmax()
return index, output
to_pil = transforms.ToPILImage()
#images, labels = get_random_images(5)
fig=plt.figure(figsize=(10,10))
for ii in range(len(images)):
image = to_pil(images[ii])
index, output = predict_image(image, model)
print(f'index: {index}')
print(f'output: {output}')
sub = fig.add_subplot(1, len(images), ii+1)
res = int(labels[ii]) == index
sub.set_title(str(classes[index]) + ":" + str(res))
plt.axis('off')
plt.imshow(image)
plt.show()
res
```
# Comments and Questions
What's the order of the labels (and how I should order the folders for the input data?)
This file implies that there are different orders
https://github.com/ProjectSidewalk/sidewalk-cv-assets19/blob/master/GSVutils/sliding_window.py
```
label_from_int = ('Curb Cut', 'Missing Cut', 'Obstruction', 'Sfc Problem')
pytorch_label_from_int = ('Missing Cut', "Null", 'Obstruction', "Curb Cut", "Sfc Problem")
```
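For what it's worth, `torchvision.datasets.ImageFolder` assigns class indices by sorting the class folder names alphabetically (exposed as `class_to_idx`), so the folder layout, not either tuple above, determines the label order. A framework-free sketch of that rule (the folder names here are assumed, chosen to match the five-class tuple):

```
# ImageFolder sorts the class folder names and enumerates them; replicating that rule:
folders = ['curb_cut', 'missing_cut', 'null', 'obstruction', 'sfc_problem']  # hypothetical layout
class_to_idx = {cls: i for i, cls in enumerate(sorted(folders))}
print(class_to_idx)
```

Renaming a folder therefore silently permutes the labels, which would explain a mismatch between the two tuples in `sliding_window.py`.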
```
transformedDfs = [i.transform(logDf) for i in model]
costs = [(i,v.stages[-1].computeCost(transformedDfs[i])) for i,v in enumerate(model)]
costs
#transformedModels = [v.stages[-1].computeCost(transformedDfs[i]) for i,v in enumerate(model)]
newParamMap = ({kmeans.k: 10,kmeans.initMode:"random"})
newModel = pipeline.fit(logDf,newParamMap)
#computedModel = pipeline.fit(logDf)
# idea for next time: compute the best metrics for all models in the pipeline, then take the best pipeline out and build on it.
trans = newModel.transform(logDf)
trans.groupBy("prediction").count().show() # shows the distribution of companies
vec = [Row(cluster=i,center=Vectors.dense(v)) for i,v in enumerate(newModel.stages[-1].clusterCenters())]
#print(type(vec))
SpDf = sqlContext.createDataFrame(data=vec)
#SpDf.show(truncate=False)
featureContributionUdf = F.udf(lambda x,y: (x-y)*(x-y),VectorUDT() )
sqrtUdf = F.udf(lambda x,y: float(Vectors.norm(vector=x-y,p=2)),DoubleType())
printUdf = F.udf(lambda x: type(x),StringType())
toDenseUDf = F.udf(lambda x: Vectors.dense(x.toArray()),VectorUDT())
#print(np.sum(vec[0]["vec"]))
joinedDf = (trans
.join(SpDf,on=(trans["prediction"]==SpDf["cluster"]),how="left")
.withColumn(colName="features",col=toDenseUDf(F.col("features")))
.drop(SpDf["cluster"])
.withColumn(colName="contribution",col=featureContributionUdf(F.col("features"),F.col("center")))
.withColumn(colName="distance",col=sqrtUdf(F.col("features"),F.col("center")))
)
int_range = widgets.IntSlider()
display(int_range)
def on_value_change(change):
print(change['new'])
int_range.observe(on_value_change, names='value')
def printTotalAndAvgFeatContribution(df,cluster=0,toPrint=False):
joinedRdd = (df
.select("prediction","contribution")
.rdd)
#print(joinedRdd.take(1))
summed = joinedRdd.reduceByKey(add)
normedtotalContribute = summed.map(lambda x: (x[0],x[1])).collectAsMap()
return normedtotalContribute
stuff = printTotalAndAvgFeatContribution(joinedDf)
centers = [(i,np.log(stuff[i])/np.sum(np.log(stuff[i]))) for i in range(0,10)]
cols =joinedDf.columns[5:31]
centers
clusters = np.array([i[1] for i in centers if i[0] in [6,9,4] ])
transposedCluster = np.log1p(clusters.transpose())
N =3
import colorsys
HSV_tuples = [(x*1.0/len(transposedCluster), 0.5, 0.5) for x in range(len(transposedCluster))]
RGB_tuples = list(map(lambda x: colorsys.hsv_to_rgb(*x), HSV_tuples))
ind = np.arange(N)
#print(ind)# the x locations for the groups
width = 0.5
plots = [plt.bar(ind, transposedCluster[1], width, color='#d62728')]
former = transposedCluster[1]
for i,v in enumerate(transposedCluster[1:]):
plots.append(plt.bar(ind, v, width, color=RGB_tuples[i],bottom=former))
former += v
plt.ylabel('log Scores')
plt.title('Component Contribution for outlier clusters')
plt.xticks(ind+0.3, ['C_'+str(i) for i in [6,9,4]])
plt.legend([p[0] for p in plots], cols,bbox_to_anchor=(1.05, 1.5),loc=2,borderaxespad=1)
plt.show()
class DistanceTransformation(Transformer,HasInputCol,HasOutputCol):
'''
'''
@keyword_only
def __init__(self, inputCol=None, outputCol=None, model=None):
super(DistanceTransformation, self).__init__()
kwargs = self.__init__._input_kwargs
self.setParams(**kwargs)
@keyword_only
def setParams(self, inputCol=None, outputCol=None, model=None):
kwargs = self.setParams._input_kwargs
return self._set(**kwargs)
def _transform(self, dataset,model):
def computeAndInsertClusterCenter(dataset,centers):
'''
Insert a clusterCenter as column.
'''
distanceUdf = F.udf(lambda x,y: float(np.sqrt(np.sum((x-y)*(x-y)))),DoubleType())
return (dataset
.join(F.broadcast(centers),on=(dataset["prediction"]==centers["cluster"]),how="inner")
.withColumn(colName="distance",col=distanceUdf(F.col("scaledFeatures"),F.col("center")))
.drop("cluster")
.drop("features")
.drop("v2")
)
print(getCenters(0))
paramGrid = ParamGridBuilder() \
.addGrid(kmeans.k, [2, 4, 10]) \
.addGrid(kmeans.initSteps, [3,5,10]) \
.build()
#create an unsupervised classification evaluator
class ElbowEvaluation(Estimator,ValidatorParams):
'''
doc
'''
@keyword_only
def __init__(self, estimator=None, estimatorParamMaps=None, evaluator=None,
seed=None):
super(ElbowEvaluation, self).__init__()
kwargs = self.__init__._input_kwargs
self._set(**kwargs)
@keyword_only
def setParams(self, estimator=None, estimatorParamMaps=None, evaluator=None):
kwargs = self.setParams._input_kwargs
return self._set(**kwargs)
computeDistanceToCenterUdf = F.udf(lambda x,y: (x-y)*(x-y),VectorUDT())
def _fit(self, dataset):
est = self.getOrDefault(self.estimator)
epm = self.getOrDefault(self.estimatorParamMaps)
numModels = len(epm)
eva = self.getOrDefault(self.evaluator)
        metrics = [0.0] * numModels
        for j in range(numModels):
            model = est.fit(dataset, epm[j])
            metric = eva.evaluate(model.transform(dataset, epm[j]))
            metrics[j] += metric
if eva.isLargerBetter():
bestIndex = np.argmax(metrics)
else:
bestIndex = np.argmin(metrics)
bestModel = est.fit(dataset, epm[bestIndex])
return self._copyValues(ElbowEvaluationModel(bestModel, metrics))
def copy(self, extra=None):
"""
Creates a copy of this instance with a randomly generated uid
and some extra params. This copies creates a deep copy of
the embedded paramMap, and copies the embedded and extra parameters over.
:param extra: Extra parameters to copy to the new instance
:return: Copy of this instance
"""
if extra is None:
extra = dict()
newTVS = Params.copy(self, extra)
if self.isSet(self.estimator):
newTVS.setEstimator(self.getEstimator().copy(extra))
# estimatorParamMaps remain the same
if self.isSet(self.evaluator):
newTVS.setEvaluator(self.getEvaluator().copy(extra))
return newTVS
class ElbowEvaluationModel(Model, ValidatorParams):
"""
.. note:: Experimental
Model from train validation split.
.. versionadded:: 2.0.0
"""
def __init__(self, bestModel, validationMetrics=[]):
super(ElbowEvaluationModel, self).__init__()
#: best model from cross validation
self.bestModel = bestModel
#: evaluated validation metrics
self.validationMetrics = validationMetrics
def _transform(self, dataset):
return self.bestModel.transform(dataset)
def copy(self, extra=None):
"""
Creates a copy of this instance with a randomly generated uid
and some extra params. This copies the underlying bestModel,
creates a deep copy of the embedded paramMap, and
copies the embedded and extra parameters over.
And, this creates a shallow copy of the validationMetrics.
:param extra: Extra parameters to copy to the new instance
:return: Copy of this instance
"""
if extra is None:
extra = dict()
bestModel = self.bestModel.copy(extra)
validationMetrics = list(self.validationMetrics)
return ElbowEvaluationModel(bestModel, validationMetrics)
```
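Stripped of the Spark plumbing, the `_fit` loop in `ElbowEvaluation` is just "score every parameter map, keep the argmin (or argmax)". A minimal framework-free sketch of that selection logic, with a made-up cost function standing in for the evaluator:

```
import numpy as np

def select_best(param_maps, evaluate, larger_is_better=False):
    # Score every candidate parameter map, then keep the best index,
    # mirroring the _fit loop in ElbowEvaluation above.
    metrics = [evaluate(p) for p in param_maps]
    best = int(np.argmax(metrics)) if larger_is_better else int(np.argmin(metrics))
    return best, metrics

# toy "cost" curve: cost drops as k grows (hypothetical values)
params = [{'k': 2}, {'k': 4}, {'k': 10}]
best, metrics = select_best(params, lambda p: 100.0 / p['k'])
print(params[best])
```

The real elbow method would look for the point of diminishing returns in `metrics` rather than the raw minimum, which is why the cost curve is worth plotting before trusting the argmin.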
# CLX Workflow
This is an introduction to the CLX Workflow and its I/O components.
## What is a CLX Workflow?
A CLX Workflow receives data from a particular source, performs operations on that data within a GPU dataframe, and outputs that data to a particular destination. This guide will teach you how to configure your workflow inputs and outputs around a simple workflow example.
## When to use a CLX Workflow
A CLX Workflow provides a simple, modular way of "plugging in" a particular workflow to read from different inputs and write to different outputs. Use a CLX Workflow when you would like to deploy a workflow as part of a data pipeline.
#### A simple example of a custom Workflow
```
from clx.workflow.workflow import Workflow
class CustomWorkflow(Workflow):
def workflow(self, dataframe):
dataframe["enriched"] = "enriched output"
return dataframe
```
A custom workflow relies on the `Workflow` base class, which handles the I/O and general data-processing functionality. To implement a new workflow, the developer need only implement the `workflow` function, which receives an input dataframe, as shown above.
A more advanced example of a Workflow can be found [here](https://github.com/rapidsai/clx/blob/branch-0.12/clx/workflow/splunk_alert_workflow.py).
It is an example of a [Splunk](https://www.splunk.com/) Alert Workflow used to find anomalies in Splunk alert data.
## Workflow I/O Components
In order to deploy a workflow with input and output data feeds, we integrate the CLX I/O components.
Let's look quickly at what a workflow configuration for the source and destination might look like. You can see below that we declare each of the properties within a dictionary. For more information on declaring configuration within a yaml file, see the section on external configuration files below.
```
source = {
"type": "fs",
"input_format": "csv",
"input_path": "/full/path/to/input/data",
"schema": ["raw"],
"delimiter": ",",
"required_cols": ["raw"],
"dtype": ["str"],
"header": 0
}
destination = {
"type": "fs",
"output_format": "csv",
"output_path": "/full/path/to/output/data"
}
```
The first step in configuring the input and output of a workflow is to determine the source and destination type, then set the associated parameters for that specific type.
As seen above the `type` property is listed first and can be one of the following.
Source Types
* `fs` - Read from a local filesystem
* `dask_fs` - Increase the speed of GPU workflow operations by reading from a file using Dask
* `kafka` - Read from [Kafka](https://kafka.apache.org/)
Destination Types
* `fs` - Writing to local filesystem
* `kafka` - Write to [Kafka](https://kafka.apache.org/)
### Source and Destination Configurations
#### Filesystem
If the `fs` type is used, the developer must distinguish the data format using the `input_format` attribute. Formats available are: csv, parquet, and orc.
The associated parameters available for the `fs` type and `input_format` are documented within the [cuDF I/O](https://rapidsai.github.io/projects/cudf/en/0.11.0/api.html#module-cudf.io.csv) API. For example for reading data from a csv file, reference [cudf.io.csv.read_csv](https://rapidsai.github.io/projects/cudf/en/0.11.0/api.html#cudf.io.csv.read_csv) available parameters.
Example
```
source = {
"type": "fs",
"input_format": "parquet",
"input_path": "/full/path/to/input/data",
"columns": ["x"]
}
```
#### Dask Filesystem
If the `dask_fs` type is used, the developer must distinguish the data format using the `input_format` attribute. Formats available are: csv, parquet, and orc.
The associated parameters available for the `dask_fs` type and `input_format` are listed within the [Dask cuDF](https://rapidsai.github.io/projects/cudf/en/0.11.0/10min.html#Getting-Data-In/Out) documentation.
Example
```
source = {
"type": "dask_fs",
"input_format": "csv",
"input_path": "/full/path/to/input/data/*.csv"
}
```
#### Kafka
If the `kafka` type is used, the following parameters must be indicated:
Source
* `kafka_brokers` - Kafka brokers
* `group_id` - Group ID for consuming kafka messages
* `consumer_kafka_topics` - Names of kafka topics to read from
* `batch_size` - Indicates number of kafka messages to read before data is processed through the workflow
* `time_window` - Maximum time window to wait for `batch_size` to be reached before workflow processing begins.
Destination
* `kafka_brokers` - Kafka brokers
* `publisher_kafka_topic` - Names of kafka topic to write data to
* `batch_size` - Indicates number of workflow-processed messages to aggregate before data is written to the kafka topic
* `output_delimiter` - Delimiter of the data columns
Example
```
source = {
"type": "kafka",
"kafka_brokers": "kafka:9092",
"group_id": "cyber",
"batch_size": 10,
"consumer_kafka_topics": ["topic1", "topic2"],
"time_window": 5
}
dest = {
"type": "kafka",
"kafka_brokers": "kafka:9092",
"batch_size": 10,
"publisher_kafka_topic": "topic3",
"output_delimiter": ","
}
```
## Tying it together
Once we have established our workflow and source and destination configurations we can now run our workflow. Let's create a workflow using the `CustomWorkflow` we created above.
Firstly, we must know the parameters for instantiating a basic workflow
* `name` - The name of the workflow
* `source` - The source of input data (optional)
* `destination` - The destination for output data (optional)
```
from clx.workflow.workflow import Workflow
class CustomWorkflow(Workflow):
def workflow(self, dataframe):
dataframe["enriched"] = "enriched output"
return dataframe
source = {
"type": "fs",
"input_format": "csv",
"input_path": "/full/path/to/input/data",
"schema": ["raw"],
"delimiter": ",",
"required_cols": ["raw"],
"dtype": ["str"],
"header": 0
}
destination = {
"type": "fs",
"output_format": "csv",
"output_path": "/full/path/to/output/data"
}
my_new_workflow = CustomWorkflow(source=source, destination=destination, name="my_new_workflow")
my_new_workflow.run_workflow()
```
## Workflow configurations in an external file
Sometimes workflow configurations need to change depending on the environment. To avoid declaring workflow configurations within source code, you may also declare them in an external yaml file. A workflow will look for and establish I/O connections by searching for configurations in the following order:
1. /etc/clx/[workflow-name]/workflow.yaml
1. ~/.config/clx/[workflow-name]/workflow.yaml
1. In-line python config
If source and destination are indicated in an external file, they are not required when instantiating a new workflow.
```
# Workflow config located at /etc/clx/my_new_workflow/workflow.yaml
my_new_workflow = CustomWorkflow(name="my_new_workflow")
```
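The guide doesn't show the yaml itself; assuming it mirrors the in-line dictionaries one-to-one, `/etc/clx/my_new_workflow/workflow.yaml` might look like the following sketch (key names inferred from the python config above, not confirmed against the CLX source):

```
# hypothetical workflow.yaml mirroring the in-line python config
source:
  type: fs
  input_format: csv
  input_path: /full/path/to/input/data
  schema: [raw]
  delimiter: ","
  required_cols: [raw]
  dtype: [str]
  header: 0
destination:
  type: fs
  output_format: csv
  output_path: /full/path/to/output/data
```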
```
from google.colab import drive
drive.mount('/content/drive')
path = '/content/drive/MyDrive/Research/AAAI/dataset1/second_layer_without_entropy/'
import numpy as np
import pandas as pd
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from matplotlib import pyplot as plt
%matplotlib inline
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```
# Generate dataset
```
np.random.seed(12)
y = np.random.randint(0,10,5000)
idx= []
for i in range(10):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((5000,2))
np.random.seed(12)
x[idx[0],:] = np.random.multivariate_normal(mean = [4,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[0]))
x[idx[1],:] = np.random.multivariate_normal(mean = [5.5,6],cov=[[0.01,0],[0,0.01]],size=sum(idx[1]))
x[idx[2],:] = np.random.multivariate_normal(mean = [4.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[2]))
x[idx[3],:] = np.random.multivariate_normal(mean = [3,3.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[3]))
x[idx[4],:] = np.random.multivariate_normal(mean = [2.5,5.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[4]))
x[idx[5],:] = np.random.multivariate_normal(mean = [3.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[5]))
x[idx[6],:] = np.random.multivariate_normal(mean = [5.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[6]))
x[idx[7],:] = np.random.multivariate_normal(mean = [7,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[7]))
x[idx[8],:] = np.random.multivariate_normal(mean = [6.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[8]))
x[idx[9],:] = np.random.multivariate_normal(mean = [5,3],cov=[[0.01,0],[0,0.01]],size=sum(idx[9]))
color = ['#1F77B4','orange', 'g','brown']
name = [1,2,3,0]
for i in range(10):
if i==3:
plt.scatter(x[idx[i],0],x[idx[i],1],c=color[3],label="D_"+str(name[i]))
elif i>=4:
plt.scatter(x[idx[i],0],x[idx[i],1],c=color[3])
else:
plt.scatter(x[idx[i],0],x[idx[i],1],c=color[i],label="D_"+str(name[i]))
plt.legend()
x[idx[0]][0], x[idx[5]][5]
desired_num = 6000
mosaic_list_of_images =[]
mosaic_label = []
fore_idx=[]
for j in range(desired_num):
np.random.seed(j)
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,9)
a = []
for i in range(9):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list_of_images.append(a)
mosaic_label.append(fg_class)
fore_idx.append(fg_idx)
len(mosaic_list_of_images), mosaic_list_of_images[0]
```
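A quick shape check of the construction above: each mosaic stacks nine 2-d points, one of which is the foreground patch. The same stacking in isolation, with random data standing in for the Gaussian clusters:

```
import numpy as np

rng = np.random.default_rng(0)
patches = [rng.normal(size=(1, 2)) for _ in range(9)]  # 9 patches, as in the loop above
mosaic = np.concatenate(patches, axis=0)
print(mosaic.shape)  # one row per patch, matching mosaic_list_of_images[0]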
# load mosaic data
```
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list, mosaic_label,fore_idx):
"""
Args:
    mosaic_list: list of mosaic arrays (9 patches each).
    mosaic_label: foreground class label for each mosaic.
    fore_idx: position of the foreground patch in each mosaic.
"""
self.mosaic = mosaic_list
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] , self.fore_idx[idx]
batch = 250
msd1 = MosaicDataset(mosaic_list_of_images[0:3000], mosaic_label[0:3000] , fore_idx[0:3000])
train_loader = DataLoader( msd1 ,batch_size= batch ,shuffle=True)
batch = 250
msd2 = MosaicDataset(mosaic_list_of_images[3000:6000], mosaic_label[3000:6000] , fore_idx[3000:6000])
test_loader = DataLoader( msd2 ,batch_size= batch ,shuffle=True)
```
# models
```
class Focus_deep(nn.Module):
'''
deep focus network averaged at zeroth layer
input : elemental data
'''
def __init__(self,inputs,output,K,d):
super(Focus_deep,self).__init__()
self.inputs = inputs
self.output = output
self.K = K
self.d = d
self.linear1 = nn.Linear(self.inputs,50, bias=False) #,self.output)
self.linear2 = nn.Linear(50,50 , bias=False)
self.linear3 = nn.Linear(50,self.output, bias=False)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.xavier_normal_(self.linear2.weight)
torch.nn.init.xavier_normal_(self.linear3.weight)
def forward(self,z):
batch = z.shape[0]
x = torch.zeros([batch,self.K],dtype=torch.float64)
y = torch.zeros([batch,50], dtype=torch.float64) # number of features of output
features = torch.zeros([batch,self.K,50],dtype=torch.float64)
x,y = x.to("cuda"),y.to("cuda")
features = features.to("cuda")
for i in range(self.K):
alp,ftrs = self.helper(z[:,i] ) # self.d*i:self.d*i+self.d
x[:,i] = alp[:,0]
features[:,i] = ftrs
x = F.softmax(x,dim=1) # alphas
for i in range(self.K):
x1 = x[:,i]
y = y+torch.mul(x1[:,None],features[:,i]) # self.d*i:self.d*i+self.d
return y , x
def helper(self,x):
x = self.linear1(x)
x = F.relu(x)
x = self.linear2(x)
x1 = torch.tanh(x)  # F.tanh is deprecated in favor of torch.tanh
x = F.relu(x)
x = self.linear3(x)
#print(x1.shape)
return x,x1
class Classification_deep(nn.Module):
'''
input : elemental data
deep classification module data averaged at zeroth layer
'''
def __init__(self,inputs,output):
super(Classification_deep,self).__init__()
self.inputs = inputs
self.output = output
self.linear1 = nn.Linear(self.inputs,50)
#self.linear2 = nn.Linear(6,12)
self.linear2 = nn.Linear(50,self.output)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.zeros_(self.linear1.bias)
torch.nn.init.xavier_normal_(self.linear2.weight)
torch.nn.init.zeros_(self.linear2.bias)
def forward(self,x):
x = F.relu(self.linear1(x))
#x = F.relu(self.linear2(x))
x = self.linear2(x)
return x
# torch.manual_seed(12)
# focus_net = Focus_deep(2,1,9,2).double()
# focus_net = focus_net.to("cuda")
# focus_net.linear2.weight.shape,focus_net.linear3.weight.shape
# focus_net.linear2.weight.data[25:,:] = focus_net.linear2.weight.data[:25,:] #torch.nn.Parameter(torch.tensor([last_layer]) )
# (focus_net.linear2.weight[:25,:]== focus_net.linear2.weight[25:,:] )
# focus_net.linear3.weight.data[:,25:] = -focus_net.linear3.weight.data[:,:25] #torch.nn.Parameter(torch.tensor([last_layer]) )
# focus_net.linear3.weight
# focus_net.helper( torch.randn((5,2,2)).double().to("cuda") )
def calculate_attn_loss(dataloader,what,where,criter):
what.eval()
where.eval()
r_loss = 0
alphas = []
lbls = []
pred = []
fidices = []
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels,fidx = data
lbls.append(labels)
fidices.append(fidx)
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
avg,alpha = where(inputs)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
alphas.append(alpha.cpu().numpy())
loss = criter(outputs, labels)
r_loss += loss.item()
alphas = np.concatenate(alphas,axis=0)
pred = np.concatenate(pred,axis=0)
lbls = np.concatenate(lbls,axis=0)
fidices = np.concatenate(fidices,axis=0)
#print(alphas.shape,pred.shape,lbls.shape,fidices.shape)
analysis = analyse_data(alphas,lbls,pred,fidices)
return r_loss/(i+1),analysis  # average over all batches (i is the last index, so divide by i+1)
def analyse_data(alphas,lbls,predicted,f_idx):
'''
analysis data is created here
'''
batch = len(predicted)
amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0
for j in range (batch):
focus = np.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
amth +=1
else:
alth +=1
if(focus == f_idx[j] and predicted[j] == lbls[j]):
ftpt += 1
elif(focus != f_idx[j] and predicted[j] == lbls[j]):
ffpt +=1
elif(focus == f_idx[j] and predicted[j] != lbls[j]):
ftpf +=1
elif(focus != f_idx[j] and predicted[j] != lbls[j]):
ffpf +=1
#print(sum(predicted==lbls),ftpt+ffpt)
return [ftpt,ffpt,ftpf,ffpf,amth,alth]
```
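The core of `Focus_deep.forward` is a softmax over the K patch scores followed by an alpha-weighted average of the patch features. The same computation in plain numpy, with K=3 patches, 4-d features, and made-up numbers:

```
import numpy as np

scores = np.array([2.0, 0.5, -1.0])            # one score per patch (the alp values)
features = np.array([[1., 0., 0., 0.],
                     [0., 1., 0., 0.],
                     [0., 0., 1., 0.]])        # K x feature_dim
alphas = np.exp(scores) / np.exp(scores).sum() # softmax, as in F.softmax(x, dim=1)
avg = (alphas[:, None] * features).sum(axis=0) # weighted average, as in the y accumulation
print(alphas, avg)
```

The patch with the largest score dominates the average, which is exactly what the FTPT analysis later checks: whether the largest alpha sits on the foreground patch.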
# training
```
number_runs = 10
FTPT_analysis = pd.DataFrame(columns = ["FTPT","FFPT", "FTPF","FFPF"])
full_analysis= []
for n in range(number_runs):
print("--"*40)
# instantiate focus and classification Model
torch.manual_seed(n)
where = Focus_deep(2,1,9,2).double()
where.linear2.weight.data[25:,:] = where.linear2.weight.data[:25,:]
where.linear3.weight.data[:,25:] = -where.linear3.weight.data[:,:25]
where = where.double().to("cuda")
ex,_ = where.helper( torch.randn((5,2,2)).double().to("cuda"))
print(ex)
torch.manual_seed(n)
what = Classification_deep(50,3).double()
where = where.to("cuda")
what = what.to("cuda")
# instantiate optimizer
optimizer_where = optim.Adam(where.parameters(),lr =0.001)#,momentum=0.9)
optimizer_what = optim.Adam(what.parameters(), lr=0.001)#,momentum=0.9)
criterion = nn.CrossEntropyLoss()
acti = []
analysis_data = []
loss_curi = []
epochs = 2000
# calculate zeroth epoch loss and FTPT values
running_loss,anlys_data = calculate_attn_loss(train_loader,what,where,criterion)
loss_curi.append(running_loss)
analysis_data.append(anlys_data)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
# training starts
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
what.train()
where.train()
for i, data in enumerate(train_loader, 0):
# get the inputs
inputs, labels,_ = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_where.zero_grad()
optimizer_what.zero_grad()
# forward + backward + optimize
avg, alpha = where(inputs)
outputs = what(avg)
loss = criterion(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_where.step()
optimizer_what.step()
running_loss,anls_data = calculate_attn_loss(train_loader,what,where,criterion)
analysis_data.append(anls_data)
if(epoch % 200==0):
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.01:
break
print('Finished Training run ' +str(n)+' at epoch: ',epoch)
analysis_data = np.array(analysis_data)
FTPT_analysis.loc[n] = analysis_data[-1,:4]/30
full_analysis.append((epoch, analysis_data))
correct = 0
total = 0
with torch.no_grad():
for data in test_loader:
images, labels,_ = data
images = images.double()
images, labels = images.to("cuda"), labels.to("cuda")
avg, alpha = where(images)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 3000 test images: %f %%' % ( 100 * correct / total))
print(np.mean(np.array(FTPT_analysis),axis=0))
FTPT_analysis
FTPT_analysis[FTPT_analysis['FTPT']+FTPT_analysis['FFPT'] > 90 ]
print(np.mean(np.array(FTPT_analysis[FTPT_analysis['FTPT']+FTPT_analysis['FFPT'] > 90 ]),axis=0))
cnt=1
for epoch, analysis_data in full_analysis:
analysis_data = np.array(analysis_data)
# print("="*20+"run ",cnt,"="*20)
plt.figure(figsize=(6,5))
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,0]/30,label="FTPT")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,1]/30,label="FFPT")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,2]/30,label="FTPF")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,3]/30,label="FFPF")
plt.title("Training trends for run "+str(cnt))
plt.grid()
# plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.legend()
plt.xlabel("epochs", fontsize=14, fontweight = 'bold')
plt.ylabel("percentage train data", fontsize=14, fontweight = 'bold')
plt.savefig(path + "run"+str(cnt)+".png",bbox_inches="tight")
plt.savefig(path + "run"+str(cnt)+".pdf",bbox_inches="tight")
cnt+=1
FTPT_analysis.to_csv(path+"synthetic_zeroth.csv",index=False)
```
```
class Solution:
def numberOfSubstrings(self, s: str) -> int:
letters = {'a', 'b', 'c'}
N = len(s)
count = 0
for gap in range(3, N + 1):
for start in range(N - gap + 1):
sub_str = s[start:start + gap]
if set(sub_str) == letters:
count += 1
return count
from collections import Counter
class Solution:
def numberOfSubstrings(self, s: str) -> int:
def count_letter(ct):
if ct['a'] and ct['b'] and ct['c']:
return True
return False
count = 0
for i in range(len(s)):
ctr = Counter(s[:len(s) - i])
for j in range(i + 1):
if j != 0:
start_idx = j - 1
end_idx = len(s) + start_idx - i
ctr[s[start_idx]] -= 1
ctr[s[end_idx]] += 1
if count_letter(ctr):
count += 1
return count
class Solution:
def numberOfSubstrings(self, s: str) -> int:
a, b, c = 0, 0, 0
ans, i, n = 0, 0, len(s)
for j, letter in enumerate(s):
if letter == 'a':
a += 1
elif letter == 'b':
b += 1
else:
c += 1
while a and b and c:
ans += n - j
print(n - j, n, j)
if s[i] == 'a':
a -= 1
elif s[i] == 'b':
b -= 1
else:
c -= 1
i += 1
return ans
solution = Solution()
solution.numberOfSubstrings('abcbb')
a = {'a', 'c', 'b'}
b = {'a', 'b', 'c'}
print(a == b)
class Solution:
def numberOfSubstrings(self, s: str) -> int:
a = b = c = 0 # counter for letter a/b/c
ans, i, n = 0, 0, len(s) # i: slow pointer
for j, letter in enumerate(s): # j: fast pointer
if letter == 'a': a += 1 # increment a/b/c accordingly
elif letter == 'b': b += 1
else: c += 1
while a > 0 and b > 0 and c > 0: # if all of a/b/c are contained, move slow pointer
ans += n-j # count possible substr, if a substr ends at j, then there are n-j substrs to the right that are containing all a/b/c
if s[i] == 'a': a -= 1 # decrement counter accordingly
elif s[i] == 'b': b -= 1
else: c -= 1
i += 1 # move slow pointer
return ans
```
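As a sanity check on the sliding-window logic, a brute-force counter (every substring of length at least 3 that contains all of `a`, `b`, `c`) agrees with the problem's published examples:

```
def count_brute(s):
    # brute force: every substring of length >= 3 containing all of a, b, c
    return sum(1 for i in range(len(s)) for j in range(i + 3, len(s) + 1)
               if {'a', 'b', 'c'} <= set(s[i:j]))

# published example answers for this problem:
assert count_brute('abcabc') == 10
assert count_brute('aaacb') == 3
assert count_brute('abc') == 1
print('ok')
```

The brute force is O(n^3); the sliding window above does the same count in O(n) by observing that once `s[i..j]` contains all three letters, every extension to the right does too, contributing `n - j` substrings at once.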
```
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
import datetime as dt
```
# Reflect Tables into SQLAlchemy ORM
```
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
```
# Exploratory Climate Analysis
```
# Design a query to retrieve the last 12 months of precipitation data and plot the results
last_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first()[0]
last_date
# Calculate the date 1 year ago from the last data point in the database
last_year = dt.datetime.strptime(last_date, "%Y-%m-%d") - dt.timedelta(days=365)
last_year
# Perform a query to retrieve the data and precipitation scores
query = session.query(Measurement.date,Measurement.prcp).filter(Measurement.date>=last_year).all()
query
# Save the query results as a Pandas DataFrame and set the index to the date column
prec_df = pd.DataFrame(query,columns=['date','precipitation'])
prec_df
prec_df['date']= pd.to_datetime(prec_df['date']) #format= '%y-%m-%d')
# set the index
prec_df.set_index("date",inplace=True)
# Sort the dataframe by date
prec_df = prec_df.sort_values(by= "date",ascending=True)
prec_df.head(20)
# Use Pandas Plotting with Matplotlib to plot the data
prec_df.plot(title = "Precipitation (12 months)", color ='blue', alpha = 0.8 , figsize =(10,6))
plt.legend(loc='upper center',prop={'size':10})
plt.savefig("Images/Precipitation.png")
plt.show()
```

```
# Use Pandas to calculate the summary statistics for the precipitation data
prec_df.describe()
```

```
# Design a query to show how many stations are available in this dataset?
station_count = session.query(Station.station).count()
station_count
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
active_station = session.query(Measurement.station, func.count(Measurement.station)).\
group_by(Measurement.station).\
order_by(func.count(Measurement.station).desc()).all()
active_station
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
temp = [Measurement.station,
func.min(Measurement.tobs),
func.max(Measurement.tobs),
func.avg(Measurement.tobs),]
all_temp = session.query(*temp).group_by(Measurement.station).\
order_by(func.count(Measurement.station).desc()).all()
all_temp
# Choose the station with the highest number of temperature observations.
highest_temp = session.query(Measurement.station, func.count(Measurement.tobs)).\
    group_by(Measurement.station).\
    order_by(func.count(Measurement.tobs).desc()).all()
highest_temp
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
last_year_data = session.query(Measurement.date, Measurement.tobs).\
    filter(Measurement.station == active_station[0][0]).\
    filter(Measurement.date >= last_year).all()
last_year_data
last_year_data_df = pd.DataFrame(last_year_data)
last_year_data_df
last_year_data_df.hist()
plt.title("Temperature over 12 months")
plt.ylabel("Frequency")
plt.legend(["tobs"], loc='upper right')
plt.tight_layout()
plt.savefig("Images/Temperature over 12 months.png")
plt.show()
```

```
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
    """TMIN, TAVG, and TMAX for a list of dates.
    Args:
        start_date (string): A date string in the format %Y-%m-%d
        end_date (string): A date string in the format %Y-%m-%d
    Returns:
        TMIN, TAVG, and TMAX
    """
    return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
        filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
Temp = calc_temps('2016-08-23','2017-08-23')
Temp
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
plt.figure(figsize=(3,6))
sns.barplot(data=Temp,color = "lightsalmon")
plt.ylabel('Temp(F)')
plt.title("Trip Avg Temp")
plt.tight_layout()
plt.savefig("Images/Trip Avg Temp.png")
plt.show()
def precipitation(start_date, end_date):
    select_column = [Measurement.station,
                     Station.name,
                     Station.latitude,
                     Station.longitude,
                     Station.elevation,
                     Measurement.prcp]
    return session.query(*select_column).\
        filter(Measurement.station == Station.station).filter(Measurement.date >= start_date).\
        filter(Measurement.date <= end_date).group_by(Measurement.station).order_by(Measurement.prcp.desc()).all()
print(precipitation('2016-02-26','2016-03-02'))
```
## Optional Challenge Assignment
```
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
    """Daily Normals.
    Args:
        date (str): A date string in the format '%m-%d'
    Returns:
        A list of tuples containing the daily normals, tmin, tavg, and tmax
    """
    sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
    return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
# Use the start and end date to create a range of dates
# Strip off the year and save a list of %m-%d strings
# Loop through the list of %m-%d strings and calculate the normals for each date
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
# Plot the daily normals as an area plot with `stacked=False`
```
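The commented steps above can be sketched in pandas. This is a minimal, self-contained version: the trip window and the `mock_daily_normals` stand-in are assumptions for illustration, since the real notebook would call `daily_normals()` against the live session.

```python
import pandas as pd

# Hypothetical trip window (assumption, purely for illustration)
trip_start, trip_end = "2018-01-01", "2018-01-07"

# Use the start and end date to create a range of dates
trip_dates = pd.date_range(trip_start, trip_end)

# Strip off the year and save a list of %m-%d strings
month_days = [d.strftime("%m-%d") for d in trip_dates]

# The real notebook would call daily_normals(md) here; this mock stands in
# so the sketch runs on its own
def mock_daily_normals(md):
    return [(60.0, 70.0, 80.0)]

normals = [mock_daily_normals(md)[0] for md in month_days]

# Load the results into a DataFrame with the trip dates as the index
df = pd.DataFrame(normals, columns=["tmin", "tavg", "tmax"], index=trip_dates)
df.index.name = "date"
print(df.head())
# df.plot.area(stacked=False) would produce the requested area plot
```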
|
github_jupyter
|
<font size="+5">#02 | Decision Tree. A Supervised Classification Model</font>
- Subscribe to my [Blog ↗](https://blog.pythonassembly.com/)
- Let's keep in touch on [LinkedIn ↗](www.linkedin.com/in/jsulopz) 😄
# Discipline to Search Solutions in Google
> Apply the following steps when **looking for solutions in Google**:
>
> 1. **Necessity**: How to load an Excel in Python?
> 2. **Search in Google**: by keywords
> - `load excel python`
> - ~~how to load excel in python~~
> 3. **Solution**: What's the `function()` that loads an Excel in Python?
> - A function is to programming what the atom is to physics.
> - Every time you want to do something in programming
> - **You will need a `function()`** to make it
> - Therefore, you must **detect parenthesis `()`**
> - Out of all the words that you see in a website
> - Because they indicate the presence of a `function()`.
# Load the Data
> Load the Titanic dataset with the below commands
> - This dataset contains the **people** (rows) aboard the Titanic
> - And their **sociological characteristics** (columns)
> - The aim of this dataset is to predict the probability to `survive`
> - Based on the social demographic characteristics.
```
import seaborn as sns
df = sns.load_dataset(name='titanic').iloc[:, :4]
df.head()
```
# `DecisionTreeClassifier()` Model in Python
## Build the Model
> 1. **Necessity**: Build Model
> 2. **Google**: How do you search for the solution?
> 3. **Solution**: Find the `function()` that makes it happen
## Code Thinking
> Which function computes the Model?
> - `fit()`
>
> How can you **import the function in Python**?
```
fit()
model.fit()
```
`model = ?`
```
from sklearn.tree import DecisionTreeClassifier
model = DecisionTreeClassifier()
model.__dict__
model.fit()
```
### Separate Variables for the Model
> Regarding their role:
> 1. **Target Variable `y`**
>
> - [ ] What would you like **to predict**?
>
> 2. **Explanatory Variable `X`**
>
> - [ ] Which variable will you use **to explain** the target?
```
explanatory = df.drop(columns='survived')
target = df.survived
```
### Finally `fit()` the Model
```
model.__dict__
model.fit(X=explanatory, y=target)
import pandas as pd
pd.get_dummies(data=df)
df = pd.get_dummies(data=df, drop_first=True)
df
explanatory = df.drop(columns='survived')
target = df.survived
model.fit(X=explanatory, y=target)
df
df.isna().sum()
df.fillna('hola')
df.dropna(inplace=True) # df = df.dropna()
df
explanatory = df.drop(columns='survived')
target = df.survived
model.fit(X=explanatory, y=target)
```
## Calculate a Prediction with the Model
> - `model.predict_proba()`
```
model.predict_proba(X=explanatory)
```
## Model Visualization
> - `tree.plot_tree()`
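As a sketch of the visualization step on a toy dataset (the tiny `X`/`y` and the feature names below are assumptions, standing in for the Titanic explanatory/target split): `export_text` prints the fitted tree structure as text, and the commented lines show how `plot_tree` would render it graphically.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data standing in for the Titanic split (assumption:
# two binary features, target is their AND)
X = [[0, 0], [1, 0], [0, 1], [1, 1]]
y = [0, 0, 0, 1]

toy_model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Text rendering of the fitted tree; feature names are made up for the demo
tree_text = export_text(toy_model, feature_names=["sex_male", "pclass"])
print(tree_text)

# Graphical version, as the heading suggests:
# from sklearn.tree import plot_tree
# import matplotlib.pyplot as plt
# plot_tree(toy_model, feature_names=["sex_male", "pclass"], filled=True)
# plt.show()
```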
## Model Interpretation
> Why is `sex` the most important column? What does it have to do with **EDA** (Exploratory Data Analysis)?
```
%%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/7VeUPuFGJHk" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
```
# Prediction vs Reality
> How good is our model?
```
dfsel = df[['survived']].copy()
dfsel['pred'] = model.predict(X=explanatory)
dfsel.sample(10)
comp = dfsel.survived == dfsel.pred
comp.sum()
comp.sum()/714
comp.mean()
```
## Precision
> - `model.score()`
```
model.score(X=explanatory, y=target)
```
## Confusion Matrix
> 1. **Sensitivity** (correct prediction on positive value, $y=1$)
> 2. **Specificity** (correct prediction on negative value $y=0$).
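A minimal sketch of computing both metrics from a confusion matrix, using made-up labels (in the notebook you would pass `dfsel.survived` and `dfsel.pred` instead):

```python
from sklearn.metrics import confusion_matrix

# Toy true labels and predictions (assumption, for illustration only)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Rows are truth, columns are prediction: [[tn, fp], [fn, tp]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)  # correct predictions on positives (y = 1)
specificity = tn / (tn + fp)  # correct predictions on negatives (y = 0)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```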
## ROC Curve
> A way to summarise all the metrics (score, sensitivity & specificity)
|
github_jupyter
|
# Parse Java Methods
----
(C) Maxim Gansert, 2020, Mindscan Engineering
```
import sys
sys.path.insert(0,'../src')
import os
import datetime
from com.github.c2nes.javalang import tokenizer, parser, ast
from de.mindscan.fluentgenesis.dataprocessing.method_extractor import tokenize_file, extract_allmethods_from_compilation_unit
from de.mindscan.fluentgenesis.bpe.bpe_model import BPEModel
from de.mindscan.fluentgenesis.bpe.bpe_encoder_decoder import SimpleBPEEncoder
from de.mindscan.fluentgenesis.dataprocessing.method_dataset import MethodDataset
def split_methodbody_into_multiple_lines(method_body):
    result = []
    current_line_number = -1
    current_line_tokens = []
    for token in method_body:
        token_line = token.position[0]
        if token_line != current_line_number:
            current_line_number = token_line
            if len(current_line_tokens) != 0:
                result.append(current_line_tokens)
                current_line_tokens = []
        current_line_tokens.append(token.value)
    if len(current_line_tokens) != 0:
        result.append(current_line_tokens)
    return result
def process_source_file(dataset_directory, source_file_path, encoder, dataset):
    # derive the full source file path
    full_source_file_path = os.path.join(dataset_directory, source_file_path)
    # Work on the source file
    java_tokenlist = tokenize_file(full_source_file_path)
    parsed_compilation_unit = parser.parse(java_tokenlist)
    # collect file names, line numbers, method names, class names etc.
    all_methods_per_source = extract_allmethods_from_compilation_unit(parsed_compilation_unit, java_tokenlist)
    for single_method in all_methods_per_source:
        try:
            method_name = single_method['method_name']
            method_class_name = single_method['class_name']
            method_body = single_method['method_body']
            multi_line_body = split_methodbody_into_multiple_lines(method_body)
            one_line = [item for sublist in multi_line_body for item in sublist]
            print(one_line)
            # encode body code and method names using the BPE vocabulary
            bpe_encoded_methodname = encoder.encode([method_name])
            bpe_encoded_methodbody_ml = encoder.encode_multi_line(multi_line_body)
            # compute token lengths, so selection of smaller datasets is possible later
            bpe_encoded_method_name_length = len(bpe_encoded_methodname)
            bpe_encoded_method_body_length = sum([len(line) for line in bpe_encoded_methodbody_ml])
            # save this into the dataset
            method_data = {
                "source_file_path": source_file_path,
                "method_class_name": method_class_name,
                "method_name": method_name,
                "encoded_method_name_length": bpe_encoded_method_name_length,
                "encoded_method_name": bpe_encoded_methodname,
                "encoded_method_body_length": bpe_encoded_method_body_length,
                "encoded_method_body": bpe_encoded_methodbody_ml,
                "method_body": method_body
            }
            dataset.add_method_data(method_data)
        except Exception:
            # ignore problematic methods
            pass
model = BPEModel("16K-full", "../src/de/mindscan/fluentgenesis/bpe/")
model.load_hparams()
dataset_directory = 'D:\\Downloads\\Big-Code-excerpt\\'
model_vocabulary = model.load_tokens()
model_bpe_data = model.load_bpe_pairs()
encoder = SimpleBPEEncoder(model_vocabulary, model_bpe_data)
method_dataset = MethodDataset(dataset_name='parseMethodPythonNotebook1.jsonl')
method_dataset.prepareNewDataset(dataset_directory)
process_source_file(dataset_directory,'wordhash/WordMap.java' ,encoder, method_dataset )
method_dataset.finish()
```
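The line-grouping logic of `split_methodbody_into_multiple_lines` can be exercised without the Java toolchain by mocking tokens; the assumption here is that javalang tokens expose a `position` of the form `(line, column)` and a `value`, as the code above relies on.

```python
from collections import namedtuple

# Minimal stand-in for a javalang token: position is (line, column)
Token = namedtuple("Token", ["position", "value"])

def split_into_lines(method_body):
    # Same grouping logic as split_methodbody_into_multiple_lines above
    result, current_line, current_tokens = [], -1, []
    for token in method_body:
        if token.position[0] != current_line:
            current_line = token.position[0]
            if current_tokens:
                result.append(current_tokens)
                current_tokens = []
        current_tokens.append(token.value)
    if current_tokens:
        result.append(current_tokens)
    return result

tokens = [Token((1, 0), "int"), Token((1, 4), "x"), Token((1, 6), "="),
          Token((1, 8), "0"), Token((2, 0), "return"), Token((2, 7), "x")]
print(split_into_lines(tokens))  # [['int', 'x', '=', '0'], ['return', 'x']]
```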
|
github_jupyter
|
# Simple Perceptron
```
import tensorflow as tf
import pandas as pd
import numpy as np
%load_ext tensorboard
# Import the dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# For now, we'll focus on digit images of fives
is_five_train = y_train == 5
is_five_test = y_test == 5
labels = ["Not five", "five"]
# Specifying the input shape for the perceptron (here, img_width x img_height)
img_width = x_train.shape[1]
img_height = x_train.shape[2]
```
# Creating a model
**Step 1: Design your model**
Model 1:
```
(28, 28) ---> Flatten (784) ---> Dense (1)
loss : mse (default attr.)
optimizer : adam (default attr.)
```
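Before the Keras code, here is a numpy sketch of what Model 1 actually computes for a single image: `Flatten` reshapes (28, 28) into a 784-vector, and `Dense(1)` is one neuron, a dot product plus a bias (784 weights + 1 bias = 785 trainable parameters). The random weights below are placeholders, not trained values.

```python
import numpy as np

rng = np.random.default_rng(0)

# One fake 28x28 "image" standing in for an MNIST digit (assumption)
image = rng.random((28, 28))

# Flatten: (28, 28) -> (784,)
flat = image.reshape(-1)

# Dense(1): a single neuron, output = w . x + b
w = rng.standard_normal(784)
b = 0.0
output = flat @ w + b

print(flat.shape, float(output))
```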
```
# Imports required for building model
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense
# Create a simple perceptron model
model = Sequential(name="model_1")
model.add(Flatten(input_shape=(img_width, img_height)))
model.add(Dense(1))
# Shows how your model was built
model.summary()
```
**Step 2: Specify your model parameters**
```
model_params = {
"epochs": 10,
"batch_size": 32,
"loss": "mse",
"optimizer": "adam",
}
```
**Step 3: Compile the model with the parameters**
```
model.compile(loss=model_params["loss"],
optimizer=model_params["optimizer"],
metrics=['accuracy'])
```
**Step 4: Fit / Train the model**
***(Optional)*** Specify tensorboard callbacks.
Run the `%tensorboard` command first (it appears after the cell below), then run the cell below. Click the refresh icon at the top right of TensorBoard after every epoch to track the accuracy and loss live.
```
from tensorflow.keras.callbacks import TensorBoard
log_dir = "logs/fit/" + model.name
tb_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
model.fit(x_train, is_five_train,
epochs=model_params["epochs"],
batch_size=model_params["batch_size"],
validation_data=(x_test, is_five_test),
callbacks=[tb_callback])
%tensorboard --logdir logs/fit
```
**Step 5: Evaluate the model**
Predict the test dataset and analyze them with corresponding test labels to check whether they make sense.
```
# Predict the first 10,000 test samples
n_samples = 10000
predictions = model.predict(x_test[:n_samples, :, :])
true_labels = y_test[:n_samples]
is_five = is_five_test[:n_samples]
pd.DataFrame(data={
"predicted_label": predictions.flatten(),
"true_label": true_labels,
"is_five": is_five
})
```
**Step 6: Debug, rebuild, retrain and re-evaluate your model**
```
# Let's try more epochs, say 100
model_params["epochs"] = 100
# create a new model with same architecture
new_model = tf.keras.models.clone_model(model)
new_model._name = "model_2"
new_model.summary()
new_model.compile(loss=model_params["loss"],
optimizer=model_params["optimizer"],
metrics=['accuracy'])
log_dir = "logs/fit/" + new_model.name
tb_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
new_model.fit(x_train, is_five_train,
epochs=model_params["epochs"],
batch_size=model_params["batch_size"],
validation_data=(x_test, is_five_test),
callbacks=[tb_callback])
%tensorboard --logdir logs/fit
n_samples = 10000
predictions = new_model.predict(x_test[:n_samples, :, :])
true_labels = y_test[:n_samples]
is_five = is_five_test[:n_samples]
pd.DataFrame(data={
"predicted_label": predictions.flatten(),
"true_label": true_labels,
"is_five": is_five
})
```
### We haven't tried an activation function. That's why our model is broken. Let's add a sigmoid activation to our perceptron
```
model_params["epochs"] = 10
model_params
new_model = Sequential(name="model_3")
new_model.add(Flatten(input_shape=(img_width, img_height)))
new_model.add(Dense(1, activation='sigmoid'))
new_model.summary()
new_model.compile(loss=model_params["loss"],
optimizer=model_params["optimizer"],
metrics=['accuracy'])
log_dir = "logs/fit/" + new_model.name
tb_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
new_model.fit(x_train, is_five_train,
epochs=model_params["epochs"],
batch_size=model_params["batch_size"],
validation_data=(x_test, is_five_test),
callbacks=[tb_callback])
%tensorboard --logdir logs/fit
n_samples = 10000
predictions = new_model.predict(x_test[:n_samples, :, :])
true_labels = y_test[:n_samples]
is_five = is_five_test[:n_samples]
pd.DataFrame(data={
"predicted_label": predictions.flatten(),
"true_label": true_labels,
"is_five": is_five
})
```
|
github_jupyter
|
# Transfer Learning
In this notebook, you'll learn how to use pre-trained networks to solve challenging problems in computer vision. Specifically, you'll use networks trained on [ImageNet](http://www.image-net.org/) [available from torchvision](http://pytorch.org/docs/0.3.0/torchvision/models.html).
ImageNet is a massive dataset with over 1 million labeled images in 1000 categories. It's used to train deep neural networks using an architecture called convolutional layers. I'm not going to get into the details of convolutional networks here, but if you want to learn more about them, please [watch this](https://www.youtube.com/watch?v=2-Ol7ZB0MmU).
Once trained, these models work astonishingly well as feature detectors for images they weren't trained on. Using a pre-trained network on images not in the training set is called transfer learning. Here we'll use transfer learning to train a network that can classify our cat and dog photos with near perfect accuracy.
With `torchvision.models` you can download these pre-trained networks and use them in your applications. We'll include `models` in our imports now.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms, models
```
Most of the pretrained models require the input to be 224x224 images. Also, we'll need to match the normalization used when the models were trained. Each color channel was normalized separately, the means are `[0.485, 0.456, 0.406]` and the standard deviations are `[0.229, 0.224, 0.225]`.
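A quick numpy sketch of what `transforms.Normalize` does per channel, namely `(x - mean) / std`, using a constant fake image as a placeholder:

```python
import numpy as np

means = np.array([0.485, 0.456, 0.406]).reshape(3, 1, 1)
stds = np.array([0.229, 0.224, 0.225]).reshape(3, 1, 1)

# A fake 3x224x224 image tensor with all values 0.5 (placeholder for a photo)
img = np.full((3, 224, 224), 0.5)

# Normalize each channel separately via broadcasting
normalized = (img - means) / stds

# Channel 0 becomes (0.5 - 0.485) / 0.229
print(normalized[0, 0, 0])
```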
```
data_dir = 'Cat_Dog_data/Cat_Dog_data'
# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
test_transforms = transforms.Compose([transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)
trainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)
testloader = torch.utils.data.DataLoader(test_data, batch_size=64)
```
We can load in a model such as [DenseNet](http://pytorch.org/docs/0.3.0/torchvision/models.html#id5). Let's print out the model architecture so we can see what's going on.
```
model = models.densenet121(pretrained=True)
model
```
This model is built out of two main parts, the features and the classifier. The features part is a stack of convolutional layers and overall works as a feature detector that can be fed into a classifier. The classifier part is a single fully-connected layer `(classifier): Linear(in_features=1024, out_features=1000)`. This layer was trained on the ImageNet dataset, so it won't work for our specific problem. That means we need to replace the classifier, but the features will work perfectly on their own. In general, I think about pre-trained networks as amazingly good feature detectors that can be used as the input for simple feed-forward classifiers.
```
# Freeze parameters so we don't backprop through them
for param in model.parameters():
    param.requires_grad = False
from collections import OrderedDict
classifier = nn.Sequential(OrderedDict([
('fc1', nn.Linear(1024, 500)),
('relu', nn.ReLU()),
('fc2', nn.Linear(500, 2)),
('output', nn.LogSoftmax(dim=1))
]))
model.classifier = classifier
```
With our model built, we need to train the classifier. However, now we're using a **really deep** neural network. If you try to train this on a CPU like normal, it will take a long, long time. Instead, we're going to use the GPU to do the calculations. The linear algebra computations are done in parallel on the GPU leading to 100x increased training speeds. It's also possible to train on multiple GPUs, further decreasing training time.
PyTorch, along with pretty much every other deep learning framework, uses [CUDA](https://developer.nvidia.com/cuda-zone) to efficiently compute the forward and backwards passes on the GPU. In PyTorch, you move your model parameters and other tensors to the GPU memory using `model.to('cuda')`. You can move them back from the GPU with `model.to('cpu')` which you'll commonly do when you need to operate on the network output outside of PyTorch. As a demonstration of the increased speed, I'll compare how long it takes to perform a forward and backward pass with and without a GPU.
```
import time
for device in ['cpu', 'cuda']:
    criterion = nn.NLLLoss()
    # Only train the classifier parameters, feature parameters are frozen
    optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)
    model.to(device)
    for ii, (inputs, labels) in enumerate(trainloader):
        # Move input and label tensors to the GPU
        inputs, labels = inputs.to(device), labels.to(device)
        start = time.time()
        outputs = model.forward(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        if ii == 3:
            break
    print(f"Device = {device}; Time per batch: {(time.time() - start)/3:.3f} seconds")
```
You can write device agnostic code which will automatically use CUDA if it's enabled like so:
```python
# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
...
# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
input = data.to(device)
model = MyModule(...).to(device)
```
From here, I'll let you finish training the model. The process is the same as before except now your model is much more powerful. You should get better than 95% accuracy easily.
>**Exercise:** Train a pretrained model to classify the cat and dog images. Continue with the DenseNet model, or try ResNet, which is also a good model to start with. Make sure you are only training the classifier and that the parameters of the features part are frozen.
#### Resnet101
Load the ResNet101 model.
```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(f"Device {device}")
model = models.resnet101(pretrained=True)
model
```
Freeze parameters.
```
for param in model.parameters():
    param.requires_grad = False
```
Replace model head.
```
model.fc = nn.Sequential(nn.Linear(2048, 256),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(256, 2),
nn.LogSoftmax(dim=1))
model.to(device)
```
Define the loss function and the optimizer.
```
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.fc.parameters())
```
Training loop.
```
n_epochs = 1
print_every = 5
step, train_loss, train_acc, test_loss, test_acc = 0, 0, 0, 0, 0
def get_batch_acc(preds, labels):
    _, pred_class = preds.topk(1, dim=1)
    correct_class = (pred_class == labels.view(*pred_class.shape))
    batch_acc = torch.mean(correct_class.type(torch.FloatTensor))
    return batch_acc

for epoch in range(n_epochs):
    for images, labels in trainloader:
        images = images.to(device)
        labels = labels.to(device)
        optimizer.zero_grad()
        # forward pass
        preds = model(images)
        # compute loss & accuracy
        batch_loss = criterion(preds, labels)
        batch_acc = get_batch_acc(preds, labels)
        # backward pass
        batch_loss.backward()
        # gradient descent step
        optimizer.step()
        # update running loss and accuracy
        train_loss += batch_loss.item() / print_every
        train_acc += batch_acc.item() / print_every
        if step % print_every == 0:
            # Evaluate on test data
            with torch.no_grad():
                model.eval()
                for test_images, test_labels in testloader:
                    test_images, test_labels = test_images.to(device), test_labels.to(device)
                    test_preds = model(test_images)
                    batch_loss = criterion(test_preds, test_labels)
                    batch_acc = get_batch_acc(test_preds, test_labels)
                    test_loss += batch_loss.item() / len(testloader)
                    test_acc += batch_acc.item() / len(testloader)
            print(f"Epoch: {epoch} | Step: {step}/{int(len(trainloader) / print_every)} | Train loss: {train_loss} | Train acc: {train_acc} | Test loss: {test_loss} | Test acc: {test_acc}")
            model.train()
            train_loss, train_acc, test_loss, test_acc = 0, 0, 0, 0
        step += 1
```
#### Densenet121
```
# Use GPU if it's available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.densenet121(pretrained=True)
# Freeze parameters so we don't backprop through them
for param in model.parameters():
    param.requires_grad = False
model.classifier = nn.Sequential(nn.Linear(1024, 256),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(256, 2),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
# Only train the classifier parameters, feature parameters are frozen
optimizer = optim.Adam(model.classifier.parameters(), lr=0.003)
model.to(device);
device
## TODO: Use a pretrained model to classify the cat and dog images
epochs = 1
steps = 0
running_loss = 0
print_every = 5
for epoch in range(epochs):
    for inputs, labels in trainloader:
        steps += 1
        # Move input and label tensors to the default device
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        logps = model.forward(inputs)
        loss = criterion(logps, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if steps % print_every == 0:
            test_loss = 0
            accuracy = 0
            model.eval()
            with torch.no_grad():
                for inputs, labels in testloader:
                    inputs, labels = inputs.to(device), labels.to(device)
                    logps = model.forward(inputs)
                    batch_loss = criterion(logps, labels)
                    test_loss += batch_loss.item()
                    # Calculate accuracy
                    ps = torch.exp(logps)
                    top_p, top_class = ps.topk(1, dim=1)
                    equals = top_class == labels.view(*top_class.shape)
                    accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
            print(f"Epoch {epoch+1}/{epochs}.. "
                  f"Train loss: {running_loss/print_every:.3f}.. "
                  f"Test loss: {test_loss/len(testloader):.3f}.. "
                  f"Test accuracy: {accuracy/len(testloader):.3f}")
            running_loss = 0
            model.train()
```
|
github_jupyter
|
```
%matplotlib inline
from matplotlib import pyplot as plt
plt.rcParams['figure.figsize'] = (10, 8)
import seaborn as sns
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
import collections
from sklearn.model_selection import GridSearchCV
from sklearn import preprocessing
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
import pickle
cmp_df = pd.DataFrame(columns=['ID', 'TITLE', 'CMP_TYPE_CUSTOMER', 'CMP_TYPE_PARTNER'])
cmp_df
data_train = pd.read_csv('data/dataframe.csv', sep=';', index_col='ID')
data_test = pd.read_csv('data/dataframe.csv', sep=';', index_col='ID')
data_train
title_df = pd.DataFrame(data_train['TITLE'])
title_df.head()
data_train = data_train.drop(columns=['TITLE'], axis=1)
data_test = data_test.drop(columns=['TITLE'], axis=1)
data_test.describe(include='all').T
fig = plt.figure(figsize=(25, 15))
cols = 5
rows = int(np.ceil(float(data_train.shape[1]) / cols))
for i, column in enumerate(data_train.columns):
    ax = fig.add_subplot(rows, cols, i + 1)
    ax.set_title(column)
    if data_train.dtypes[column] == object:
        data_train[column].value_counts().plot(kind="bar", ax=ax)
    else:
        data_train[column].hist(ax=ax)
    plt.xticks(rotation="vertical")
plt.subplots_adjust(hspace=0.7, wspace=0.2)
X_train=data_train.drop(['target'], axis=1)
y_train = data_train['target']
X_test=data_test.drop(['target'], axis=1)
y_test = data_test['target']
tree = DecisionTreeClassifier(criterion = "entropy", max_depth = 3, random_state = 17)
tree.fit(X = X_train, y = y_train)
tree_predictions = tree.predict(X = X_test)  # Your code here
tree_predictions
tree_accuracy = tree.score(X = X_test, y = y_test)  # avoid shadowing the imported accuracy_score
print(tree_accuracy)
rf = RandomForestClassifier(criterion="entropy", random_state = 17, n_estimators=200,
max_depth=4, max_features=0.15 ,n_jobs=-1)
rf.fit(X = X_train, y = y_train)  # Your code here
forest_predictions = rf.predict(X = X_test)
print(forest_predictions)
forest_accuracy = rf.score(X = X_test, y = y_test)
print(forest_accuracy)
pickle.dump(rf, open('random_forest.sav', 'wb'))
model = pickle.load(open('random_forest.sav', 'rb'))
predict = model.predict(X=X_test)
predict
title_df = title_df.reset_index(drop=True)
predict = pd.Series(predict).rename('PREDICT')
result_df = pd.concat([title_df, predict], axis=1)
result_df
```
--------------------------------------------------------------------------------------------------------------------------------
```
forest_params = {'max_depth': range(4, 21),
'max_features': range(7, 45),
'random_state': range(1, 100)}
locally_best_forest = GridSearchCV(rf, forest_params, n_jobs=-1)
locally_best_forest.fit(X = X_train, y = y_train)
print("Best params:", locally_best_forest.best_params_)
print("Best cross-validation score:", locally_best_forest.best_score_)
tuned_forest_predictions = locally_best_forest.predict(X = X_test)  # Your code here
tuned_accuracy = locally_best_forest.score(X = X_test, y = y_test)
print(tuned_accuracy)
tuned_forest_predictions
```
|
github_jupyter
|
# TEXT
This notebook serves as supporting material for topics covered in **Chapter 22 - Natural Language Processing** from the book *Artificial Intelligence: A Modern Approach*. This notebook uses implementations from [text.py](https://github.com/aimacode/aima-python/blob/master/text.py).
```
from text import *
from utils import open_data
from notebook import psource
```
## CONTENTS
* Text Models
* Viterbi Text Segmentation
* Information Retrieval
* Information Extraction
* Decoders
## TEXT MODELS
Before we start analyzing text processing algorithms, we will need to build some language models. Those models serve as a look-up table for character or word probabilities (depending on the type of model). These models can give us the probabilities of words or character sequences appearing in text. Take as example "the". Text models can give us the probability of "the", *P("the")*, either as a word or as a sequence of characters ("t" followed by "h" followed by "e"). The first representation is called "word model" and deals with words as distinct objects, while the second is a "character model" and deals with sequences of characters as objects. Note that we can specify the number of words or the length of the char sequences to better suit our needs. So, given that number of words equals 2, we have probabilities in the form *P(word1, word2)*. For example, *P("of", "the")*. For char models, we do the same but for chars.
It is also useful to store the conditional probabilities of words given preceding words. That means, given we found the words "of" and "the", what is the chance the next word will be "world"? More formally, *P("world"|"of", "the")*. Generalizing, *P(Wi|Wi-1, Wi-2, ... , Wi-n)*.
We call the word model *N-Gram Word Model* (from the Greek "gram", the root of "write", or the word for "letter") and the char model *N-Gram Character Model*. In the special case where *N* is 1, we call the models *Unigram Word Model* and *Unigram Character Model* respectively.
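As a sketch of the counts these models store, here is a tiny pure-Python bigram counter; this is an illustration only, not the aima-python implementation used in the cells below, and the sample sentence is made up.

```python
from collections import Counter, defaultdict

text = "of the people by the people for the people"
words_seq = text.split()

# Unigram and bigram counts, analogous to what the N-gram models store
unigrams = Counter(words_seq)
bigrams = Counter(zip(words_seq, words_seq[1:]))

# Conditional counts: which word follows a given word, and how often?
following = defaultdict(Counter)
for w1, w2 in zip(words_seq, words_seq[1:]):
    following[w1][w2] += 1

print(unigrams.most_common(1))           # [('the', 3)]
print(bigrams[("the", "people")])        # 3
print(following["the"].most_common(1))   # [('people', 3)]
```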
In the `text` module we implement the two models (both their unigram and n-gram variants) by inheriting from the `CountingProbDist` from `learning.py`. Note that `CountingProbDist` does not return the actual probability of each object, but the number of times it appears in our test data.
For word models we have `UnigramWordModel` and `NgramWordModel`. We supply them with a text file and they show the frequency of the different words. We have `UnigramCharModel` and `NgramCharModel` for the character models.
Execute the cells below to take a look at the code.
```
psource(UnigramWordModel, NgramWordModel, UnigramCharModel, NgramCharModel)
```
Next we build our models. The text file we will use to build them is *Flatland*, by Edwin A. Abbott. We will load it from [here](https://github.com/aimacode/aima-data/blob/a21fc108f52ad551344e947b0eb97df82f8d2b2b/EN-text/flatland.txt). In that directory you can find other text files we might get to use here.
### Getting Probabilities
Here we will take a look at how to read text and find the probabilities for each model, and how to retrieve them.
First the word models:
```
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P1 = UnigramWordModel(wordseq)
P2 = NgramWordModel(2, wordseq)
print(P1.top(5))
print(P2.top(5))
print(P1['an'])
print(P2[('i', 'was')])
```
We see that the most used word in *Flatland* is 'the', with 2081 occurrences, while the most used sequence is 'of the' with 368 occurrences. Also, the probability of 'an' is approximately 0.003, while for 'i was' it is close to 0.001. Note that the strings used as keys are all lowercase. For the unigram model, the keys are single strings, while for n-gram models we have n-tuples of strings.
Below we take a look at how we can get information from the conditional probabilities of the model, and how we can generate the next word in a sequence.
```
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P3 = NgramWordModel(3, wordseq)
print("Conditional Probabilities Table:", P3.cond_prob[('i', 'was')].dictionary, '\n')
print("Conditional Probability of 'once' given 'i was':", P3.cond_prob[('i', 'was')]['once'], '\n')
print("Next word after 'i was':", P3.cond_prob[('i', 'was')].sample())
```
First we print all the possible words that come after 'i was' and the times they have appeared in the model. Next we print the probability of 'once' appearing after 'i was', and finally we pick a word to proceed after 'i was'. Note that the word is picked according to its probability of appearing (high appearance count means higher chance to get picked).
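The conditional counting and count-proportional sampling can be sketched with the standard library. This is an illustration of the idea, not the library's `CountingProbDist`, and the sample sentence is made up:

```python
import random
from collections import Counter, defaultdict

def cond_counts(tokens, n):
    """Map each (n-1)-word context to a Counter of the words that follow it."""
    cond = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        cond[tuple(tokens[i:i + n - 1])][tokens[i + n - 1]] += 1
    return cond

tokens = "i was here and i was there and i was gone".split()
cond = cond_counts(tokens, 3)
followers = cond[('i', 'was')]
print(followers)   # 'here', 'there' and 'gone' each seen once after 'i was'
# pick the next word in proportion to its count
nxt = random.choices(list(followers), weights=followers.values())[0]
print(nxt)
```

A word with a higher count gets a proportionally higher chance of being picked, which is exactly the behavior of `sample()` described above.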
Let's take a look at the two character models:
```
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P1 = UnigramCharModel(wordseq)
P2 = NgramCharModel(2, wordseq)
print(P1.top(5))
print(P2.top(5))
print(P1['z'])
print(P2[('g', 'h')])
```
The most common letter is 'e', appearing more than 19000 times, and the most common sequence is "\_t". That is, a space followed by a 't'. Note that even though we do not count spaces for word models or unigram character models, we do count them for n-gram char models.
Also, the probability of the letter 'z' appearing is close to 0.0006, while for the bigram 'gh' it is 0.003.
### Generating Samples
Apart from reading the probabilities for n-grams, we can also use our model to generate word sequences, using the `samples` function in the word models.
```
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P1 = UnigramWordModel(wordseq)
P2 = NgramWordModel(2, wordseq)
P3 = NgramWordModel(3, wordseq)
print(P1.samples(10))
print(P2.samples(10))
print(P3.samples(10))
```
For the unigram model, we mostly get gibberish, since each word is picked according to its frequency of appearance in the text, without taking the preceding words into consideration. As we increase *n* though, we start to get samples that do have some semblance of coherency and are a little reminiscent of normal English. As we increase our data, these samples will get better.
Let's try it. We will add to the model more data to work with and let's see what comes out.
```
data = open_data("EN-text/flatland.txt").read()
data += open_data("EN-text/sense.txt").read()
wordseq = words(data)
P3 = NgramWordModel(3, wordseq)
P4 = NgramWordModel(4, wordseq)
P5 = NgramWordModel(5, wordseq)
P7 = NgramWordModel(7, wordseq)
print(P3.samples(15))
print(P4.samples(15))
print(P5.samples(15))
print(P7.samples(15))
```
Notice how the samples become more and more reasonable as we add more data and increase the *n* parameter. We still have a long way to go to realistic text generation, but at the same time we can see that with enough data even rudimentary algorithms can output something almost passable.
## VITERBI TEXT SEGMENTATION
### Overview
We are given a string containing words of a sentence, but all the spaces are gone! It is very hard to read and we would like to separate the words in the string. We can accomplish this by employing the `Viterbi Segmentation` algorithm. It takes as input the string to segment and a text model, and it returns a list of the separate words.
The algorithm operates in a dynamic programming fashion. It starts from the beginning of the string and iteratively builds the best solution using previous solutions. It accomplishes that by segmenting the string into "windows", each window representing a word (real or gibberish). It then calculates the probability of the sequence up to that window/word occurring and updates its solution. When it is done, it traces back from the final word and recovers the complete sequence of words.
### Implementation
```
psource(viterbi_segment)
```
The function takes as input a string and a text model, and returns the most probable sequence of words, together with the probability of that sequence.
The "window" is `w` and it includes the characters from *j* to *i*. We use it to "build" the following sequence: from the start to *j* and then `w`. We have previously calculated the probability from the start to *j*, so now we multiply that probability by `P[w]` to get the probability of the whole sequence. If that probability is greater than the probability we have calculated so far for the sequence from the start to *i* (`best[i]`), we update it.
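The same dynamic program can be sketched compactly, with a plain probability dictionary standing in for the text model. This is a simplified illustration under two stated assumptions: words are at most 10 characters long, and the input is fully segmentable with the given vocabulary (the library version handles unknown words via smoothing, which this sketch omits):

```python
def viterbi_segment_sketch(text, P):
    """P maps word -> probability. Returns (word list, sequence probability)."""
    best = [1.0] + [0.0] * len(text)        # best[i]: best probability for text[:i]
    word_at = [None] * (len(text) + 1)      # last word of the best split of text[:i]
    for i in range(1, len(text) + 1):
        for j in range(max(0, i - 10), i):  # assume words are at most 10 chars
            w = text[j:i]
            if P.get(w, 0.0) * best[j] > best[i]:
                best[i] = P[w] * best[j]
                word_at[i] = w
    seq, i = [], len(text)                  # trace back from the end
    while i > 0:
        seq.insert(0, word_at[i])
        i -= len(word_at[i])
    return seq, best[-1]

P = {'it': 0.1, 'is': 0.1, 'easy': 0.05}
seq, prob = viterbi_segment_sketch('itiseasy', P)
print(seq)   # ['it', 'is', 'easy']
```

The probability returned is the product 0.1 × 0.1 × 0.05, the joint probability of the chosen word sequence under the model.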
### Example
The model the algorithm uses is the `UnigramTextModel`. First we will build the model using the *Flatland* text and then we will try and separate a space-devoid sentence.
```
flatland = open_data("EN-text/flatland.txt").read()
wordseq = words(flatland)
P = UnigramWordModel(wordseq)
text = "itiseasytoreadwordswithoutspaces"
s, p = viterbi_segment(text,P)
print("Sequence of words is:",s)
print("Probability of sequence is:",p)
```
The algorithm correctly retrieved the words from the string. It also gave us the probability of this sequence, which is small, but still the most probable segmentation of the string.
## INFORMATION RETRIEVAL
### Overview
With **Information Retrieval (IR)** we find documents that are relevant to a user's needs for information. A popular example is a web search engine, which finds and presents to a user pages relevant to a query. Information retrieval is not limited to returning documents though, but can also be used for other types of queries. For example, answering questions when the query is a question, returning information when the query is a concept, and many other applications. An IR system consists of the following:
* A body (called corpus) of documents: A collection of documents, where the IR will work on.
* A query language: A query represents what the user wants.
* Results: The documents the system grades as relevant to a user's query and needs.
* Presentation of the results: How the results are presented to the user.
How does an IR system determine which documents are relevant though? We can mark a document as relevant if all the words in the query appear in it, and mark it as irrelevant otherwise. We can even extend the query language to support boolean operations (for example, "paint AND brush") and then mark the document as relevant according to the outcome of the query. This technique though does not give a level of relevancy: all documents are either relevant or irrelevant, but in reality some documents are more relevant than others.
So, instead of a boolean relevancy system, we use a *scoring function*. There are many scoring functions around for many different situations. One of the most used takes into account the frequency of the words appearing in a document, the frequency of a word appearing across documents (for example, the word "a" appears a lot, so it is not very important) and the length of a document (since large documents will have higher occurrences for the query terms, but a short document with a lot of occurrences seems very relevant). We combine these properties in a formula and we get a numeric score for each document, so we can then quantify relevancy and pick the best documents.
These scoring functions are not perfect though and there is room for improvement. For instance, for the above scoring function we assume each word is independent. That is not the case though, since words can share meaning. For example, the words "painter" and "painters" are closely related. If in a query we have the word "painter" and in a document the word "painters" appears a lot, this might be an indication that the document is relevant but we are missing out since we are only looking for "painter". There are a lot of ways to combat this. One of them is to reduce the query and document words into their stems. For example, both "painter" and "painters" have "paint" as their stem form. This can improve slightly the performance of algorithms.
To determine how good an IR system is, we give the system a set of queries (for which we know the relevant pages beforehand) and record the results. The two measures for performance are *precision* and *recall*. Precision measures the proportion of result documents that actually are relevant. Recall measures the proportion of relevant documents (which, as mentioned before, we know in advance) appearing in the result documents.
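The two measures can be computed directly from the result set and the known-relevant set; a small sketch with made-up document ids:

```python
def precision_recall(retrieved, relevant):
    """retrieved: docs the system returned; relevant: docs known to be good."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved)   # fraction of results that are good
    recall = len(hits) / len(relevant)       # fraction of good docs we found
    return precision, recall

p, r = precision_recall(retrieved=[1, 2, 3, 4], relevant=[2, 4, 5])
print(p, r)   # precision 0.5, recall 2/3
```

Note the trade-off: returning every document in the corpus gives perfect recall but terrible precision, which is why both measures are reported together.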
### Implementation
You can read the source code by running the command below:
```
psource(IRSystem)
```
The `stopwords` argument signifies words in the queries that should not be accounted for in documents. Usually they are very common words that do not add any significant information for a document's relevancy.
A quick guide for the functions in the `IRSystem` class:
* `index_document`: Add document to the collection of documents (named `documents`), which is a list of tuples. Also, count how many times each word in the query appears in each document.
* `index_collection`: Index a collection of documents given by `filenames`.
* `query`: Returns a list of `n` pairs of `(score, docid)` sorted on the score of each document. Also takes care of the special query "learn: X", where instead of the normal functionality we present the output of the terminal command "X".
* `score`: Scores a given document for a given word using `log(1+k)/log(1+n)`, where `k` is the number of occurrences of the query word in the document and `n` is the total number of words in the document. Other scoring functions can be used and you can overwrite this function to better suit your needs.
* `total_score`: Calculates the sum of the scores of all the query words in a given document.
* `present`/`present_results`: Presents the results as a list.
We also have the class `Document` that holds metadata of documents, like their title, url and number of words. An additional class, `UnixConsultant`, can be used to initialize an IR System for Unix command manuals. This is the example we will use to showcase the implementation.
### Example
First let's take a look at the source code of `UnixConsultant`.
```
psource(UnixConsultant)
```
The class creates an IR System with the stopwords "how do i the a of". We could add more words to exclude, but the queries we will test will generally be in that format, so it is convenient. After the initialization of the system, we get the manual files and start indexing them.
Let's build our Unix consultant and run a query:
```
uc = UnixConsultant()
q = uc.query("how do I remove a file")
top_score, top_doc = q[0][0], q[0][1]
print(top_score, uc.documents[top_doc].url)
```
We asked how to remove a file and the top result was the `rm` (the Unix command for remove) manual. This is exactly what we wanted! Let's try another query:
```
q = uc.query("how do I delete a file")
top_score, top_doc = q[0][0], q[0][1]
print(top_score, uc.documents[top_doc].url)
```
Even though we are basically asking for the same thing, we got a different top result. The `diff` command shows the differences between two files. So the system failed us and presented us an irrelevant document. Why is that? Unfortunately our IR system considers each word independent. "Remove" and "delete" have similar meanings, but since they are different words our system will not make the connection. So, the `diff` manual, which mentions the word `delete` a lot, gets the nod ahead of other manuals, while the `rm` one isn't in the result set since it doesn't use the word at all.
## INFORMATION EXTRACTION
**Information Extraction (IE)** is a method for finding occurrences of object classes and relationships in text. Unlike IR systems, an IE system includes (limited) notions of syntax and semantics. While it is difficult to extract object information in a general setting, for more specific domains the system is very useful. One model of an IE system makes use of templates that match with strings in a text.
A typical example of such a model is reading prices from web pages. Prices usually appear after a dollar and consist of numbers, maybe followed by two decimal points. Before the price, usually there will appear a string like "price:". Let's build a sample template.
With the following regular expression (*regex*) we can extract prices from text:
`[$][0-9]+([.][0-9][0-9])?`
Here `+` means 1 or more occurrences and `?` means at most 1 occurrence. Usually a template consists of a prefix, a target and a postfix regex. In this template, the prefix regex can be "price:", the target regex can be the above regex and the postfix regex can be empty.
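Python's `re` module can express this prefix-plus-target template directly; a small sketch (the sample text is made up for illustration):

```python
import re

# prefix "price:", target from the regex above, empty postfix
template = re.compile(r"price:\s*(\$[0-9]+([.][0-9][0-9])?)", re.IGNORECASE)

text = "Special offer! Price: $19.99 while stocks last."
match = template.search(text)
print(match.group(1))   # $19.99
```

Group 1 captures only the target; the prefix is matched but discarded, which is exactly the role of the prefix regex in the template.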
A template can match with multiple strings. If this is the case, we need a way to resolve the multiple matches. Instead of having just one template, we can use multiple templates (ordered by priority) and pick the match from the highest-priority template. We can also use other ways to pick. For the dollar example, we can pick the match closest to the numerical half of the highest match. For the text "Price $90, special offer $70, shipping $5" we would pick "$70" since it is closest to half of the highest match ("$90").
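That tie-breaking rule for the dollar example can be sketched as follows (an illustration of the rule described above, not library code):

```python
import re

def pick_price(text):
    """Among all dollar matches, return the amount closest to half the highest."""
    amounts = [float(m[1:]) for m in re.findall(r"\$[0-9]+(?:[.][0-9][0-9])?", text)]
    half_of_highest = max(amounts) / 2
    return min(amounts, key=lambda a: abs(a - half_of_highest))

print(pick_price("Price $90, special offer $70, shipping $5"))   # 70.0
```

Half of the highest match ($90) is 45, and among 90, 70 and 5 the amount closest to 45 is 70, matching the example in the text.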
The above is called *attribute-based* extraction, where we want to find attributes in the text (in the example, the price). A more sophisticated extraction system aims at dealing with multiple objects and the relations between them. When such a system reads the text "$100", it should determine not only the price but also which object has that price.
Relation extraction systems can be built as a series of finite state automata. Each automaton receives as input text, performs transformations on the text and passes it on to the next automaton as input. An automata setup can consist of the following stages:
1. **Tokenization**: Segments text into tokens (words, numbers and punctuation).
2. **Complex-word Handling**: Handles complex words such as "give up", or even names like "Smile Inc.".
3. **Basic-group Handling**: Handles noun and verb groups, segmenting the text into strings of verbs or nouns (for example, "had to give up").
4. **Complex Phrase Handling**: Handles complex phrases using finite-state grammar rules. For example, "Human+PlayedChess("with" Human+)?" can be one template/rule for capturing a relation of someone playing chess with others.
5. **Structure Merging**: Merges the structures built in the previous steps.
Finite-state, template based information extraction models work well for restricted domains, but perform poorly as the domain becomes more and more general. There are many models though to choose from, each with its own strengths and weaknesses. Some of the models are the following:
* **Probabilistic**: Using Hidden Markov Models, we can extract information in the form of prefix, target and postfix from a given text. Two advantages of using HMMs over templates is that we can train HMMs from data and don't need to design elaborate templates, and that a probabilistic approach behaves well even with noise. In a regex, if one character is off, we do not have a match, while with a probabilistic approach we have a smoother process.
* **Conditional Random Fields**: One problem with HMMs is the assumption of state independence. CRFs are very similar to HMMs, but they don't have the latter's constraint. In addition, CRFs make use of *feature functions*, which act as transition weights. For example, if for observation $e_{i}$ and state $x_{i}$ we have $e_{i}$ is "run" and $x_{i}$ is the state ATHLETE, we can have $f(x_{i}, e_{i}) = 1$ and equal to 0 otherwise. We can use multiple, overlapping features, and we can even use features for state transitions. Feature functions don't have to be binary (like the above example) but they can be real-valued as well. Also, we can use any $e$ for the function, not just the current observation. To bring it all together, we weigh a transition by the sum of features.
* **Ontology Extraction**: This is a method for compiling information and facts in a general domain. A fact can be in the form of `NP is NP`, where `NP` denotes a noun-phrase. For example, "Rabbit is a mammal".
## DECODERS
### Introduction
In this section we will try to decode ciphertext using probabilistic text models. A ciphertext is obtained by performing encryption on a text message. This encryption lets us communicate safely, as anyone who has access to the ciphertext but doesn't know how to decode it cannot read the message. We will restrict our study to <b>Monoalphabetic Substitution Ciphers</b>. These are primitive forms of cipher where each letter in the message text (also known as plaintext) is replaced by another letter of the alphabet.
### Shift Decoder
#### The Caesar cipher
The Caesar cipher, also known as the shift cipher, is a monoalphabetic substitution cipher where each letter is <i>shifted</i> by a fixed value. A shift by <b>`n`</b> in this context means that each letter in the plaintext is replaced with the letter `n` places down in the alphabet. For example the plaintext `"ABCDWXYZ"` shifted by `3` yields `"DEFGZABC"`. Note how `X` became `A`. This is because the alphabet is cyclic, i.e. the letter after the last letter in the alphabet, `Z`, is the first letter of the alphabet - `A`.
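A minimal version of such an encoder can be written in a few lines. This is an uppercase-only sketch for illustration; the library's `shift_encode` used below may handle more cases:

```python
def shift_encode_sketch(plaintext, n):
    """Shift each uppercase letter n places down the alphabet, wrapping past Z."""
    return ''.join(chr((ord(c) - ord('A') + n) % 26 + ord('A')) if c.isupper() else c
                   for c in plaintext)

print(shift_encode_sketch("ABCDWXYZ", 3))   # DEFGZABC
```

The `% 26` is what makes the alphabet cyclic: `X` shifted by 3 wraps around to `A`.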
```
plaintext = "ABCDWXYZ"
ciphertext = shift_encode(plaintext, 3)
print(ciphertext)
```
#### Decoding a Caesar cipher
To decode a Caesar cipher we exploit the fact that not all letters in the alphabet are used equally. Some letters are used more than others and some pairs of letters are more probable to occur together. We call a pair of consecutive letters a <b>bigram</b>.
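One possible character-level `bigrams` is a sliding window of width two. This is a sketch; the library's version may differ in details such as the handling of spaces:

```python
def char_bigrams(text):
    """All pairs of consecutive characters, spaces included."""
    return [text[i:i + 2] for i in range(len(text) - 1)]

print(char_bigrams('this is'))   # ['th', 'hi', 'is', 's ', ' i', 'is']
```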
```
print(bigrams('this is a sentence'))
```
We use `CountingProbDist` to get the probability distribution of bigrams. The Latin alphabet consists of only `26` letters, which limits the total number of possible substitutions to `26`. We reverse the shift encoding for a given `n` and check how probable the result is using the bigram distribution. We try all `26` values of `n`, i.e. from `n = 0` to `n = 25`, and use the value of `n` which gives the most probable plaintext.
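The brute-force search over all 26 shifts can be sketched with a toy scoring function. Here, counting occurrences of the common trigram "the" stands in for the bigram distribution that the real `ShiftDecoder` uses, so this is a crude illustration, not the library's method:

```python
def shift_decode_sketch(ciphertext, n):
    """Undo a lowercase Caesar shift of n (a negative n therefore encodes)."""
    return ''.join(chr((ord(c) - ord('a') - n) % 26 + ord('a')) if c.islower() else c
                   for c in ciphertext)

def crack(ciphertext):
    """Try all 26 shifts and keep the candidate that looks most like English."""
    candidates = [shift_decode_sketch(ciphertext, n) for n in range(26)]
    return max(candidates, key=lambda t: t.count('the'))

cipher = shift_decode_sketch("the weather there", -7)   # encode by shifting 7
print(crack(cipher))   # the weather there
```

With only 26 candidates, even this naive scoring usually singles out the one shift that produces English-looking text.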
```
%psource ShiftDecoder
```
#### Example
Let us encode a secret message using the Caesar cipher and then try decoding it using `ShiftDecoder`. We will again use `flatland.txt` to build the text model.
```
plaintext = "This is a secret message"
ciphertext = shift_encode(plaintext, 13)
print('The code is', '"' + ciphertext + '"')
flatland = open_data("EN-text/flatland.txt").read()
decoder = ShiftDecoder(flatland)
decoded_message = decoder.decode(ciphertext)
print('The decoded message is', '"' + decoded_message + '"')
```
### Permutation Decoder
Now let us try to decode messages encrypted by a general mono-alphabetic substitution cipher. The letters in the alphabet can be replaced by any permutation of letters. For example, if the alphabet consisted of `{A B C}` then it can be replaced by `{A C B}`, `{B A C}`, `{B C A}`, `{C A B}`, `{C B A}` or even `{A B C}` itself. Suppose we choose the permutation `{C B A}`, then the plain text `"CAB BA AAC"` would become `"ACB BC CCA"`. We can see that Caesar cipher is also a form of permutation cipher where the permutation is a cyclic permutation. Unlike the Caesar cipher, it is infeasible to try all possible permutations. The number of possible permutations in Latin alphabet is `26!` which is of the order $10^{26}$. We use graph search algorithms to search for a 'good' permutation.
```
psource(PermutationDecoder)
```
Each state/node in the graph is represented as a letter-to-letter map. If there is no mapping for a letter, it means the letter is unchanged in the permutation. These maps are stored as dictionaries. Each dictionary is a 'potential' permutation. We use the word 'potential' because not every dictionary necessarily represents a valid permutation, since a permutation cannot have repeating elements. For example the dictionary `{'A': 'B', 'C': 'X'}` is invalid because `'A'` is replaced by `'B'`, but so is `'B'` because the dictionary doesn't have a mapping for `'B'`. Two dictionaries can also represent the same permutation, e.g. `{'A': 'C', 'C': 'A'}` and `{'A': 'C', 'B': 'B', 'C': 'A'}` represent the same permutation where `'A'` and `'C'` are interchanged and all other letters remain unaltered. To ensure that we get a valid permutation, a goal state must map all letters in the alphabet. We also prevent repetitions in the permutation by allowing only those actions which go to a new state/node in which the newly added letter in the dictionary maps to a previously unmapped letter. These two rules together ensure that the dictionary of a goal state will represent a valid permutation.
The score of a state is determined using word scores, unigram scores, and bigram scores. Experiment with different weightings for the word, unigram and bigram scores and see how they affect the decoding.
```
ciphertexts = ['ahed world', 'ahed woxld']
pd = PermutationDecoder(canonicalize(flatland))
for ctext in ciphertexts:
print('"{}" decodes to "{}"'.format(ctext, pd.decode(ctext)))
```
As evident from the above example, permutation decoding using best-first search is sensitive to the initial text. This is because not only must the final dictionary, with substitutions for all letters, have a good score, but so must the intermediate dictionaries. You could think of it as performing a local search by finding substitutions for each letter one by one. We could get very different results by changing even a single letter, because that letter could be a deciding factor for selecting a substitution in the early stages, which snowballs and affects the later stages. To make the search better we can use different definitions of score in different stages and optimize which letter to substitute first.
# VacationPy
----
#### Note
* Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
### Store Part I results into DataFrame
* Load the csv exported in Part I to a DataFrame
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
# Import API key
from api_keys import g_key
print(g_key)
csvfile = "../output_data/weather_data.csv"
heatmap_df = pd.read_csv(csvfile)
heatmap_df
```
### Humidity Heatmap
* Configure gmaps.
* Use the Lat and Lng as locations and Humidity as the weight.
* Add Heatmap layer to map.
```
gmaps.configure(api_key=g_key)
locations = heatmap_df[["Lat", "Lng"]].astype(float)
humidity = heatmap_df["Humidity"].astype(float)
humidity.max()
fig = gmaps.figure()
heatmap_layer = gmaps.heatmap_layer(locations, weights=humidity,
dissipating=False, max_intensity=100,
point_radius = 1)
fig.add_layer(heatmap_layer)
fig
```
### Create new DataFrame fitting weather criteria
* Narrow down the cities to fit weather conditions.
* Drop any rows with null values.
```
heatmap_data = heatmap_df.loc[(heatmap_df["Max Temp"] < 100) &
(heatmap_df["Max Temp"] >=70) &
(heatmap_df["Cloudiness"] == 0) &
(heatmap_df["Wind Speed"] < 10)].dropna()
heatmap_data
```
### Hotel Map
* Store into variable named `hotel_df`.
* Add a "Hotel Name" column to the DataFrame.
* Set parameters to search for hotels within 5000 meters.
* Hit the Google Places API for each city's coordinates.
* Store the first Hotel result into the DataFrame.
* Plot markers on top of the heatmap.
```
hotel_df = heatmap_data.loc[:,["City","Country","Lat","Lng"]]
hotel_df["Hotel Name"] = ""
hotel_df
import json
url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json?"
params = {
"radius": 5000,
"type": "hotel",
"keyword": "hotel",
"key": g_key}
for index, row in hotel_df.iterrows():
lat = row['Lat']
lon = row['Lng']
city_name = row['City']
params['location'] = f"{lat},{lon}"
response = requests.get(url, params=params).json()
results = response['results']
try:
print(f"Closest hotel in {city_name} is {results[0]['name']}.")
hotel_df.loc[index, "Hotel Name"] = results[0]['name']
# if there is no hotel available, show missing field
except (KeyError, IndexError):
print("No hotel")
hotel_df
hotel_drop = hotel_df.loc[hotel_df["Hotel Name"] != ""]
hotel_drop
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
for index, row in hotel_drop.iterrows():
city = row['City']
country = row['Country']
lat = row['Lat']
lon = row['Lng']
hotel_name = row['Hotel Name']
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_drop.iterrows()]
locations = hotel_drop[["Lat", "Lng"]]
hotel_info
# Add marker layer ontop of heat map
markers = gmaps.marker_layer(locations, info_box_content = hotel_info)
fig.add_layer(markers)
# Display figure
fig
```
- Only 2 locations had a max temperature lower than 80 degrees but higher than 70, so the filter was relaxed to lower than 100 degrees but higher than 70, which yielded 8 locations.
- We can find more locations in the Northern Hemisphere than in the Southern.
- In addition, most hotels are located near the Mediterranean area.
<center>
<img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# Loops in Python
Estimated time needed: **20** minutes
## Objectives
After completing this lab you will be able to:
* work with the loop statements in Python, including for-loop and while-loop.
<h1>Loops in Python</h1>
<p><strong>Welcome!</strong> This notebook will teach you about the loops in the Python Programming Language. By the end of this lab, you'll know how to use the loop statements in Python, including for loop, and while loop.</p>
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li>
<a href="#loop">Loops</a>
<ul>
<li><a href="#range">Range</a></li>
<li><a href="#for">What is <code>for</code> loop?</a></li>
<li><a href="#while">What is <code>while</code> loop?</a></li>
</ul>
</li>
<li>
<a href="#quiz">Quiz on Loops</a>
</li>
</ul>
</div>
<hr>
<h2 id="loop">Loops</h2>
<h3 id="range">Range</h3>
Sometimes, you might want to repeat a given operation many times. Repeated executions like this are performed by <b>loops</b>. We will look at two types of loops, <code>for</code> loops and <code>while</code> loops.
Before we discuss loops, let's discuss the <code>range</code> object. It is helpful to think of the range object as an ordered list. For now, let's look at the simplest case. If we would like to generate an object that contains elements ordered from 0 to 2, we simply use the following command:
```
# Use the range
range(3)
```
<img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/labs/Module%203/images/range.PNG" width="300" />
***NOTE: While in Python 2.x it returned a list as seen in video lessons, in 3.x it returns a range object.***
<h3 id="for">What is <code>for</code> loop?</h3>
The <code>for</code> loop enables you to execute a code block multiple times. For example, you would use this if you would like to print out every element in a list.\
Let's try to use a <code>for</code> loop to print all the years presented in the list <code>dates</code>:
This can be done as follows:
```
# For loop example
dates = [1982,1980,1973]
N = len(dates)
for i in range(N):
print(dates[i])
```
The code in the indent is executed <code>N</code> times, and the value of <code>i</code> increases by 1 on each execution. The statement executed is to <code>print</code> out the value in the list at index <code>i</code> as shown here:
<img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/labs/Module%203/images/LoopsForRange.gif" width="800" />
In this example we can print out a sequence of numbers from 0 to 7:
```
# Example of for loop
for i in range(0, 8):
print(i)
```
In Python we can directly access the elements in the list as follows:
```
# Example of for loop, loop through list
for year in dates:
print(year)
```
For each iteration, the value of the variable <code>year</code> behaves like the value of <code>dates\[i]</code> in the first example:
<img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/labs/Module%203/images/LoopsForList.gif" width="800">
We can change the elements in a list:
```
# Use for loop to change the elements in list
squares = ['red', 'yellow', 'green', 'purple', 'blue']
for i in range(0, 5):
print("Before square ", i, 'is', squares[i])
squares[i] = 'white'
print("After square ", i, 'is', squares[i])
```
We can access the index and the elements of a list as follows:
```
# Loop through the list and iterate on both index and element value
squares=['red', 'yellow', 'green', 'purple', 'blue']
for i, square in enumerate(squares):
print(i, square)
```
<h3 id="while">What is <code>while</code> loop?</h3>
As you can see, the <code>for</code> loop is used for a controlled flow of repetition. However, what if we don't know when we want to stop the loop? What if we want to keep executing a code block until a certain condition is met? The <code>while</code> loop exists as a tool for repeated execution based on a condition. The code block will keep being executed until the given logical condition returns a **False** boolean value.
Let’s say we would like to iterate through list <code>dates</code> and stop at the year 1973, then print out the number of iterations. This can be done with the following block of code:
```
# While Loop Example
dates = [1982, 1980, 1973, 2000]
i = 0
year = dates[0]
while(year != 1973):
    print(year)
    i = i + 1
    year = dates[i]
print("It took", i, "repetitions to get out of loop.")
```
A while loop keeps iterating as long as the condition in its argument is met, and exits as soon as it is not, as shown in the following figure:
<img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/labs/Module%203/images/LoopsWhile.gif" width="650" />
<hr>
<h2 id="quiz">Quiz on Loops</h2>
Write a <code>for</code> loop that prints out all the elements between <b>-5</b> and <b>5</b> using the range function.
```
# Write your code below and press Shift+Enter to execute
for i in range(-4, 5):
    print(i)
```
<details><summary>Click here for the solution</summary>
```python
for i in range(-4, 5):
    print(i)
```
</details>
Print the elements of the following list: <code>Genres = ['rock', 'R&B', 'Soundtrack', 'R&B', 'soul', 'pop']</code>
Make sure you follow Python conventions.
```
# Write your code below and press Shift+Enter to execute
Genres = ['rock', 'R&B', 'Soundtrack', 'R&B', 'soul', 'pop']
for Genre in Genres:
    print(Genre)
```
<details><summary>Click here for the solution</summary>
```python
Genres = ['rock', 'R&B', 'Soundtrack', 'R&B', 'soul', 'pop']
for Genre in Genres:
    print(Genre)
```
</details>
<hr>
Write a for loop that prints out the following list: <code>squares = ['red', 'yellow', 'green', 'purple', 'blue']</code>
```
# Write your code below and press Shift+Enter to execute
squares=['red', 'yellow', 'green', 'purple', 'blue']
for square in squares:
    print(square)
```
<details><summary>Click here for the solution</summary>
```python
squares=['red', 'yellow', 'green', 'purple', 'blue']
for square in squares:
    print(square)
```
</details>
<hr>
Write a while loop to display the values of the Rating of an album playlist stored in the list <code>PlayListRatings</code>. If the score is less than 6, exit the loop. The list <code>PlayListRatings</code> is given by: <code>PlayListRatings = [10, 9.5, 10, 8, 7.5, 5, 10, 10]</code>
```
# Write your code below and press Shift+Enter to execute
PlayListRatings = [10, 9.5, 10, 8, 7.5, 5, 10, 10]
i = 0
Rating = PlayListRatings[0]
while(i < len(PlayListRatings) and Rating >= 6):
    Rating = PlayListRatings[i]
    print(Rating)
    i = i + 1
```
<details><summary>Click here for the solution</summary>
```python
PlayListRatings = [10, 9.5, 10, 8, 7.5, 5, 10, 10]
i = 0
Rating = PlayListRatings[0]
while(i < len(PlayListRatings) and Rating >= 6):
    Rating = PlayListRatings[i]
    print(Rating)
    i = i + 1
```
</details>
<hr>
Write a while loop to copy the <code>'orange'</code> strings from the list <code>squares</code> to the list <code>new_squares</code>. Stop and exit the loop as soon as a value in the list is not <code>'orange'</code>:
```
# Write your code below and press Shift+Enter to execute
squares = ['orange', 'orange', 'purple', 'blue ', 'orange']
new_squares = []
i = 0
while(i < len(squares) and squares[i] == 'orange'):
    new_squares.append(squares[i])
    i = i + 1
print(new_squares)
```
<details><summary>Click here for the solution</summary>
```python
squares = ['orange', 'orange', 'purple', 'blue ', 'orange']
new_squares = []
i = 0
while(i < len(squares) and squares[i] == 'orange'):
    new_squares.append(squares[i])
    i = i + 1
print(new_squares)
```
</details>
<hr>
<h2>The last exercise!</h2>
<p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkPY0101ENSkillsNetwork19487395-2021-01-01" target="_blank">this article</a> to learn how to share your work.</p>
<hr>
## Author
<a href="https://www.linkedin.com/in/joseph-s-50398b136/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkPY0101ENSkillsNetwork19487395-2021-01-01" target="_blank">Joseph Santarcangelo</a>
## Other contributors
<a href="https://www.linkedin.com/in/jiahui-mavis-zhou-a4537814a?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkPY0101ENSkillsNetwork19487395-2021-01-01">Mavis Zhou</a>
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ---------- | ---------------------------------- |
| 2020-08-26 | 2.0 | Lavanya | Moved lab to course repo in GitLab |
| | | | |
| | | | |
<hr/>
<h3 align="center"> © IBM Corporation 2020. All rights reserved. </h3>
# Amazon Comprehend Custom Classification - Lab
This notebook will serve as a template for the overall process of taking a text dataset and integrating it into [Amazon Comprehend Custom Classification](https://docs.aws.amazon.com/comprehend/latest/dg/how-document-classification.html) and perform NLP for custom classification.
## Overview
1. [Introduction to Amazon Comprehend Custom Classification](#Introduction)
1. [Obtaining Your Data](#data)
1. [Pre-processing data](#preprocess)
1. [Building Custom Classification model](#build)
1. [Evaluate Custom Classification model](#evaluate)
1. [Cleanup](#cleanup)
## Introduction to Amazon Comprehend Custom Classification <a class="anchor" id="Introduction"/>
If you are not familiar with Amazon Comprehend Custom Classification you can learn more about this tool on these pages:
* [Product Page](https://aws.amazon.com/comprehend/)
* [Product Docs](https://docs.aws.amazon.com/comprehend/latest/dg/how-document-classification.html)
## Bring Your Own Data <a class="anchor" id="data"/>
We will be using multi-class mode in the Amazon Comprehend custom classifier. Multi-class mode assigns a single class to each document, and the individual classes are mutually exclusive; this part is important. If classes overlap, we should expect the model to try to learn and predict the overlapping classes, and accuracy might be impacted.
We are going to upload a custom dataset. The dataset must be a .csv file whose format is one class and one document per line. For example:
```
CLASS,Text of document 1
CLASS,Text of document 2
CLASS,Text of document 3
```
If our file is not in the above format, we will convert it to that format.
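As an illustration, here is a minimal sketch (with made-up labels and column names, not this lab's dataset) of how a two-column dataframe maps onto that target format when written without a header or index:

```python
import io
import pandas as pd

# Hypothetical two-column dataframe: one class and one document per row
df = pd.DataFrame({
    "label": ["SPORTS", "FINANCE"],
    "text": ["The team won the final.", "Markets closed higher today."],
})

# Write CLASS,text lines with no header and no index, as the classifier expects
buf = io.StringIO()
df.to_csv(buf, header=False, index=False)
print(buf.getvalue())
```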
To begin, the cell below will complete the following:
1. Create a directory for the data files.
1. Upload the file manually to the nlp_data folder.
```
!mkdir nlp_data
```
With the data directory created and the file uploaded, we will now import the Pandas library as well as a few other data science tools in order to inspect the information.
```
import boto3
from time import sleep
import os
import subprocess
import pandas as pd
import json
import time
import pprint
import numpy as np
import seaborn as sn
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter
import matplotlib.dates as mdates
import secrets
import string
import datetime
import random
# run this only once
! pip install tqdm
from tqdm import tqdm
tqdm.pandas()
```
Please use the credentials that were part of the initial login screen to set the environment variables.
```
import os
os.environ['AWS_DEFAULT_REGION'] = "us-east-1"
os.environ['AWS_ACCESS_KEY_ID'] = "<AWS_ACCESS_KEY_ID>"
os.environ['AWS_SECRET_ACCESS_KEY'] = "<AWS_SECRET_ACCESS_KEY>"
os.environ['AWS_SESSION_TOKEN'] = "<AWS_SESSION_TOKEN>"
```
Test that the previous AWS configuration is set properly by running the following command:
```
!echo $AWS_SESSION_TOKEN
```
Let's load the data we uploaded into a dataframe and look at it. Examine the number of columns that are present, and look at a few samples to see the content of the data.
```
raw_data = pd.read_csv('nlp_data/raw_data.csv')
raw_data.head()
raw_data['CATEGORY_NAME'] = raw_data['CATEGORY_NAME'].astype(str)
raw_data.groupby('CATEGORY_NAME')['CASE_SUBJECT_FULL'].count()
```
To convert the data to the format required by the Amazon Comprehend custom classifier,
```
CLASS,Text of document 1
CLASS,Text of document 2
CLASS,Text of document 3
```
we will identify which column is the class and which columns contain the text content we would like to train on, and create a new dataframe with the selected columns.
```
selected_columns = ['CATEGORY_NAME', 'CASE_SUBJECT_FULL', 'CASE_DESCRIPTION_FULL']
# Select the columns we are interested in
selected_data = raw_data[selected_columns]
selected_data = selected_data[selected_data['CATEGORY_NAME']!='Not Known']
selected_data.shape
selected_data.groupby('CATEGORY_NAME')['CASE_SUBJECT_FULL'].count()
```
Since we are interested in measuring the accuracy of the model against known labels, we hold out 10% of the dataset: we will later run inference on it and generate a performance matrix to assess the model. We stratify the split on 'CATEGORY_NAME'.
```
from sklearn.model_selection import train_test_split
train_data, test_data = train_test_split(selected_data, test_size=0.1, random_state=0,
stratify=selected_data[['CATEGORY_NAME']])
train_data_df = train_data.copy()
test_data_df = test_data.copy()
```
## Pre-processing data<a class="anchor" id="preprocess"/>
For training, the file format must conform with the [following](https://docs.aws.amazon.com/comprehend/latest/dg/how-document-classification-training.html):
- File must contain one label and one text per line – 2 columns
- No header
- Format UTF-8, carriage return “\n”.
Labels “must be uppercase, can be multitoken, have whitespace, consist of multiple words connected by underscores or hyphens, or may even contain a comma, as long as it is correctly escaped.”
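As a small illustration of these rules, a hypothetical helper (not part of the lab, which uses the explicit `labels_dict` mapping below) could normalize free-text labels by uppercasing them and joining words with underscores:

```python
# Hypothetical label normalizer consistent with the rules quoted above:
# uppercase, multi-word names joined by underscores.
def normalize_label(label: str) -> str:
    return "_".join(label.strip().split()).upper()

print(normalize_label("mean of transportation"))  # MEAN_OF_TRANSPORTATION
```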
Here are the proposed labels:
| Index | Original | For training |
| --- | --- | --- |
| 1 | Company | COMPANY |
| 2 | EducationalInstitution | EDUCATIONALINSTITUTION |
| 3 | Artist | ARTIST |
| 4 | Athlete | ATHLETE |
| 5 | OfficeHolder | OFFICEHOLDER |
| 6 | MeanOfTransportation | MEANOFTRANSPORTATION |
| 7 | Building | BUILDING |
| 8 | NaturalPlace | NATURALPLACE |
| 9 | Village | VILLAGE |
| 10 | Animal | ANIMAL |
| 11 | Plant | PLANT |
| 12 | Album | ALBUM |
| 13 | Film | FILM |
| 14 | WrittenWork | WRITTENWORK |
For the inference part, when you want your custom model to determine which label corresponds to a given text, the file format must conform with the following:
- File must contain text per line
- No header
- Format UTF-8, carriage return “\n”.
```
labels_dict = {'Company':'COMPANY',
'EducationalInstitution':'EDUCATIONALINSTITUTION',
'Artist':'ARTIST',
'Athlete':'ATHLETE',
'OfficeHolder':'OFFICEHOLDER',
'MeanOfTransportation':'MEANOFTRANSPORTATION',
'Building':'BUILDING',
'NaturalPlace':'NATURALPLACE',
'Village':'VILLAGE',
'Animal':'ANIMAL',
'Plant':'PLANT',
'Album':'ALBUM',
'Film':'FILM',
'WrittenWork':'WRITTENWORK'
}
import re

def remove_between_square_brackets(text):
    return re.sub(r'\[[^]]*\]', '', text)

def denoise_text(text):
    text = remove_between_square_brackets(text)
    return text

def preprocess_text(document):
    document = denoise_text(document)
    # Remove all the special characters
    document = re.sub(r'\W', ' ', str(document))
    # Remove all single characters
    document = re.sub(r'\s+[a-zA-Z]\s+', ' ', document)
    # Remove single characters from the start
    document = re.sub(r'\^[a-zA-Z]\s+', ' ', document)
    # Substitute multiple spaces with a single space
    document = re.sub(r'\s+', ' ', document, flags=re.I)
    # Remove a prefixed 'b'
    document = re.sub(r'^b\s+', '', document)
    return document

def process_data(df):
    df['CATEGORY_NAME'] = df['CATEGORY_NAME'].apply(labels_dict.get)
    # Join all text columns (after the class column) into a single document string
    df['document'] = df[df.columns[1:]].progress_apply(
        lambda x: ' '.join(x.dropna().astype(str)),
        axis=1
    )
    df.drop(['CASE_SUBJECT_FULL', 'CASE_DESCRIPTION_FULL'], axis=1, inplace=True)
    df.columns = ['class', 'text']
    df['text'] = df['text'].progress_apply(preprocess_text)
    return df
train_data_df = process_data(train_data_df)
test_data_df = process_data(test_data_df)
```
At this point we have all the data needed to build the two files.
### Building The Target Train and Test Files
With all of the above spelled out the next thing to do is to build 2 distinct files:
1. `comprehend-train.csv` - A CSV file containing 2 columns without header, first column class, second column text.
1. `comprehend-test.csv` - A CSV file containing 1 column of text without header.
```
DSTTRAINFILE='nlp_data/comprehend-train.csv'
DSTVALIDATIONFILE='nlp_data/comprehend-test.csv'
train_data_df.to_csv(path_or_buf=DSTTRAINFILE,
header=False,
index=False,
escapechar='\\',
doublequote=False,
quotechar='"')
validattion_data_df = test_data_df.copy()
validattion_data_df.drop(['class'], axis=1, inplace=True)
validattion_data_df.to_csv(path_or_buf=DSTVALIDATIONFILE,
header=False,
index=False,
escapechar='\\',
doublequote=False,
quotechar='"')
```
## Getting Started With Amazon Comprehend
Now that all of the required data exists, we can start working on the Comprehend custom classifier.
The custom classifier workload is built in two steps:
1. Training the custom model – no particular machine learning or deep learning knowledge is necessary
1. Classifying new data
Let's follow the steps below to train the custom model:
1. Create a bucket that will host the training data
1. Create a bucket that will host the training artifacts and production results (this can be the same bucket)
1. Configure an IAM role allowing Comprehend to [access newly created buckets](https://docs.aws.amazon.com/comprehend/latest/dg/access-control-managing-permissions.html#auth-role-permissions)
1. Prepare data for training
1. Upload training data in the S3 bucket
1. Launch a “Train Classifier” job from the console: “Amazon Comprehend” > “Custom Classification” > “Train Classifier”
1. Prepare data for classification (one text per line, no header, same format as training data). Some more details [here](https://docs.aws.amazon.com/comprehend/latest/dg/how-class-run.html)
Now, using the metadata stored on this SageMaker notebook instance, determine the region we are operating in. If you are using a Jupyter notebook outside of SageMaker, simply define `region` as the string for the region you would like to use for Comprehend and S3.
```
with open('/opt/ml/metadata/resource-metadata.json') as notebook_info:
    data = json.load(notebook_info)
    resource_arn = data['ResourceArn']
    region = resource_arn.split(':')[3]
print(region)
```
Configure your AWS APIs
```
session = boto3.Session(region_name=region)
comprehend = session.client(service_name='comprehend')
```
Let's create an S3 bucket that will host the training data and test data.
```
# Create a uniquely named S3 bucket for the training and test data
print(region)
s3 = boto3.client('s3')
prefix = 'ComprehendBYODPediaClassification'
account_id = boto3.client('sts').get_caller_identity().get('Account')
bucket_name = account_id + "-comprehend-byod-classification-{}".format(''.join(
    secrets.choice(string.ascii_lowercase + string.digits) for i in range(8)))
print(bucket_name)
if region != "us-east-1":
    s3.create_bucket(Bucket=bucket_name, CreateBucketConfiguration={'LocationConstraint': region})
else:
    s3.create_bucket(Bucket=bucket_name)
```
### Uploading the data
```
boto3.Session().resource('s3').Bucket(bucket_name).Object(prefix+'/'+DSTTRAINFILE).upload_file(DSTTRAINFILE)
boto3.Session().resource('s3').Bucket(bucket_name).Object(prefix+'/'+DSTVALIDATIONFILE).upload_file(DSTVALIDATIONFILE)
```
### Configure an IAM role
In order to authorize Amazon Comprehend to perform bucket reads and writes during the training or during the inference, we must grant Amazon Comprehend access to the Amazon S3 bucket that we created.
We are going to create a data access role in our account to trust the Amazon Comprehend service principal.
```
iam = boto3.client("iam")
role_name = "ComprehendBucketAccessRole-{}".format(''.join(
secrets.choice(string.ascii_lowercase + string.digits) for i in range(8)))
assume_role_policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "comprehend.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
create_role_response = iam.create_role(
    RoleName=role_name,
    AssumeRolePolicyDocument=json.dumps(assume_role_policy_document)
)
policy_arn = "arn:aws:iam::aws:policy/ComprehendFullAccess"
iam.attach_role_policy(
RoleName = role_name,
PolicyArn = policy_arn
)
# Now add S3 support
iam.attach_role_policy(
PolicyArn='arn:aws:iam::aws:policy/AmazonS3FullAccess',
RoleName=role_name
)
time.sleep(60) # wait for a minute to allow IAM role policy attachment to propagate
role_arn = create_role_response["Role"]["Arn"]
print(role_arn)
```
## Building Custom Classification model <a class="anchor" id="build"/>
Launch the classifier training:
```
s3_train_data = 's3://{}/{}/{}'.format(bucket_name, prefix, DSTTRAINFILE)
s3_output_job = 's3://{}/{}/{}'.format(bucket_name, prefix, 'output/train_job')
print('training data location: ',s3_train_data, "output location:", s3_output_job)
id = str(datetime.datetime.now().strftime("%s"))
training_job = comprehend.create_document_classifier(
DocumentClassifierName='BYOD-Custom-Classifier-'+ id,
DataAccessRoleArn=role_arn,
InputDataConfig={
'S3Uri': s3_train_data
},
OutputDataConfig={
'S3Uri': s3_output_job
},
LanguageCode='en'
)
jobArn = training_job['DocumentClassifierArn']
max_time = time.time() + 3*60*60  # 3 hours
while time.time() < max_time:
    describe_custom_classifier = comprehend.describe_document_classifier(
        DocumentClassifierArn=jobArn
    )
    status = describe_custom_classifier["DocumentClassifierProperties"]["Status"]
    print("Custom classifier: {}".format(status))
    if status == "TRAINED" or status == "IN_ERROR":
        break
    time.sleep(60)
```
## Trained model confusion matrix
When a custom classifier model is trained, Amazon Comprehend creates a confusion matrix that provides metrics on how well the model performed in training. This enables you to assess how well the classifier will perform when run. This matrix shows a matrix of labels as predicted by the model compared to actual labels and is created using 10 to 20 percent of the documents submitted to test the trained model.
```
#Retrieve the S3URI from the model output and create jobkey variable.
job_output = describe_custom_classifier["DocumentClassifierProperties"]["OutputDataConfig"]["S3Uri"]
path_prefix = 's3://{}/'.format(bucket_name)
job_key = os.path.relpath(job_output, path_prefix)
#Download the model metrics
boto3.Session().resource('s3').Bucket(bucket_name).download_file(job_key, './output.tar.gz')
!ls -ltr
#Unpack the gzip file
!tar xvzf ./output.tar.gz
import json
with open('output/confusion_matrix.json') as f:
comprehend_cm = json.load(f)
cm_array = comprehend_cm['confusion_matrix']
def plot_confusion_matrix(cm_array, labels):
    df_cm = pd.DataFrame(cm_array, index=[i for i in labels],
                         columns=[i for i in labels])
    # sn.set(font_scale=1.4)  # for label size
    plt.figure(figsize=(15, 13))
    sn.heatmap(df_cm, annot=True)
    plt.show()
plot_confusion_matrix(cm_array, labels = comprehend_cm['labels'])
from sklearn.metrics import confusion_matrix
import numpy as np
cm = np.array(comprehend_cm['confusion_matrix'])
cols = ['label','precision', 'recall','f1_score','type']
models_report = pd.DataFrame(columns = cols)
def precision(label, confusion_matrix):
    col = confusion_matrix[:, label]
    return confusion_matrix[label, label] / col.sum()

def recall(label, confusion_matrix):
    row = confusion_matrix[label, :]
    return confusion_matrix[label, label] / row.sum()

def precision_macro_average(confusion_matrix):
    rows, columns = confusion_matrix.shape
    sum_of_precisions = 0
    for label in range(rows):
        sum_of_precisions += precision(label, confusion_matrix)
    return sum_of_precisions / rows

def recall_macro_average(confusion_matrix):
    rows, columns = confusion_matrix.shape
    sum_of_recalls = 0
    for label in range(columns):
        sum_of_recalls += recall(label, confusion_matrix)
    return sum_of_recalls / columns

def f1_score(precision, recall):
    return 2 * (precision * recall) / (precision + recall)

def accuracy(confusion_matrix):
    diagonal_sum = confusion_matrix.trace()
    sum_of_all_elements = confusion_matrix.sum()
    return diagonal_sum / sum_of_all_elements
def display_confusion_matrix(cm, labels, matrix_type, models_report):
    count = 0
    for label in labels:
        p = precision(count, cm)
        r = recall(count, cm)
        f1 = f1_score(p, r)
        tmp = pd.Series({'label': label,
                         'precision': p,
                         'recall': r,
                         'f1_score': f1,
                         'type': matrix_type})
        models_report = models_report.append(tmp, ignore_index=True)
        count += 1
    p_total = precision_macro_average(cm)
    print(f"precision total: {p_total:2.4f}")
    r_total = recall_macro_average(cm)
    print(f"recall total: {r_total:2.4f}")
    a_total = accuracy(cm)
    print(f"accuracy total: {a_total:2.4f}")
    f1_total = f1_score(p_total, r_total)
    print(f"f1 total: {f1_total:2.4f}")
    return models_report
training_model_report = display_confusion_matrix(cm, comprehend_cm['labels'], 'training_matrix', models_report)
training_model_report.sort_values(by=['f1_score'], inplace=True, ascending=False)
print(training_model_report.to_string(index=False))
```
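As a quick sanity check of these metric definitions, independent of the Comprehend output, the same formulas can be applied by hand to a tiny made-up two-class matrix (rows are actual labels, columns are predictions):

```python
import numpy as np

# Made-up 2x2 confusion matrix: rows = actual, columns = predicted
cm_toy = np.array([[8, 2],
                   [1, 9]])

precision_0 = cm_toy[0, 0] / cm_toy[:, 0].sum()  # 8 / (8 + 1)
recall_0 = cm_toy[0, 0] / cm_toy[0, :].sum()     # 8 / (8 + 2)
accuracy_toy = cm_toy.trace() / cm_toy.sum()     # (8 + 9) / 20
print(precision_0, recall_0, accuracy_toy)
```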
## Evaluate Custom Classification model <a class="anchor" id="evaluate"/>
We will run a custom classification job to evaluate the model on the test data we held out.
```
model_arn = describe_custom_classifier["DocumentClassifierProperties"]["DocumentClassifierArn"]
print(model_arn)
s3_test_data = 's3://{}/{}/{}'.format(bucket_name, prefix, DSTVALIDATIONFILE)
print(s3_test_data)
id = str(datetime.datetime.now().strftime("%s"))
start_response = comprehend.start_document_classification_job(
JobName = 'BYOD-Custom-Classifier-Inference'+ id,
InputDataConfig={
'S3Uri': s3_test_data,
'InputFormat': 'ONE_DOC_PER_LINE'
},
OutputDataConfig={
'S3Uri': s3_output_job
},
DataAccessRoleArn=role_arn,
DocumentClassifierArn=model_arn
)
print("Start response: %s\n" % start_response)
# Check the status of the job
describe_response = comprehend.describe_document_classification_job(JobId=start_response['JobId'])
print("Describe response: %s\n" % describe_response)
# List all classification jobs in account
list_response = comprehend.list_document_classification_jobs()
print("List response: %s\n" % list_response)
max_time = time.time() + 3*60*60  # 3 hours
while time.time() < max_time:
    describe_response = comprehend.describe_document_classification_job(JobId=start_response['JobId'])
    status = describe_response["DocumentClassificationJobProperties"]["JobStatus"]
    print("Custom classifier job status : {}".format(status))
    if status in ("COMPLETED", "FAILED", "STOP_REQUESTED", "STOPPED"):
        break
    time.sleep(30)
inference_s3uri = describe_response["DocumentClassificationJobProperties"]["OutputDataConfig"]["S3Uri"]
path_prefix = 's3://{}/'.format(bucket_name)
inference_job_key = os.path.relpath(inference_s3uri, path_prefix)
boto3.Session().resource('s3').Bucket(bucket_name).download_file(inference_job_key, './inference_output.tar.gz')
#Unpack the gzip file
!tar xvzf ./inference_output.tar.gz
def load_jsonl(input_path) -> list:
    """
    Read a list of objects from a JSON lines file.
    """
    data = []
    with open(input_path, 'r', encoding='utf-8') as f:
        for line in f:
            data.append(json.loads(line.rstrip('\n|\r')))
    print('Loaded {} records from {}'.format(len(data), input_path))
    return data
inference_data = load_jsonl('predictions.jsonl')
test_data_df.shape
inferred_class = []
for line in inference_data:
    # Take the name of the highest-scoring class for each document
    predicted_class = sorted(line['Classes'], key=lambda x: x['Score'], reverse=True)[0]['Name']
    inferred_class.append(predicted_class)
test_data_df["predicted_class"] = inferred_class
test_data_df.head()
```
Let's generate a confusion matrix and other evaluation metrics for the inferred results.
```
import sklearn
print('The scikit-learn version is {}.'.format(sklearn.__version__))
from sklearn.metrics import confusion_matrix
y_true = test_data_df['class']
y_pred = test_data_df['predicted_class']
labels = comprehend_cm['labels']
cm_inference = confusion_matrix(y_true, y_pred,labels=labels)
plot_confusion_matrix(cm_inference, labels = labels)
inference_model_report = display_confusion_matrix(cm_inference, labels, 'inference_matrix', models_report)
inference_model_report.sort_values(by=['f1_score'], inplace=True, ascending=False)
print(inference_model_report.to_string(index=False))
%store bucket_name
%store region
%store jobArn
%store role_arn
```
## Cleanup <a class="anchor" id="cleanup"/>
Run [clean up notebook](./Cleanup.ipynb) to clean all the resources
```
from time import sleep
from tm1640 import TM1640
```
# Simple and elegant usage
```
with TM1640(clk_pin=24, din_pin=23) as d:
    d.brightness = 0
    d.write_text('HELLO')
    for i in [1, 1, 1, 1, 1, -1, -1, -1, -1, -1]:
        sleep(1)
        d.brightness += i
```
# Global object for the purposes of this notebook
```
disp = TM1640(clk_pin=24, din_pin=23)
disp.brightness = 1
```
# Showing some garbage
```
disp.write_bytes(b'0123456789abcdef')
disp.write_bytes([0xff, 0xef, 0b10000000, 0x63])
disp.write_bytes([0, 0x63, 0x5c, 0], 8)
import random
for i in range(64):
    disp.write_bytes(bytes(random.randrange(256) for b in range(16)))
    sleep(1 / (i + 1))
```
# Testing different characters
```
disp.write_text('.01.02..03')
disp.write_text('0123456789 yYzZ')
disp.write_text('aAbBcCdDeEfFgGhH')
disp.write_text('iIjJkKlLmMnNoOpP')
disp.write_text('qQrRsStTuUvVwWxX')
disp.write_text('~!@#$%^&*()[]{}')
disp.write_text('-_¯\'`"+=,./\\:;')
disp.write_text('🯰🯱🯲🯳🯴🯵🯶🯷🯸🯹⁐ニ≡‾|‖')
disp.write_text('⌈⌉⌊⌋⎾⏋⎿⏌⌜⌝⌞⌟⌌⌍⌎⌏')
disp.write_text('⊦⊢⊣⎡⎢⎣⎤⎥⎦')
disp.write_text('⊏⊑⊐⊒⊓⊔⋂⋃Πμ')
```
# Some progress bar simulations
```
import textwrap
def animate(textarea):
    for i in textwrap.dedent(textarea).strip().splitlines():
        disp.write_text(i)
        sleep(1)
animate('''
[ ]
[⁐ ]
[⁐⁐ ]
[⁐⁐⁐]
''')
animate('''
[ ]
[. ]
[.. ]
[... ]
[....]
''')
animate('''
⌈ ⌉
[ ⌉
[. ⌉
[._ ⌉
[._. ⌉
[._._ ⌉
[._._. ⌉
[._._._⌉
[._._._.⌉
[._._._.]
''')
animate('''
[ ]
E ]
8 ]
8| ]
8E ]
88 ]
88|]
88E]
888]
8880
8888
''')
animate('''
⎢ ⎥
‖ ⎥
‖⎢ ⎥
‖‖ ⎥
‖‖⎢ ⎥
‖‖‖ ⎥
‖‖‖⎢⎥
‖‖‖‖⎥
‖‖‖‖‖
''')
from collections import namedtuple
PBS = namedtuple('PBS', 'left middle right half full') # PBS = Progress Bar Style
progress_bar_styles = [
# ⌊ _ _ _ ⌋
PBS(0b_0011000, 0b_0001000, 0b_0001100, 0b0100000, 0b0100010),
# ⌊._._._.⌋
PBS(0b10011000, 0b10001000, 0b_0001100, 0b0100000, 0b0100010),
# ⌈ ¯ ¯ ¯ ⌉
PBS(0b_0100001, 0b_0000001, 0b_0000011, 0b0010000, 0b0010100),
# ‖.‖.‖. . .
PBS(0b10000000, 0b10000000, 0b10000000, 0b0110000, 0b0110110),
# ‖.‖.‖._._.
PBS(0b10001000, 0b10001000, 0b10001000, 0b0110000, 0b0110110),
]
def progress_bar(total, filled, theme):
    assert total >= 1
    assert isinstance(total, int)
    marks = round(filled * 2)  # each digit can show two "half" marks
    buffer = [0] * total
    buffer[0] |= theme.left
    buffer[-1] |= theme.right
    for i in range(total):
        if i > 0 and i + 1 < total:
            buffer[i] |= theme.middle
        if i * 2 + 1 < marks:
            buffer[i] |= theme.full
        elif i * 2 < marks:
            buffer[i] |= theme.half
    return bytes(buffer)
def byteanimate(iterable, delay=1.0):
    disp.write_text('')
    for i in iterable:
        disp.write_bytes(i)
        sleep(delay)

for total in [1, 2, 3, 4, 8]:
    for theme in progress_bar_styles:
        byteanimate(
            (progress_bar(total, i / 2, theme) for i in range(0, 2 * total + 1)),
            1 / total
        )
# This looks like a stereo VU meter.
def double_progress_bar(total, top, bottom):
    assert total >= 1
    assert isinstance(total, int)
    tops = round(top * 2)
    bots = round(bottom * 2)
    buffer = [0] * total
    for i in range(total):
        if i * 2 + 1 < tops:
            buffer[i] |= 0b0100010
        elif i * 2 < tops:
            buffer[i] |= 0b0100000
        if i * 2 + 1 < bots:
            buffer[i] |= 0b0010100
        elif i * 2 < bots:
            buffer[i] |= 0b0010000
    return bytes(buffer)

import math
for total in [1, 2, 3, 8]:
    for theme in progress_bar_styles:
        byteanimate(
            (double_progress_bar(
                total,
                (math.sin(2 * i * math.tau / 64) + 1) / 2 * total,
                (math.cos(2 * i * math.tau / 64) + 1) / 2 * total
            ) for i in range(0, 64)),
            1 / 64
        )
```
# Homework: Decipherment
```
from collections import defaultdict, Counter
import collections
import pprint
import math
import bz2
pp = pprint.PrettyPrinter(width=45, compact=True)
```
First let us read in the cipher text from the `data` directory:
```
def read_file(filename):
    if filename[-4:] == ".bz2":
        with bz2.open(filename, 'rt') as f:
            content = f.read()
    else:
        with open(filename, 'r') as f:
            content = f.read()
    return content
cipher = read_file("data/cipher.txt")
print(cipher)
```
## Default Solution
For the default solution we need to compute statistics like length, number of symbols/letters,
unique occurrences, frequencies and relative frequencies of a given file. This is done in the function `get_statistics` below.
While using `get_statistics`, make sure that `cipher=True` is set when the input is a ciphertext.
```
def get_statistics(content, cipher=True):
    stats = {}
    content = list(content)
    split_content = [x for x in content if x != '\n' and x != ' ']
    length = len(split_content)
    symbols = set(split_content)
    uniq_sym = len(list(symbols))
    freq = collections.Counter(split_content)
    rel_freq = {}
    for sym, frequency in freq.items():
        rel_freq[sym] = (frequency / length) * 100
    if cipher:
        stats = {'content': split_content, 'length': length, 'vocab': list(symbols),
                 'vocab_length': uniq_sym, 'frequencies': freq, 'relative_freq': rel_freq}
    else:
        stats = {'length': length, 'vocab': list(symbols), 'vocab_length': uniq_sym,
                 'frequencies': freq, 'relative_freq': rel_freq}
    return stats
cipher_desc = get_statistics(cipher, cipher=True)
pp.pprint(cipher_desc)
```
The default solution matches the frequency of symbols in the cipher text with frequency of letters in the plaintext language (in this case, English). Note that this is just some text in English used to compute letter frequencies. We do not have access to the real plaintext in this homework.
In order to compute plaintext frequencies, we use an English dataset that has no punctuation or spaces and in which all characters are lowercase.
```
# plaintext description
plaintxt = read_file("data/default.wiki.txt.bz2")
plaintxt_desc = get_statistics(plaintxt, cipher=False)
pp.pprint(plaintxt_desc)
```
We have all the tools we need to describe the default solution to this homework.
We use a simple frequency matching heuristic to map cipher symbols to English letters.
We match the frequencies using the function $f(\cdot)$ of each cipher symbol $c$ with each English letter $e$:
$$h_{c,e} = \left| \log\left(\frac{f(c)}{f(e)}\right) \right|$$
For each cipher text symbol $c$ we then compute the most likely plain text symbol $e$ by sorting based on the above score.
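For intuition, the score is near zero when the two relative frequencies are close and grows as they diverge; a tiny sketch with made-up frequencies (not taken from the actual cipher):

```python
import math

# Made-up relative frequencies (in percent)
f_c_close, f_e_close = 5.2, 4.9   # similar frequencies
f_c_far, f_e_far = 12.0, 0.5      # very different frequencies

h_close = abs(math.log(f_c_close / f_e_close))
h_far = abs(math.log(f_c_far / f_e_far))
print(h_close < h_far)  # the closer pair gets the better (smaller) score
```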
```
"""
default : frequency matching heuristic
Notice how the candidate mappings, a.k.a hypotheses, are first scored with a measure of quality and,
then, the best scoring hypothesis is chosen as the winner.
The plaintext letters from the winner are then mapped to the respective ciphertext symbols.
"""
def find_mappings(ciphertext, plaintext):
    mappings = defaultdict(dict)
    hypotheses = defaultdict(dict)
    # calculate alignment scores
    for symbol in ciphertext['vocab']:
        for letter in plaintext['vocab']:
            hypotheses[symbol][letter] = abs(math.log(
                ciphertext['relative_freq'][symbol] / plaintext['relative_freq'][letter]))
    # find winner
    for sym in hypotheses.keys():
        winner = sorted(hypotheses[sym].items(), key=lambda kv: kv[1])
        mappings[sym] = winner[1][0]
    return mappings
```
Using this scoring function we map the cipher symbol `∆` to `v` in English
```
mapping = find_mappings(cipher_desc, plaintxt_desc)
print("∆ maps to {}\n".format(mapping['∆']))
print(mapping)
```
The default solution to this decipherment problem is to take each cipher symbol and map it to the most likely English letter as provided by the `find_mappings` function above.
```
english_text = []
for symbol in cipher_desc['content']:
english_text.append(mapping[symbol])
decipherment = ('').join(english_text)
print(decipherment)
```
Notice that the default solution provides a very bad decipherment. Your job is to make it better!
## Grading
Ignore the following cells. They are for grading against the reference decipherment. Based on the clues provided in the decipherment homework description, you can easily find a reasonable reference text online for this cipher text.
```
"""
ATTENTION!
For grading purposes only. Don't bundle with the assignment.
Make sure '_ref.txt' is removed from the 'data' directory before publishing.
"""
def read_gold(gold_file):
    with open(gold_file) as f:
        gold = f.read()
    gold = list(gold.strip())
    return gold
def symbol_error_rate(dec, _gold):
gold = read_gold(_gold)
correct = 0
if len(gold) == len(dec):
for (d,g) in zip(dec, gold):
if d==g:
correct += 1
wrong = len(gold)-correct
error = wrong/len(gold)
return error
# gold decipherment
gold_file = "data/_ref.txt"
ser = symbol_error_rate(decipherment, gold_file)
print('Error: ', ser*100, 'Accuracy: ', (1-ser)*100)
```
The first thing we need to do is to download the dataset from Kaggle. We use the [Enron dataset](https://www.kaggle.com/wcukierski/enron-email-dataset), which is the biggest public email dataset available.
To do so we will use GDrive and download the dataset within a Drive folder to be used by Colab.
```
from google.colab import drive
drive.mount('/content/gdrive')
import os
os.environ['KAGGLE_CONFIG_DIR'] = "/content/gdrive/My Drive/Kaggle"
%cd /content/gdrive/My Drive/Kaggle
```
We can download the dataset from Kaggle and save it in the GDrive folder. This needs to be done only the first time.
```
# !kaggle datasets download -d wcukierski/enron-email-dataset
# unzipping the zip files
# !unzip \*.zip
```
Now we are finally ready to start working with the dataset, accessible as a CSV file called `emails.csv`.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import email
import re
if 'emails_df' not in locals():
emails_df = pd.read_csv('./emails.csv')
# Use only a subpart of the whole dataset to avoid exceeding RAM
# emails_df = emails_df[:10000]
print(emails_df.shape)
emails_df.head()
print(emails_df['message'][0])
# Convert to message objects from the message strings
messages = list(map(email.message_from_string, emails_df['message']))
def get_text_from_email(msg):
parts = []
for part in msg.walk():
if part.get_content_type() == 'text/plain':
parts.append( part.get_payload() )
text = ''.join(parts)
return text
emails = pd.DataFrame()
# Parse content from emails
emails['content'] = list(map(get_text_from_email, messages))
import gc
# Remove variables from memory
del messages
del emails_df
gc.collect()
def normalize_text(text):
text = text.lower()
# creating a space between a word and the punctuation following it to separate words
# and compact repetition of punctuation
# eg: "he is a boy.." => "he is a boy ."
text = re.sub(r'([.,!?]+)', r" \1 ", text)
# replacing everything with space except (a-z, A-Z, ".", "?", "!", ",", "'")
text = re.sub(r"[^a-zA-Z?.!,']+", " ", text)
# Compact spaces
text = re.sub(r'[" "]+', " ", text)
# Remove forwarded messages
text = text.split('forwarded by')[0]
text = text.strip()
return text
emails['content'] = list(map(normalize_text, emails['content']))
# Drop samples with empty content text after normalization
emails['content'].replace('', np.nan, inplace=True)
emails.dropna(subset=['content'], inplace=True)
pd.set_option('display.max_colwidth', None)
emails.head(50)
```
In the original paper, the dataset is built from 8 billion emails, where the context is provided by the email date, subject, and previous message if the user is replying. Unfortunately, in the Enron dataset it is not possible to reconstruct the reply relationship between emails. Thus, in order to generate the context of a sentence, we train the sequence-to-sequence model to predict sentence completions from pairs of split sentences.
For instance, the sentence `here is our forecast` is split in the following pairs within the dataset:
```
[
('<start> here is <end>', '<start> our forecast <end>'),
('<start> here is our <end>', '<start> forecast <end>')
]
```
```
# Skip long sentences, which increase maximum length a lot when padding
# and make the number of parameters to train explode
SENTENCE_MAX_WORDS = 20
def generate_dataset (emails):
contents = emails['content']
output = []
vocabulary_sentences = []
for content in contents:
        # Skip emails longer than roughly one sentence (character-count heuristic)
        if len(content) > SENTENCE_MAX_WORDS * 5:
continue
sentences = content.split(' . ')
for sentence in sentences:
            # Remove user names from the start or end of the sentence. This is just a heuristic,
            # but it is more efficient than compiling a list of names and removing all of them
            sentence = re.sub(r"(^\w+\s,\s)|(\s,\s\w+$)", "", sentence)
words = sentence.split(' ')
if ((len(words) > SENTENCE_MAX_WORDS) or (len(words) < 2)):
continue
vocabulary_sentences.append('<start> ' + sentence + ' <end>')
for i in range(1, len(words) - 1):
input_data = '<start> ' + ' '.join(words[:i+1]) + ' <end>'
output_data = '<start> ' + ' '.join(words[i+1:]) + ' <end>'
data = (input_data, output_data)
output.append(data)
return output, vocabulary_sentences
pairs, vocabulary_sentences = generate_dataset(emails)
print(len(pairs))
print(len(vocabulary_sentences))
print(*pairs[:10], sep='\n')
print(*vocabulary_sentences[:10], sep='\n')
```
This is where the fun begins. The dataset is finally available and we start working on the analysis by using [Keras](https://keras.io/) and [TensorFlow](https://www.tensorflow.org/).
```
import tensorflow as tf
from tensorflow import keras
np.random.seed(42)
```
We need to transform the text corpora into sequences of integers (each integer being the index of a token in a dictionary) using Keras's `Tokenizer`. We also limit the vocabulary to the 10k most frequent words, dropping uncommon words from sentences.
Normally we would use two tokenizers, one for the input strings and a different one for the output text, but in this case we predict the same vocabulary in both directions: all the words in the output texts are also available in the input texts because of how the dataset pairs are generated.
Also, since we will apply the "teacher forcing" technique during training, we need both the target data and the (target + 1 timestep) data.
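To make the "(target + 1 timestep)" shift concrete, here is a minimal sketch with hypothetical token ids:

```python
# Hypothetical token ids for "<start> here is our forecast <end>"
# (illustrative indices, not the real tokenizer's).
target = [1, 7, 12, 9, 34, 2]

# Teacher forcing: the decoder is fed the target sequence itself and, at
# each timestep, must predict the token one position ahead.
decoder_input = target          # what the decoder reads
decoder_target = target[1:]     # what it must predict

for fed, expected in zip(decoder_input, decoder_target):
    print(f"read {fed} -> predict {expected}")
```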
```
vocab_max_size = 10000
def tokenize(text):
tokenizer = keras.preprocessing.text.Tokenizer(filters='', num_words=vocab_max_size)
tokenizer.fit_on_texts(text)
return tokenizer
input_texts = [pair[0] for pair in pairs]
output_texts = [pair[1] for pair in pairs]
tokenizer = tokenize(vocabulary_sentences)
encoder_input = tokenizer.texts_to_sequences(input_texts)
decoder_input = tokenizer.texts_to_sequences(output_texts)
decoder_target = [
[decoder_input[seqN][tokenI + 1]
for tokenI in range(len(decoder_input[seqN]) - 1)]
for seqN in range(len(decoder_input))]
# Convert to np.array (dtype=object, since the sequences are still ragged)
encoder_input = np.array(encoder_input, dtype=object)
decoder_input = np.array(decoder_input, dtype=object)
decoder_target = np.array(decoder_target, dtype=object)
from sklearn.model_selection import train_test_split
encoder_input_train, encoder_input_test, decoder_input_train, decoder_input_test, decoder_target_train, decoder_target_test = train_test_split(encoder_input, decoder_input, decoder_target, test_size=0.2)
print(encoder_input_train.shape, encoder_input_test.shape)
print(decoder_input_train.shape, decoder_input_test.shape)
print(decoder_target_train.shape, decoder_target_test.shape)
def max_length(t):
return max(len(i) for i in t)
max_length_in = max_length(encoder_input)
max_length_out = max_length(decoder_input)
encoder_input_train = keras.preprocessing.sequence.pad_sequences(encoder_input_train, maxlen=max_length_in, padding="post")
decoder_input_train = keras.preprocessing.sequence.pad_sequences(decoder_input_train, maxlen=max_length_out, padding="post")
decoder_target_train = keras.preprocessing.sequence.pad_sequences(decoder_target_train, maxlen=max_length_out, padding="post")
encoder_input_test = keras.preprocessing.sequence.pad_sequences(encoder_input_test, maxlen=max_length_in, padding="post")
decoder_input_test = keras.preprocessing.sequence.pad_sequences(decoder_input_test, maxlen=max_length_out, padding="post")
decoder_target_test = keras.preprocessing.sequence.pad_sequences(decoder_target_test, maxlen=max_length_out, padding="post")
print(max_length_in, max_length_out)
# Shuffle the data in unison
p = np.random.permutation(len(encoder_input_train))
encoder_input_train = encoder_input_train[p]
decoder_input_train = decoder_input_train[p]
decoder_target_train = decoder_target_train[p]
q = np.random.permutation(len(encoder_input_test))
encoder_input_test = encoder_input_test[q]
decoder_input_test = decoder_input_test[q]
decoder_target_test = decoder_target_test[q]
import math
batch_size = 128
vocab_size = vocab_max_size if len(tokenizer.word_index) > vocab_max_size else len(tokenizer.word_index)
# Rule of thumb of embedding size: vocab_size ** 0.25
# https://stackoverflow.com/questions/48479915/what-is-the-preferred-ratio-between-the-vocabulary-size-and-embedding-dimension
embedding_dim = math.ceil(vocab_size ** 0.25)
latent_dim = 192 # Latent dimensionality of the encoding space.
print(vocab_size, embedding_dim)
```
Here we define the RNN models. We start with the Encoder-Decoder model used in training which leverages the "teacher forcing technique". Therefore, it will receive as input `encoder_input` and `decoder_input` datasets.
Then the second model is represented by the inference Decoder which will receive as input the encoded states of the input sequence and the predicted token of the previous time step.
Both models use GRU units to preserve the context state; GRUs offer accuracy comparable to LSTM units and are simpler to use, since they carry only a single state.
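To illustrate the single-state point, here is a minimal NumPy sketch of one GRU step (simplified, without biases; not Keras's actual implementation): the cell consumes and produces just one hidden vector `h`, whereas an LSTM would also carry a separate cell state.

```python
import numpy as np

def gru_step(x, h, Wz, Wr, Wh):
    """One simplified GRU step (no biases). Unlike an LSTM cell, which
    carries a hidden state h and a cell state c, a GRU carries only h."""
    xh = np.concatenate([x, h])
    z = 1 / (1 + np.exp(-Wz @ xh))                      # update gate
    r = 1 / (1 + np.exp(-Wr @ xh))                      # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h]))  # candidate state
    return (1 - z) * h + z * h_tilde                    # new single state

rng = np.random.default_rng(0)
dim_x, dim_h = 4, 3
Wz, Wr, Wh = (rng.normal(size=(dim_h, dim_x + dim_h)) for _ in range(3))
h = np.zeros(dim_h)
for x in rng.normal(size=(5, dim_x)):  # run five timesteps
    h = gru_step(x, h, Wz, Wr, Wh)
print(h.shape)  # (3,)
```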
```
# GRU Encoder
encoder_in_layer = keras.layers.Input(shape=(max_length_in,))
encoder_embedding = keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_dim)
encoder_bi_gru = keras.layers.Bidirectional(keras.layers.GRU(units=latent_dim, return_sequences=True, return_state=True))
# Discard the encoder output sequence; keep the final hidden states (h)
# of the forward (f) and backward (b) layers
encoder_out, fstate_h, bstate_h = encoder_bi_gru(encoder_embedding(encoder_in_layer))
state_h = keras.layers.Concatenate()([fstate_h, bstate_h])
# GRUDecoder
decoder_in_layer = keras.layers.Input(shape=(None,))
decoder_embedding = keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_dim)
decoder_gru = keras.layers.GRU(units=latent_dim * 2, return_sequences=True, return_state=True)
# Discard internal states in training, keep only the output sequence
decoder_gru_out, _ = decoder_gru(decoder_embedding(decoder_in_layer), initial_state=state_h)
decoder_dense_1 = keras.layers.Dense(128, activation="relu")
decoder_dense = keras.layers.Dense(vocab_size, activation="softmax")
decoder_out_layer = decoder_dense(keras.layers.Dropout(rate=0.2)(decoder_dense_1(keras.layers.Dropout(rate=0.2)(decoder_gru_out))))
# Define the model that uses the Encoder and the Decoder
model = keras.models.Model([encoder_in_layer, decoder_in_layer], decoder_out_layer)
def perplexity(y_true, y_pred):
return keras.backend.exp(keras.backend.mean(keras.backend.sparse_categorical_crossentropy(y_true, y_pred)))
model.compile(optimizer='adam', loss="sparse_categorical_crossentropy", metrics=[perplexity])
model.summary()
keras.utils.plot_model(model, "encoder-decoder.png", show_shapes=True)
epochs = 10
history = model.fit([encoder_input_train, decoder_input_train], decoder_target_train,
batch_size=batch_size,
epochs=epochs,
validation_split=0.2)
def plot_history(history):
plt.plot(history.history['loss'], label="Training loss")
plt.plot(history.history['val_loss'], label="Validation loss")
plt.legend()
plot_history(history)
scores = model.evaluate([encoder_input_test[:1000], decoder_input_test[:1000]], decoder_target_test[:1000])
print("%s: %.2f" % (model.metrics_names[1], scores[1]))
# Inference Decoder
encoder_model = keras.models.Model(encoder_in_layer, state_h)
state_input_h = keras.layers.Input(shape=(latent_dim * 2,))
inf_decoder_out, decoder_h = decoder_gru(decoder_embedding(decoder_in_layer), initial_state=state_input_h)
inf_decoder_out = decoder_dense(decoder_dense_1(inf_decoder_out))
inf_model = keras.models.Model(inputs=[decoder_in_layer, state_input_h],
outputs=[inf_decoder_out, decoder_h])
keras.utils.plot_model(encoder_model, "encoder-model.png", show_shapes=True)
keras.utils.plot_model(inf_model, "inference-model.png", show_shapes=True)
def tokenize_text(text):
text = '<start> ' + text.lower() + ' <end>'
text_tensor = tokenizer.texts_to_sequences([text])
text_tensor = keras.preprocessing.sequence.pad_sequences(text_tensor, maxlen=max_length_in, padding="post")
return text_tensor
# Reversed map from a tokenizer index to a word
index_to_word = dict(map(reversed, tokenizer.word_index.items()))
# Greedily decode an input tensor using the encoder model and the inference decoder
def decode_sequence(input_tensor):
# Encode the input as state vectors.
state = encoder_model.predict(input_tensor)
target_seq = np.zeros((1, 1))
target_seq[0, 0] = tokenizer.word_index['<start>']
curr_word = "<start>"
decoded_sentence = ''
i = 0
while curr_word != "<end>" and i < (max_length_out - 1):
output_tokens, h = inf_model.predict([target_seq, state])
curr_token = np.argmax(output_tokens[0, 0])
        if curr_token == 0:
            break
curr_word = index_to_word[curr_token]
decoded_sentence += ' ' + curr_word
target_seq[0, 0] = curr_token
state = h
i += 1
return decoded_sentence
def tokens_to_seq(tokens):
words = list(map(lambda token: index_to_word[token] if token != 0 else '', tokens))
return ' '.join(words)
```
Let's test the inference model with some inputs.
```
texts = [
'here is',
'have a',
'please review',
'please call me',
'thanks for',
'let me',
'Let me know',
'Let me know if you',
'this sounds',
'is this call going to',
'can you get',
'is it okay',
'it should',
'call if there\'s',
'gave her a',
'i will let',
'i will be',
'may i get a copy of all the',
'how is our trade',
'this looks like a',
'i am fine with the changes',
'please be sure this'
]
output = list(map(lambda text: (text, decode_sequence(tokenize_text(text))), texts))
output_df = pd.DataFrame(output, columns=["input", "output"])
output_df.head(len(output))
```
The predicted outputs are actually quite good: the grammar is correct and the completions make logical sense. Some of them also show that the predictions are personalized to the Enron dataset, for instance `here is - the latest version of the presentation` and `please review - the attached outage report`. This is consistent with the goal of the task.
Save the Tokenizer and the Keras models for usage within the browser.
```
import json
with open( 'word_dict-final.json' , 'w' ) as file:
json.dump( tokenizer.word_index , file)
encoder_model.save('./encoder-model-final.h5')
inf_model.save('./inf-model-final.h5')
```
## 1. The brief
<p>Imagine working for a digital marketing agency, and the agency is approached by a massive online retailer of furniture. They want to test our skills at creating large campaigns for their entire website. We are tasked with creating a prototype set of keywords for search campaigns for their sofas section. The client says that they want us to generate keywords for the following products: </p>
<ul>
<li>sofas</li>
<li>convertible sofas</li>
<li>love seats</li>
<li>recliners</li>
<li>sofa beds</li>
</ul>
<p><strong>The brief</strong>: The client is generally a low-cost retailer, offering many promotions and discounts. We will need to focus on such keywords. We will also need to move away from luxury keywords and topics, as we are targeting price-sensitive customers. Because we are going to be tight on budget, it would be good to focus on a tightly targeted set of keywords and make sure they are all set to exact and phrase match.</p>
<p>Based on the brief above we will first need to generate a list of words, that together with the products given above would make for good keywords. Here are some examples:</p>
<ul>
<li>Products: sofas, recliners</li>
<li>Words: buy, prices</li>
</ul>
<p>The resulting keywords: 'buy sofas', 'sofas buy', 'buy recliners', 'recliners buy',
'prices sofas', 'sofas prices', 'prices recliners', 'recliners prices'.</p>
<p>As a final result, we want to have a DataFrame that looks like this: </p>
<table>
<thead>
<tr>
<th>Campaign</th>
<th>Ad Group</th>
<th>Keyword</th>
<th>Criterion Type</th>
</tr>
</thead>
<tbody>
<tr>
<td>Campaign1</td>
<td>AdGroup_1</td>
<td>keyword 1a</td>
<td>Exact</td>
</tr>
<tr>
<td>Campaign1</td>
<td>AdGroup_1</td>
<td>keyword 1a</td>
<td>Phrase</td>
</tr>
<tr>
<td>Campaign1</td>
<td>AdGroup_1</td>
<td>keyword 1b</td>
<td>Exact</td>
</tr>
<tr>
<td>Campaign1</td>
<td>AdGroup_1</td>
<td>keyword 1b</td>
<td>Phrase</td>
</tr>
<tr>
<td>Campaign1</td>
<td>AdGroup_2</td>
<td>keyword 2a</td>
<td>Exact</td>
</tr>
<tr>
<td>Campaign1</td>
<td>AdGroup_2</td>
<td>keyword 2a</td>
<td>Phrase</td>
</tr>
</tbody>
</table>
<p>The first step is to come up with a list of words that users might use to express their desire in buying low-cost sofas.</p>
```
# List of words to pair with products
words = ['buy', 'discount', 'promotion', 'cheap', 'offer', 'purchase', 'sale']
# Print list of words
print(words)
```
## 2. Combine the words with the product names
<p>Imagining all the possible combinations of keywords can be stressful! But not for us, because we are keyword ninjas! We know how to translate campaign briefs into Python data structures and can imagine the resulting DataFrames that we need to create.</p>
<p>Now that we have brainstormed the words that work well with the brief that we received, it is now time to combine them with the product names to generate meaningful search keywords. We want to combine every word with every product once before, and once after, as seen in the example above.</p>
<p>As a quick reminder, for the product 'recliners' and the words 'buy' and 'price' for example, we would want to generate the following combinations: </p>
<p>buy recliners<br>
recliners buy<br>
price recliners<br>
recliners price<br>
... </p>
<p>and so on for all the words and products that we have.</p>
```
products = ['sofas', 'convertible sofas', 'love seats', 'recliners', 'sofa beds']
# Create an empty list
keywords_list = []
# Loop through products
for product in products:
# Loop through words
for word in words:
# Append combinations
keywords_list.append([product, product + ' ' + word])
keywords_list.append([product, word + ' ' + product])
# Inspect keyword list
print(keywords_list)
```
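The nested loops above can be written more compactly with `itertools.product`; this sketch reproduces the same list:

```python
from itertools import product

words = ['buy', 'discount', 'promotion', 'cheap', 'offer', 'purchase', 'sale']
products = ['sofas', 'convertible sofas', 'love seats', 'recliners', 'sofa beds']

# Every (product, word) pair yields two keywords: word after and word before.
keywords_alt = [[p, kw]
                for p, w in product(products, words)
                for kw in (f'{p} {w}', f'{w} {p}')]
print(len(keywords_alt))  # 5 products x 7 words x 2 orders = 70
```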
## 3. Convert the list of lists into a DataFrame
<p>Now we want to convert this list of lists into a DataFrame so we can easily manipulate it and manage the final output.</p>
```
# Load library
# ... YOUR CODE FOR TASK 3 ...
import pandas as pd
# Create a DataFrame from list
keywords_df = pd.DataFrame.from_records(keywords_list)
# Print the keywords DataFrame to explore it
# ... YOUR CODE FOR TASK 3 ...
print(keywords_df.head())
```
## 4. Rename the columns of the DataFrame
<p>Before we can upload this table of keywords, we will need to give the columns meaningful names. If we inspect the DataFrame we just created above, we can see that the columns are currently named <code>0</code> and <code>1</code>. <code>Ad Group</code> (example: "sofas") and <code>Keyword</code> (example: "sofas buy") are much more appropriate names.</p>
```
# Rename the columns of the DataFrame
keywords_df.columns = ['Ad Group', 'Keyword']
```
## 5. Add a campaign column
<p>Now we need to add some additional information to our DataFrame.
We need a new column called <code>Campaign</code> for the campaign name. We want campaign names to be descriptive of our group of keywords and products, so let's call this campaign 'SEM_Sofas'.</p>
```
# Add a campaign column
# ... YOUR CODE FOR TASK 5 ...
keywords_df['Campaign'] = 'SEM_Sofas'
```
## 6. Create the match type column
<p>There are different keyword match types. One is exact match, which matches the exact term or close variations of that exact term. Another match type is broad match, which means ads may show on searches that include misspellings, synonyms, related searches, and other relevant variations.</p>
<p>Straight from Google's AdWords <a href="https://support.google.com/google-ads/answer/2497836?hl=en">documentation</a>:</p>
<blockquote>
<p>In general, the broader the match type, the more traffic potential that keyword will have, since your ads may be triggered more often. Conversely, a narrower match type means that your ads may show less often—but when they do, they’re likely to be more related to someone’s search.</p>
</blockquote>
<p>Since the client is tight on budget, we want to make sure all the keywords are in exact match at the beginning.</p>
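As a toy illustration of the difference between the two match types we will use (greatly simplified: real AdWords matching also handles close variants, plurals, and more):

```python
def exact_match(keyword, query):
    # the query must be exactly the keyword
    return query == keyword

def phrase_match(keyword, query):
    # the keyword must appear as a contiguous phrase within the query
    return f' {keyword} ' in f' {query} '

print(exact_match('buy sofas', 'buy sofas'))            # True
print(exact_match('buy sofas', 'where to buy sofas'))   # False
print(phrase_match('buy sofas', 'where to buy sofas'))  # True
```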
```
# Add a criterion type column
# ... YOUR CODE FOR TASK 6 ...
keywords_df['Criterion Type'] = 'Exact'
```
## 7. Duplicate all the keywords into 'phrase' match
<p>The great thing about exact match is that it is very specific, and we can control the process very well. The tradeoff, however, is that: </p>
<ol>
<li>The search volume for exact match is lower than other match types</li>
<li>We can't possibly think of all the ways in which people search, and so, we are probably missing out on some high-quality keywords.</li>
</ol>
<p>So it's good to use another match called <em>phrase match</em> as a discovery mechanism to allow our ads to be triggered by keywords that include our exact match keywords, together with anything before (or after) them.</p>
<p>Later on, when we launch the campaign, we can explore with modified broad match, broad match, and negative match types, for better visibility and control of our campaigns.</p>
```
# Make a copy of the keywords DataFrame
keywords_phrase = keywords_df.copy()
# Change criterion type match to phrase
# ... YOUR CODE FOR TASK 7 ...
keywords_phrase['Criterion Type'] = 'Phrase'
# Concatenate the two DataFrames
keywords_df_final = pd.concat([keywords_df, keywords_phrase])
```
## 8. Save and summarize!
<p>To upload our campaign, we need to save it as a CSV file. Then we will be able to import it to AdWords editor or BingAds editor. There is also the option of pasting the data into the editor if we want, but having easy access to the saved data is great so let's save to a CSV file!</p>
<p>To wrap up our campaign work, it is good to look at a summary of our campaign structure. We can do that by grouping by ad group and criterion type and counting by keyword.</p>
```
# Save the final keywords to a CSV file
# ... YOUR CODE FOR TASK 8 ...
keywords_df_final.to_csv('keywords.csv', index=False)
# View a summary of our campaign work
summary = keywords_df_final.groupby(['Ad Group', 'Criterion Type'])['Keyword'].count()
print(summary)
```
```
from google.colab import drive
drive.mount('/content/drive')
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from numpy import load
from numpy import asarray
from numpy import savez_compressed
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.optimizers import Adam
from keras.metrics import RootMeanSquaredError
from keras.models import load_model
from keras.callbacks import *
%matplotlib inline
###################################### loading new data
camera2 = load('/content/drive/My Drive/datasets/camera2_cleaned.npz')
log2 = pd.read_csv('/content/drive/My Drive/datasets/nedlog2_clea.csv')
def camera_processing(camera_file, file_name):
# camera file
camera = camera_file.f.arr_0
camera = camera.astype('float32')
camera = camera/255
camera = camera.reshape(camera.shape[0], camera.shape[1], camera.shape[2], 1)
savez_compressed(f'/content/drive/My Drive/datasets/{file_name}_train', camera)
return print('Done')
def log_processing(log_file, file_name):
log_file['steering_avg_radian'] = log_file['steering_avg'] * np.pi / 180
log_file.to_csv(f'/content/drive/My Drive/datasets/{file_name}_train.csv')
return print('Done')
camera_processing(camera2, 'camera2')
log_processing(log2, 'log2')
def train_split(camera_file_name, log_file_name):
# load camera file
X = load(f'/content/drive/My Drive/datasets/{camera_file_name}_train.npz')
X = X.f.arr_0
# load log file
log = pd.read_csv(f'/content/drive/My Drive/datasets/{log_file_name}_train.csv')
y = log['steering_avg_radian']
y = y.to_numpy()
y = y.reshape(y.shape[0], 1)
# train test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=True)
# save them into individual file doing so due to ram management
savez_compressed(f'/content/drive/My Drive/datasets/{camera_file_name}_X_train', X_train)
savez_compressed(f'/content/drive/My Drive/datasets/{camera_file_name}_X_test', X_test)
savez_compressed(f'/content/drive/My Drive/datasets/{camera_file_name}_y_train', y_train)
savez_compressed(f'/content/drive/My Drive/datasets/{camera_file_name}_y_test', y_test)
return print('Done')
train_split('camera2', 'log2')
# # log file
# log_file['steering_avg_radian'] = log_file['steering_avg'] * np.pi / 180
# y = log_file['steering_avg_radian']
# y = y.to_numpy
############################# end of loading new data
X = X.f.arr_0
X.shape
log1 = pd.read_csv('/content/drive/My Drive/log1_full.csv')
log1.head()
# convert the angle from degree to radian
log1['steering_avg_radian'] = log1['steering_avg'] * np.pi / 180
log1.head()
log1.to_csv('/content/drive/My Drive/log1_train.csv')
log1 = pd.read_csv('/content/drive/My Drive/log1_train.csv')
y = log1['steering_avg_radian']
y = y.to_numpy()
y.shape
y = y.reshape(y.shape[0], 1)
y.shape
from sklearn.model_selection import train_test_split
# split it so that the validation set is the last 20% of the dataset as I want sequential data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)
########################### start of train with camera8 data for model 5 epochs = 30
camera8 = load('/content/drive/My Drive/datasets/camera8_cleaned.npz')
log8 = pd.read_csv('/content/drive/My Drive/datasets/log8_cleaned.csv')
camera_processing(camera8, 'camera8')
log_processing(log8, 'log8')
train_split('camera8', 'log8')
X_train, X_test, y_train, y_test = train_load('camera8')
X_train.shape, X_test.shape, y_train.shape, y_test.shape
model = Sequential()
model.add(Conv2D(16, (8, 8), strides=(4, 4), activation='elu', padding="same"))
model.add(Conv2D(32, (5, 5), strides=(2, 2), activation='elu', padding="same"))
model.add(Conv2D(64, (5, 5), strides=(2, 2), padding="same"))
model.add(Flatten())
model.add(Dropout(.2))
model.add(Dense(512, activation='elu'))
model.add(Dropout(.5))
model.add(Dense(1))
model.compile(loss='mse', optimizer=Adam(learning_rate=1e-04), metrics=[RootMeanSquaredError()])
filepath = "/content/drive/My Drive/epochs/model_2_1_camera8.{epoch:04d}-{val_loss:.4f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
history = model.fit(X_train,
y_train,
batch_size=64,
validation_data=(X_test, y_test),
epochs=30,
verbose=1,
callbacks=callbacks_list)
model_2_camera8 = model_history('model_2_1_camera8')
model_2_camera8.head()
#################### end of training with camera8 data for model 2
########################### continue training with camera1 data for model 3
from keras.models import load_model
model = load_model('/content/drive/My Drive/epochs/model_5_1_camera8.0004-0.3364.h5')
def train_load(camera_file_name):
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_X_train.npz" ./X_train.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_X_test.npz" ./X_test.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_y_train.npz" ./y_train.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_y_test.npz" ./y_test.npz
X_train = load('./X_train.npz')
X_train = X_train.f.arr_0
X_test = load('./X_test.npz')
X_test = X_test.f.arr_0
y_train = load('./y_train.npz')
y_train = y_train.f.arr_0
y_test = load('./y_test.npz')
y_test = y_test.f.arr_0
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = train_load('camera1')
X_train.shape, X_test.shape, y_train.shape, y_test.shape
from keras.callbacks import *
filepath = "/content/drive/My Drive/epochs/model_5_2_camera1.{epoch:04d}-{val_loss:.4f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
history = model.fit(X_train,
y_train,
batch_size=64,
validation_data=(X_test, y_test),
epochs=30,
verbose=1,
callbacks=callbacks_list)
ticks = [i for i in range(0, 31, 5)]
labels = [i for i in range(0, 31, 5)]
labels[0] = 1
train_loss = history.history['loss']
test_loss = history.history['val_loss']
# Set figure size.
plt.figure(figsize=(20, 8))
# Generate line plot of training, testing loss over epochs.
plt.plot(train_loss, label='Training Loss', color='#185fad')
plt.plot(test_loss, label='Testing Loss', color='orange')
# Set title
plt.title('Training and Testing Loss by Epoch for Camera1', fontsize = 25)
plt.xlabel('Epoch', fontsize = 18)
plt.ylabel('Mean Squared Error', fontsize = 18)
plt.xticks(ticks, labels)
plt.legend(fontsize = 18)
plt.savefig('/content/drive/My Drive/images/train_test_loss_model4_2_camera1.png');
def model_history(model_name):
model = pd.DataFrame({'loss': history.history['loss'],
'root_mean_squared_error': history.history['root_mean_squared_error'],
'val_loss': history.history['val_loss'],
'val_root_mean_squared_error': history.history['val_root_mean_squared_error']},
columns = ['loss', 'root_mean_squared_error', 'val_loss', 'val_root_mean_squared_error'])
model.to_csv(f'/content/drive/My Drive/datasets/{model_name}.csv', index=False)
return model
model_3_camera1 = model_history('model_4_2_camera1')
########################### end of train with camera1 data for model 3
########################### start of train with camera9 data for model 3
camera3 = load('/content/drive/My Drive/datasets/camera3_cleaned.npz')
log3 = pd.read_csv('/content/drive/My Drive/datasets/log3_cleaned.csv')
camera_processing(camera3, 'camera3')
log_processing(log3, 'log3')
def train_split(camera_file_name, log_file_name):
# load camera file
X = load(f'/content/drive/My Drive/datasets/{camera_file_name}_train.npz')
X = X.f.arr_0
# load log file
log = pd.read_csv(f'/content/drive/My Drive/datasets/{log_file_name}_train.csv')
y = log['steering_avg_radian']
y = y.to_numpy()
y = y.reshape(y.shape[0], 1)
# train test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=True)
# save them into individual file doing so due to ram management
savez_compressed(f'/content/drive/My Drive/datasets/{camera_file_name}_X_train', X_train)
savez_compressed(f'/content/drive/My Drive/datasets/{camera_file_name}_X_test', X_test)
savez_compressed(f'/content/drive/My Drive/datasets/{camera_file_name}_y_train', y_train)
savez_compressed(f'/content/drive/My Drive/datasets/{camera_file_name}_y_test', y_test)
return print('Done')
train_split('camera3', 'log3')
"""
new data workflow
camera2 = load('/content/drive/My Drive/datasets/camera2_cleaned.npz')
log2 = pd.read_csv('/content/drive/My Drive/datasets/log2_cleaned.csv')
log_processing(log2, 'log2')
train_split('camera2', 'log2')
"""
from keras.models import load_model
model = load_model('/content/drive/My Drive/epochs/model_5_2_camera1.0006-0.2219.h5')
def train_load(camera_file_name):
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_X_train.npz" ./X_train.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_X_test.npz" ./X_test.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_y_train.npz" ./y_train.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_y_test.npz" ./y_test.npz
X_train = load('./X_train.npz')
X_train = X_train.f.arr_0
X_test = load('./X_test.npz')
X_test = X_test.f.arr_0
y_train = load('./y_train.npz')
y_train = y_train.f.arr_0
y_test = load('./y_test.npz')
y_test = y_test.f.arr_0
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = train_load('camera9')
X_train.shape, X_test.shape, y_train.shape, y_test.shape
from keras.callbacks import *
filepath = "/content/drive/My Drive/epochs/model_5_3_camera9.{epoch:04d}-{val_loss:.4f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
history = model.fit(X_train,
y_train,
batch_size=64,
validation_data=(X_test, y_test),
epochs=30,
verbose=1,
callbacks=callbacks_list)
ticks = [i for i in range(10)]
labels = [i for i in range(1, 11)]
train_loss = history.history['loss']
test_loss = history.history['val_loss']
# Set figure size.
plt.figure(figsize=(20, 8))
# Generate line plot of training, testing loss over epochs.
plt.plot(train_loss, label='Training Loss', color='#185fad')
plt.plot(test_loss, label='Testing Loss', color='orange')
# Set title
plt.title('Training and Testing Loss by Epoch for Camera9', fontsize = 25)
plt.xlabel('Epoch', fontsize = 18)
plt.ylabel('Mean Squared Error', fontsize = 18)
plt.xticks(ticks, labels)
plt.legend(fontsize = 18)
plt.savefig('/content/drive/My Drive/images/train_test_loss_model5_3_camera9.png');
def model_history(model_name):
model = pd.DataFrame({'loss': history.history['loss'],
'root_mean_squared_error': history.history['root_mean_squared_error'],
'val_loss': history.history['val_loss'],
'val_root_mean_squared_error': history.history['val_root_mean_squared_error']},
columns = ['loss', 'root_mean_squared_error', 'val_loss', 'val_root_mean_squared_error'])
model.to_csv(f'/content/drive/My Drive/datasets/{model_name}.csv', index=False)
return model
model_3_camera9 = model_history('model_5_3_camera9')
model_3_camera9.head()
#################### end of training camera9 data for model 3
########################### start of train with camera2 data for model 3
camera4 = load('/content/drive/My Drive/datasets/camera4_cleaned.npz')
log4 = pd.read_csv('/content/drive/My Drive/datasets/log4_cleaned.csv')
camera_processing(camera4, 'camera4')
log_processing(log4, 'log4')
train_split('camera4', 'log4')
"""
new data workflow
camera2 = load('/content/drive/My Drive/datasets/camera2_cleaned.npz')
log2 = pd.read_csv('/content/drive/My Drive/datasets/log2_cleaned.csv')
camera_processing(camera2, 'camera2')
log_processing(log2, 'log2')
train_split('camera2', 'log2')
"""
model = load_model('/content/drive/My Drive/epochs/model_5_3_camera9.0003-0.0526.h5')
def train_load(camera_file_name):
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_X_train.npz" ./X_train.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_X_test.npz" ./X_test.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_y_train.npz" ./y_train.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_y_test.npz" ./y_test.npz
X_train = load('./X_train.npz')
X_train = X_train.f.arr_0
X_test = load('./X_test.npz')
X_test = X_test.f.arr_0
y_train = load('./y_train.npz')
y_train = y_train.f.arr_0
y_test = load('./y_test.npz')
y_test = y_test.f.arr_0
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = train_load('camera2')
X_train.shape, X_test.shape, y_train.shape, y_test.shape
filepath = "/content/drive/My Drive/epochs/model_5_4_camera2.{epoch:04d}-{val_loss:.4f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
history = model.fit(X_train,
y_train,
batch_size=64,
validation_data=(X_test, y_test),
epochs=30,
verbose=1,
callbacks=callbacks_list)
ticks = [i for i in range(10)]
labels = [i for i in range(1, 11)]
train_loss = history.history['loss']
test_loss = history.history['val_loss']
# Set figure size.
plt.figure(figsize=(20, 8))
# Generate line plot of training, testing loss over epochs.
plt.plot(train_loss, label='Training Loss', color='#185fad')
plt.plot(test_loss, label='Testing Loss', color='orange')
# Set title
plt.title('Training and Testing Loss by Epoch for Camera2', fontsize = 25)
plt.xlabel('Epoch', fontsize = 18)
plt.ylabel('Mean Squared Error', fontsize = 18)
plt.xticks(ticks, labels)
plt.legend(fontsize = 18)
plt.savefig('/content/drive/My Drive/images/train_test_loss_model5_4_camera2.png');
def model_history(model_name):
model = pd.DataFrame({'loss': history.history['loss'],
'root_mean_squared_error': history.history['root_mean_squared_error'],
'val_loss': history.history['val_loss'],
'val_root_mean_squared_error': history.history['val_root_mean_squared_error']},
columns = ['loss', 'root_mean_squared_error', 'val_loss', 'val_root_mean_squared_error'])
model.to_csv(f'/content/drive/My Drive/datasets/{model_name}.csv', index=False)
return model
model_3_camera2 = model_history('model_5_4_camera2')
model_3_camera2.head()
#################### end of training camera2 data for model 3
########################### start of train with camera3 data for model 3
camera5 = load('/content/drive/My Drive/datasets/camera5_cleaned.npz')
log5 = pd.read_csv('/content/drive/My Drive/datasets/log5_cleaned.csv')
camera_processing(camera5, 'camera5')
log_processing(log5, 'log5')
train_split('camera5', 'log5')
"""
new data workflow
camera2 = load('/content/drive/My Drive/datasets/camera2_cleaned.npz')
log2 = pd.read_csv('/content/drive/My Drive/datasets/log2_cleaned.csv')
camera_processing(camera2, 'camera2')
log_processing(log2, 'log2')
train_split('camera2', 'log2')
"""
model = load_model('/content/drive/My Drive/epochs/model_5_4_camera2.0004-0.0382.h5')
def train_load(camera_file_name):
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_X_train.npz" ./X_train.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_X_test.npz" ./X_test.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_y_train.npz" ./y_train.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_y_test.npz" ./y_test.npz
X_train = load('./X_train.npz')
X_train = X_train.f.arr_0
X_test = load('./X_test.npz')
X_test = X_test.f.arr_0
y_train = load('./y_train.npz')
y_train = y_train.f.arr_0
y_test = load('./y_test.npz')
y_test = y_test.f.arr_0
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = train_load('camera3')
X_train.shape, X_test.shape, y_train.shape, y_test.shape
filepath = "/content/drive/My Drive/epochs/model_5_5_camera3.{epoch:04d}-{val_loss:.4f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
history = model.fit(X_train,
y_train,
batch_size=64,
validation_data=(X_test, y_test),
epochs=30,
verbose=1,
callbacks=callbacks_list)
ticks = [i for i in range(0, 31, 5)]
labels = [i for i in range(0, 31, 5)]
labels[0] = 1
train_loss = history.history['loss']
test_loss = history.history['val_loss']
# Set figure size.
plt.figure(figsize=(20, 8))
# Generate line plot of training, testing loss over epochs.
plt.plot(train_loss, label='Training Loss', color='#185fad')
plt.plot(test_loss, label='Testing Loss', color='orange')
# Set title
plt.title('Training and Testing Loss by Epoch for Camera3', fontsize = 25)
plt.xlabel('Epoch', fontsize = 18)
plt.ylabel('Mean Squared Error', fontsize = 18)
plt.xticks(ticks, labels)
plt.legend(fontsize = 18)
plt.savefig('/content/drive/My Drive/images/train_test_loss_model5_5_camera3.png');
def model_history(model_name):
model = pd.DataFrame({'loss': history.history['loss'],
'root_mean_squared_error': history.history['root_mean_squared_error'],
'val_loss': history.history['val_loss'],
'val_root_mean_squared_error': history.history['val_root_mean_squared_error']},
columns = ['loss', 'root_mean_squared_error', 'val_loss', 'val_root_mean_squared_error'])
model.to_csv(f'/content/drive/My Drive/datasets/{model_name}.csv', index=False)
return model
model_3_camera3 = model_history('model_5_5_camera3')
model_3_camera3.head()
#################### end of training camera3 data for model 3
########################### start of train with camera4 data for model 3
camera6 = load('/content/drive/My Drive/datasets/camera6_cleaned.npz')
log6 = pd.read_csv('/content/drive/My Drive/datasets/log6_cleaned.csv')
camera_processing(camera6, 'camera6')
log_processing(log6, 'log6')
train_split('camera6', 'log6')
"""
new data workflow
camera2 = load('/content/drive/My Drive/datasets/camera2_cleaned.npz')
log2 = pd.read_csv('/content/drive/My Drive/datasets/log2_cleaned.csv')
camera_processing(camera2, 'camera2')
log_processing(log2, 'log2')
train_split('camera2', 'log2')
"""
model = load_model('/content/drive/My Drive/epochs/model_5_5_camera3.0008-0.0464.h5')
def train_load(camera_file_name):
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_X_train.npz" ./X_train.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_X_test.npz" ./X_test.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_y_train.npz" ./y_train.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_y_test.npz" ./y_test.npz
X_train = load('./X_train.npz')
X_train = X_train.f.arr_0
X_test = load('./X_test.npz')
X_test = X_test.f.arr_0
y_train = load('./y_train.npz')
y_train = y_train.f.arr_0
y_test = load('./y_test.npz')
y_test = y_test.f.arr_0
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = train_load('camera4')
X_train.shape, X_test.shape, y_train.shape, y_test.shape
filepath = "/content/drive/My Drive/epochs/model_5_6_camera4.{epoch:04d}-{val_loss:.4f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
history = model.fit(X_train,
y_train,
batch_size=64,
validation_data=(X_test, y_test),
epochs=30,
verbose=1,
callbacks=callbacks_list)
ticks = [i for i in range(10)]
labels = [i for i in range(1, 11)]
train_loss = history.history['loss']
test_loss = history.history['val_loss']
# Set figure size.
plt.figure(figsize=(20, 8))
# Generate line plot of training, testing loss over epochs.
plt.plot(train_loss, label='Training Loss', color='#185fad')
plt.plot(test_loss, label='Testing Loss', color='orange')
# Set title
plt.title('Training and Testing Loss by Epoch for Camera4', fontsize = 25)
plt.xlabel('Epoch', fontsize = 18)
plt.ylabel('Mean Squared Error', fontsize = 18)
plt.xticks(ticks, labels)
plt.legend(fontsize = 18)
plt.savefig('/content/drive/My Drive/images/train_test_loss_model5_6_camera4.png');
def model_history(model_name):
model = pd.DataFrame({'loss': history.history['loss'],
'root_mean_squared_error': history.history['root_mean_squared_error'],
'val_loss': history.history['val_loss'],
'val_root_mean_squared_error': history.history['val_root_mean_squared_error']},
columns = ['loss', 'root_mean_squared_error', 'val_loss', 'val_root_mean_squared_error'])
model.to_csv(f'/content/drive/My Drive/datasets/{model_name}.csv', index=False)
return model
model_3_camera4 = model_history('model_5_6_camera4')
model_3_camera4.head()
#################### end of training camera4 data for model 3
########################### start of train with camera5 data for model 3
camera7 = load('/content/drive/My Drive/datasets/camera7_cleaned.npz')
log7 = pd.read_csv('/content/drive/My Drive/datasets/log7_cleaned.csv')
camera_processing(camera7, 'camera7')
log_processing(log7, 'log7')
train_split('camera7', 'log7')
"""
new data workflow
camera2 = load('/content/drive/My Drive/datasets/camera2_cleaned.npz')
log2 = pd.read_csv('/content/drive/My Drive/datasets/log2_cleaned.csv')
camera_processing(camera2, 'camera2')
log_processing(log2, 'log2')
train_split('camera2', 'log2')
"""
model = load_model('/content/drive/My Drive/epochs/model_5_6_camera4.0004-0.1318.h5')
def train_load(camera_file_name):
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_X_train.npz" ./X_train.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_X_test.npz" ./X_test.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_y_train.npz" ./y_train.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_y_test.npz" ./y_test.npz
X_train = load('./X_train.npz')
X_train = X_train.f.arr_0
X_test = load('./X_test.npz')
X_test = X_test.f.arr_0
y_train = load('./y_train.npz')
y_train = y_train.f.arr_0
y_test = load('./y_test.npz')
y_test = y_test.f.arr_0
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = train_load('camera5')
X_train.shape, X_test.shape, y_train.shape, y_test.shape
filepath = "/content/drive/My Drive/epochs/model_5_7_camera5.{epoch:04d}-{val_loss:.4f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
history = model.fit(X_train,
y_train,
batch_size=64,
validation_data=(X_test, y_test),
epochs=30,
verbose=1,
callbacks=callbacks_list)
ticks = [i for i in range(10)]
labels = [i for i in range(1, 11)]
train_loss = history.history['loss']
test_loss = history.history['val_loss']
# Set figure size.
plt.figure(figsize=(20, 8))
# Generate line plot of training, testing loss over epochs.
plt.plot(train_loss, label='Training Loss', color='#185fad')
plt.plot(test_loss, label='Testing Loss', color='orange')
# Set title
plt.title('Training and Testing Loss by Epoch for Camera5', fontsize = 25)
plt.xlabel('Epoch', fontsize = 18)
plt.ylabel('Mean Squared Error', fontsize = 18)
plt.xticks(ticks, labels)
plt.legend(fontsize = 18)
plt.savefig('/content/drive/My Drive/images/train_test_loss_model5_7_camera5.png');
def model_history(model_name):
model = pd.DataFrame({'loss': history.history['loss'],
'root_mean_squared_error': history.history['root_mean_squared_error'],
'val_loss': history.history['val_loss'],
'val_root_mean_squared_error': history.history['val_root_mean_squared_error']},
columns = ['loss', 'root_mean_squared_error', 'val_loss', 'val_root_mean_squared_error'])
model.to_csv(f'/content/drive/My Drive/datasets/{model_name}.csv', index=False)
return model
model_3_camera5 = model_history('model_5_7_camera5')
model_3_camera5.head()
#################### end of training camera5 data for model 3
########################### start of train with camera6 data for model 3
camera8 = load('/content/drive/My Drive/datasets/camera8_cleaned.npz')
log8 = pd.read_csv('/content/drive/My Drive/datasets/log8_cleaned.csv')
camera_processing(camera8, 'camera8')
log_processing(log8, 'log8')
train_split('camera8', 'log8')
"""
new data workflow
camera2 = load('/content/drive/My Drive/datasets/camera2_cleaned.npz')
log2 = pd.read_csv('/content/drive/My Drive/datasets/log2_cleaned.csv')
camera_processing(camera2, 'camera2')
log_processing(log2, 'log2')
train_split('camera2', 'log2')
"""
model = load_model('/content/drive/My Drive/epochs/model_5_7_camera5.0008-0.1320.h5')
def train_load(camera_file_name):
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_X_train.npz" ./X_train.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_X_test.npz" ./X_test.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_y_train.npz" ./y_train.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_y_test.npz" ./y_test.npz
X_train = load('./X_train.npz')
X_train = X_train.f.arr_0
X_test = load('./X_test.npz')
X_test = X_test.f.arr_0
y_train = load('./y_train.npz')
y_train = y_train.f.arr_0
y_test = load('./y_test.npz')
y_test = y_test.f.arr_0
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = train_load('camera6')
X_train.shape, X_test.shape, y_train.shape, y_test.shape
filepath = "/content/drive/My Drive/epochs/model_5_8_camera6.{epoch:04d}-{val_loss:.4f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
history = model.fit(X_train,
y_train,
batch_size=64,
validation_data=(X_test, y_test),
epochs=30,
verbose=1,
callbacks=callbacks_list)
ticks = [i for i in range(0, 101, 10)]
labels = [i for i in range(0, 101, 10)]
labels[0] = 1
train_loss = history.history['loss']
test_loss = history.history['val_loss']
# Set figure size.
plt.figure(figsize=(20, 8))
# Generate line plot of training, testing loss over epochs.
plt.plot(train_loss, label='Training Loss', color='#185fad')
plt.plot(test_loss, label='Testing Loss', color='orange')
# Set title
plt.title('Training and Testing Loss by Epoch for Camera6', fontsize = 25)
plt.xlabel('Epoch', fontsize = 18)
plt.ylabel('Mean Squared Error', fontsize = 18)
plt.xticks(ticks, labels)
plt.legend(fontsize = 18)
plt.savefig('/content/drive/My Drive/images/train_test_loss_model5_8_camera6.png');
def model_history(model_name):
model = pd.DataFrame({'loss': history.history['loss'],
'root_mean_squared_error': history.history['root_mean_squared_error'],
'val_loss': history.history['val_loss'],
'val_root_mean_squared_error': history.history['val_root_mean_squared_error']},
columns = ['loss', 'root_mean_squared_error', 'val_loss', 'val_root_mean_squared_error'])
model.to_csv(f'/content/drive/My Drive/datasets/{model_name}.csv', index=False)
return model
model_3_camera6 = model_history('model_5_8_camera6')
model_3_camera6.head()
#################### end of training camera6 data for model 3
########################### start of train with camera7 data for model 3
camera9 = load('/content/drive/My Drive/datasets/camera9_cleaned.npz')
log9 = pd.read_csv('/content/drive/My Drive/datasets/log9_cleaned.csv')
camera_processing(camera9, 'camera9')
log_processing(log9, 'log9')
train_split('camera9', 'log9')
"""
new data workflow
camera2 = load('/content/drive/My Drive/datasets/camera2_cleaned.npz')
log2 = pd.read_csv('/content/drive/My Drive/datasets/log2_cleaned.csv')
camera_processing(camera2, 'camera2')
log_processing(log2, 'log2')
train_split('camera2', 'log2')
"""
model = load_model('/content/drive/My Drive/epochs/model_5_8_camera6.0004-0.0703.h5')
def train_load(camera_file_name):
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_X_train.npz" ./X_train.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_X_test.npz" ./X_test.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_y_train.npz" ./y_train.npz
!cp -r "/content/drive/My Drive/datasets/{camera_file_name}_y_test.npz" ./y_test.npz
X_train = load('./X_train.npz')
X_train = X_train.f.arr_0
X_test = load('./X_test.npz')
X_test = X_test.f.arr_0
y_train = load('./y_train.npz')
y_train = y_train.f.arr_0
y_test = load('./y_test.npz')
y_test = y_test.f.arr_0
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = train_load('camera7')
X_train.shape, X_test.shape, y_train.shape, y_test.shape
filepath = "/content/drive/My Drive/epochs/model_5_9_camera7.{epoch:04d}-{val_loss:.4f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
history = model.fit(X_train,
y_train,
batch_size=64,
validation_data=(X_test, y_test),
epochs=30,
verbose=1,
callbacks=callbacks_list)
ticks = [i for i in range(0, 101, 10)]
labels = [i for i in range(0, 101, 10)]
labels[0] = 1
train_loss = history.history['loss']
test_loss = history.history['val_loss']
# Set figure size.
plt.figure(figsize=(20, 8))
# Generate line plot of training, testing loss over epochs.
plt.plot(train_loss, label='Training Loss', color='#185fad')
plt.plot(test_loss, label='Testing Loss', color='orange')
# Set title
plt.title('Training and Testing Loss by Epoch for Camera7', fontsize = 25)
plt.xlabel('Epoch', fontsize = 18)
plt.ylabel('Mean Squared Error', fontsize = 18)
plt.xticks(ticks, labels)
plt.legend(fontsize = 18)
plt.savefig('/content/drive/My Drive/images/train_test_loss_model5_9_camera7.png');
def model_history(model_name):
model = pd.DataFrame({'loss': history.history['loss'],
'root_mean_squared_error': history.history['root_mean_squared_error'],
'val_loss': history.history['val_loss'],
'val_root_mean_squared_error': history.history['val_root_mean_squared_error']},
columns = ['loss', 'root_mean_squared_error', 'val_loss', 'val_root_mean_squared_error'])
model.to_csv(f'/content/drive/My Drive/datasets/{model_name}.csv', index=False)
return model
model_3_camera7 = model_history('model_5_9_camera7')
model_3_camera7.head()
#################### end of training camera7 data for model 3
####################### testing new model to see if I'm actually training on the same model
model = Sequential()
model.add(Conv2D(16, (3, 3), input_shape=(80, 160, 1), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(300, activation='relu'))
model.add(Dropout(.5))
model.add(Dense(100, activation='relu'))
model.add(Dropout(.25))
model.add(Dense(20, activation='relu'))
model.add(Dense(1))
model.compile(loss='mse', optimizer=Adam(lr=1e-04), metrics=[RootMeanSquaredError()])
filepath = "/content/drive/My Drive/epochs/model_1_camera9_standalone.{epoch:04d}-{val_loss:.4f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
history = model.fit(X_train,
y_train,
batch_size=64,
validation_data=(X_test, y_test),
epochs=100,
verbose=1,
callbacks=callbacks_list)
ticks = [i for i in range(0, 101, 10)]
labels = [i for i in range(0, 101, 10)]
labels[0] = 1
train_loss = history.history['loss']
test_loss = history.history['val_loss']
# Set figure size.
plt.figure(figsize=(20, 8))
# Generate line plot of training, testing loss over epochs.
plt.plot(train_loss, label='Training Loss', color='#185fad')
plt.plot(test_loss, label='Testing Loss', color='orange')
# Set title
plt.title('Training and Testing Loss by Epoch for Camera9', fontsize = 25)
plt.xlabel('Epoch', fontsize = 18)
plt.ylabel('Mean Squared Error', fontsize = 18)
plt.xticks(ticks, labels)
plt.legend(fontsize = 18)
plt.savefig('/content/drive/My Drive/images/train_test_loss_model1_camera9_standalone.png');
def model_history(model_name):
model = pd.DataFrame({'loss': history.history['loss'],
'root_mean_squared_error': history.history['root_mean_squared_error'],
'val_loss': history.history['val_loss'],
'val_root_mean_squared_error': history.history['val_root_mean_squared_error']},
columns = ['loss', 'root_mean_squared_error', 'val_loss', 'val_root_mean_squared_error'])
model.to_csv(f'/content/drive/My Drive/datasets/{model_name}.csv', index=False)
return model
model_1_camera9 = model_history('model_1_camera9_standalone')
model_1_camera9.head()
#################### end of training camera9 data for model 1
model = Sequential()
model.add(Conv2D(16, (3, 3), input_shape=(80, 160, 1), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(300, activation='relu'))
model.add(Dropout(.5))
model.add(Dense(100, activation='relu'))
model.add(Dropout(.25))
model.add(Dense(20, activation='relu'))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam', metrics=[RootMeanSquaredError()])
from keras.callbacks import *
filepath = "/content/drive/My Drive/model_1_shuffled_redropout.{epoch:03d}-{val_loss:.3f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
history = model.fit(X_train,
y_train,
batch_size=64,
validation_data=(X_test, y_test),
epochs=15,
verbose=1,
callbacks=callbacks_list)
model = Sequential()
model.add(Conv2D(16, (3, 3), input_shape=(80, 160, 1), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(300, activation='relu'))
model.add(Dropout(.5))
model.add(Dense(100, activation='relu'))
model.add(Dropout(.25))
model.add(Dense(20, activation='relu'))
model.add(Dense(1))
model.compile(loss='mse', optimizer=Adam(lr=1e-04), metrics=[RootMeanSquaredError()])
from keras.callbacks import *
filepath = "/content/drive/My Drive/epochs/model_1_camera1_lr:0.0001.{epoch:04d}-{val_loss:.4f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
history = model.fit(X_train,
y_train,
batch_size=64,
validation_data=(X_test, y_test),
epochs=100,
verbose=1,
callbacks=callbacks_list)
ticks = [i for i in range(0, 101, 10)]
labels = [i for i in range(0, 101, 10)]
labels[0] = 1
labels
ticks
train_loss = history.history['loss']
test_loss = history.history['val_loss']
# Set figure size.
plt.figure(figsize=(20, 8))
# Generate line plot of training, testing loss over epochs.
plt.plot(train_loss, label='Training Loss', color='#185fad')
plt.plot(test_loss, label='Testing Loss', color='orange')
# Set title
plt.title('Training and Testing Loss by Epoch for Camera1', fontsize = 25)
plt.xlabel('Epoch', fontsize = 18)
plt.ylabel('Mean Squared Error', fontsize = 18)
plt.xticks(ticks, labels)
plt.legend(fontsize = 18);
print(history.history.keys())
model_1_camera1 = pd.DataFrame({'loss': history.history['loss'],
'root_mean_squared_error': history.history['root_mean_squared_error'],
'val_loss': history.history['val_loss'],
'val_root_mean_squared_error': history.history['val_root_mean_squared_error']},
columns = ['loss', 'root_mean_squared_error', 'val_loss', 'val_root_mean_squared_error'])
model_1_camera1.to_csv('/content/drive/My Drive/datasets/model_1_camera1.csv', index=False)
train_loss = history.history['loss']
test_loss = history.history['val_loss']
# Set figure size.
plt.figure(figsize=(12, 8))
# Generate line plot of training, testing loss over epochs.
plt.plot(train_loss, label='Training Loss', color='#185fad')
plt.plot(test_loss, label='Testing Loss', color='orange')
# Set title
plt.title('Training and Testing Loss by Epoch', fontsize = 25)
plt.xlabel('Epoch', fontsize = 18)
plt.ylabel('Mean Squared Error', fontsize = 18)
plt.xticks(ticks, labels)
plt.legend(fontsize = 18);
model_2 = Sequential()
model_2.add(Conv2D(16, (3, 3), input_shape=(80, 160, 1), activation='relu'))
model_2.add(MaxPooling2D(pool_size=(2, 2)))
model_2.add(Dropout(.25))
model_2.add(Conv2D(32, (3, 3), activation='relu'))
model_2.add(MaxPooling2D(pool_size=(2, 2)))
model_2.add(Dropout(.25))
model_2.add(Conv2D(64, (3, 3), activation='relu'))
model_2.add(MaxPooling2D(pool_size=(2, 2)))
model_2.add(Dropout(.25))
model_2.add(Flatten())
model_2.add(Dense(4096, activation='relu'))
model_2.add(Dropout(.5))
model_2.add(Dense(2048, activation='relu'))
model_2.add(Dropout(.5))
model_2.add(Dense(1024, activation='relu'))
model_2.add(Dropout(.5))
model_2.add(Dense(512, activation='relu'))
model_2.add(Dropout(.5))
model_2.add(Dense(1))
model_2.compile(loss='mse', optimizer='adam', metrics=[RootMeanSquaredError()])
from keras.callbacks import *
filepath = "/content/drive/My Drive/epochs/model_2_shuffled.{epoch:03d}-{val_loss:.3f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
history = model_2.fit(X_train,
y_train,
batch_size=64,
validation_data=(X_test, y_test),
epochs=30,
verbose=1,
callbacks=callbacks_list)
model_3 = Sequential()
model_3.add(Conv2D(16, (3, 3), input_shape=(80, 160, 1), activation='relu'))
model_3.add(MaxPooling2D(pool_size=(2, 2)))
model_3.add(Dropout(.25))
model_3.add(Conv2D(32, (3, 3), activation='relu'))
model_3.add(MaxPooling2D(pool_size=(2, 2)))
model_3.add(Dropout(.25))
model_3.add(Conv2D(64, (3, 3), activation='relu'))
model_3.add(MaxPooling2D(pool_size=(2, 2)))
model_3.add(Dropout(.25))
model_3.add(Conv2D(128, (3, 3), activation='relu'))
model_3.add(MaxPooling2D(pool_size=(2, 2)))
model_3.add(Dropout(.25))
model_3.add(Flatten())
model_3.add(Dense(4096, activation='relu'))
model_3.add(Dropout(.5))
model_3.add(Dense(2048, activation='relu'))
model_3.add(Dropout(.5))
model_3.add(Dense(1024, activation='relu'))
model_3.add(Dropout(.5))
model_3.add(Dense(512, activation='relu'))
model_3.add(Dropout(.5))
model_3.add(Dense(1))
model_3.compile(loss='mse', optimizer='adam', metrics=[RootMeanSquaredError()])
from keras.callbacks import *
filepath = "/content/drive/My Drive/epochs/model_3_shuffled.{epoch:03d}-{val_loss:.3f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
history = model_3.fit(X_train,
y_train,
batch_size=64,
validation_data=(X_test, y_test),
epochs=15,
verbose=1,
callbacks=callbacks_list)
####### loading code
X = load('/content/drive/My Drive/camera1_train.npz')
X = X.f.arr_0
log1 = pd.read_csv('/content/drive/My Drive/log1_train.csv')
y = log1['steering_avg_radian']
y = y.to_numpy()
y = y.reshape(y.shape[0], 1)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)
####### end of loading code
!cp -r "/content/drive/My Drive/camera1_train.npz" ./camera1_train.npz
X = load('./camera1_train.npz')
X = X.f.arr_0
log1 = pd.read_csv('/content/drive/My Drive/log1_train.csv')
y = log1['steering_avg_radian']
y = y.to_numpy()
y = y.reshape(y.shape[0], 1)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=True)
savez_compressed('/content/drive/My Drive/X_train_shuffled', X_train)
savez_compressed('/content/drive/My Drive/X_test_shuffled', X_test)
savez_compressed('/content/drive/My Drive/y_train_shuffled', y_train)
savez_compressed('/content/drive/My Drive/y_test_shuffled', y_test)
!nvidia-smi
####### loading code from drive
X_train = load('/content/drive/My Drive/X_train.npz')
X_train = X_train.f.arr_0
X_test = load('/content/drive/My Drive/X_test.npz')
X_test = X_test.f.arr_0
y_train = load('/content/drive/My Drive/y_train.npz')
y_train = y_train.f.arr_0
y_test = load('/content/drive/My Drive/y_test.npz')
y_test = y_test.f.arr_0
####### end of loading code
!cp -r "/content/drive/My Drive/X_train.npz" ./X_train.npz
!cp -r "/content/drive/My Drive/X_test.npz" ./X_test.npz
!cp -r "/content/drive/My Drive/y_train.npz" ./y_train.npz
!cp -r "/content/drive/My Drive/y_test.npz" ./y_test.npz
####### loading code from vm
X_train = load('./X_train.npz')
X_train = X_train.f.arr_0
X_test = load('./X_test.npz')
X_test = X_test.f.arr_0
y_train = load('./y_train.npz')
y_train = y_train.f.arr_0
y_test = load('./y_test.npz')
y_test = y_test.f.arr_0
####### end of loading code
# for shuffled data
!cp -r "/content/drive/My Drive/X_train_shuffled.npz" ./X_train.npz
!cp -r "/content/drive/My Drive/X_test_shuffled.npz" ./X_test.npz
!cp -r "/content/drive/My Drive/y_train_shuffled.npz" ./y_train.npz
!cp -r "/content/drive/My Drive/y_test_shuffled.npz" ./y_test.npz
# for shuffled data
####### loading code from vm
X_train = load('./X_train.npz')
X_train = X_train.f.arr_0
X_test = load('./X_test.npz')
X_test = X_test.f.arr_0
y_train = load('./y_train.npz')
y_train = y_train.f.arr_0
y_test = load('./y_test.npz')
y_test = y_test.f.arr_0
####### end of loading code
```
|
github_jupyter
|
# LSTM - Long Short Term Memory
- From [v1] Lecture 60
- LSTM, another variation of RNN
## Study Links
- [An empirical exploration of recurrent network architectures](https://dl.acm.org/citation.cfm?id=3045367)
- https://dblp.uni-trier.de/db/journals/corr/corr1506.html
- [A Critical Review of Recurrent Neural Networks for Sequence Learning](https://arxiv.org/pdf/1506.00019.pdf)
- [Deep Learning](https://web.cs.hacettepe.edu.tr/~aykut/classes/spring2018/cmp784/slides/lec7-recurrent-neural-nets.pdf)
## Problems with Vanilla RNN
- The component of the gradient in directions that correspond to long-term dependencies is [small](https://dl.acm.org/citation.cfm?id=3045118.3045367)
- From state $t$ to state $0$
- The components of the gradient in directions that correspond to short-term dependencies are large
- As a result, RNNs can easily learn the short-term but not the long-term dependencies
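The shrinking long-term gradient component can be illustrated with a toy scalar "vanilla RNN" $h_t = \tanh(u \, h_{t-1})$ (a hedged sketch, not a full backpropagation-through-time derivation; the values of `u` and `h` are arbitrary):

```python
import numpy as np

# Toy scalar vanilla RNN: h_t = tanh(u * h_{t-1}).
# The long-term gradient component is the running product of the
# per-step Jacobians u * (1 - h_t^2); with |u| < 1 it shrinks
# geometrically over time, which is the vanishing-gradient effect.
u = 0.9
h = 0.5
grad = 1.0
for _ in range(50):
    h = np.tanh(u * h)
    grad *= u * (1.0 - h * h)
print(grad)  # a very small positive number after 50 steps
```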
## LSTM
- In an LSTM network, the architecture is the same as in a standard RNN, except that the summation units in the hidden layer are replaced by memory blocks
- This will be done in $\large s_t$
- The multiplicative gates allow LSTM memory cells to store and access information over periods of time, thereby mitigating the [vanishing gradient problem](https://dblp.uni-trier.de/db/journals/corr/corr1506.html)
- LSTM will have multiple gates that allow cells to keep some information or lose some information
- By doing this we want to achieve the long-term dependency in the network
- At the same time, solving the vanishing gradient problem
- Along with the hidden state vector $\large h_t$, LSTM maintains a memory vector $\large C_t$
- $\large h_{t} = \text{tanh} \left(Uh_{t-1}+ W {x_{t}} \right)$ is going to be replaced with $\large C_t$
- $\large C_t$ is going to tell us how to condition the values of $\large U$, so that the _vanishing gradient problem disappears_
- At each time step, the LSTM can choose to read from, write to, or reset the cell using explicit gating mechanisms
- A small computer-like logic exists inside, able to perform read, write and reset operations, so that the $\large h$ values are very well conditioned
- LSTM computes well behaved gradients by controlling the values using the gates
## LSTM Cell
- See [Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM)](https://www.youtube.com/watch?v=WCUNPb-5EYI&list=PLVZqlMpoM6kaJX_2lLKjEhWI0NlqHfqzp&index=5&t=0s) and [A friendly introduction to Recurrent Neural Networks](https://www.youtube.com/watch?v=UNmqTiOnRfg) to get very good intuition of how LSTM works
- The diagram below shows what an LSTM cell looks like
- $\large h_t$ is replaced by this cell
- The operations depicted in below diagram are performed during Forward Pass

- $\large C_{t-1}$ $\Rightarrow$ Previous memory state
- $\large C_t$ is a vector
- $\large h_{t-1}$ $\Rightarrow$ Previous state of the hidden unit
- $\large W$ $\Rightarrow$ denotes the weight vector
- $\large q_t$ $\Rightarrow$ net sum of the input vector with weight vector $\large W$
- $\large f_t$ $\Rightarrow$ is the forget gate, computed using vector $\large W_f$, having a $Sigmoid$ as activation function
- $\large i_t$ $\Rightarrow$ is the input cell, computed using vector $\large W_i$, having a $Sigmoid$ as activation function
- $\large \widetilde{C_t}$ $\Rightarrow$ new computed memory, computed using vector $\large W_{\widetilde{C_t}}$, having a $tanh$ activation function
- $\large O_t$ $\Rightarrow$ is the output gate, computed using vector $\large W_{O_t}$, having a $Sigmoid$ activation function
- $\large \otimes$ refers to element wise multiplication
- $\large \oplus$ refers to element wise addition
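The gate equations listed above can be sketched in NumPy (biases omitted and weight shapes illustrative; each $W$ acts on the concatenation of $h_{t-1}$ and $x_t$):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_forward(x_t, h_prev, C_prev, W_f, W_i, W_c, W_o):
    # q_t: each weight matrix is applied to the concatenated [h_prev, x_t]
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W_f @ z)               # forget gate
    i_t = sigmoid(W_i @ z)               # input gate
    C_tilde = np.tanh(W_c @ z)           # new candidate memory
    C_t = f_t * C_prev + i_t * C_tilde   # elementwise multiply and add
    o_t = sigmoid(W_o @ z)               # output gate
    h_t = o_t * np.tanh(C_t)             # new hidden state
    return h_t, C_t

n_x, n_h = 3, 4
x_t = rng.standard_normal(n_x)
h_prev = np.zeros(n_h)
C_prev = np.zeros(n_h)
W_f, W_i, W_c, W_o = (rng.standard_normal((n_h, n_h + n_x)) * 0.1 for _ in range(4))
h_t, C_t = lstm_cell_forward(x_t, h_prev, C_prev, W_f, W_i, W_c, W_o)
print(h_t.shape, C_t.shape)
```

Since $h_t = o_t \otimes \tanh(C_t)$ and both factors lie in $(-1, 1)$ (for $o_t$, $(0, 1)$), the hidden state is bounded, which is part of why the gradients stay well behaved.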
## LSTM - Forward Pass

|
github_jupyter
|
<a href="https://colab.research.google.com/github/abhisheksuran/Atari_DQN/blob/master/Multi_Worker_Actor_Critic.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
import tensorflow as tf
import gym
import tensorflow_probability as tfp
from multiprocessing import Process, Queue, Barrier, Lock
import tensorflow.keras.losses as kls
!pip3 install box2d-py
env= gym.make("CartPole-v0")
low = env.observation_space.low
high = env.observation_space.high
class critic(tf.keras.Model):
def __init__(self):
super().__init__()
self.d1 = tf.keras.layers.Dense(128,activation='relu')
#self.d2 = tf.keras.layers.Dense(32,activation='relu')
self.v = tf.keras.layers.Dense(1, activation = None)
def call(self, input_data):
x = self.d1(input_data)
#x = self.d2(x)
v = self.v(x)
return v
class actor(tf.keras.Model):
def __init__(self):
super().__init__()
self.d1 = tf.keras.layers.Dense(128,activation='relu')
#self.d2 = tf.keras.layers.Dense(32,activation='relu')
self.a = tf.keras.layers.Dense(2,activation='softmax')
def call(self, input_data):
x = self.d1(input_data)
#x = self.d2(x)
a = self.a(x)
return a
class agent():
def __init__(self, gamma = 0.99):
self.gamma = gamma
self.a_opt = tf.keras.optimizers.RMSprop(learning_rate=7e-3)
self.c_opt = tf.keras.optimizers.RMSprop(learning_rate=7e-3)
self.actor = actor()
self.critic = critic()
def act(self,state):
prob = self.actor(np.array([state]))
prob = prob.numpy()
dist = tfp.distributions.Categorical(probs=prob, dtype=tf.float32)
action = dist.sample()
return int(action.numpy()[0])
def actor_loss(self, probs, actions, td):
probability = []
log_probability= []
for pb,a in zip(probs,actions):
dist = tfp.distributions.Categorical(probs=pb, dtype=tf.float32)
log_prob = dist.log_prob(a)
prob = dist.prob(a)
probability.append(prob)
log_probability.append(log_prob)
# print(probability)
# print(log_probability)
p_loss= []
e_loss = []
td = td.numpy()
#print(td)
for pb, t, lpb in zip(probability, td, log_probability):
t = tf.constant(t)
policy_loss = tf.math.multiply(lpb,t)
entropy_loss = tf.math.negative(tf.math.multiply(pb,lpb))
p_loss.append(policy_loss)
e_loss.append(entropy_loss)
p_loss = tf.stack(p_loss)
e_loss = tf.stack(e_loss)
p_loss = tf.reduce_mean(p_loss)
e_loss = tf.reduce_mean(e_loss)
# print(p_loss)
# print(e_loss)
loss = -p_loss - 0.0001 * e_loss
#print(loss)
return loss
def learn(self, states, actions, discnt_rewards):
discnt_rewards = tf.reshape(discnt_rewards, (len(discnt_rewards),))
with tf.GradientTape() as tape1, tf.GradientTape() as tape2:
p = self.actor(states, training=True)
v = self.critic(states,training=True)
v = tf.reshape(v, (len(v),))
td = tf.math.subtract(discnt_rewards, v)
# print(discnt_rewards)
# print(v)
#print(td.numpy())
a_loss = self.actor_loss(p, actions, td)
c_loss = 0.5*kls.mean_squared_error(discnt_rewards, v)
grads1 = tape1.gradient(a_loss, self.actor.trainable_variables)
grads2 = tape2.gradient(c_loss, self.critic.trainable_variables)
self.a_opt.apply_gradients(zip(grads1, self.actor.trainable_variables))
self.c_opt.apply_gradients(zip(grads2, self.critic.trainable_variables))
return a_loss, c_loss
def preprocess1(states, actions, rewards, gamma, s_queue, a_queue, r_queue, lock):
discnt_rewards = []
sum_reward = 0
rewards.reverse()
for r in rewards:
sum_reward = r + gamma*sum_reward
discnt_rewards.append(sum_reward)
discnt_rewards.reverse()
states = np.array(states, dtype=np.float32)
actions = np.array(actions, dtype=np.int32)
discnt_rewards = np.array(discnt_rewards, dtype=np.float32)
#exp = np.array([states, actions,discnt_rewards])
lock.acquire()
s_queue.put(states)
a_queue.put(actions)
r_queue.put(discnt_rewards)
lock.release()
def preprocess2(s_queue, a_queue, r_queue):
states = []
while not s_queue.empty():
states.append(s_queue.get())
actions = []
while not a_queue.empty():
actions.append(a_queue.get())
dis_rewards = []
while not r_queue.empty():
dis_rewards.append(r_queue.get())
    state_batch = np.concatenate(states, axis=0)
    action_batch = np.concatenate(actions, axis=None)
    reward_batch = np.concatenate(dis_rewards, axis=None)
# exp = np.transpose(exp)
return state_batch, action_batch, reward_batch
def runner(barrier, lock, s_queue, a_queue, r_queue):
tf.random.set_seed(360)
agentoo7 = agent()
steps = 2000
ep_reward = []
total_avgr = []
for s in range(steps):
done = False
state = env.reset()
total_reward = 0
all_aloss = []
all_closs = []
rewards = []
states = []
actions = []
while not done:
action = agentoo7.act(state)
next_state, reward, done, _ = env.step(action)
rewards.append(reward)
states.append(state)
#actions.append(tf.one_hot(action, 2, dtype=tf.int32).numpy().tolist())
actions.append(action)
state = next_state
total_reward += reward
if done:
ep_reward.append(total_reward)
avg_reward = np.mean(ep_reward[-100:])
total_avgr.append(avg_reward)
print("total reward after {} steps is {} and avg reward is {}".format(s, total_reward, avg_reward))
preprocess1(states, actions, rewards, 1, s_queue, a_queue, r_queue, lock)
b = barrier.wait()
if b == 0:
if (s_queue.qsize() == 10) & (a_queue.qsize() == 10) & (r_queue.qsize() == 10):
print(s_queue.qsize())
print(a_queue.qsize())
print(r_queue.qsize())
state_batch, action_batch, reward_batch = preprocess2(s_queue, a_queue, r_queue)
# print(state_batch)
# print(action_batch)
# print(reward_batch)
al,cl = agentoo7.learn(state_batch, action_batch, reward_batch)
all_aloss.append(al)
all_closs.append(cl)
print(f"al{al}")
print(f"cl{cl}")
barrier.wait()
barrier = Barrier(10)
s_queue = Queue()
a_queue = Queue()
r_queue = Queue()
lock = Lock()
processes = []
for i in range(10):
worker = Process(target=runner, args=(barrier, lock, s_queue, a_queue, r_queue))
processes.append(worker)
worker.start()
for process in processes:
process.join()
```
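The reward preprocessing in `preprocess1` above builds discounted returns with a reverse scan; the same logic isolated as a pure function for clarity:

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = r_t + gamma * G_{t+1} by scanning rewards backwards."""
    out = []
    running = 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        out.append(running)
    out.reverse()  # restore chronological order
    return out

print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # [1.75, 1.5, 1.0]
```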
|
github_jupyter
|
```
# Formulate an algorithm to check whether a student has passed the exam
y = float(input("Enter the minimum marks required to pass in the exam : "))
x = float(input('Enter the marks scored by student in the exam : '))
if (x >= y):
print('This student is passed in the exam')
else:
print('This student is failed in the exam')
# Tree Traversal Algorithms. There are 3 types: in-order, pre-order, and post-order traversal.
# creating a Node class
class Node:
def __init__(self,val):
self.childleft = None
self.childright = None
self.nodedata = val
# creating an instance of the Node class to construct the tree
root = Node(1)
root.childleft = Node(2)
root.childright = Node(3)
root.childleft.childleft = Node(4)
root.childleft.childright = Node(5)
# perform In-order traversal : left-root-right
def InOrd(root):
if root:
InOrd(root.childleft)
print(root.nodedata)
InOrd(root.childright)
InOrd(root)
# Pre-order traversal : root-left-right
def PreOrd(root):
if root:
print(root.nodedata)
PreOrd(root.childleft)
PreOrd(root.childright)
PreOrd(root)
# Post-order traversal : left-right-root
def PostOrd(root):
if root:
PostOrd(root.childleft)
PostOrd(root.childright)
print(root.nodedata)
PostOrd(root)
# Sorting Algorithms
# Merge sort : Divide and Conquer algorithm
# Python program for implementation of MergeSort
def mergeSort(arr):
if len(arr) >1:
mid = len(arr)//2 # Finding the mid of the array
L = arr[:mid] # Dividing the array elements
R = arr[mid:] # into 2 halves
mergeSort(L) # Sorting the first half
mergeSort(R) # Sorting the second half
i = j = k = 0
# Copy data to temp arrays L[] and R[]
while i < len(L) and j < len(R):
if L[i] < R[j]:
arr[k] = L[i]
i+= 1
else:
arr[k] = R[j]
j+= 1
k+= 1
# Checking if any element was left
while i < len(L):
arr[k] = L[i]
i+= 1
k+= 1
while j < len(R):
arr[k] = R[j]
j+= 1
k+= 1
# Code to print the list
def printList(arr):
for i in range(len(arr)):
print(arr[i], end =" ")
print()
# driver code to test the above code
if __name__ == '__main__':
arr = [12, 11, 13, 5, 6, 7]
print ("Given array is", end ="\n")
printList(arr)
mergeSort(arr)
print("Sorted array is: ", end ="\n")
printList(arr)
```
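To make the traversal orders concrete, here is a compact in-order variant that collects values into a list instead of printing them (attribute names shortened; same 5-node tree as above):

```python
class Node:
    def __init__(self, val):
        self.left = None
        self.right = None
        self.val = val

def inorder(root, out):
    # left - root - right
    if root:
        inorder(root.left, out)
        out.append(root.val)
        inorder(root.right, out)

root = Node(1)
root.left, root.right = Node(2), Node(3)
root.left.left, root.left.right = Node(4), Node(5)
order = []
inorder(root, order)
print(order)  # [4, 2, 5, 1, 3]
```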
|
github_jupyter
|
```
import pymongo
from bs4 import BeautifulSoup
import requests
import pandas as pd
from flask import Flask
# acquire full html contents to search through
url = 'https://mars.nasa.gov/news/'
response = requests.get(url)
# response.text
soup = BeautifulSoup(response.text,'html.parser')
# print(soup.prettify())
#find all <div class=xx>, then take only the latest result + title, paragraph text
title = soup.find_all('div',class_='content_title')[0].text
# print(title)
paragraph = soup.find_all('div',class_='rollover_description_inner')[0].text
# print(paragraph)
from splinter import Browser
from webdriver_manager.chrome import ChromeDriverManager
#open Splinter
executable_path = {"executable_path": ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
url = 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars'
browser.visit(url)
html = browser.html
soup = BeautifulSoup(html, 'html.parser')  # parse the page rendered by Splinter (browser.html), not the earlier requests response
# grab image text (see following line) and remove unnecessary text before adding to final url
image_url = soup.find('article')['style'].replace('background-image: url(','').replace(');', '')[1:-1]
# background-image: url(' /spaceimages/images/wallpaper/PIA14293-1920x1200.jpg ');
base_url = 'https://www.jpl.nasa.gov'
featured_image_url = base_url + image_url
print(featured_image_url)
url = 'https://space-facts.com/mars/'
#grab table
table = pd.read_html(url)
# table
#drop non-mars info
mars_table_df = table[0]
mars_table_html = mars_table_df.to_html(classes='table table-striped')
print(mars_table_html)
executable_path = {"executable_path": ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(url)
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
hemispheres = soup.find_all('div', class_='item')
hemispheres_list = []
hemispheres
base_url = 'https://astrogeology.usgs.gov'
for data in hemispheres:
title = data.find('h3').text
#link to hemisphere page
hemisphere_url = data.find('a', class_='itemLink product-item')['href']
browser.visit(base_url + hemisphere_url)
page2_html = browser.html
soup = BeautifulSoup(page2_html, 'html.parser')
image_url = soup.find('img',class_='wide-image')['src']
full_image_url = base_url + image_url
hemispheres_list.append({"title":title,"img_url":full_image_url})
hemispheres_list
```
|
github_jupyter
|
# US Treasury Interest Rates / Yield Curve Data
---
A look at the US Treasury yield curve, according to interest rates published by the US Treasury.
```
import pandas as pd
import altair as alt
import numpy as np
url = 'https://www.treasury.gov/resource-center/data-chart-center/interest-rates/pages/TextView.aspx?data=yieldYear&year={year}'
def fetchRates(year):
df = pd.read_html(url.format(year=year), skiprows=0, attrs={ "class": "t-chart" })[0]
df['Date'] = pd.to_datetime(df.Date)
return df.set_index('Date').resample('1m').last().reset_index()
fetchTsRates = lambda years: pd.concat(map(fetchRates, years))
#fetchRates(2019).head()
```
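The `fetchRates` helper above collapses daily quotes to month-end values with `resample(...).last()`; the same pattern on a synthetic daily series (note: newer pandas versions spell the month-end alias `'ME'` instead of `'M'`):

```python
import pandas as pd

# synthetic daily series: value = day index, Jan-Mar 2020
idx = pd.date_range('2020-01-01', '2020-03-31', freq='D')
s = pd.Series(range(len(idx)), index=idx)

# downsample to month-end, keeping the last observation of each month
monthly = s.resample('M').last()
print(monthly)
```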
## How do the interest rates look for the past 4 years (by instrument)?
```
years = range(2016, 2022)
fields = ['Date', '3 mo', '1 yr', '2 yr', '7 yr', '10 yr']
dfm = fetchTsRates(years)[fields].melt(id_vars='Date', var_name='Maturity')
alt.Chart(dfm).mark_line().encode(
alt.X('Date:T', axis=alt.Axis(title='')),
alt.Y('value:Q',
axis=alt.Axis(title='Interest Rate [%]'),
scale=alt.Scale(domain=[np.floor(dfm['value'].apply(float).min()), np.ceil(dfm['value'].apply(float).max())])),
alt.Color('Maturity:N', sort=fields[1:]),
tooltip=[alt.Tooltip('Date:T', format='%b %Y'), alt.Tooltip('Maturity:N'), alt.Tooltip('value:Q')]
).properties(
title='U.S. Treasury Yields from {y1} to {y2}'.format(y1=min(years), y2=max(years)),
height=450,
width=700,
background='white'
)
```
### Same chart as above, just a different mix of instruments
```
years = range(2016, 2022)
fields = ['Date', '6 mo', '2 yr', '3 yr', '10 yr', '30 yr']
dfm = fetchTsRates(years)[fields].melt(id_vars='Date', var_name='Maturity')
c = alt.Chart(dfm).mark_line().encode(
alt.X('Date:T', axis=alt.Axis(title='')),
alt.Y('value:Q',
axis=alt.Axis(title='Interest Rate [%]'),
scale=alt.Scale(domain=[np.floor(dfm['value'].apply(float).min()), np.ceil(dfm['value'].apply(float).max())])),
alt.Color('Maturity:N', sort=fields[1:]),
tooltip=[alt.Tooltip('Date:T', format='%b %Y'), alt.Tooltip('Maturity:N'), alt.Tooltip('value:Q')]
).properties(
title='U.S. Treasury Yields from {y1} to {y2}'.format(y1=min(years), y2=max(years)),
height=450,
width=700,
background='white'
)
c.save('us-treasury-rates.png')
c.display()
```
## How did that chart look for the 4 years before 2008?
```
years = range(2004, 2010)
fields = ['Date', '6 mo', '2 yr', '3 yr', '10 yr', '30 yr']
dfm2 = fetchTsRates(years)[fields].melt(id_vars='Date', var_name='Maturity')
alt.Chart(dfm2).mark_line().encode(
alt.X('Date:T', axis=alt.Axis(title='', format='%b %Y')),
alt.Y('value:Q',
axis=alt.Axis(title='Interest Rate [%]'),
scale=alt.Scale(domain=[np.floor(dfm2['value'].apply(float).min()), np.ceil(dfm2['value'].apply(float).max())])),
alt.Color('Maturity:N', sort=fields[1:]),
tooltip=[alt.Tooltip('Date:T', format='%b %Y'), alt.Tooltip('Maturity:N'), alt.Tooltip('value:Q')]
).properties(
title='U.S. Treasury Yields from {y1} to {y2}'.format(y1=min(years), y2=max(years)),
height=450,
width=700,
background='white'
)
year = 2019
alt.Chart(fetchRates(year).melt(id_vars='Date', var_name='Maturity')).mark_line().encode(
alt.X('Date:T', axis=alt.Axis(title='')),
alt.Y('value:Q', axis=alt.Axis(title='Interest Rate [%]'), scale=alt.Scale(zero=False)),
alt.Color('Maturity:N',
sort=['1 mo', '2 mo', '3 mo', '6 mo', '1 yr', '2 yr', '3 yr', '5 yr', '7 yr', '10 yr', '20 yr', '30 yr']),
tooltip=[alt.Tooltip('Date:T', format='%b %Y'), alt.Tooltip('Maturity:N'), alt.Tooltip('value:Q')]
).properties(
title='U.S. Treasury Yields for {year}'.format(year=year),
height=450,
width=700
).interactive()
year = 2007
alt.Chart(fetchRates(year).melt(id_vars='Date', var_name='Maturity')).mark_line().encode(
alt.X('Date:T', axis=alt.Axis(title='')),
alt.Y('value:Q', axis=alt.Axis(title='Interest Rate [%]'), scale=alt.Scale(zero=False)),
alt.Color('Maturity:N',
sort=['1 mo', '2 mo', '3 mo', '6 mo', '1 yr', '2 yr', '3 yr', '5 yr', '7 yr', '10 yr', '20 yr', '30 yr']),
tooltip=[alt.Tooltip('Date:T', format='%b %Y'), alt.Tooltip('Maturity:N'), alt.Tooltip('value:Q')]
).properties(
title='U.S. Treasury Yields for {year}'.format(year=year),
height=450,
width=700
).interactive()
year = 1996
alt.Chart(fetchRates(year).melt(id_vars='Date', var_name='Maturity')).mark_line().encode(
alt.X('Date:T', axis=alt.Axis(title='')),
alt.Y('value:Q', axis=alt.Axis(title='Interest Rate [%]'), scale=alt.Scale(zero=False)),
alt.Color('Maturity:N'),
tooltip=[alt.Tooltip('Date:T', format='%b %Y'), alt.Tooltip('Maturity:N'), alt.Tooltip('value:Q')]
).properties(
title='U.S. Treasury Yields for {year}'.format(year=year),
height=450,
width=700
).interactive()
```
## Visualizing the "yield curve" of US Treasuries
```
years = range(2004, 2009)
instruments = {
0.25: '3 Month T-bill',
0.5: '6 Month T-bill',
2: '2 Year Note',
10: '10 Year Note',
30: '30 Year Bond'
}
fieldsToYears = {'3 mo': 0.25, '6 mo': 0.5, '2 yr': 2, '10 yr': 10, '30 yr': 30}
fields = [i for i in fieldsToYears.keys()]
dfm2 = fetchTsRates(years)[fields + ['Date']].melt(id_vars='Date', var_name='Maturity')
dfm2["Year"] = dfm2.Date.apply(lambda v: v.year)
alt.Chart(dfm2.groupby(["Maturity", "Year"]).agg({ "value": "mean" }).reset_index()).mark_line().encode(
alt.X('Maturity:O', axis=alt.Axis(title='Maturity', labelAngle=0), sort=fields),
alt.Y('value:Q', axis=alt.Axis(title='Interest Rate [%]')),
alt.Color('Year:N'),
tooltip=[alt.Tooltip('Date:T', format='%b %Y'), alt.Tooltip('Maturity:N'), alt.Tooltip('value:Q')]
).properties(
title='U.S. Treasury Yield comparison [{y1} to {y2}]'.format(y1=min(years), y2=max(years)),
height=450,
width=700
)
years = range(2016, 2022)
instruments = {
0.25: '3 Month T-bill',
0.5: '6 Month T-bill',
2: '2 Year Note',
10: '10 Year Note',
30: '30 Year Bond'
}
fieldsToYears = {'3 mo': 0.25, '6 mo': 0.5, '2 yr': 2, '10 yr': 10, '30 yr': 30}
fields = [i for i in fieldsToYears.keys()]
dfm2 = fetchTsRates(years)[fields + ['Date']].melt(id_vars='Date', var_name='Maturity')
dfm2["Year"] = dfm2.Date.apply(lambda v: v.year)
alt.Chart(dfm2.groupby(["Maturity", "Year"]).agg({ "value": "mean" }).reset_index()).mark_line().encode(
alt.X('Maturity:O', axis=alt.Axis(title='Maturity', labelAngle=0), sort=fields),
alt.Y('value:Q', axis=alt.Axis(title='Interest Rate [%]')),
alt.Color('Year:N'),
tooltip=[alt.Tooltip('Date:T', format='%b %Y'), alt.Tooltip('Maturity:N'), alt.Tooltip('value:Q')]
).properties(
title='Yearly Average U.S. Treasury Yield comparison [{y1} to {y2}]'.format(y1=min(years), y2=max(years)),
height=450,
width=700
)
```
|
github_jupyter
|
This notebook is part of the $\omega radlib$ documentation: https://docs.wradlib.org.
Copyright (c) $\omega radlib$ developers.
Distributed under the MIT License. See LICENSE.txt for more info.
# Supported radar data formats
The binary encoding of many radar products is a major obstacle for many potential radar users. Often, decoder software is not easily available. In case formats are documented, the implementation of decoders is a major programming effort. This tutorial provides an overview of the data formats currently supported by $\omega radlib$. We seek to continuously enhance the range of supported formats, so this document is only a snapshot. If you need a specific file format to be supported by $\omega radlib$, please [raise an issue](https://github.com/wradlib/wradlib/issues/new) of type *enhancement*. You can provide support by adding documents which help to decode the format, e.g. format reference documents or software code in other languages for decoding the format.
At the moment, *supported format* means that the radar format can be read and further processed by wradlib. Normally, wradlib will return a numpy array of data values and a dictionary of metadata - if the file contains any.
<div class="alert alert-warning">
**Note** <br>
Due to recent developments in major data science packages (eg. [xarray](https://xarray.pydata.org)) wradlib supports as of version 1.10 reading of ``ODIM``, ``GAMIC`` and ``CfRadial`` (1 and 2) datasets into an `xarray` based data structure. Output to ``ODIM_H5`` and ``CfRadial2`` like data files as well as standard netCDF4 data files is easily possible.
</div>
In the following, we will provide an overview of file formats which can be currently read by $\omega radlib$.
Reading weather radar files is done via the [wradlib.io](https://docs.wradlib.org/en/latest/io.html) module. There you will find a complete function reference.
```
import wradlib as wrl
import warnings
#warnings.filterwarnings('ignore')
import matplotlib.pyplot as pl
import numpy as np
try:
get_ipython().magic("matplotlib inline")
except:
pl.ion()
```
## German Weather Service: DX format
The German Weather Service uses the DX file format to encode local radar sweeps. DX data are in polar coordinates. The naming convention is as follows: <pre>raa00-dx_<location-id>-<YYMMDDHHMM>-<location-abbreviation>---bin</pre> or <pre>raa00-dx_<location-id>-<YYYYMMDDHHMM>-<location-abbreviation>---bin</pre>
[Read and plot DX radar data from DWD](wradlib_reading_dx.ipynb) provides an extensive introduction into working with DX data. For now, we would just like to know how to read the data:
```
fpath = 'dx/raa00-dx_10908-0806021655-fbg---bin.gz'
f = wrl.util.get_wradlib_data_file(fpath)
data, metadata = wrl.io.read_dx(f)
```
Here, ``data`` is a two dimensional array of shape (number of azimuth angles, number of range gates). This means that the number of rows of the array corresponds to the number of azimuth angles of the radar sweep while the number of columns corresponds to the number of range gates per ray.
```
print(data.shape)
print(metadata.keys())
fig = pl.figure(figsize=(10, 10))
ax, im = wrl.vis.plot_ppi(data, fig=fig, proj='cg')
```
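As a side note, the file naming convention shown above can be unpacked with a small regular expression (a hypothetical helper for illustration, not part of wradlib):

```python
import re

# hypothetical parser for the DX naming pattern described above;
# the timestamp is 10 or 12 digits depending on the convention
pattern = re.compile(
    r"raa00-dx_(?P<location_id>\d+)-(?P<timestamp>\d{10,12})-(?P<abbr>\w+)---bin"
)
name = "raa00-dx_10908-0806021655-fbg---bin"
m = pattern.match(name)
print(m.group("location_id"), m.group("timestamp"), m.group("abbr"))
# 10908 0806021655 fbg
```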
## German Weather Service: RADOLAN (quantitative) composite
The quantitative composite format of the DWD (German Weather Service) was established in the course of the [RADOLAN project](https://www.dwd.de/DE/leistungen/radolan/radolan.html). Most quantitative composite products from the DWD are distributed in this format, e.g. the R-series (RX, RY, RH, RW, ...), the S-series (SQ, SH, SF, ...), and the E-series (European quantitative composite, e.g. EZ, EH, EB). Please see the [composite format description](https://www.dwd.de/DE/leistungen/radolan/radolan_info/radolan_radvor_op_komposit_format_pdf.pdf?__blob=publicationFile&v=5) for a full reference and a full table of products (unfortunately only in German language). An extensive section covering many RADOLAN aspects is here: [RADOLAN](../radolan.ipynb)
Currently, the RADOLAN composites have a spatial resolution of 1km x 1km, with the national composites (R- and S-series) being 900 x 900 grids, and the European composites 1500 x 1400 grids. The projection is [polar-stereographic](../radolan/radolan_grid.ipynb#Polar-Stereographic-Projection). The products can be read by the following function:
<div class="alert alert-warning">
**Note** <br>
Since $\omega radlib$ version 1.10 a ``RADOLAN`` reader [wradlib.io.radolan.open_radolan_dataset()](https://docs.wradlib.org/en/latest/generated/wradlib.io.radolan.open_radolan_dataset.html) based on [Xarray](https://xarray.pydata.org) is available. Please read the more indepth notebook [wradlib_radolan_backend](wradlib_radolan_backend.ipynb).
</div>
```
fpath = 'radolan/misc/raa01-rw_10000-1408102050-dwd---bin.gz'
f = wrl.util.get_wradlib_data_file(fpath)
data, metadata = wrl.io.read_radolan_composite(f)
```
Here, ``data`` is a two dimensional integer array of shape (number of rows, number of columns). Different product types might need different levels of postprocessing, e.g. if the product contains rain rates or accumulations, you will normally have to divide data by factor 10. ``metadata`` is again a dictionary which provides metadata from the files header section, e.g. using the keys *producttype*, *datetime*, *intervalseconds*, *nodataflag*.
```
print(data.shape)
print(metadata.keys())
```
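As noted above, products that contain rain rates or accumulations are typically stored as integers scaled by a factor of 10; whether a given product needs this depends on the product type. A minimal conversion sketch on synthetic values:

```python
import numpy as np

# synthetic RADOLAN-style integer field, scaled by 10 as described above
raw = np.array([[15, 3, 270], [0, 27, 8]])
precip_mm = raw / 10.0  # convert back to mm
print(precip_mm)
```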
Masking the NoData (or missing) values can be done by:
```
maskeddata = np.ma.masked_equal(data,
metadata["nodataflag"])
fig = pl.figure(figsize=(10, 8))
# get coordinates
radolan_grid_xy = wrl.georef.get_radolan_grid(900, 900)
x = radolan_grid_xy[:, :, 0]
y = radolan_grid_xy[:, :, 1]
# create quick plot with colorbar and title
pl.figure(figsize=(10, 8))
pl.pcolormesh(x, y, maskeddata)
```
## HDF5
### OPERA HDF5 (ODIM_H5)
[HDF5](https://www.hdfgroup.org/solutions/hdf5/) is a data model, library, and file format for storing and managing data. The [OPERA program](https://www.eumetnet.eu/activities/observations-programme/current-activities/opera/) developed a convention (or information model) on how to store and exchange radar data in hdf5 format. It is based on the work of [COST Action 717](https://e-services.cost.eu/files/domain_files/METEO/Action_717/final_report/final_report-717.pdf) and is used e.g. in real-time operations in the Nordic European countries. The OPERA Data and Information Model (ODIM) is documented e.g. in this [report](https://www.eol.ucar.edu/system/files/OPERA_2008_03_WP2.1b_ODIM_H5_v2.1.pdf). Make use of these documents in order to understand the organization of OPERA hdf5 files!
<div class="alert alert-warning">
**Note** <br>
Since $\omega radlib$ version 1.10 an ``Odim_H5`` reader [wradlib.io.open_odim_dataset()](https://docs.wradlib.org/en/latest/generated/wradlib.io.hdf.open_odim_dataset.html) based on [Xarray](https://xarray.pydata.org) is available. Please read the more indepth notebook [wradlib_odim_backend](wradlib_odim_backend.ipynb).
Former `xarray`-based implementations will be deprecated in future versions.
</div>
The hierarchical nature of HDF5 can be described as being similar to directories, files, and links on a hard-drive. Actual metadata are stored as so-called *attributes*, and these attributes are organized together in so-called *groups*. Binary data are stored as so-called *datasets*. As for ODIM_H5, the ``root`` (or top level) group contains three groups of metadata: these are called ``what`` (object, information model version, and date/time information), ``where`` (geographical information), and ``how`` (quality and optional/recommended metadata). For a very simple product, e.g. a CAPPI, the data is organized in a group called ``dataset1`` which contains another group called ``data1`` where the actual binary data are found in ``data``. In analogy with a file system on a hard-disk, the HDF5 file containing this simple product is organized like this:
```
/
/what
/where
/how
/dataset1
/dataset1/data1
/dataset1/data1/data
```
The philosophy behind the $\omega radlib$ interface to OPERA's data model is very straightforward: $\omega radlib$ simply translates the complete file structure to *one* dictionary and returns this dictionary to the user. Thus, the potential complexity of the stored data is kept and it is left to the user how to proceed with this data. The keys of the output dictionary are strings that correspond to the "directory trees" shown above. Each key ending with ``/data`` points to a Dataset (i.e. a numpy array of data). Each key ending with ``/what``, ``/where`` or ``/how`` points to another dictionary of metadata. The entire output can be obtained by:
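The flattening idea can be sketched format-independently on a nested mapping (illustrative only; wradlib's actual reader walks the HDF5 file with h5py, and the toy values below are made up):

```python
def flatten(group, prefix=""):
    """Recursively flatten a nested mapping into 'path/like/keys'."""
    out = {}
    for key, val in group.items():
        path = f"{prefix}/{key}" if prefix else key
        if isinstance(val, dict):
            out.update(flatten(val, path))  # descend into sub-groups
        else:
            out[path] = val                 # leaf: dataset or attribute
    return out

# toy stand-in for an ODIM_H5 hierarchy
tree = {
    "dataset1": {"data1": {"data": [1, 2, 3], "what": {"quantity": "DBZH"}}},
    "where": {"lat": 52.0},
}
flat = flatten(tree)
print(sorted(flat))
# ['dataset1/data1/data', 'dataset1/data1/what/quantity', 'where/lat']
```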
```
fpath = 'hdf5/knmi_polar_volume.h5'
f = wrl.util.get_wradlib_data_file(fpath)
fcontent = wrl.io.read_opera_hdf5(f)
```
The user should inspect the output obtained from his or her hdf5 file in order to see how to access those items which should be further processed. In order to get a readable overview of the output dictionary, one can use the pretty printing module:
```
# which keywords can be used to access the content?
print(fcontent.keys())
# print one of the data arrays
# (numpy arrays will not be entirely printed)
print(fcontent['dataset1/data1/data'])
```
Please note that in order to experiment with such datasets, you can download hdf5 sample data from the [OPERA](https://www.eumetnet.eu/activities/observations-programme/current-activities/opera/) or use the example data provided with the [wradlib-data](https://github.com/wradlib/wradlib-data/) repository.
```
fig = pl.figure(figsize=(10, 10))
im = wrl.vis.plot_ppi(fcontent['dataset1/data1/data'], fig=fig, proj='cg')
```
### GAMIC HDF5
GAMIC refers to the commercial [GAMIC Enigma MURAN software](https://www.gamic.com) which exports data in hdf5 format. The concept is quite similar to the above [OPERA HDF5 (ODIM_H5)](#OPERA-HDF5-(ODIM_H5)) format.
<div class="alert alert-warning">
**Note** <br>
Since $\omega radlib$ version 1.10 an ``GAMIC`` reader [wradlib.io.hdf.open_gamic_dataset()](https://docs.wradlib.org/en/latest/generated/wradlib.io.hdf.open_gamic_dataset.html) based on [Xarray](https://xarray.pydata.org) is available. Please read the more indepth notebook [wradlib_gamic_backend](wradlib_gamic_backend.ipynb).
Former `xarray`-based implementations will be deprecated in future versions.
</div>
Such a file (typical ending: *.mvol*) can be read by:
```
fpath = 'hdf5/2014-08-10--182000.ppi.mvol'
f = wrl.util.get_wradlib_data_file(fpath)
data, metadata = wrl.io.read_gamic_hdf5(f)
```
While metadata represents the usual dictionary of metadata, the data variable is a dictionary which might contain several numpy arrays with the keywords of the dictionary indicating different moments.
```
print(metadata.keys())
print(metadata['VOL'])
print(metadata['SCAN0'].keys())
print(data['SCAN0'].keys())
print(data['SCAN0']['PHIDP'].keys())
print(data['SCAN0']['PHIDP']['data'].shape)
fig = pl.figure(figsize=(10, 10))
im = wrl.vis.plot_ppi(data['SCAN0']['ZH']['data'], fig=fig, proj='cg')
```
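The nested layout described above (scans → moments → `data` arrays plus attributes) lends itself to a simple traversal. The sketch below uses a hand-built mock dictionary in place of a real `read_gamic_hdf5` result, so the scan names, moment names and shapes are invented:

```python
import numpy as np

# Mock of the nested dict layout returned by read_gamic_hdf5 (illustrative only):
# data[scan][moment] is itself a dict holding a 'data' array plus attributes.
data = {
    'SCAN0': {
        'ZH': {'data': np.zeros((360, 1000))},
        'PHIDP': {'data': np.zeros((360, 1000))},
    }
}

# Enumerate every moment of every scan and report its array shape
for scan_name, moments in data.items():
    for moment_name, moment in moments.items():
        print(scan_name, moment_name, moment['data'].shape)
```

The same two-level loop works on a real result once the file has been read, because the reader preserves exactly this scan/moment nesting.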
### Generic HDF5
This is a generic hdf5 reader, which will read any hdf5 structure.
```
fpath = 'hdf5/2014-08-10--182000.ppi.mvol'
f = wrl.util.get_wradlib_data_file(fpath)
fcontent = wrl.io.read_generic_hdf5(f)
print(fcontent.keys())
print(fcontent['where'])
print(fcontent['how'])
print(fcontent['scan0/moment_3'].keys())
print(fcontent['scan0/moment_3']['attrs'])
print(fcontent['scan0/moment_3']['data'].shape)
fig = pl.figure(figsize=(10, 10))
im = wrl.vis.plot_ppi(fcontent['scan0/moment_3']['data'], fig=fig, proj='cg')
```
## NetCDF
The NetCDF format also claims to be self-describing. However, as for all such formats, the developers of netCDF also admit that "[...] the mere use of netCDF is not sufficient to make data self-describing and meaningful to both humans and machines [...]" (see [here](https://www.unidata.ucar.edu/software/netcdf/documentation/historic/netcdf/Conventions.html)). Different radar operators or data distributors will use different naming conventions and data hierarchies (i.e. "data models") that the reading program might need to know about.
$\omega radlib$ provides two solutions to address this challenge. The first one ignores the concept of data models and just pulls all data and metadata from a NetCDF file ([wradlib.io.read_generic_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_generic_netcdf.html)). The second is designed for a specific data model used by the EDGE software ([wradlib.io.read_edge_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_edge_netcdf.html)).
<div class="alert alert-warning">
**Note** <br>
Since $\omega radlib$ version 1.10 a ``Cf/Radial1`` reader [wradlib.io.xarray.open_cfradial1_dataset()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.open_cfradial1_dataset.html) and a ``Cf/Radial2`` reader [wradlib.io.xarray.open_cfradial2_dataset()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.open_cfradial2_dataset.html) for CF versions 1.X and 2 based on [Xarray](https://xarray.pydata.org/en/stable/) are available. Please read the more in-depth notebooks [wradlib_cfradial1_backend](wradlib_cfradial1_backend.ipynb) and [wradlib_cfradial2_backend](wradlib_cfradial2_backend.ipynb).
</div>
### Generic NetCDF reader (includes CfRadial)
$\omega radlib$ provides a function that will virtually read any NetCDF file irrespective of the data model: [wradlib.io.read_generic_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_generic_netcdf.html). It is built upon Python's [netcdf4](https://unidata.github.io/netcdf4-python/) library. [wradlib.io.read_generic_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_generic_netcdf.html) will return only one object, a dictionary, that contains all the contents of the NetCDF file corresponding to the original file structure. This includes all the metadata, as well as the so-called "dimensions" (describing the dimensions of the actual data arrays) and the "variables" which contain the actual data. Users can use this dictionary at will in order to query data and metadata; however, they should make sure to consider the documentation of the corresponding data model. [wradlib.io.read_generic_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_generic_netcdf.html) has been shown to work with a lot of different data models, most notably **CfRadial** (see [here](https://ncar.github.io/CfRadial/) for details). A typical call to [wradlib.io.read_generic_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_generic_netcdf.html) would look like:
```
fpath = 'netcdf/example_cfradial_ppi.nc'
f = wrl.util.get_wradlib_data_file(fpath)
outdict = wrl.io.read_generic_netcdf(f)
for key in outdict.keys():
    print(key)
```
Please see [this example notebook](wradlib_generic_netcdf_example.ipynb) to get started.
### EDGE NetCDF
EDGE is a commercial software for radar control and data analysis provided by the Enterprise Electronics Corporation. It allows for netCDF data export. The resulting files can be read by [wradlib.io.read_generic_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_generic_netcdf.html), but $\omega radlib$ also provides a specific function, [wradlib.io.read_edge_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_edge_netcdf.html), to return metadata and data as separate objects:
```
fpath = 'netcdf/edge_netcdf.nc'
f = wrl.util.get_wradlib_data_file(fpath)
data, metadata = wrl.io.read_edge_netcdf(f)
print(data.shape)
print(metadata.keys())
```
## Gematronik Rainbow
Rainbow refers to the commercial [RAINBOW®5 APPLICATION SOFTWARE](https://www.leonardogermany.com/en/products/rainbow-5) which exports data in an XML flavour which, due to embedded binary data blobs, violates the XML standard. Gematronik provided Python code for implementing this reader in $\omega radlib$, which is very much appreciated.
The philosophy behind the $\omega radlib$ interface to Gematronik's data model is very straightforward: $\omega radlib$ simply translates the complete xml file structure to *one* dictionary and returns this dictionary to the user. Thus, the potential complexity of the stored data is kept and it is left to the user how to proceed with this data. The keys of the output dictionary are strings that correspond to the "xml nodes" and "xml attributes". Each ``data`` key points to a Dataset (i.e. a numpy array of data). Such a file (typical ending: *.vol* or *.azi*) can be read by:
```
fpath = 'rainbow/2013070308340000dBuZ.azi'
f = wrl.util.get_wradlib_data_file(fpath)
fcontent = wrl.io.read_rainbow(f)
```
The user should inspect the output obtained from his or her Rainbow file in order to see how to access those items which should be further processed. In order to get a readable overview of the output dictionary, one can use the pretty printing module:
```
# which keywords can be used to access the content?
print(fcontent.keys())
# print the entire content including values of data and metadata
# (numpy arrays will not be entirely printed)
print(fcontent['volume']['sensorinfo'])
```
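Since the whole file structure ends up in one dictionary, a small recursive helper is handy for locating the `data` arrays. The dictionary below is a hand-built mock with invented node names, not the content of a real Rainbow file:

```python
# Mock of a Rainbow-style nested dict (node names and values invented,
# for illustration only)
fcontent = {
    'volume': {
        '@datetime': '2013-07-03T08:34:00',
        'scan': {
            'slice': {
                'slicedata': {'rawdata': {'data': [[0, 1], [2, 3]]}}
            }
        }
    }
}

def find_data(node, path=''):
    """Recursively collect the paths of all 'data' entries in the nested dict."""
    found = []
    if isinstance(node, dict):
        for key, value in node.items():
            subpath = f'{path}/{key}'
            if key == 'data':
                found.append(subpath)
            found.extend(find_data(value, subpath))
    return found

print(find_data(fcontent))
```

Applied to a real `read_rainbow` result, such a walk quickly reveals where the binary data blobs were unpacked to.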
You can check this [example notebook](wradlib_load_rainbow_example.ipynb) for getting a first impression.
## Vaisala Sigmet IRIS
[IRIS](https://www.vaisala.com/en/products/instruments-sensors-and-other-measurement-devices/weather-radar-products/iris-focus) refers to the commercial Vaisala Sigmet **I**nteractive **R**adar **I**nformation **S**ystem. The Vaisala Sigmet Digital Receivers export data in a [well documented](ftp://ftp.sigmet.com/outgoing/manuals/IRIS_Programmers_Manual.pdf) binary format.
The philosophy behind the $\omega radlib$ interface to the IRIS data model is very straightforward: $\omega radlib$ simply translates the complete binary file structure to *one* dictionary and returns this dictionary to the user. Thus, the potential complexity of the stored data is kept and it is left to the user how to proceed with this data. The keys of the output dictionary are strings that correspond to the Sigmet Data Structures.
<div class="alert alert-warning">
**Note** <br>
Since $\omega radlib$ version 1.12 an ``IRIS`` reader [wradlib.io.iris.open_iris_dataset()](https://docs.wradlib.org/en/latest/generated/wradlib.io.iris.open_iris_dataset.html) based on [Xarray](https://xarray.pydata.org) is available. Please read the more in-depth notebook [wradlib_iris_backend](wradlib_iris_backend.ipynb).
At the same time the reader has changed with respect to performance. So the ray metadata is only read once per sweep and is only included once in the output of `read_iris`. Currently we keep backwards compatibility, but this behaviour is deprecated and will be changed in a future version. See the two examples below.
</div>
Such a file (typical ending: *.RAWXXXX*) can be read by:
```
fpath = 'sigmet/cor-main131125105503.RAW2049'
f = wrl.util.get_wradlib_data_file(fpath)
fcontent = wrl.io.read_iris(f, keep_old_sweep_data=False)
# which keywords can be used to access the content?
print(fcontent.keys())
# print the entire content including values of data and
# metadata of the first sweep
# (numpy arrays will not be entirely printed)
print(fcontent['data'][1].keys())
print()
print(fcontent['data'][1]['ingest_data_hdrs'].keys())
print(fcontent['data'][1]['ingest_data_hdrs']['DB_DBZ'])
print()
print(fcontent['data'][1]['sweep_data'].keys())
print(fcontent['data'][1]['sweep_data']['DB_DBZ'])
fig = pl.figure(figsize=(10, 10))
swp = fcontent['data'][1]['sweep_data']
ax, im = wrl.vis.plot_ppi(swp["DB_DBZ"], fig=fig, proj='cg')
fpath = 'sigmet/cor-main131125105503.RAW2049'
f = wrl.util.get_wradlib_data_file(fpath)
fcontent = wrl.io.read_iris(f, keep_old_sweep_data=True)
# which keywords can be used to access the content?
print(fcontent.keys())
# print the entire content including values of data and
# metadata of the first sweep
# (numpy arrays will not be entirely printed)
print(fcontent['data'][1].keys())
print()
print(fcontent['data'][1]['ingest_data_hdrs'].keys())
print(fcontent['data'][1]['ingest_data_hdrs']['DB_DBZ'])
print()
print(fcontent['data'][1]['sweep_data'].keys())
print(fcontent['data'][1]['sweep_data']['DB_DBZ'])
fig = pl.figure(figsize=(10, 10))
swp = fcontent['data'][1]['sweep_data']
ax, im = wrl.vis.plot_ppi(swp["DB_DBZ"]["data"], fig=fig, proj='cg')
```
## OPERA BUFR
**WARNING** $\omega radlib$ does currently not support the BUFR format!
The Binary Universal Form for the Representation of meteorological data (BUFR) is a binary data format maintained by the World Meteorological Organization (WMO).
The BUFR format was adopted by [OPERA](https://www.eumetnet.eu/activities/observations-programme/current-activities/opera/) for the representation of weather radar data.
A BUFR file consists of a set of *descriptors* which contain all the relevant metadata and a data section.
The *descriptors* are identified as a tuple of three integers. The meaning of these tuples is described in the so-called BUFR tables. There are generic BUFR tables provided by the WMO, but it is also possible to define so-called *local tables* - which was done by the OPERA consortium for the purpose of radar data representation.
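As a purely illustrative sketch of the descriptor concept (the table entries below are invented and do not reproduce any actual WMO or OPERA table), a descriptor lookup might look like:

```python
# A BUFR descriptor is an (F, X, Y) tuple; F selects the descriptor class,
# X the category and Y the entry within it. This table is invented for
# illustration and does not reproduce any real BUFR table.
local_table = {
    (0, 30, 31): 'Image type',
    (0, 29, 2): 'Co-ordinate grid type',
}

def describe(descriptor):
    """Render a descriptor tuple in the conventional F-XX-YYY notation."""
    f, x, y = descriptor
    meaning = local_table.get(descriptor, 'unknown descriptor')
    return f'{f:01d} {x:02d} {y:03d}: {meaning}'

print(describe((0, 30, 31)))
```

Real decoders of course ship the full generic and local tables rather than a hand-written dict.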
If you want to use BUFR files together with $\omega radlib$, we recommend that you check out the [OPERA webpage](https://www.eumetnet.eu/activities/observations-programme/current-activities/opera/) where you will find software for BUFR decoding. In particular, you might want to check out [this tool](https://eumetnet.eu/wp-content/uploads/2017/04/bufr_opera_mf.zip) which seems to support the conversion of OPERA BUFR files to ODIM_H5 (which is supported by $\omega radlib$). However, you have to build it yourself.
It would be great if someone could add a tutorial on how to use OPERA BUFR software together with $\omega radlib$!
|
github_jupyter
|
```
import os
import pandas as pd
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM,Dropout
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from keras.optimizers import RMSprop,Adam
from keras.layers import Dense, Input, TimeDistributed, Masking
from keras.models import Model
import sys
random.seed(49297)
mimic3_path="/home/rafiparvez1706/mimic"
variable_map_file='../resources/itemid_to_variable_map.csv'
variable_ranges_file='../resources/variable_ranges.csv'
channel_info_file ='../resources/channel_info.json'
output_path ="../data/root"
train_dir = os.path.join(output_path, 'train')
train_id_dirs_orig = [os.path.join(train_dir, subdir) for subdir in os.listdir(train_dir)]
valid_dir = os.path.join(output_path, 'valid')
valid_id_dirs_orig = [os.path.join(valid_dir, subdir) for subdir in os.listdir(valid_dir)]
maxlen=0
lens=[]
train_id_dirs=train_id_dirs_orig[0:1000]
for cnt, folder in enumerate(train_id_dirs):
    Xfname = os.path.join(folder, 'cleaned_timeseries.csv')
    df_X = pd.read_csv(Xfname)
    X_train = df_X.loc[:, df_X.columns != 'Hours'].values
    lens.append(X_train.shape[0])
    maxlen = max(maxlen, X_train.shape[0])
#maxlen=500
print(maxlen)
maxlen=200
```
## Preparing Sequential Data
```
def transform_data(folder, maxlen):
    Xfname = os.path.join(folder, 'cleaned_timeseries.csv')
    df_X = pd.read_csv(Xfname)
    X_train = df_X.loc[:, df_X.columns != 'Hours'].values
    X_train = sequence.pad_sequences(X_train.T, dtype='float32', maxlen=maxlen, padding='post', truncating='post')
    X_train = X_train.T
    X_train = X_train.reshape(1, X_train.shape[0], X_train.shape[1])
    X_train = X_train.astype(np.float32)
    yfname = os.path.join(folder, 'stays.csv')
    df_y = pd.read_csv(yfname)
    IsReadmitted = df_y.IsReadmitted.values[0].astype(np.float32)
    y_train = np.empty(len(df_X))
    y_train.fill(IsReadmitted)
    y_train = y_train.astype(np.float32)
    y_train = [y_train]
    y_train = sequence.pad_sequences(y_train, dtype='float32', maxlen=maxlen, padding='post', truncating='post')
    y_train = y_train.reshape(y_train.shape[0], y_train.shape[1], 1)
    return X_train, y_train
X_train,y_train=transform_data(train_id_dirs[0], maxlen)
X_train.shape
y_train.shape
```
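`transform_data` relies on Keras' `pad_sequences` to force every stay onto exactly `maxlen` time steps: short stays are zero-padded at the end, long ones truncated from the end. The effect can be illustrated with plain numpy (a sketch of the padding behaviour only, not the Keras implementation itself):

```python
import numpy as np

def pad_post(seq, maxlen):
    """Pad with trailing zeros or truncate from the end, mimicking
    pad_sequences(..., padding='post', truncating='post') on a 1-D sequence."""
    seq = np.asarray(seq, dtype='float32')[:maxlen]  # truncate if too long
    out = np.zeros(maxlen, dtype='float32')          # zero canvas of target length
    out[:len(seq)] = seq                             # copy the kept values
    return out

print(pad_post([1, 2, 3], 5))           # [1. 2. 3. 0. 0.]
print(pad_post([1, 2, 3, 4, 5, 6], 5))  # [1. 2. 3. 4. 5.]
```

Because the readmission label is replicated across time steps and padded the same way, the padded tail of each stay carries label 0 as well.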
## Defining Model
```
model = Sequential()
nb_samples=len(train_id_dirs)
input_dim=58
model.add(LSTM(units=256, input_shape=(maxlen, 58), return_sequences=True))
model.add(Dropout(0.10))
model.add(LSTM(units=128, return_sequences=True))
model.add(Dropout(0.10))
model.add(TimeDistributed(Dense(1, activation='sigmoid')))
optimizer=Adam(lr=0.001, beta_1=0.5)
model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])
# print layer shapes and model parameters
model.summary()
train_id_dirs=random.sample(train_id_dirs_orig, 5000)
Total_count = len(train_id_dirs)
def generate_batches(files, batch_size):
    counter = 0
    while True:
        counter = (counter + 1) % len(files)
        folder = files[counter]
        X_train, y_train = transform_data(folder, maxlen)
        for cbatch in range(0, X_train.shape[0], batch_size):
            yield (X_train[cbatch:(cbatch + batch_size), :, :], y_train[cbatch:(cbatch + batch_size)])
#train_files = [train_bundle_loc + "bundle_" + cb.__str__() for cb in range(nb_train_bundles)]
batch_size=8
samples_per_epoch=len(train_id_dirs)
num_epoch=4
gen = generate_batches(files=train_id_dirs, batch_size=batch_size)
history = model.fit_generator(gen, steps_per_epoch=samples_per_epoch, epochs=num_epoch, verbose=1)
model.save_weights("lstm_model.h5")
```
## Model Evaluation
```
import pandas as pd
df = pd.DataFrame(train_id_dirs, columns=["trains"])
df.to_csv('train_id_dirs_3.csv', index=False)
cv_id_dirs=random.sample(valid_id_dirs_orig, 2000)
len(cv_id_dirs)
list_pred=[]
list_lbl=[]
for cv_folder in cv_id_dirs:
    X_cv, y_cv = transform_data(cv_folder, maxlen)
    preds = model.predict(X_cv)
    #print(preds[0][:20])
    label = y_cv[:, 0, :].squeeze()
    prediction = preds[:, -1, :].squeeze()
    #print(label, prediction)
    list_lbl.append(label)
    list_pred.append(prediction)
from sklearn.metrics import roc_curve, auc
# compute ROC curve for predictions
rnn_roc = roc_curve(list_lbl,list_pred)
# compute the area under the curve of prediction ROC
rnn_auc = auc(rnn_roc[0], rnn_roc[1])
plt.figure(figsize=(7, 5))
line_kwargs = {'linewidth': 4, 'alpha': 0.8}
plt.plot(rnn_roc[0], rnn_roc[1], label='LSTM AUROC: %0.3f' % rnn_auc, color='#6AA84F', **line_kwargs)
plt.legend(loc='lower right', fontsize=20)
```
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import torch
import matplotlib.pyplot as plt
import torchvision
import csv
import glob
import cv2
import torch.optim as optim
import torch.nn as nn
import torch.nn.functional as F
from random import shuffle
import glob
#from torchsummary import summary
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("../input"))
%matplotlib inline
# Any results you write to the current directory are saved as output.
#classes = ('plane', 'car', 'bird', 'cat',
# 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 50, 5)
        self.pool1 = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(50, 100, 7)
        self.pool2 = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(100 * 12 * 12, 120)
        self.fc2 = nn.Linear(120, 100)
        self.fc3 = nn.Linear(100, 2)

    def forward(self, x):
        x = self.pool1(F.relu(self.conv1(x)))
        x = self.pool2(F.relu(self.conv2(x)))
        x = x.view(-1, 100 * 12 * 12)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = torch.sigmoid(self.fc3(x))  # torch.sigmoid replaces deprecated F.sigmoid
        return x
net = Net()
print(net)
'''
#alternate way to create a list of file name and labels
import numpy as np
import os
PATH = '../input/'
fnames = np.array([f'train/{f}' for f in sorted(os.listdir(f'{PATH}train'))])
labels = np.array([(0 if 'cat' in fname else 1) for fname in fnames])
print(fnames[0:100] , labels[0:100])
'''
shuffle_data = True # shuffle the addresses before saving
cat_dog_train_path = '../input/train/*.jpg'
# read addresses and labels from the 'train' folder
addrs = glob.glob(cat_dog_train_path)
labels = [[1, 0] if 'cat' in addr else [0, 1] for addr in addrs]  # one-hot: [1,0] = cat, [0,1] = dog
# to shuffle data
if shuffle_data:
    c = list(zip(addrs, labels))
    shuffle(c)
    addrs, labels = zip(*c)
print(labels[0:10])
# Divide the data into 60% train, 20% validation, and 20% test
train_addrs = addrs[0:int(0.6*len(addrs))]
train_labels = labels[0:int(0.6*len(labels))]
#train_addrs.size
val_addrs = addrs[int(0.6*len(addrs)):int(0.8*len(addrs))]
val_labels = labels[int(0.6*len(addrs)):int(0.8*len(addrs))]
test_addrs = addrs[int(0.8*len(addrs)):]
test_labels = labels[int(0.8*len(labels)):]
# loop over train addresses
train_data = []
for i in range(len(train_addrs[:1000])):
    # print how many images are saved every 1000 images
    if i % 1000 == 0 and i > 1:
        print('Train data: {}/{}'.format(i, len(train_addrs)))
    # read an image and resize to (64, 64)
    # cv2 loads images as BGR, convert to RGB
    addr = train_addrs[i]
    img = cv2.imread(addr)
    img = cv2.resize(img, (64, 64), interpolation=cv2.INTER_CUBIC)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    train_data.append([np.array(img), np.array(train_labels[i])])
shuffle(train_data)
np.save('train_data.npy', train_data)
# loop over test addresses
#creating test data
test_data = []
for i in range(len(test_addrs[:1000])):
    # print how many images are saved every 1000 images
    if i % 1000 == 0 and i > 1:
        print('Test data: {}/{}'.format(i, len(test_addrs)))
    # read an image and resize to (64, 64)
    # cv2 loads images as BGR, convert to RGB
    addr = test_addrs[i]
    img = cv2.imread(addr)
    img = cv2.resize(img, (64, 64), interpolation=cv2.INTER_CUBIC)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    test_data.append([np.array(img), np.array(test_labels[i])])  # use the test labels, not the full shuffled list
shuffle(test_data)
np.save('test_data.npy', test_data)
# loop over val addresses
val_data = []
for i in range(len(val_addrs[:1000])):
    # print how many images are saved every 1000 images
    if i % 1000 == 0 and i > 1:
        print('Val data: {}/{}'.format(i, len(val_addrs)))
    # read an image and resize to (64, 64)
    # cv2 loads images as BGR, convert to RGB
    addr = val_addrs[i]
    img = cv2.imread(addr)
    img = cv2.resize(img, (64, 64), interpolation=cv2.INTER_CUBIC)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    val_data.append([np.array(img), np.array(val_labels[i])])  # use the validation labels, not the full shuffled list
shuffle(val_data)
np.save('val_data.npy', val_data)
#print(val_data[1])
from torch.autograd import Variable
X = np.array([i[0] for i in train_data]).reshape(-1,64,64,3)
X = Variable(torch.Tensor(X))
X = X.reshape(-1,64,64,3)
X = X.permute(0,3,1,2)
print(X.shape)
#Y = Variable(torch.Tensor(Y))
Y = np.array([i[1] for i in train_data])
target = Variable(torch.Tensor(Y))
target = target.type(torch.LongTensor)
print(target.shape)
#print(target)
criterian = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr = 0.0001, momentum = 0.9)
for epoch in range(100):
    running_loss = 0.0
    optimizer.zero_grad()  # zero the parameter gradients
    output = net(X)
    loss = criterian(output, torch.max(target, 1)[1])
    loss.backward()
    optimizer.step()
    running_loss += loss.item()
    print(epoch, ':', running_loss)
test = np.array([i[0] for i in test_data]).reshape(-1,64,64,3)
test = Variable(torch.Tensor(test))
test = test.reshape(-1,64,64,3)
test = test.permute(0,3,1,2)
print(test.shape)
#Y = Variable(torch.Tensor(Y))
tlabels = np.array([i[1] for i in test_data])
tlabels = Variable(torch.Tensor(tlabels))
tlabels = tlabels.type(torch.long)
print(tlabels.shape)
print(tlabels)
correct = 0
total = 0
with torch.no_grad():
    for data in zip(X, target):
        images, labels = data
        images = images.reshape(1, 3, 64, 64)
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        #total += labels.size(0)
        if (predicted == 0 and labels[0] == 1) or (predicted == 1 and labels[1] == 1):
            correct += 1
        #correct += (predicted == labels).sum().item()
        #print(outputs, labels)
total = X.shape[0]
print('Train accuracy of the network on the ' + str(total) + ' train images: %f %%' % (
    100 * (correct * 1.0) / total))
print(correct, total)
correct = 0
total = 0
with torch.no_grad():
    for data in zip(test, tlabels):
        images, labels = data
        images = images.reshape(1, 3, 64, 64)
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        #total += labels.size(0)
        if (predicted == 0 and labels[0] == 1) or (predicted == 1 and labels[1] == 1):
            correct += 1
total = test.shape[0]
print('Test accuracy of the network on the ' + str(total) + ' test images: %f %%' % (
    100 * (correct * 1.0) / total))
print(correct, total)
```
# Exercises
Each exercise teaches you one aspect of deep learning. The process of machine learning can be decomposed into several steps:
* Data preparation
* Model definition
* Model training
* Model evaluation
* Hyperparameter tuning
* Prediction
## 3 - Model training
- 3.1 Metrics : evaluate model
- 3.2 Loss function (mean square error, cross entropy)
- 3.3 Optimizer function (stochastic gradient descent)
- 3.4 Batch size, epoch number
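Before wiring these pieces up in PyTorch, the loop of loss, gradient, and optimizer step can be sketched by hand on a one-parameter model (a plain-numpy illustration, not part of the exercise code):

```python
import numpy as np

# Fit y = w * x to data generated with w_true = 2, using a mean-squared-error
# loss and plain gradient descent.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x
w = 0.0
lr = 0.05  # learning rate

for epoch in range(100):
    pred = w * x
    loss = np.mean((pred - y) ** 2)      # mean squared error
    grad = np.mean(2 * (pred - y) * x)   # dLoss/dw, derived analytically
    w -= lr * grad                       # optimizer step

print(round(w, 3))  # converges towards 2.0
```

Replacing the hand-derived gradient with autograd, the scalar `w` with network weights, and the full-dataset update with mini-batches gives exactly the PyTorch training loop below.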
### Load dataset
```
import torchvision.datasets as dset
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
data_path = './data'
#trans = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (1.0,))])
trans = transforms.Compose([transforms.Resize((32, 32)), transforms.ToTensor()])
# if not exist, download mnist dataset
train_set = dset.MNIST(root=data_path, train=True, transform=trans, download=True)
test_set = dset.MNIST(root=data_path, train=False, transform=trans, download=True)
batch = 4
data_train_loader = DataLoader(train_set, batch_size=batch, shuffle=True, num_workers=8)
data_test_loader = DataLoader(test_set, batch_size=batch, num_workers=8)
classes = ('0', '1', '2', '3', '4', '5', '6', '7', '8', '9')
```
### Define the network architecture
```
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features
leNet = Net()
print(leNet)
```
### Define loss criterion and optimizer
```
import torch.optim as optim
criterion = nn.MSELoss()
optimizer = optim.SGD(leNet.parameters(), lr=0.01)
```
### Training loop
```
for epoch in range(3):  # loop over the dataset multiple times
    leNet.train()
    running_loss = 0.0
    for i, (images, labels) in enumerate(data_train_loader):
        optimizer.zero_grad()
        output = leNet(images)
        # align vectors labels <=> outputs (one-hot targets for the MSE loss);
        # use len(labels) rather than a hardcoded batch size so the last,
        # possibly smaller, batch works too
        label_vect = torch.zeros(len(labels), 10, dtype=torch.float)
        for j in range(0, len(labels)):
            label_vect[j, labels[j]] = 1.0
        loss = criterion(output, label_vect)
        loss.backward()
        optimizer.step()
        # accumulate statistics
        running_loss += loss.item()
    print('[{:d}] loss: {:.5f}'.format(epoch + 1, running_loss / (batch * len(data_train_loader))))
print('Finished Training')
```
### Test the model
```
import matplotlib.pyplot as plt
import numpy as np
def imshow(images, labels):
    npimg = images.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.title("Ground Truth: {}".format(labels))
import torchvision
dataiter = iter(data_test_loader)
images, labels = next(dataiter)
# print images
imshow(torchvision.utils.make_grid(images), labels)
outputs = leNet(images)
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4)))
```
### Saving leNet
```
torch.save({
    'epoch': 1,
    'model_state_dict': leNet.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
    'loss': loss,
}, 'checkpoint-MKTD-pytorch-3.last')
```
```
%run setup.ipynb
from scipy.stats import dirichlet
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import traceback
import logging
logger = logging.getLogger('ag1000g-phase2')
logger.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.INFO)
# create formatter and add it to the handlers
formatter = logging.Formatter(
'%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
# add the handlers to logger
logger.addHandler(ch)
#generate counts
Fpos = 2422652
Spos = 2422651
callset = phase2_ar1.callset_pass_biallelic['2L']
g = allel.GenotypeChunkedArray(callset['calldata']['genotype'])
pos = allel.SortedIndex(callset['variants']['POS'])
df_meta = pd.read_csv('../phase2.AR1/samples/samples.meta.txt', sep='\t')
Fb = pos.values == Fpos
Sb = pos.values == Spos
def het_pop(pop):
    FSb = Fb + Sb
    popbool = np.array(df_meta.population == pop)
    popg = g.compress(popbool, axis=1)
    popgr = popg.compress(FSb, axis=0)
    a = np.asarray(popgr.to_n_alt())
    return a
gagam = het_pop('GAgam')
cagam = het_pop('CMgam')
np.save('../data/gabon_n_alt.npy', gagam)
np.save('../data/cameroon_n_alt.npy', cagam)
def run_fs_het_analysis(path, ns=1_000_000):
    ac = np.load(path).T
    logger.info(f"Loaded {path}")
    # assuming col1 is F, col2 is S
    assert ac.sum(axis=1).max() == 2
    tot_alleles = ac.shape[0] * 2
    n_samples = ac.shape[0]
    logger.info(f"{n_samples} samples found")
    wt_alleles = tot_alleles - ac.sum()
    f_alleles = ac[:, 0].sum()
    s_alleles = ac[:, 1].sum()
    alpha = [1 + wt_alleles, 1 + f_alleles, 1 + s_alleles]
    logger.info(f"Dirichlet alpha set to {alpha}")
    diric = dirichlet(alpha)
    wt, f, s = diric.mean()
    logger.info(
        f"Mean of dirichlet- wt: {wt:.2f}, f: {f:.2f}, s: {s:.2f}")
    # this is what we observed
    is_het = (ac[:, 0] == ac[:, 1]) & (ac.sum(axis=1) == 2)
    tot_fs_hets = is_het.sum()
    logger.info(
        f"In the AC data we observe {tot_fs_hets} F-S hets")
    logger.info(f"Beginning monte carlo analysis, n={ns}")
    # draw ns dirichlet observations of allele frequency
    v = np.random.dirichlet(alpha, size=ns)
    # for each draw, sample n_samples genotypes
    # and count how many "F/S" hets we observe
    o = np.zeros(ns, dtype="int")
    for i in range(v.shape[0]):
        x = np.random.multinomial(2, v[i], size=n_samples)
        o[i] = np.sum((x[:, 1] == 1) & (x[:, 2] == 1))
    fig, ax = plt.subplots(figsize=(4, 4))
    bins = np.arange(0, max(o.max(), tot_fs_hets) + 5, 1)
    count, bins, patches = ax.hist(o, bins=bins, density=True)
    ymin, ymax = ax.get_ylim()
    ax.vlines([tot_fs_hets], ymin=ymin, ymax=ymax)
    sns.despine(ax=ax)
    grt = tot_fs_hets >= o
    les = tot_fs_hets <= o
    logger.info(
        "{:.3f} of simulated values are strictly greater than the observed".format(
            1 - np.mean(grt)))
    logger.info(
        "{:.3f} of simulated values are strictly less than the observed".format(
            1 - np.mean(les)))


run_fs_het_analysis("../data/gabon_n_alt.npy")
run_fs_het_analysis("../data/cameroon_n_alt.npy")
```
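For reference, the model implemented by `run_fs_het_analysis` above can be summarised as a conjugate Dirichlet-multinomial simulation (this summary is read off the code, not taken from an accompanying text). With $n_{wt}$, $n_F$ and $n_S$ alleles counted in the population, the allele frequencies are drawn from the flat-prior posterior

$$(p_{wt},\, p_F,\, p_S) \sim \mathrm{Dirichlet}(1 + n_{wt},\; 1 + n_F,\; 1 + n_S),$$

each simulated individual then receives a genotype $\mathrm{Multinomial}(2,\, p)$, and the number of individuals carrying exactly one F and one S allele is compared against the observed F/S het count. The tail fractions reported by the logger act as Monte Carlo p-values for an excess or deficit of F/S heterozygotes.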